The Resurrection

We moved to the new server back in October, and my shell account on the new box doesn’t allow me to install custom Python modules, like the xmltramp utility I’d previously employed to generate the self-updating index page.

It’s back now, though. Given that I didn’t want to do low-level XML parsing, I kludged together a module that parses the RSS feeds from MMH and 21CDB using Python’s HTMLParser class. It seems to work fine… for now. It gets updated at the top and bottom of the hour.
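I haven't posted the module itself, but a stripped-down sketch of the idea, pulling item titles and links out of a feed with HTMLParser, looks roughly like this (the class and field names here are made up for illustration, not the actual module):

```python
from html.parser import HTMLParser

class FeedParser(HTMLParser):
    """Lenient RSS scraping via HTMLParser: collect the <title> and
    <link> text inside each <item>. A sketch, not the real module."""

    def __init__(self):
        super().__init__()
        self.items = []          # list of {"title": ..., "link": ...} dicts
        self._in_item = False
        self._field = None       # which tag's text we're currently inside
        self._current = {}

    def handle_starttag(self, tag, attrs):
        if tag == "item":
            self._in_item = True
            self._current = {}
        elif self._in_item and tag in ("title", "link"):
            self._field = tag

    def handle_endtag(self, tag):
        if tag == "item":
            self.items.append(self._current)
            self._in_item = False
        elif tag in ("title", "link"):
            self._field = None

    def handle_data(self, data):
        if self._in_item and self._field and data.strip():
            self._current[self._field] = data.strip()

# A toy feed to run it against:
feed = """<rss><channel>
<item><title>First post</title><link>http://example.com/1</link></item>
<item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

p = FeedParser()
p.feed(feed)
```

It's a kludge in exactly the sense described above: HTMLParser doesn't care that the input is XML, which is what makes it usable when a proper XML library can't be installed.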

Also raised from the dead are my custom and Register RSS feeds. Nothing spectacular had to be done; I simply had to make some adjustments to account for different time zones and absolute paths on the Linux file system… and add them to the crontab. If you have the old URLs in your news aggregator, those will still work, as I’ve created symbolic links to the XML files’ new locations. Just to review:

RSS Feed
Old URL:
New URL:

Register RSS Feed
Old URL:
New URL:
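For the curious, the moving parts amount to a symbolic link per feed plus a couple of crontab entries. The paths and script names below are invented for illustration; the real ones aren't shown in this post.

```
# Keep an old feed URL working by symlinking it to the new location
# (run once; paths are hypothetical):
#   ln -s /new/home/rss.xml /old/home/rss.xml

# Crontab sketch -- index page at the top and bottom of the hour:
0,30 * * * * /home/me/bin/build_index.py
# RSS feeds every fifteen minutes, starting at the top of the hour:
0,15,30,45 * * * * /home/me/bin/build_feeds.py
```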

Again, all of those locations should still serve up the good stuff, and they get updated every fifteen minutes starting at the top of the hour. I guess things are starting to return to normal around here, huh? Of course, I still have to rewrite the old Webware code (FARK parser, music reviews, email spambot thwarter) to use straight CGI… but that’s going to have to wait a bit longer.


One thought on “The Resurrection”

  1. hey, it’s tiff, from “hey gary you suck” fame…

    i used to work for the guys who wrote/invented/whatever python. i only mention it because you’re the first person outside of their team that i’ve heard of who uses it. it’s kind of neat.
