Well, it’s finally done, or at least in a state where I can release it to the world.  I’ve written a clone of the original website, with the same postgres + portage backend that GPNL uses, and it’s now available online here:

I’ve been working on this thing non-stop for at least the past two weeks, and I’m really excited to have it done and up and running.  I loved the original design and site, and used it quite a lot.  It’s a great resource for randomly browsing the tree to find packages and discover new stuff to try out.  Plus, it’s a great way to search the portage tree as well!

The search is one thing I’m pretty excited about.  Just like eix, it will accept regular expressions as input.  So if you want to search for an exact package name, try ^portage$.  If you want to search on more than one word, separate them with pipes: foo|bar.  There’s lots of stuff you can do, so have fun. :)  I’ll have an advanced search sooner or later.
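To show what those two patterns actually match, here’s a quick PHP sketch; the package names are made up for illustration, and the site’s real search code isn’t shown here:

```php
<?php
// Hypothetical sketch of regex matching like the site's search.
// The package names below are invented for this example.
$names = ['portage', 'app-portage', 'foo-utils', 'bar-tools'];

// Exact package name: anchor both ends with ^ and $.
// Without the anchors, 'app-portage' would match too.
$exact = preg_grep('/^portage$/', $names);

// More than one word: separate the terms with pipes (alternation).
$either = preg_grep('/foo|bar/', $names);

// array_values($exact)  -> ['portage']
// array_values($either) -> ['foo-utils', 'bar-tools']
```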

Speaking of features, I have also set up a Trac project page for the backend, where you can see which bugs I’m going to be working on.  You’ll notice that Atom and RSS feeds are on that list.  I’m sure people are going to want those.

One small drawback is that it only updates once a day right now.  The only reason is that my server is just a small little Athlon, and while it could handle the load of updating more regularly, I don’t want to put too much strain on an already overworked little system.  I’m going to look into ways of optimizing things so that I can get updates out the door more frequently, but for now, it’s going to have to wait.

As always, feedback, comments and suggestions are more than welcome.  Please let me know if you find any bugs, too.   Enjoy. :D

I’ve just updated my apache config, and now have a permanent redirect to

There’s no need to update your RSS reader if it handles redirects okay, but eventually the content for the main domain name will change.  Not anytime soon, though, that’s for sure; I have zero plans right now.  Even when I do change it to something else, I’ll still have the RSS feeds redirect so as not to break anyone’s feed.

Anyway, you’ll see why I’m moving to a subdomain here in a day or so.

Edit: More spring cleaning, now Universe has its own subdomain as well:

prepared statements and stored procedures

I’m still working on cleaning up the import scripts for GPNL, and I’m going to have to start using PHP’s PDO database layer to connect to an SQLite3 database at one point.

I haven’t used it yet, but I had heard it was coming in PHP 5 for a while. Personally, I’ve always used PEAR::DB and was quite happy with that.

I’m still not sold on using the new layer anyway, but I figured I’d do some reading while I am getting ready to use it in this very small instance that I’m implementing.

On the docs page, I found a great summary of why prepared statements and stored procedures are handy and helpful. In short: they save you time on queries you have to repeat a lot by pre-compiling the parts that are common to all of them, so the database really only has to process the new data, and thus uses fewer resources.

I hadn’t played with prepared statements much until a few weeks ago, but I’ve slowly started using them in my import scripts. Performance-wise, I’ve only seen about a 15 to 20 percent speed increase. The thing I like most about them, though, is that I don’t have to escape my strings anymore. That’s a nice little advantage I can live with.

Anyway, the PDO documentation page has a nice writeup as well, and instead of trying to summarize it myself any more, I’ll just quote it verbatim:

Many of the more mature databases support the concept of prepared statements. What are they? You can think of them as a kind of compiled template for the SQL that you want to run, that can be customized using variable parameters. Prepared statements offer two major benefits:

  • The query only needs to be parsed (or prepared) once, but can be executed multiple times with the same or different parameters. When the query is prepared, the database will analyze, compile and optimize its plan for executing the query. For complex queries this process can take up enough time that it will noticeably slow down your application if you need to repeat the same query many times with different parameters. By using a prepared statement you avoid repeating the analyze/compile/optimize cycle. In short, prepared statements use fewer resources and thus run faster.
  • The parameters to prepared statements don’t need to be quoted; the driver handles it for you. If your application exclusively uses prepared statements, you can be sure that no SQL injection will occur. (However, if you’re still building up other parts of the query based on untrusted input, you’re still at risk).
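Here’s a minimal sketch of that prepare-once, execute-many cycle with PDO, using an in-memory SQLite database since that’s what I’ll be connecting to; the packages table and its rows are made up for illustration:

```php
<?php
// Sketch: prepare once, execute many times. The table and data here
// are hypothetical; SQLite stands in for whatever backend you use.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE packages (name TEXT, version TEXT)');

// The parse/compile/optimize work happens once, right here.
$stmt = $db->prepare(
    'INSERT INTO packages (name, version) VALUES (:name, :version)'
);

// Each execute only ships the new parameters; the driver handles
// quoting, so there is no manual string escaping at all.
foreach ([['portage', '2.1'], ['eix', '0.7.5']] as $row) {
    $stmt->execute([':name' => $row[0], ':version' => $row[1]]);
}
```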

nice mysql vs postgres summary

I was googling for a postgresql image I could use when I found this page, a nice short summary on the differences between MySQL and PostgreSQL with an emphasis on development policy.

I should mention that I’m linking to it because I agree with the author and also because I’m biased towards PostgreSQL. I prefer postgres not because of fanboyism, but because of experience and years of using both databases.

I was actually lucky enough to be trained on PostgreSQL as the first database I ever used, and nothing I’ve used since has been able to duplicate its feature set. Since my first tech job, I’ve worked with Access, MySQL, SQL Server 2000 and SQLite.

Anyway, I love postgres. If you’ve never given it a chance, and you are looking for more advanced features, check it out. It’s all that and a box of girl scout cookies. I tell you what.

preg_replace in php

I love working with regular expressions. I had to use a preg function tonight that I haven’t called up in a long time. Using preg_replace, you can match patterns in a string, capture the matching parts into numbered groups, and then reference and rearrange those groups in the replacement. Read the actual documentation for a better explanation.

Anyway, here’s how I used it. I am working on parsing ChangeLogs, and on the page I display them, I want to replace any mention of ‘bug <bug number>’ with an actual href link to the bugzilla. Regular expressions make it happen, baby!

A sample string, then, might be something like this: “Fixed everything. I rock. See bug #12345.”

Here’s my pattern: $pattern = '/(bug)( +\D)(\d+)/';

You need to know your regex syntax, but what this does is capture the word ‘bug’ as the first group. The second group is a space, repeated one or more times, followed by one non-numeric character. The third group is the bug #, or more accurately, any string of digits one or more characters long. It just occurred to me while writing this that I could have crammed the first and second together, but on the chance that I want to standardize or re-format the display later, I can play around with $2 separately. In my case, though, I’m just going to leave it alone.  Also, the second group doesn’t strictly need the explicit space (widening it to something like \D+ would catch the space as well), but you can tighten the pattern however you like to make sure you get correct matches.

Using that pattern with preg_replace, PHP makes the captured groups available as backreferences, numbered incrementally starting with 1. So, going back to my string, $1 would be ‘bug’, $2 would be ‘ #’, and $3 would be ‘12345’.

Now that I have my backreferences, I just create one more string to build my hyperlink. Here it is: $replacement = "<a href='$3'>$1$2$3</a>";

The actual code would be this: $str = preg_replace('/(bug)( +\D)(\d+)/', "<a href='$3'>$1$2$3</a>", "Fixed everything. I rock. See bug #12345.");

That would then return the original string with ‘bug #12345’ being a hyperlink.
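Putting the pieces together, here’s the whole thing as a runnable snippet. Note the href target is just the bare bug number, exactly as written above, since the real bugzilla URL isn’t shown in this post:

```php
<?php
// The pattern and replacement from above, run end to end.
// Groups: $1 = 'bug', $2 = ' #', $3 = '12345'.
$pattern     = '/(bug)( +\D)(\d+)/';

// In a double-quoted PHP string, $1/$2/$3 stay literal (variable names
// can't start with a digit), so preg_replace sees them as backreferences.
$replacement = "<a href='$3'>$1$2$3</a>";

$str = preg_replace($pattern, $replacement,
    'Fixed everything. I rock. See bug #12345.');
// $str is now: Fixed everything. I rock. See <a href='12345'>bug #12345</a>.
```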

Pretty cool stuff. I love working with regular expressions, and while this example is incredibly simple, there’s so much stuff you can do with it. Good times.

openchrome in portage

Just another Gentoo PSA: portage now has the latest openChrome X11 driver for VIA chipsets (x11-drivers/xf86-video-openchrome). This driver is really nice because it supports more chipsets than the standard VIA one does.

I shouldn’t get any credit for this one. I’ve actually had an ebuild for this for like 4 months, and procrastinated putting it into the tree. In fact, Donnie (dberkholz) beat me to it. Thanks, man. All I really did was clean up the ebuild and do some testing. Also, of course, thanks to upstream for actually working on the project and getting something successful out the door. Much love.

So there ya go. With that little driver you should be able to do some cool stuff with those Mini-ITX’s that you’ve been waiting to convert into a PVR. Rock on.

temporary backend breakage, lotgd restarted

One other thing I forgot was that in moving around all the database stuff this weekend, I also took care of some maintenance on alan-one, the server that hosts, and  I of course broke the recent scripture atom feeds I was talking about at the same time, but that’s fixed now as well.

I also finally got Legend of the Green Dragon back up and running.  The old MySQL database got corrupted somehow, but it’s been upgraded and is good to go once more.  Unfortunately, I lost the old setup and characters, so you’ll have to start from scratch if you were playing.

new gpnl backend

Nothing new on the frontend, but the backend for GPNL has been almost completely replaced now with my updated changes.  The difference in speed alone is pretty nice.  Where it took about 30 to 45 minutes to import the entire tree, now it only takes around 5.  Plus, I’m importing more stuff this time.  That means a lot more flexibility on my end to play around with the code and do some debugging if I have to.

There are still some minor sorting issues to worry about, but I’ll have those worked out by today.  I’m also working on a really cool proof-of-concept using the postgres database as a backend that will hopefully be ready within a few days.

The next step is to get GPNL to do some more reporting on QA checks.  I need some ideas of what to look for, but there are already a few things I can do just by running system-wide queries against the entire tree.  One issue that has come up recently is splitting DEPEND and RDEPEND correctly for building binary packages.  Given a list of packages to look out for, I can easily report on that.  Another one that phreak came up with was looking for redundant data in the metadata.xml files.  I can also catch that with a simple query, so it’ll be easy to track its progress in getting cleaned up.
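As a toy example of what one of those tree-wide checks could look like, here’s the redundant-metadata idea as a single aggregate query. The schema, columns, and rows are entirely made up, and an in-memory SQLite database stands in for the real postgres backend:

```php
<?php
// Toy sketch: flag packages whose metadata repeats the same entry.
// The metadata table, its columns, and its rows are all hypothetical.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE metadata (package TEXT, herd TEXT)');
$db->exec("INSERT INTO metadata VALUES
    ('app-foo/one', 'desktop'),
    ('app-foo/one', 'desktop'),
    ('app-bar/two', 'base')");

// One aggregate query finds every duplicated (package, herd) pair
// across the whole table -- no per-package scripting needed.
$dupes = $db->query(
    'SELECT package, COUNT(*) AS n FROM metadata
      GROUP BY package, herd HAVING COUNT(*) > 1'
)->fetchAll(PDO::FETCH_ASSOC);
// $dupes lists app-foo/one, whose entry appears twice.
```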

Anyway, I could use some more ideas.  It’s one thing to have all this data gathered, archived and indexed, but it’s another to come up with creative ideas on how to use it.  Looking through the QA bugs usually helps to give me an idea.

random book of mormon chapter atom feeds

I got a poke yesterday on my original random book of mormon chapter post about creating some feeds so that people can pull them themselves. I’ve been meaning to do that for a long time, but always put it off because I’d never written any dynamic RSS or Atom feeds before. I finally sat down and figured it out this morning. It took me about an hour and a half, and I don’t think my XML is perfectly formed, but at least it works. I’ll clean it up when I have more time.

Here are the new feeds. I have one for every volume of scriptures, from the Old Testament to the Pearl of Great Price; it’s all there. I think it’d be fun to add some for the Gospels and Psalms, too.

Right now the feed will update every time you check it, though I’ll probably come back later and change it to only update every 5 minutes or every hour or something.

To be honest, I’m not real proud of the quality of this thing right now, and I’d like to do it a lot better since I think it has some potential (like anonymous user preferences, or something), but the fact is I’ve been putting it off for way too long and I wanted to get something out the door. Aside from that, it helps me to read the scriptures more often by adding a bit of novelty to the mix.

Something else I want to do for the feed is link to the MP3s offered by the LDS Church on their scriptures website. Each feed entry already links directly to the chapter page, but the naming scheme for the MP3s is slightly different, so I’ll have to do a bit of poking around before I can throw that together.

Another idea I’ve been toying with for a long time has been a simple “chapter a day” RSS feed, but with a few options for users. For instance, it’d be trivial to add features like number of chapters or verses per day, the update interval, and where to start reading.

Anyway, there’s a lot of really cool potential here, and I’m open to suggestions if anyone has ideas. With the database nicely normalized (and still lacking a formal release; sheesh, I’m behind), getting at the data is really simple and easy to work with.