ways to make upgrading an old gentoo box fun — or at least entertaining

When you think of Gentoo, you think of bleeding edge, right? The latest, greatest and shiniest? Dang straight! Well, you can also be bleeding, as in losing blood and about to asplode if you don’t get some first aid right away. I kind of like both versions.

dlna upgrade

Well then.

I’ve used Gentoo a lot over the years, and I’ve fixed a lot of old installations. Keeping an old box up and running isn’t necessarily impossible, but I really don’t recommend letting one go more than a year without updates. I do have one desktop that I kind of “snapshot”: I’ll get it to a point where I like it … and then just leave it there.

What usually happens to trigger me into upgrading is either there’s a new package out there that I want to try and that requires a lot of newer libs, or, even more dramatic, I get bored.

This is just a quick howto of how I do it though (and am doing it).

I should mention at this point that this article matches closely the title … making upgrading a box *entertaining* and *fun*. Yee.

Step one: install eix and sync the tree that way.

# emerge eix
# eix-sync

Next, tell portage to just keep going and do whatever it has to. Don’t bother me with the little details, change as much as you can on your own, and I might look in every once in a while if it looks like you’re changing too much:

EMERGE_DEFAULT_OPTS="--keep-going --autounmask --autounmask-write"

Before jumping into world upgrades, always start with the system updates first, but do keep a close eye on these, especially if gcc needs to be upgraded (like it did in my case). So, pause a moment and see what needs updating:

# eix -Iuc --system

Get the really important ones out of the way (glibc, binutils, gcc, etc.). Listing the installed system ones seriously takes less than one console screen. At least look at the thing.

dlna installed system

Once those are safely out of the way, do a quickpkg on everything that’s installed so you have fancy little tarballs for when (not if) something breaks. As in, breaks for you. I’m pro at this. I’ve broken dozens of systems before .. uh .. yeah (and fixed them, too!).

# quickpkg --include-config=y `eix -I# --system`
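If something does break mid-upgrade, those tarballs can be reinstalled straight from portage’s package directory with the binary-package flags. A hedged sketch (the atom here is just an example):

```
# emerge -K sys-libs/zlib
```

The -K (--usepkgonly) flag tells emerge to install from the binary tarball only, without compiling anything.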

Actually, while you’re at it, make sure portage is always saving tarballs of your installed packages by setting this in make.conf:

FEATURES="buildpkg"
Once you’re feeling confident that the very basics are safely upgraded and you can do the whirl-o-fun, this is where it gets really interesting. Tell portage to do upgrades in a random order but, to be nice, to ask you whether you want to do each package or not:

# for x in `eix -Iu# | sort -R`; do emerge -uq --ask $x; done;

Then just check in on your box every few whenevers to see how it’s doing, and approve or skip updates. That’s it.

I meant for this post to be kind of stupid, and it turned into being somewhat serious, so here’s something even more serious: don’t do this just because this is how I do it. The last thing you want is to have to search google for “that one weird gentoo dude who told me how to break my box,” because I won’t be there to save you.

To discredit me even more, I’m also that guy who will throw this in his make.conf:

EMERGE_DEFAULT_OPTS="--jobs=4"

So, yeah, go crazy. But remember, I’ve been breaking stuff much longer than you have … and fixing them too.

Have funnnnnnnnnnnnnnnnnnnnn!

Final notes: If you’re determined to get something useful out of this post, here it is:

  • Always use “quickpkg” if you think you’re about to do something risky.
  • Having “buildpkg” in your make.conf is best practice. Everything goes in “/usr/portage/packages”.
  • “eix -I#” displays installed packages, name only. Use “--system” for system packages, and “--world” for everything.
  • If you ask someone for support, they’re most likely going to tell you to do a fresh install. Use eix to get a list of installed packages so you know what to port over.
  • I can say from experience that upgrading boxes even years out of date is possible … depending on how much patience you have. I really have done it lots of times.
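For that migration point, this is roughly what I’d capture before a reinstall (assuming eix is installed; the filenames are arbitrary):

```
# eix -I# --world > installed-world.txt
# eix -I# --system > installed-system.txt
```

Plain name-only lists like these are easy to feed back into emerge on the new box.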

life of a linux multimedia nerd … it’s fun

I’ve been thinking for a couple of weeks now that maybe it’d be good to post some stuff about multimedia on Linux again. Good for *you* that is, o snap!

Just kidding. I do have a ton of stuff floating around in my head, though, and it’s not all that organized, or helpful without a real way to explain how to use it. Case in point: my awesome (awesome to me, not you, z snap!) DVDs wiki, where I have a looootttt of stuff. A wiki is great for braindumps, but not for writing articles.

I’ve toyed with the idea on and off, and one of the reasons I don’t want to post stuff on my wiki is that the likelihood of someone finding it is pretty low. Another big reason is that I don’t want my braindump wiki to take on *any* sort of expectation of order or outside direction … the site is mainly for me, and browsing around you’ll see that a lot of it is “oh yeah, here’s the encoding settings for this random thing and a flag you won’t want to forget.” So I guess I can’t keep it all in my head.

I don’t feel really qualified to comment on the state of multimedia in Linux too much, not nearly as much as I could years ago when I was doing a lot more development in Gentoo. The reason is that I’ve gone completely in one direction, and one only — DVDs and media centers.

My old MythTV setup from who knows how long ago

I’ve had this dream for years — fifteen, to be exact — of how I wanted my multimedia setup to be. It started when I first found out about MythTV and got into it. I loved the idea of it, but I hated the implementation. I could never get it quite the way I wanted it. But that’s okay. What really grew out of those years of experience was learning a lot about multimedia in general. What I find really fascinating is chasing the dream of how to get where I want, and all the little stops on the way … an example being how I play with containers and codecs that I have no intention of actually using, just so I can see what they do and don’t have.

My happy little media center was always just slightly out of reach, and my library of TV shows and movies grew and grew over the years as well. The thing that made it finally happen was a drop in prices of hardware.

current hardware setup

You have to realize that ten years ago, when you had 250 DVDs and were lucky to have 250 GB of hard drives (combined), that didn’t leave you a lot of options. And from day one, my goal has always been to have **all** of my library available in **one** place. No way that was gonna happen. So my original goal could never have been met early on, simply by logistics alone (to say nothing of encoding).

I am also really, really picky about how I like things to be, and so it legit does take me years to get used to the idea of some things. For instance, it wasn’t until about a year ago that I finally conceded that re-encoding my DVDs from MPEG2 to MPEG4 was okay. I’ve always held firm on the idea that I prefer, above all, quality as close to the original as possible — the holy grail being that it’s not reencoded *at all*. Remuxing I was always totally fine with — as long as it was a container I was fond of, but that’s a whole nother rabbit hole.

Interestingly enough, what finally convinced me that encoding was okay (going in this random direction) was that there is source material that *needs* it before it can look presentable the way I like it. Some sources are interlaced or telecined, and some need to be cropped. And that’s about it. But what really got me changing my mind and direction was … again … hardware.

I bought a new Sony TV a few years back (about three, I think) that was running Android TV. I remember I got it fairly cheap, brand new, at about half the retail cost everything else was going for. It’s also a 4k display. Well, long story short, I didn’t like it. It was too bright, mostly, and the idea that I had to boot up my TV and *wait for it* was beyond unacceptable. What’s the point of having a multimedia library on demand if I can’t watch it when I demand to?? (I have since realized that it only needs to boot up once, and then stays up … but hey .. first impressions are everything, and I was not impressed.)

So I would use it and then not use it, try it and then not try it, on and off for three years. It was too big to be a monitor, and smaller than my rear-projection TV, so there was no real practical purpose for it.

I call situations like that “a solution without a problem.” I had it, but I didn’t need it, and when I tried to create a need for it, it wasn’t able to duplicate what I wanted, or do anything better.

The whole point of this though, is how it’s actually hardware that has changed everything for me.

Jumping back a bit to the storage problem. Hard drives just kept getting cheaper and cheaper, and I kind of woke up one day and had 1 TB of hdd space that I could use. Not bad! However, that was not nearly enough to reach my golden dream of having all my content as untouched MPEG2 video and Dolby Digital audio. Nope. As my friends can attest, getting me to change my mind is incredibly difficult, if not impossible. My media library was huge — just storing Star Trek: The Next Generation was massive. To be specific, all seven seasons are 384,352 MB. That’s over a third of my space just to have one series on there. Hmm. Nope.

So, I had a couple of choices: encode my video (bleh), store only parts of it (bleh), or buy more hard drive space (nopes). Once again, I find myself pretty frustrated.

My personal website to manage all my DVDs.

I also need to interject in here that it was about two years ago that I got diagnosed with OCD [Edit: I realized later that since I reference this a few times, it’d make sense to talk about it in detail, so added more info at the end of the post]. By two different professionals over a six-month period. The first time I heard it, I scoffed at the idea and said there’s no way that would apply to me. The second time, I still scoffed. But as I thought more about it over time, I realized how many ways it affected my life. Typically once I see how things *are* then I’m able to deal with them much better. I realized that it was my brain that wanted to be completely rigid, and that there were other options out there that could meet my basic requirements, even if my goal of perfection is not possible.

“If you’re watching, it’s working,” is a motto I use a lot when it comes to testing and setting up my media library. It basically means that none of the chasing of ideals is relevant in the least if I’m not actually using the tools I’m building. It’s like building a perfect gravestone for yourself, but never considering that you’ll die and be buried there. There’s no practical point to it, other than to have it exactly how you want it. That’s what I was doing — perfecting my little graveyard so that everything is shiny. Media libraries are meant to be watched, though, not put on display (another thing I had to learn over the years), and since I wasn’t reaching my goal, it was time to try something different. More compromises. Now that I realized I had OCD, though, it became much simpler.

Back to hardware. It played another big part when I got a four-core desktop about five years or so ago. I finally had something that could encode video at a decent rate, using x264. I think it ran at around 60 fps or so to encode video. That was awesome. That meant that I could wait about double the length of the video, and come back and it’d be done. Before this time, I’d only had dual cores, and encoding at 12 to 15 fps is kind of … a buzzkill … if you start doing the math on how long it’s going to take to encode one series.

The better hardware helped out not because I could encode my library, but because I could *test* encoding multimedia. I have spent years tinkering with everything, my idea of perfection evolving, and having intense, fanatical debates with myself about why I shouldn’t use this codec, container, profile, setting, and so on (which, I realize now, was also caused by OCD). So while I couldn’t necessarily have the *library* I wanted, I could easily chase down the dream of the *standard* I wanted. And that took a really, really, really, really long time. Once again, though, it was hardware that saved the day! But not without wrecking it first.

For some reason, over all this time, it never actually occurred to me to get better, newer hardware to *watch* my media. I had my trusty old Sony DVD player that could remember the playback position of six discs (SIX!!), and I loved it, because one of the requirements of my perfect box was being able to resume playback of my media. Early, early on, back in 2002, I was already writing my own scripts for resuming playback of both a video and a series. I’d have MythTV call a wrapper script, and it’d resume perfectly. Great idea, one of my goals met, but … I wasn’t watching media. So it kind of died on the vine.

My trusty old (22 years old) Sony CRT TV with the Blu-ray player on top.

If you think, after reading this far, that I’ve gone down the rabbit hole, believe me, we have not even begun. This whole thing is a high-level description.

Maybe it’d be better if I didn’t go into details, and just list out some of the things I played with:

* Hacking an open-source DLNA server so that it could resume playback
* Using the Opera store on my Blu-ray player so that I could stream video that way
* Beginning to write my own website that could stream video to my PSP
* Writing code that would detect audio breaks and commercial breaks in TV shows so that I could generate chapter points (this was actually an amazing project)
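That last project predates it, but the same idea can be sketched nowadays with ffmpeg’s silencedetect filter; this is not the code I wrote back then, and the thresholds here are guesses:

```
# ffmpeg -i episode.mkv -af silencedetect=noise=-50dB:d=2 -f null - 2>&1 | grep silence_start
```

Each silence_start line it logs (audio below -50 dB for two seconds or more) is a candidate commercial break to turn into a chapter point.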

Ben 10 PSP
This video wasn’t streamed, it’s sitting on the memory card. I really did start working on a website though where I could stream video from its web browser.


All of that is insane, but again, OCD is all about being *locked* into one thing. Part of the mental circus is that there is only *one* solution. Also, that list above is not complete. But that one solution features only the hardware that I really like. Things have to be exactly the way I want, even if that means using an old Blu-ray player from five years ago that can play videos in a Matroska container with VobSub and chapters, but only if it’s plugged in over USB (I legit got an entire series encoded and put on a thumb drive just so I could watch it this way .. and then didn’t).

While this may all seem like a huge waste of time, the reality is that there are huge payoffs — I learn what I want by learning what is out there, researching those paths, and accepting or rejecting them. Because I’m so picky, every piece of new information I picked up might be the solution I was looking for, and so it was scrutinized and tested. So, I learned a lot, and that’s where all my multimedia know-how has come from: either from things going in a direction I didn’t want, or, more usually, because it’s fun to tinker and see what I can do. In fact, one of my favorite hobbies is to get a device and figure out, in a lot of detail, *exactly* what video specs it supports (for example, did you know that the PS3 supports different kinds of video on USB vs. streaming over the network?). The point is that even if nothing was what I wanted, I knew all the reasons why. That’s knowledge, yo.

My old Samsung phone

Remember how I bought that Sony TV and then never really found a use for it? That’s a common story with me — I’ll get *some* hardware because it looks fun (and it is), but ultimately end up with something I can’t really use. So I had this tiny little Zotac PC that I bought years ago, but couldn’t really do anything with. It didn’t have wireless, so I couldn’t just put it anywhere (which was an oversight on my part … when I bought it I told myself “nah, I’d never need that!”), and it only had one Ethernet port, so I couldn’t use it as a router, but it *did* have a sexy nvidia card that could do hardware decoding. And it was fanless, which made it even sexier. Still, though, a solution without a problem. The problem I had, again, was that there was no storage space … or not enough.

I’d grown up with desktops, so of course all I ever had were the huge 3.5” drives. I had accumulated more than a terabyte by this point in my life (probably more like two, spread out across a lot of disks), but it didn’t really *do* me much good. Sure, I could store video, but if I was going to have something doing video playback on my media center, the idea of throwing a desktop behind my TV and hearing the fan whir was not my idea of an enjoyable media experience (ironically, part of my enjoyable experience is to turn the volume up so loud that I wouldn’t be able to hear the moon fall from the sky, much less my case fan).

What happened, though, somewhere along the line (this part is a bit fuzzy, so I’m guessing at this point) is that I wanted to swap the hard drive in my PS3 for an SSD. I had already upgraded the drive in there from the original 250 or so GB to 750 GB. I was hardly using *any* space on it at all though .. maybe 200 max … if that. Basically, I had all this storage that I wasn’t using, and hey … why not throw that into my little Zotac box? I did that, and then it hit me … I suddenly had a lot of space on a tiny little fanless box that could sit behind my TV and not make noise or annoy me, it had enough space to hold some media, *and* to make it even better … it had a sexy nvidia card that could do hardware decoding of MPEG4 *AND* MPEG2 … and … hmm … is this possibly the birth of my new media center? Maybe? Maybe …. ??

It wasn’t.

There was one problem. A remote control.

You see, another part of my big perfection problem is that I had to have only *one* remote that I could use. Ever. One! Just one. The end. No bargaining on that point. There was no way to control that little bugger, even though I could put media on it. This was where I came back to looking at DLNA servers a bit more closely, and started delving into what possibilities it’d hold.

I liked it, and I didn’t like it. I liked it because it had all my folders and I could easily *browse* my library, but it just wasn’t … fancy. There was no cover art, and resuming playback with my DLNA clients only worked within a single powered-up session. So, it was kind of a pain. But it was close! For the first real time, I could actually get a lot of media on there. So that was a good start. I had a proof of concept, at least.

Resuming playback had always been a big goal of mine, and at this point I could sacrifice continuing on a series, because I’d just remember where I left off and select the next episode. That’s one concession I was willing to make, but there was also one workaround I could do — for individual episodes, I could split the media files into chapters and use that as a poor man’s resume system. It was a good idea, in theory, and I probably would have pursued it further until I ran into another problem — a lot of shows on DVD don’t have chapter markers … which means I’d have to go through them and find the breaks myself. What if I just cut it into five-minute increments, though? No. Maybe the show has the same commercial breaks in every episode? Nope. Maybe, test, maybe, test, maybe. Nope, nope, nope. Nothing was working out.

Oh, well. I hadn’t had a solution that really met my needs in a long time, so I wasn’t really disappointed per se. I had just found another solution that didn’t fix my problem. It was fun to play with, though, and I did a lot of documentation on what would work streaming vs. local playback (another fun intricacy, my Blu-ray player would support different formats + codecs over DLNA than it would on USB .. same with the PS3).

Blah de blah de blah. If you’re getting bored reading by this point, coincidentally I’m getting bored by writing it, so I’ll wrap it up.

At some point, I discovered Plex. I don’t remember all of the details about how I found it or got started with it, and I think it’s because it all happened *so* fast. It probably overnight became the solution I was looking for. Here were some of the features that just slapped me in the face:

* Automatically fetches metadata
* I can *override* the metadata and use mine if I wanted
* PS3 app
* Web frontend that was sexy-go-nice
* Most importantly, direct hardware playback … no encoding!

Since I had played with the PS3 and (by this time) my PS4 with DLNA so much, I had really warmed up to the idea of using my game controller as a second remote. It didn’t bother me. Plus, I’d use it to stream Netflix and Amazon Video since I liked it better than my Blu-ray player, and it worked out well. So, the timing was perfect and everything just kind of fell into place. And that’s what I’ve been using since.

I’ll go into more details in other posts about *how* I use Plex and *how* I encode stuff so I get direct hardware playback and all that stuff, but this post is just all about the journey.

Suffice it to say, that after fifteen years, I finally got the media setup I wanted. And I’ll explain how.

“If I’m watching, it’s working.” Well, it’s working. :)

My current setup!

Edit: Life with OCD!

  1. I have meds now that help me not super-hyper-focus to a point where it’s detrimental in my daily life.
  2. Being labeled with a mental illness, for me, “gives a name to the beast,” and by so doing, it helps me realize the source behind some of the madness. That’s what happened here. I still “have it,” but I have a much better perspective knowing it’s there, and am able to circumvent it by acknowledging its presence and “input” on ideas.
  3. OCD stands for Obsessive Compulsive Disorder — however, did you know that those are two different disorders lumped into one? It’s not unusual for a diagnosis to cover a spectrum of issues. There’s an obsessive disorder, and there’s a compulsive disorder. Someone may have one, the other, or both. In my case, I have the first. I get obsessed with how things are in my life, the most common being trying to reach a level of perfection — this entire post illustrates that point. It doesn’t affect me too much beyond that, since it’s highly personalized to what I’m obsessive about, which is almost always the way things are set up for things that I highly care about. Meaning, not everything.
  4. It’s common, when someone hears about OCD, to picture a person compulsively washing their hands, scrubbing over and over until they bleed. That’s compulsive. I don’t have that, thank goodness, and I can only imagine how stressful it must be for the people that do.

znurt.org cleanup

So, I finally managed to get around to fixing the backend of znurt.org so that the keywords would import again.  It was a combination of the portage metadata location moving, and a small bit of sloppy code in part of the import script that made me roll my eyes.  It’s fixed now, but the site still isn’t importing everything correctly.

I’ve been putting off working on it for so long, just because it’s a hard project to get to.  Since I started working full-time as a sysadmin about two years ago, it killed off my hobby of tinkering with computers.  My attitude shifted from “this is fun” to “I want this to work and not have me worry about it.”  Comes with the territory, I guess.  Not to say I don’t have fun — I do a lot of research at work, either related to existing projects or new stuff.  There’s always something cool to look into.  But then I come home and I’d rather just focus on other things.

I got rid of my desktops, too, and soon afterwards I didn’t really have anything to hack on.  Znurt went down, and I didn’t really have a good development environment anymore.  On top of that, my interest in the site had waned, and the whole thing just added up to a pile of indifference.

I contemplated giving the site away to someone else so that they could maintain it, as I’ve done in the past with some of my projects, but this one, I just wanted to hang onto it for some reason.  Admittedly, not enough to maintain it, but enough to want to retain ownership.

With this last semester behind me, which was brutal, I’ve got more time to do other stuff.  Fixing Znurt had *long* been on my todo list, and I finally got around to poking it with a stick to see if I could at least get the broken imports working.

I was anticipating it would be a lot of work, and hard to find the issue, but the whole thing took under two hours to fix.  Derp.  That’s what I get for putting stuff off.

One thing I’ve found interesting in all of this is how quickly my memory of working with code (PHP) and databases (PostgreSQL) has come back to me.  At work, I only write shell scripts now (bash) and we use MySQL across the board.  Postgres is an amazing database, and even after not using it regularly in a while, it all comes back to me.  I love that database.  Everything about it is intuitive.

Anyway, I was looking through the import code, and doing some testing.  I flushed the entire database contents and started a fresh import, and noticed it was breaking in some parts.  Looking into it, I found that the MDB2 PEAR package has a memory leak in it, which kills the scripts because it just runs so many queries.  So, I’m in the process of moving it to use PDO instead.  I’ve wanted to look into using it for a while, and so far I like it, for the most part.  Their fetch helper functions are pretty lame, and could use some obvious features like fetching one value and returning result sets in associative arrays, but it’s good.  I’m going through the backend and doing a lot of cleanup at the same time.

Feature-wise, the site isn’t gonna change at all.  It’ll be faster, and importing the data from portage will be more accurate.  I’ve got bugs on the frontend I need to fix still, but they are all minor and I probably won’t look at them for now, to be honest.  Well, maybe I will, I dunno.

Either way, it’s kinda cool to get into the code again, and see what’s going on.  I know I say this a lot with my projects, but it always amazes me when I go back and I realize how complex the process is — not because of my code, but because there are so many factors to take into consideration when building this database.  I thought it’d be a simple case of reading metadata and throwing it in there, but there’s all kinds of things that I originally wrote, like using regular expressions to get the package components from an ebuild version string.  Fortunately, there’s easier ways to query that stuff now, so the goal is to get it more up to date.
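To give a flavor of it, here’s the version-splitting idea as a quick shell sketch (the real import code is PHP, and this regex is a simplification of portage’s actual version rules):

```shell
# Split a full ebuild atom into category, package name, version, and
# revision -- the same job the old regex-based import code was doing.
atom="app-editors/vim-8.2.0360-r1"
category=${atom%%/*}      # everything before the slash: app-editors
pvr=${atom#*/}            # name-version-revision: vim-8.2.0360-r1
version=$(printf '%s\n' "$pvr" | sed -E 's/^.*-([0-9][0-9a-z._]*)(-r[0-9]+)?$/\1/')
revision=$(printf '%s\n' "$pvr" | sed -nE 's/^.*-(r[0-9]+)$/\1/p')
package=${pvr%-$version*}
echo "$category $package $version ${revision:-r0}"
```

This is the kind of thing that tools like qatom (from portage-utils) handle properly now, which is what I mean by easier ways to query that stuff.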

It’s kinda cool working on a big code project again.  I’d forgotten what it was like.

gentoo, openrc, apache and monit – proper starting and stopping

I regularly use monit to monitor services and restart them if needed (and possible).  An issue I’ve run into with Gentoo, though, is that openrc doesn’t act as I expect it to.  openrc keeps its own record of the state of a service, and doesn’t look at the actual PID to see if it’s running or not.  In this post, I’m talking about apache.

For context, it’s necessary to share what my monit configuration looks like for apache.  It’s just a simple ‘start’ for startup and ‘stop’ command for shutdown:

check process apache with pidfile /var/run/apache2.pid
  start program = "/etc/init.d/apache2 start" with timeout 60 seconds
  stop program = "/etc/init.d/apache2 stop"

When apache gets started, there are two things that happen on the system: openrc flags it as started, and apache creates a PID file.

The problem I run into is when apache dies for whatever reason, unexpectedly.  Monit will notice that the PID doesn’t exist anymore, and try to restart it, using openrc.  This is where things start to go wrong.

To illustrate what happens, I’ll duplicate the scenario by running the command myself.  Here’s openrc starting it, me killing it manually, then openrc trying to start it back up using ‘start’.

# /etc/init.d/apache2 start
# pkill apache2
# /etc/init.d/apache2 status
* status: crashed
# /etc/init.d/apache2 start
* WARNING: apache2 has already been started

You can see that ‘status’ properly returns that it has crashed, but when running ‘start’, it thinks otherwise.  So, even though an openrc status check reports that the process is dead, ‘start’ only consults openrc’s own internal state to decide whether it is running.

This gets a little weirder: if I run ‘stop’, the init script will recognize that the process is not running, and resets openrc’s status to stopped.  That is actually a good thing, and it makes running ‘stop’ a reliable command.

Resuming the same state as above, here’s what happens when I run ‘stop’:

# /etc/init.d/apache2 stop
* apache2 not running (no pid file)

Now if I run it again, it checks both the process and the openrc status, and gives a different message, the same one it would as if it was already stopped.

# /etc/init.d/apache2 stop
* WARNING: apache2 is already stopped

So, the problem this creates for me is that if a process has died, monit will not run the stop command, because it’s already dead, and there’s no reason to run it.  It will run ‘start’, which will insist that it’s already running.  Monit (depending on your configuration) will try a few more times, and then just give up completely, leaving your process completely dead.

The solution I’m using is to tell monit to run ‘restart’ as the start command, instead of ‘start’.  The reason is that restart doesn’t care whether openrc thinks the service is stopped or started; it will successfully get it running again either way.
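In monit terms, that just means swapping the start program in the configuration shown earlier:

```
check process apache with pidfile /var/run/apache2.pid
  start program = "/etc/init.d/apache2 restart" with timeout 60 seconds
  stop program = "/etc/init.d/apache2 stop"
```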

I’ll repeat my original test case, to demonstrate how this works:

# /etc/init.d/apache2 start
# pkill apache2
# /etc/init.d/apache2 status
* status: crashed
# /etc/init.d/apache2 restart
* apache2 not running (no pid file)
* Starting apache2 …

I don’t know if my expectations of openrc are wrong or not, but it seems to me like it relies on its internal status in some cases instead of checking whether the actual process is running.  Monit takes on that responsibility, of course, and it’s good to have multiple things working together, but I wish openrc did a bit stricter checking.

I don’t know how to fix it, either.  openrc has arguments for displaying debug and verbose output.  It will display messages on the first run, but not the second, so I don’t know where it’s calling stuff.

# /etc/init.d/apache2 -d -v start
<lots of output>
# /etc/init.d/apache2 -d -v start
* WARNING: apache2 has already been started

No extra output on the second one.  Is this even a ‘problem’ that should be fixed, or not?  That’s kinda where I’m at right now, and just tweaking my monit configuration so it works for me.

freebsd, quick deployments, shell scripts

At work, I support three operating systems right now for ourselves and our clients: Gentoo, Ubuntu and CentOS.  I really like the first two, and I’m not really fond of the other one.  However, I’ve also started doing some token research into *BSD, and I am really fascinated by what I’ve found so far.  I like FreeBSD and OpenBSD the most, but those two and NetBSD are similar enough in a lot of ways that I’ve been shuffling between focusing solely on FreeBSD and occasionally comparing the other two alongside it.

As a sysadmin, I have a lot of tools I’ve put together to make sure things get done quickly.  A major part of this is documentation, so I don’t have to keep everything in my head alone — which I can do, up to a point; it just gets really hard trying to remember certain arguments for some programs.  In addition to reference docs, I sometimes use shell scripts to automate certain tasks that I don’t need to watch over so much.

In a typical situation, a client needs a new VPS setup, and I’ll pick a hosting site in a round-robin fashion (I’ve learned from experience never to put all your eggs in one basket), then I’ll use my reference docs to deploy a LAMP stack as quickly as possible.  I’ve refined my methods pretty well, so deploying servers goes really fast — in the case of an Ubuntu install, I can have the whole thing set up in close to an hour.  And when I say “setup” I don’t mean “having all the packages installed.”  I mean everything installed *and* configured, ready with a user shell and database login, so I can hand over access credentials and walk away.  That includes things like mail server setup, system monitoring, correct permissions and modules, etc.  Getting it done quickly is nice.

However, in those cases of quick deployments, I’ve been relying on my documentation, and it’s mostly just copy and paste commands manually, run some sed expressions, do a little vim editing and be on my way.  Looking at FreeBSD right now, and wanting to deploy a BAMP stack, I’ve been trying things a little differently — using shell scripts to deploy them, and having that automate as much as possible for me.

I’ve been thinking about shell scripting lately for a number of reasons.  One thing that’s finally clicked with me is that my skill set isn’t worth anything if a server actually goes down.  It doesn’t matter if I can deploy it in 20 minutes or three days, or if I manage to use less memory or use Percona or whatever else if the stupid thing goes down and I haven’t done everything to prevent it.

So I’ve been looking at monit a lot closer lately, which is what I use to do systems monitoring across the board, and that works great.  There’s only one problem though — monit depends on the system init scripts to run correctly, and that isn’t always the case.  The init scripts will *run*, but they aren’t very fail-proof.
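To make the dependency concrete, here’s a sketch of the kind of monit check I mean (the paths and port here are assumptions, not my actual config); note that both the start and stop programs are just the init script, which is exactly where the trouble comes in:

```
check process apache with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 start"
    stop program  = "/etc/init.d/apache2 stop"
    if failed host 127.0.0.1 port 80 protocol http then restart
```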

As an example, Gentoo’s init script for Apache can be broken pretty easily.  If you tell it to start, and Apache starts running but crashes after initialization (there are specifics, I just can’t remember them off the top of my head), the init script thinks the web server is running simply because it managed to run its own commands successfully.  So the init system thinks Apache is running, when it’s not.  The side effect is that if you try to automatically restart it (as monit will do), the init scripts will insist that Apache is already running, so executing a restart won’t work, because running stop doesn’t work, and so on and so forth.  (For the record, I think it’s fair that I’m using Apache as an example, because I plan on fixing the problem and committing the updates to Gentoo when I can.  In other words, I’m not whining.)
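What a check like that really needs is to stop trusting the init system’s bookkeeping entirely. A minimal sketch of the idea (this is not the Gentoo code; the pidfile path and function name are my assumptions):

```shell
# Don't ask the init system whether Apache is up; check the pidfile
# against a live process instead.
PIDFILE=/var/run/apache2.pid

apache_alive() {
	[ -s "$PIDFILE" ] || return 1              # no pidfile, not running
	kill -0 "$(cat "$PIDFILE")" 2>/dev/null    # is that PID actually alive?
}

if ! apache_alive; then
	echo "apache is not running, whatever the init script says"
fi
```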

Another reason I’m looking at shell scripting is that none of the three major BSDs (FreeBSD, NetBSD, OpenBSD) ship with bash by default.  I think all three of them ship with either csh or tcsh, and one or two of them have ksh as well.  But they all have the original Bourne shell.  I’ve tried my hand at doing some basic scripting in csh, because it’s the default on FreeBSD, and I thought, “hey, why not, it’s best to use the default tools it ships with.”  I don’t like csh, and it’s confusing to script for, so I’ve given up on that dream.  However, I’m finding that writing for the Bourne shell is not only really simple, but has the added benefit of being portable to *all* the systems I use it on.
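As a tiny illustration of what that portability buys (the classify function is just my example, not anything standard):

```shell
# Pure Bourne/POSIX constructs: no [[ ]], no ==, no arrays.
# This runs identically under FreeBSD /bin/sh, dash, bash, and ksh.
classify() {
	case "$1" in
		FreeBSD|NetBSD|OpenBSD) echo "BSD" ;;
		Linux)                  echo "Linux" ;;
		*)                      echo "unknown" ;;
	esac
}

classify "$(uname -s)"
```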

All of this brings me back to the point that I’m starting to use shell scripts more and more to automate system tasks.  For now, it’s system deployments and system monitoring.  What’s interesting to me is that while I enjoy programming to fix interesting problems, all of my shell scripting has always been very basic: if this, do that, and that’s about it.  I’ve been itching to patch up the init scripts for Gentoo (Apache is not the only service with strange issues like that — again, I can’t remember which, but I know I ran into some other funky issues), and working on (more) complex scripts like that pushes my little knowledge a bit.

So, I’m learning how to do some shell scripting.  It’s kind of cool.  People always talk, in general, about how UNIX-based systems / clones are so powerful because of how shell scripting works: piping commands, outputting to files, etc.  I know my way around the basics well enough, but now I’m running into interesting problems that are pushing me a bit.  I think that’s really cool too.  I finally had to break down the other day and figure out how in the world awk actually does anything.  Once I wrapped my head around it a bit, it made more sense.  I’m getting better with sed as well, though right now a lot of my usage is basically clubbing things to death.  And just the other day I learned some cool options that grep has, like matching an exact string on a line without regular expressions (I mean, ^ and $ is super easy).
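For reference, the bits I just mentioned boil down to one-liners like these (the sample strings are made up):

```shell
# grep: match an exact line with no regular expressions at all
# (-F = fixed string, -x = the whole line must match)
printf 'foo\nfoobar\n' | grep -Fx 'foo'        # prints "foo", not "foobar"

# awk: print the second whitespace-separated field
echo 'alpha beta gamma' | awk '{ print $2 }'   # prints "beta"

# sed: my usual club-things-to-death substitution
echo 'PermitRootLogin yes' | sed 's/yes/no/'   # prints "PermitRootLogin no"
```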

Between working on FreeBSD, trying to automate server deployments, and wanting to fix init scripts, I realized that I’m tackling the same problem in all of them: writing good scripts.  When it comes to programming, I have some really high standards for my scripts, almost to the point where I could be considered obsessive about it.  In reality, I simply stick to some basic principles.  One of them is that, under no circumstances, can the script fail.  I don’t mean in the sense of running out of memory or the kernel segfaulting or something like that.  I mean that any script should always anticipate and handle any kind of arbitrary input where it’s allowed.  If you expect a string, make sure it’s a string, and that its contents are within the parameters you are looking for.  In short, never assume anything.  It might seem like that makes writing scripts take longer, but for me it’s always been a standard principle; it’s just part of my style. Whenever I’m reviewing someone else’s code, I’ll point to some block and say, “what’s gonna happen if this data comes in incorrectly?” to which the answer is “well, that shouldn’t happen.”  Then I’ll ask, “yes, but what if it *does*?”  I’ve upset many developers this way. :)  In my mind, could != shouldn’t.
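A trivial example of what I mean, in Bourne shell (is_port is a made-up helper for illustration):

```shell
# Never assume the input is sane: verify it's a number AND in range
# before using it.  Succeeds only for an integer between 1 and 65535.
is_port() {
	case "$1" in
		''|*[!0-9]*) return 1 ;;   # empty, or contains a non-digit
	esac
	[ "$1" -ge 1 ] && [ "$1" -le 65535 ]
}

is_port 8080 && echo "ok"      # prints "ok"
is_port banana || echo "bad"   # prints "bad"
```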

I’m looking forward to learning some more shell scripting.  I find it frustrating when I’m trying to google some weird problem I’m running into, though, because it’s so difficult to find specific results that match my issue.  It usually ends up with me just sorting through man pages to see if I can find something relevant.  Heh, I remember when I was first starting to do some scripting in csh, and all the search results I got were about why I shouldn’t be using csh.  I didn’t believe them at first, but after banging my head against the wall a few times, I’ve realized the error of my ways.

In somewhat unrelated news, I’ve started using Google Plus lately to do a headdump of all the weird problems I run into during the day doing sysadmin-ny stuff.  Here’s my profile if you wanna add me to your circles.  I can’t see a way for anyone to publicly view my profile or posts though, without signing into Google.

Well, that’s my life about right now (at work, anyway).  The thing I like the most about my job (and doing systems administration full time in general) is that I’m constantly pushed to do new things, and learn how to improve.  It’s pretty cool.  I likey.  Maybe some time soon I’ll post some cool shell scripts on here.

One last thing, I’ll post *part* of what I call a “base install” for an OS.  In this case, it’s FreeBSD.  I have a few programs I want to get installed just to get a familiar environment when I’m doing an install: bash, vim and sometimes tmux.  Here’s the script I’m using right now, to get me up and running a little bit.  [Edit: Upon taking a second look at this — after I wrote the blog post, I realized this script isn’t that interesting at all … oh well.  The one I use for deploying a stack is much more interesting.]

I have a separate one that is more complex that deploys all the packages I need to get a web stack up and running.  When those are complete, I want to throw them up somewhere.  Anyway, this is pretty basic, but should give a good idea of the direction I’m going.  Go easy on me. :)

Edit: I realized the morning after I wrote this post that not only is this shell script really basic, but I’m not even doing much error checking.  I’ll add something else in a new post.

#!/bin/sh
# * Runs using Bourne shell
# * shells/bash
# * shells/bash-completion
# * editors/vim-lite

# Install bash, and set as default shell
if [ ! -e /usr/local/bin/bash ] ; then
	echo "shells/bash"
	cd /usr/ports/shells/bash || exit 1
	make -DBATCH install > /dev/null 2>&1
	if [ $? -ne 0 ]; then
		echo "make install failed"
		exit 1
	fi
else
	echo "shells/bash - found"
fi

if [ "$SHELL" != "/usr/local/bin/bash" ] ; then
	chsh -s /usr/local/bin/bash > /dev/null 2>&1 || echo "chsh failed"
fi

# Install bash-completion scripts
if [ ! -e /usr/local/bin/bash_completion.sh ] ; then
	echo "shells/bash-completion"
	cd /usr/ports/shells/bash-completion || exit 1
	make -DBATCH install > /dev/null 2>&1
	if [ $? -ne 0 ]; then
		echo "make install failed"
		exit 1
	fi
else
	echo "shells/bash-completion - found"
fi

# Install vim-lite
if [ ! -e /usr/local/bin/vim ] ; then
	echo "editors/vim-lite"
	cd /usr/ports/editors/vim-lite || exit 1
	make -DBATCH install > /dev/null 2>&1
	if [ $? -ne 0 ]; then
		echo "make install failed"
		exit 1
	fi
else
	echo "editors/vim-lite - found"
fi

# If using csh, a rehash is needed so the new binaries are found in PATH
# (a script can't rehash its parent shell, so just say so)
if [ "$SHELL" = "/bin/csh" ] ; then
	echo "run 'rehash' to pick up the new binaries"
fi
multimedia reference guide: x264

It seems a little weird to me to post something on my blog that I already posted on our blog at work, but whatever. I figured it’d get more visibility if I wrote about it, since I already cover multimedia stuff sometimes, plus I’m excited about this thing anyway. :)

At work, I get to do all kinds of stuff, and working with video is one of them. I threw together an x264 reference guide on my devspace covering what the settings of each preset change, compared to the defaults. I’ve even translated it to Spanish! Vamos, che!

The thing I like about this is that it sheds light on which areas to start tweaking for higher quality gains, and which ones to stay away from. For instance, the settings that are changed on the ultrafast preset should never be messed with at all, if you want a good outcome. And on the flipside, the ones under the placebo preset are going to slow down the encode greatly if you start beefing them up.

Generally speaking, though, it’s best to use the presets set by the developers. Every now and then I get the idea in my head that I can somehow make things better just by tweaking a few of the variables. That never works out too well. I always end up spending like 60 minutes encoding a 5 minute video, and then I can’t even tell the difference afterwards. Whoopsie fail.

Next, I want to put together a similar guide for Handbrake presets, both to compare their presets to each other, and to show how to duplicate the same x264 settings using the x264 cli encoder and libav. The reason is that a lot of times I really like the output Handbrake delivers, and I want to duplicate it with other encoders, but I’m not sure how. That’s what I’m planning to target.

digital trike

So, I don’t normally talk about work on my blog, just because … hey, who wants to work? I’d rather surround myself with Reese’s cups and watch Roger Ramjet. I totally recommend it.

Anyway, at Digital Trike, my current depriver of candy and animated features, I’m doing full time systems administration. It turns out I enjoy doing that quite a bit. One thing they’ve let me start doing is writing blog posts that are howtos covering topics related to Linux. I’m going to be doing mostly Gentoo posts, and some stuff related to CentOS as well, since we use both of them in development and production (yay, Gentoo!).

I just posted my first entry on their blog, which covers setting up collectd on both distros. I’ll warn you, it’s a bit lengthy, but I tried to cover most of the bases as well as I could, while keeping the setup pretty generic. It’s designed to be a two-parter, this being the first one, and I’ll cover CGP, a PHP frontend to actually see the stats probably next week sometime.

Lemme know what you guys think, I’d totally be up for some feedback. :)

git and acl effective mask

I have run into this funky problem with ACL and git at work, and I cannot for the life of me figure it out. I’m not sure if it’s a bug, wrong expectation on my part, or just plain ole user error.

I have a directory that is setting the default ACL permissions. Those are being inherited just fine by children (files and directories), including the effective mask. However, when I clone a new repository using git, the default effective mask is ignored. And I can’t figure out why.

Specifically, here’s what I’m looking at.

Setting the permissions:

# mkdir testing
# setfacl -m g:users:rwx testing
# setfacl -m d:g:users:rwx testing
# setfacl -m m:rwx testing
# setfacl -m d:m:rwx testing

The ACL permissions:

$ getfacl testing
# file: testing
# owner: root
# group: root

You can see that the default effective masks are properly set.

When I create a sub-directory, its ACL settings are inherited properly as well:

$ mkdir dir
$ getfacl dir
# file: dir
# owner: steve
# group: users

That works great and dandy and fine.

The problem I run into is when I use git to clone a repo:

$ git clone git@example.com:shell/shell.git
$ getfacl shell
# file: shell
# owner: steve
# group: users
group:users:rwx #effective:r-x

The effective mask and the default effective mask have dropped from the default (rwx) to something else (r-x), and I have *no* idea why.

Hopefully someone out there may have a clue. I’m stumped.

wrapper script for disc_id

I wrote a little wrapper script for disc_id tonight, available here. disc_id is a little binary that ships with libdvdread, or at least, it used to in older versions.

I use disc_id to give me a unique 32-character string of a DVD, so I have an identifier to track them by in my database of DVDs.

I don’t know if it’s just me or not, but my DVD drives have issues when polling the devices. Once I insert a disc, it takes a few seconds for it to register completely so I can access it. However, programs that access the drive will think it’s ready to respond sooner than it actually is, and will die unexpectedly. So what I needed was a way to get the disc id without worrying about whether or not the drive has finished registering.

I just call my little script dvd_id and it is simply a small wrapper that checks the exit code of the disc_id binary. If it doesn’t work the first time, it sleeps for one second and tries again, then repeats the process until it gets a successful exit code of zero.

That’s it. Pretty simple, but like all little scripts, you really tend to depend on them.
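Stripped to its core, it’s just a retry-until-zero loop; generalized, it looks something like this (retry_cmd is my name for the pattern, not what’s in the script):

```shell
# Run a command over and over, pausing a second between attempts,
# until it exits zero.
retry_cmd() {
	until "$@"; do
		sleep 1
	done
}

# e.g.: retry_cmd /usr/local/bin/disc_id /dev/dvd
```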
#!/bin/bash
# dvd_id: wrapper around disc_id that retries until the drive is ready
DEVICE=$1

if [[ -z $DEVICE ]]; then
	echo "no device given" >&2
	exit 1
fi

if [[ ! -b $DEVICE ]]; then
	echo "Device $DEVICE doesn't exist" >&2
	exit 1
fi

EXIT_CODE=1
while [[ $EXIT_CODE -ne 0 ]]; do
	/usr/local/bin/disc_id "$DEVICE" 2> /dev/null
	EXIT_CODE=$?

	if [[ $EXIT_CODE -ne 0 ]]; then
		sleep 1
	fi
done

web media frontend

I have always wanted to tweak my HTPC frontend quite a lot to add extra functionality, but the entry barrier to learning a GUI language has been way too high for me.  I’ve had some success, though, in patching MythFrontend to do some things a little better for me, but I’ve always wanted to get my own frontend going if I could.

Recently, I was thinking about how LIRC can capture IR events and map them to X keyboard events.  Basically, you can control X applications with your remote control.  I started to reason that if that were possible, then I could just use my web development skills and create a webpage frontend for my HTPC that would run on a lightweight browser, and listen for keystrokes.
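The LIRC half of that is an ~/.lircrc entry per button, handed to irxevent, which injects the keystroke into X. A sketch (the button name depends entirely on your remote’s config, so treat it as an assumption):

```
begin
    prog   = irxevent
    button = KEY_DOWN
    config = Key Down CurrentWindow
end
```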

Just playing around with it tonight, I actually made some really great progress thanks to a combination of a good friend, my humble jQuery beginnings, and my laughable CSS skills.  This is the result so far. :)

I’m really stoked about the implementation so far.  You’ll most likely need Firefox to get it working properly.  It captures the arrow key presses (up, down, left, right) and uses them for navigation.  I realize the beginnings are rather crude, but the fact that I could throw this together so quickly, while just barely learning my way around jQuery, seems pretty impressive to me.  I’m actually quite proud that I got the navigation working properly, too, so wrapping around rows and columns works. :)

This is certainly going to be a fun project to hack on.  If I could get this working, this would open up all kinds of possibilities for me for displaying metadata and new options for navigation.

For comparison, here’s a screenshot of what my frontend looks like right now.  As you can see, I’m trying to imitate the style as closely as possible.

There are a lot of advantages to having it web-based — not that I’m going to serve up anything remotely or anything; this is solely for my LAN.  It’ll just allow me to build stuff out much faster.

The hard part is going to be testing on the frontends.  They are both running off of tiny installations, and it’s not easy building and porting software to run on them.  Sounds like a challenge that’s extremely hard, will take a lot of time, and will have marginal benefit, while increasing my workload and my opportunity to own more of my software stack when things go wrong.  That’s just right up my alley. :)