freebsd, quick deployments, shell scripts

At work, I support three operating systems right now for ourselves and our clients: Gentoo, Ubuntu and CentOS.  I really like the first two, and I’m not really fond of the other one.  However, I’ve also started doing some token research into *BSD, and I am really fascinated by what I’ve found so far.  I like FreeBSD and OpenBSD the most, but those two and NetBSD are similar enough in a lot of ways that I’ve been shuffling between focusing solely on FreeBSD and occasionally comparing the other two at the same time.

As a sysadmin, I have a lot of tools that I’ve put together to make sure things get done quickly. A major part of this is documentation, so I don’t have to remember everything in my head alone — which I can do, up to a point, but it gets really hard trying to remember certain arguments for some programs.  In addition to reference docs, I sometimes use shell scripts to automate certain tasks that I don’t need to watch over so much.

In a typical situation, a client needs a new VPS setup, and I’ll pick a hosting site in a round-robin fashion (I’ve learned from experience to never put all your eggs in one basket), then I’ll use my reference docs to deploy a LAMP stack as quickly as possible.  I’ve refined my methods pretty well, so deploying servers goes really fast — in the case of doing an Ubuntu install, I can have the whole thing set up in close to an hour.  And when I say “set up” I don’t mean “having all the packages installed.”  I mean everything installed *and* configured, ready with a user shell and database login, so I can hand over access credentials and walk away.  That includes things like mail server setup, system monitoring, correct permissions and modules, etc.  Getting it done quickly is nice.

However, in those cases of quick deployments, I’ve been relying on my documentation, and it’s mostly just copying and pasting commands manually, running some sed expressions, doing a little vim editing and being on my way.  Looking at FreeBSD right now, and wanting to deploy a BAMP stack, I’ve been trying things a little differently — using shell scripts to deploy them, and having that automate as much as possible for me.

I’ve been thinking about shell scripting lately for a number of reasons.  One thing that’s finally clicked with me is that my skill set isn’t worth anything if a server actually goes down.  It doesn’t matter if I can deploy it in 20 minutes or three days, or if I manage to use less memory or use Percona or whatever else if the stupid thing goes down and I haven’t done everything to prevent it.

So I’ve been looking at monit a lot closer lately — it’s what I use to do systems monitoring across the board, and that works great.  There’s only one problem though: monit depends on the system init scripts running correctly, and that isn’t always the case.  The init scripts will *run*, but they aren’t very fail-proof.
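To make that concrete, this is roughly the shape of a monit process check (the pidfile, port, and init script paths are assumptions for the example, not from a real config of mine). Note the start/stop lines: monit can only restart a service by shelling out to the init script, which is exactly why the init script has to behave.

```shell
#!/bin/sh
# Write a sample monit process check. The pidfile, port, and init
# script paths are assumptions -- adjust for your own box.
cat > /tmp/monit-apache.conf <<'EOF'
check process apache with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 start"
    stop program  = "/etc/init.d/apache2 stop"
    if failed host 127.0.0.1 port 80 then restart
EOF
echo "wrote /tmp/monit-apache.conf"
```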

As an example, Gentoo’s init script for Apache can be broken pretty easily.  If you tell it to start, and Apache starts running but crashes after initialization (there are specifics, I just can’t remember them off the top of my head), the init script thinks the web server is running simply because it managed to run its own commands successfully.  So the init system thinks Apache is running, when it’s not.  And the side effect is that, if you try to automatically restart it (as monit will do), the init scripts will insist that Apache is already running, so things like executing a restart won’t work, because running stop doesn’t work, and so on and so forth.  (For the record, I think it’s fair that I’m using Apache as an example, because I plan on fixing the problem and committing the updates to Gentoo when I can.  In other words, I’m not whining.)
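For what it’s worth, a more fail-proof status check boils down to asking the process table instead of trusting your own bookkeeping. A minimal sketch in Bourne shell, with a made-up pidfile path (this is not how Gentoo’s init scripts are actually structured):

```shell
#!/bin/sh
# Sketch of a status check that asks the process table instead of
# trusting the init system's own bookkeeping. The pidfile path is a
# made-up example.
PIDFILE="${PIDFILE:-/var/run/apache2.pid}"

daemon_status() {
	# No pidfile -> definitely not running
	[ -f "$PIDFILE" ] || return 1
	pid=$(cat "$PIDFILE")
	# Pidfile exists, but is that process actually alive?
	kill -0 "$pid" 2>/dev/null
}

if daemon_status; then
	echo "running"
else
	echo "stopped (or crashed after start)"
fi
```

The `kill -0` bit is the whole point: it sends no signal, it only checks that the pid still exists, so a daemon that crashed right after a “successful” start shows up as stopped.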

Another reason I’m looking at shell scripting is that none of the three major BSD distros (FreeBSD, NetBSD, OpenBSD) ship with bash by default.  I think all three of them ship with either csh or tcsh, and one or two of them have ksh as well.  But they all have the original Bourne shell.  I’ve tried my hand at doing some basic scripting using csh, because it’s the default on FreeBSD, and I thought, “hey, why not, it’s best to use the default tools that it ships with.”  I don’t like csh, and it’s confusing to try and script for, so I’ve given up on that dream.  However, I’m finding that writing stuff for the Bourne shell is not only really simple, but it also means it’s going to be portable to *all* the distros I use it on.
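As a tiny illustration of why plain Bourne/POSIX sh pays off here, this sketch uses only constructs that run unchanged on FreeBSD’s /bin/sh, the other BSDs’ shells, and bash: functions, safe quoting, and exit codes, none of which csh handles gracefully. (The function names are just made up for the example.)

```shell
#!/bin/sh
# Portable Bourne-shell building blocks: functions, quoting, exit codes.
log() {
	printf '%s: %s\n' "$(date +%H:%M:%S)" "$1"
}

require() {
	# Bail out loudly if a command we depend on is missing
	command -v "$1" > /dev/null 2>&1 || { echo "missing: $1" >&2; exit 1; }
}

require sed
require awk
log "all dependencies found"
```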

All of this brings me back to the point that I’m starting to use shell scripts more and more to automate system tasks.  For now, it’s system deployments and system monitoring.  What’s interesting to me is that while I enjoy programming to fix interesting problems, all of my shell scripting has always been very basic.  If this, do that, and that’s about it.  I’ve been itching to patch up the init scripts for Gentoo (Apache is not the only service that has strange issues like that — again, I can’t remember which, but I know there were some other funky issues I ran into), and looking into (more) complex scripts like that pushes my little knowledge a bit.

So, I’m learning how to do some shell scripting.  It’s kind of cool.  People always talk, in general, about how UNIX-based systems / clones are so powerful because of how shell scripting works .. piping commands, outputting to files, etc.  I know my way around the basics well enough, but now I’m running into interesting problems that are pushing me a bit.  I think that’s really cool too.  I finally had to break down the other day and figure out how in the world awk actually does anything.  Once I wrapped my head around it a bit, it made more sense.  I’m getting better with sed as well, though right now a lot of my usage is basically clubbing things to death.  And just the other day I learned some cool options that grep has as well, like matching an exact string on a line (without regular expressions … I mean, ^ and $ are super easy).
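Quick demos of the bits I just mentioned: grep’s -F and -x flags together match a whole line as a fixed string (no regex at all), sed rewrites a stream, and awk pulls fields out of structured text.

```shell
#!/bin/sh
# Exact whole-line match: 'toor' matches, 'too' or 'toor2' would not
printf 'root\ntoor\nwheel\n' | grep -Fx 'toor'

# sed: swap one value for another in a stream
echo 'shell=/bin/csh' | sed 's|/bin/csh|/usr/local/bin/bash|'

# awk: print the first colon-separated field of a passwd-style line
echo 'steve:x:1000:1000' | awk -F: '{ print $1 }'
```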

Between working on FreeBSD, trying to automate server deployments, and wanting to fix init scripts, I realized that I’m tackling the same problem in all of them — writing good scripts.  When it comes to programming, I have some really high standards for my scripts, almost to the point where I could be considered obsessive about it.  In reality, I simply stick to some basic principles.  One of them is that, under no circumstances, can the script fail.  I don’t mean in the sense of running out of memory or the kernel segfaulting or something like that.  I mean that any script should always anticipate and handle any kind of arbitrary input where it’s allowed.  If you expect a string, make sure it’s a string, and that its contents are within the parameters you are looking for.  In short, never assume anything.  It might seem like that makes writing scripts take longer, but for me it’s always been a standard principle; it’s just part of my style. Whenever I’m reviewing someone else’s code, I’ll point to some block and say, “what’s gonna happen if this data comes in incorrectly?” to which the answer is “well, that shouldn’t happen.”  Then I’ll ask, “yes, but what if it *does*?”  I’ve upset many developers this way. :)  In my mind, could != shouldn’t.
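Here’s a toy example of what I mean by never assuming anything, in Bourne shell. The function and the username rule (non-empty, lowercase letters and digits only) are just illustrations:

```shell
#!/bin/sh
# Validate input before acting on it -- never assume it's well-formed.
is_valid_user() {
	case "$1" in
		"")          return 1 ;;   # empty: reject
		*[!a-z0-9]*) return 1 ;;   # anything outside a-z0-9: reject
		*)           return 0 ;;
	esac
}

for input in 'steve' '' 'steve; rm -rf /' 'Steve'; do
	if is_valid_user "$input"; then
		echo "ok: $input"
	else
		echo "rejected: '$input'"
	fi
done
```

The third test string is the reason this matters: unvalidated input that ends up inside an `eval` or a constructed command line is how “shouldn’t happen” becomes a very bad day.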

I’m looking forward to learning some more shell scripting.  I find it frustrating when I’m trying to google some weird problem I’m running into though, because it’s so difficult to find specific results that match my issue.  It usually ends up in me just sorting through man pages to see if I can find something relevant.  Heh, I remember when I was first starting to do some scripting in csh, and all the search results I got were on why I shouldn’t be using csh.  I didn’t believe them at first, but now I’ve realized the error of my ways after banging my head against the wall a few times.

In somewhat unrelated news, I’ve started using Google Plus lately to do a headdump of all the weird problems I run into during the day doing sysadmin-ny stuff.  Here’s my profile if you wanna add me to your circles.  I can’t see a way for anyone to publicly view my profile or posts though, without signing into Google.

Well, that’s my life about right now (at work, anyway).  The thing I like the most about my job (and doing systems administration full time in general) is that I’m constantly pushed to do new things, and learn how to improve.  It’s pretty cool.  I likey.  Maybe some time soon I’ll post some cool shell scripts on here.

One last thing, I’ll post *part* of what I call a “base install” for an OS.  In this case, it’s FreeBSD.  I have a few programs I want to get installed just to get a familiar environment when I’m doing an install: bash, vim and sometimes tmux.  Here’s the script I’m using right now, to get me up and running a little bit.  [Edit: Upon taking a second look at this — after I wrote the blog post, I realized this script isn’t that interesting at all … oh well.  The one I use for deploying a stack is much more interesting.]

I have a separate one that is more complex that deploys all the packages I need to get a web stack up and running.  When those are complete, I want to throw them up somewhere.  Anyway, this is pretty basic, but should give a good idea of the direction I’m going.  Go easy on me. :)

Edit: I realized the morning after I wrote this post that not only is this shell script really basic, but I’m not even doing much error checking.  I’ll add something else in a new post.

# * Runs using Bourne shell
# * shells/bash
# * shells/bash-completion
# * editors/vim-lite

# Install bash, and set as default shell
if [ ! -e /usr/local/bin/bash ] ; then
	echo "shells/bash"
	cd /usr/ports/shells/bash
	make -DBATCH install > /dev/null 2>&1
	if [ $? -ne 0 ]; then
		echo "make install failed"
		exit 1
	fi
else
	echo "shells/bash - found"
fi
if [ "$SHELL" != "/usr/local/bin/bash" ] ; then
	chsh -s /usr/local/bin/bash > /dev/null 2>&1 || echo "chsh failed"
fi

# Install bash-completion scripts
if [ ! -e /usr/local/share/bash-completion ] ; then
	echo "shells/bash-completion"
	cd /usr/ports/shells/bash-completion
	make -DBATCH install > /dev/null 2>&1
	if [ $? -ne 0 ]; then
		echo "make install failed"
		exit 1
	fi
else
	echo "shells/bash-completion - found"
fi

# Install vim-lite
if [ ! -e /usr/local/bin/vim ] ; then
	echo "editors/vim-lite"
	cd /usr/ports/editors/vim-lite
	make -DBATCH install > /dev/null 2>&1
	if [ $? -ne 0 ]; then
		echo "make install failed"
		exit 1
	fi
else
	echo "editors/vim-lite - found"
fi

# If using csh, remind the user to rehash PATH (rehash is a csh
# builtin, so this Bourne shell script can't do it for them)
if [ "$SHELL" = "/bin/csh" ] ; then
	echo "run 'rehash' in your csh session to pick up the new binaries"
fi

I’ve started looking at FreeBSD at work this week, because I was reading some blog posts about how MySQL performs well on a combination of that and ZFS together.  I haven’t gotten around to getting ZFS setup yet, but I have been looking into FreeBSD as an OS a lot, and so far, I like it.

This makes the second distro in the past year that I’ve really started to seriously look into, the other one being Ubuntu.  I’m still trying to wrap my head around the whole FreeBSD design structure and philosophy, and for now I’m having a hard time summing it up.  In my mind, it kind of feels like a mashup of functionality between Gentoo and Ubuntu.  I like that there is a set group of packages that are always there, kind of like Ubuntu, but that you can compile everything from source, like Gentoo.

What has really surprised me is how quickly I’ve been able to pick it up, understand it, and already work on getting an install up and running.  I think that having patience is probably the primary reason there.  Figuring out how things work hasn’t really been that hard, but I say that because of past Linux experience that has helped me figure out where to look for answers more easily.  That is, when I get stuck on something, I can usually figure it out just by guessing or poking around with little effort.

Years ago, if I had looked at any BSD, I would have been asking “why?”  I still don’t know why I’m looking at it, other than I believe it’s not a good idea to put all your eggs in one basket.  At work we already support CentOS, Gentoo and Ubuntu, and it’d be awesome to add FreeBSD to the list.

I’m really enjoying it so far.  It’s easy to install packages using the ports system.  I tried going the route of binary packages at first, but that wasn’t working out so well for me.  Then I tried mixing ports and packages, and that wasn’t doing too great either, so I switched to just using ports for now.

The only thing I don’t like so far is how it’s kind of hard to find what I’m looking for.  I totally chalk that up to me being a noob, and not as any real flaw of the distro or its documentation — I just don’t know where to look yet.  Fortunately, ‘whereis’ has saved me a lot of time.

The system seems familiar enough and easy to use for me, coming from a Linux background.  In fact, I really can’t find many differences.  The things I have noticed are that it uses much less memory, even on old underpowered boxes, and that it is relatively quick out of the box.  I never would have guessed that.

I’m curious to see how ZFS integrates into the system, if at all.  I like the filesystem and its feature set, but that’s about it for now (I got to play with it a bit as we had a FreeNAS install for a few months).  If it’s a major pain to integrate it, I’m probably not going to push for it right now — I’m content with riding out the learning curve until I feel more comfortable with the system.

So, all in all, it’s cool to find something different, that doesn’t feel too different, but still lets me get my head in there and figure out something new.

If you guys know of any killer apps to use on here, let me know.  I’m kind of wishing I had an easier way to install stuff using ports aside from tromping through /usr/ports manually looking for package names.

digital trike

So, I don’t normally talk about work on my blog, just because … hey, who wants to work? I’d rather surround myself with Reese’s cups and watch Roger Ramjet. I totally recommend it.

Anyway, at Digital Trike, my current depriver of candy and animated features, I’m doing full time systems administration. It turns out I enjoy doing that quite a bit. One thing they’ve let me start doing is writing blog posts that are howtos covering topics related to Linux. I’m going to be doing mostly Gentoo posts, and some stuff related to CentOS as well, since we use both of them in development and production (yay, Gentoo!).

I just posted my first entry on their blog, which covers setting up collectd on both distros. I’ll warn you, it’s a bit lengthy, but I tried to cover most of the bases as well as I could, while keeping the setup pretty generic. It’s designed to be a two-parter, this being the first one, and I’ll cover CGP, a PHP frontend to actually see the stats probably next week sometime.

Lemme know what you guys think, I’d totally be up for some feedback. :)

git and acl effective mask

I have run into this funky problem with ACL and git at work, and I cannot for the life of me figure it out. I’m not sure if it’s a bug, wrong expectation on my part, or just plain ole user error.

I have a directory that is setting the default ACL permissions. Those are being inherited just fine by children (files and directories), including the effective mask. However, when I clone a new repository using git, the default effective mask is ignored. And I can’t figure out why.

Specifically, here’s what I’m looking at.

Setting the permissions:

# mkdir testing
# setfacl -m g:users:rwx testing
# setfacl -m d:g:users:rwx testing
# setfacl -m m:rwx testing
# setfacl -m d:m:rwx testing

The ACL permissions:

$ getfacl testing
# file: testing
# owner: root
# group: root
user::rwx
group::r-x
group:users:rwx
mask::rwx
other::r-x
default:user::rwx
default:group::r-x
default:group:users:rwx
default:mask::rwx
default:other::r-x
You can see that the default effective masks are properly set.

When I create a sub-directory, its ACL settings are inherited properly as well:

$ mkdir dir
$ getfacl dir
# file: dir
# owner: steve
# group: users
user::rwx
group::r-x
group:users:rwx
mask::rwx
other::r-x
default:user::rwx
default:group::r-x
default:group:users:rwx
default:mask::rwx
default:other::r-x

That works great and dandy and fine.

The problem I run into is when I use git to clone a repo:

$ git clone
$ getfacl shell
# file: shell
# owner: steve
# group: users
user::rwx
group::r-x
group:users:rwx			#effective:r-x
mask::r-x
other::r-x
default:user::rwx
default:group::r-x
default:group:users:rwx		#effective:r-x
default:mask::r-x
default:other::r-x

The effective mask and the default effective mask have dropped from the default (rwx) to something else (r-x), and I have *no* idea why.
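My current guess (and it is only a guess) is that git chmods checked-out files to the modes in its index, and a chmod on a file carrying an ACL recomputes the mask entry from the group permission bits, which would clamp rwx down to something smaller. You can trigger the same kind of clamping without git at all:

```shell
#!/bin/sh
# Reproduce the suspected mask clamping without git. Assumes an
# ACL-enabled filesystem and an existing 'users' group; bail out
# quietly if the tools or support aren't there.
command -v setfacl > /dev/null 2>&1 || exit 0
command -v getfacl > /dev/null 2>&1 || exit 0

mkdir -p /tmp/aclrepro
touch /tmp/aclrepro/file
setfacl -m g:users:rwx /tmp/aclrepro/file 2>/dev/null || exit 0
setfacl -m m:rwx /tmp/aclrepro/file

# What git effectively does when checking out a non-executable file:
chmod 644 /tmp/aclrepro/file

# The mask has now been recomputed from the group bits
getfacl /tmp/aclrepro/file | grep '^mask'
```

Even if that is what’s happening with the files, it wouldn’t explain the *default* mask changing on the directories, so I’m still stumped on that half.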

Hopefully someone out there may have a clue. I’m stumped.

my prototype cheat sheets: forms

I was having problems this morning with Prototype, in getting serialized forms, so I went back to this little cheat sheet that I wrote up once and updated it and figured out what the problem was.  I uploaded my cheat sheet to my website, if anyone wants to see it.

If you’ve used Prototype for JavaScript before, then you might know where I’m coming from.  In my opinion, the library is awesome, but the documentation is a little confusing in some places.  It could be that way for me only because I’m still so new to JavaScript.  Anyway.  I know for certain that writing this stuff out this way totally helps explain it for me, being a kinesthetic learner.

The problem I ran into recently, though, with Prototype was that it’s unclear what happens when you serialize an element.  The docs say that it returns an object … but it’s not a Prototype Object, meaning you can’t run functions on it that are attached to that.  It’s certainly not a Hash, either, since you can’t use those functions either.  Not knowing JavaScript much, I assume it’s just a regular JavaScript object.

Either way, to convert it to a JSON-formatted string, you need to cast the serialized element to an Object or a Hash of Prototype design first.  That’s what was tripping me up, and that’s the final section on that forms cheat sheet.

I’m using Prototype a lot more at work.  I’m building an intranet at work that is going to use a lot of AJAX, and so I really need to polish my skills.

Wow, this post is boring.  It needs some unicorns.

new cell phone: droid x

I got a new phone this week, the Motorola Droid X, running Android … woo!  Boy, that sentence is gonna attract a lot of spammers. :T

It’s an awesome phone.  I like it.  Well, actually, I dunno about the phone part, I think I’ve called one person.  I’ve been playing with all the cool apps.  This is my first smartphone, so having access to all this stuff is really quite a novelty to me.  Right now, I’m a big fan of Foursquare. :)  I think it’s handy, since I can see what’s nearby quickly.

The Google Voice integration is nice … I can call out directly using the number they provide instead of my carrier’s, and I can use it to send free text messages.  Although, I’ll admit, as applications go, the native Android text messaging software is much better.

I don’t have Android 2.2 on here, which is supposed to be the new hawtness, I hear.  I like everything so far.  My only complaint right now is that there’s no port of Oregon Trail. :(

Alright, I really don’t have much more to say about it.  I’m excited. :)

I’ll admit I wrote this post just to use that LOLcat. :)

promises and deliverables

I was thinking about my earlier blog post about my ideas for the new packages site I’m still working on, and I realized that to a lot of people it must seem like I sure promise a lot of stuff, but then never get around to really completing it.  I wanted to address that a bit, since I imagine that at times I’m either confusing or frustrating some people.

First of all, I get a lot of ideas to do a lot of projects.  There’s lots of cool stuff I want to do, and I have a hard time saying to myself “I have enough projects already in the works to finish, better not start another one,” but I do anyway.  I tend to quickly overload myself sometimes that way, which can be bad for everything.  However, one thing I’m getting more strict on is only picking up projects that I’m sure I want to complete, that I’ll see through until the end.  I very rarely, if ever, completely drop a project that I’ve started.  I will tend to put them on hold for a while — sometimes years — but I’ll eventually revisit the idea (heck, the packages website is a perfect example of that).

I have a ton of projects I’m “working on,” though.  So many, that I’m honestly afraid to write them all down for fear of being totally overwhelmed by the responsibility I put on myself for them.  I do, however, plan on getting them all done, and they circle around in my head on a regular basis, and often times I think of ways to integrate two projects (for example, adding an option to search gentoo planet(s) from the packages site).  I get a lot of interesting ideas all the time, but I really have to be careful not to overextend myself.

One thing I’ve been trying to do recently (as in the past year) is slowly shutter off some of the support I’ve been providing for the Gentoo tree directly, and the ebuilds / herds I’ve taken close care of in the past.  It occurred to me way back when that it’d be a more efficient use of my time if I built out some project websites (like the packages one) rather than trawling the tree looking for ebuilds to fix, bump and repair (for example).  Not that I mind doing that, mind you, in fact I find it rather relaxing at times, but what’s happened is that I’ve overextended my responsibilities again, and I’m trying to cut back.  Basically, my thought is that while I want to still work on Gentoo for a while, I don’t want to make a career out of it.

Oddly enough, though, part of the reason I’m doing these community projects is so that I can more efficiently do other ones.  For example, at times I like to go through the multimedia packages and just check them to make sure we aren’t missing version bumps, and go fix small bugs that I can take care of and just little stuff that isn’t really important (in a sense of package popularity) but still relevant to a few users.  Those are fun.  But it’d make my life easier if I could more quickly track what has been neglected, more easily see what version bumps are available (I still wanna hook into GnomeFiles and track their changes, for example), and stuff like that.  A lot of the tree-fixing stuff in Gentoo development is just monotonous, which is why it’s hard to find volunteers to do it.  There’s a good chunk of it that is just boring work!  And I’d like to help streamline that a bit.  That’s one of my big goals.

With that goal in mind, a huge reason for doing the packages site was just so I can have a simple interface to get all the information I need, and finally a standardized set of data for categories, packages and versions.  That’s mostly done, or at least the framework is, so now I can get going on the *really* cool stuff.  What I’ve done so far is really just the tip of the iceberg.

Anyway, I didn’t wanna talk about just the packages site.  There’s lots of other stuff I have going on.  It’s interesting, even to me, to see which ones I’ll want to juggle at a time.  I switch between them on a regular basis.  Sometimes I’ll be working on the packages site, then my DVD ripper, then my scriptures stuff, then I’ll work on theology ebuilds, then sound ones, then I’ll look after ALSA, then mplayer, then I’ll go back to tweaking MythVideo a bit, and round and round and round it goes.  I’m always working on *some* project, that’s for sure.  It might do me some good to try and get a bit more organized, but I don’t even do a good job of keeping track of bugs in my own projects.  I just track them internally for the most part.

So, I apologize for the epic behind status that I’m always in.  I’m starting to recognize more and more how much I’m holding people up on some projects, so I’m doing my best to gracefully exit those areas so someone else can come in and take over.  I’m still fumbling a bit at the best way to do that, but at this point in my life I have at least recognized the few areas that I’m sure I’m not passionate about anymore, and shouldn’t be lazing around just pretending to commit once in a while — of which, there are actually really few.  In fact, I can only think of one off the top of my head.

One thing that might be cool that I just thought of — have a status indicator on my blog or something that displays the current project I’m working on.  That’d be fun. :)  Sounds like work, though.  I’m gonna go watch a movie.

adventures in a new job

You know, there are some really cool blogs out there.  The ones I like the most are the ones that simply tell the stories of life as they happen, and document them in a cool way.  This is not one of those blogs.  Unless you’re as obsessed with cartoons as I am, and I doubt it.

Anyway, reading one such blog tonight, it got me thinking that I should loosen up a bit and document more of my generic life stories sometimes.  I’ll think about it.

In the meantime, here’s something that happened at work today.

I’ve still been settling in (I started a week ago yesterday) at the new place, and it’s a little odd for me because I am the only IT guy there.  Everyone else is an engineer with more degrees than I knew existed.  I don’t think I’ve ever worked for a place that either wasn’t an IT shop or didn’t have an IT department before I came on board, so it’s all been just a little bit different. (See, this is why I don’t write general life stories … I’m already boring myself.)

When I first got there, the boss set me up with a laptop, which wasn’t bad, but it had an Intel graphics card on there, which makes me want to install Debian on babies.  He asked me what my ideal hardware was, so I told him, and we’re working on getting that, and using something else in the interim.  Anyway … where was I going with this … I had mentioned in passing that the Broadcom wifi chip on there was crap, and so he went online and got an Intel one instead for like $15 on ebay.  He brought it in today, and I got to pop it open and swap it out.  I had no idea that the onboard wifi cards were just using PCI Express Mini slots.  That is way cool.  So it took about five minutes to get the whole thing swapped out.  Pretty cool experience.

Oh, and for the record, the new Intel one worked out great.  Fired right up without any stupid issues (kernel or otherwise), though I still can’t ever get NetworkManager to even recognize any wifi for some dumb reason.  Oh well, wicd works fine, even though it’s bugly.

See?  This is why I don’t write life posts.  They’re not well formatted.  Meh.  I’m going back to blogging about cartoons.

closed captioning on dvds (and ripping them)

In ripping my DVDs, I try to future-proof it as much as I can, by putting in as many elements as I *think* I might need or want someday down the road.  One of those elements is subtitles.  There are three types of subtitles that can be on DVDs — VobSub, closed captioning and SDH — and the first two can be extracted fairly easily.  I have no idea how to access the SDH ones.  I think you need either a newer DVD player or a Blu-Ray one.

I’ve been ripping my TV shows, and so far I haven’t seen any really hard and fast rules on what to expect with them on DVD.   Part of the reason is that I just haven’t been paying much attention to subtitles until recently.

I was playing with ripping one show last night, and I saw the CC logo on the back of the case, so I went to check the rest of my library to see which other ones had it.  Nearly my entire library of Warner Bros. DVDs displayed the logo — even for much older cartoons (Looney Tunes, Scooby Doo) — once again staying consistent with the fact that the studio puts a lot of effort into the quality of their releases.


I just started playing with extracting CC though, and just barely wrote the code to my DVD ripper to extract them, so I have no idea what the other series are like, if they have subtitles or not — VobSub or CC.  I usually don’t find out until I actually go to rip them.

Extracting the closed captioning subtitles is a lot easier and faster than getting the VobSub streams.  For Linux (and Mac and Windows) there’s a nifty OSS program called ccextractor.  Once you have your VOB video file on your harddrive, just run that on the movie, and it will create an SRT subtitle file of the closed captioning text.  It’s great, and really fast, taking probably under a minute on a 60-minute video on my box.  Comparatively, when ripping a VobSub stream, you need to read the DVD directly which causes its own bottleneck, and then demux the entire stream.  It takes probably around 3 to 5 minutes for an episode of the same length.
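For reference, the invocation is about as simple as it gets. The filename below is a placeholder (dump the title to disk first with something like vobcopy or dvdbackup), and the guard just makes the sketch safe to run without a real rip handy:

```shell
#!/bin/sh
# Extract closed captions from a dumped VOB into an SRT file.
VOB='episode.vob'
SRT="${VOB%.vob}.srt"          # episode.vob -> episode.srt

if command -v ccextractor > /dev/null 2>&1 && [ -e "$VOB" ]; then
	ccextractor "$VOB" -o "$SRT"
else
	echo "skipping: no ccextractor or no $VOB here"
fi
```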

Another thing I like about the closed captioning titles is that because they are extracted as SRT, it’s easy to look through them, since they are just text files.  If you’re really anal, you can correct typos yourself.  The VobSub subtitles are all bitmaps.  I’ve also noticed that on some DVDs, where there were issues with framerates or something else, the VobSub timestamps will be off … sometimes they’ll all show up clumped together at the beginning of the film, or the sync will be way off.  I think this has to do with the dumping process somewhere, but I’m not sure.  I’ve never really taken the time to pin down the source.

So, with closed captioning being easier and faster to extract, as well as editable (and the timestamps haven’t given me any issues yet), it’s quickly becoming my preferred subtitle format.

There’s only one small issue with using ccextractor, and that is you won’t know if there are any captions in the VOB until after it’s made its trial run.  The program will create an .srt file regardless when you run it, but the file will be empty if it couldn’t find any.  That’s the only drawback.  With VobSub, you can know if there are subtitles just by probing the DVD using lsdvd or something similar.
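Since an empty file is the only signal you get, `test -s` (file exists and has a size greater than zero) is enough to tell the two cases apart after the run:

```shell
#!/bin/sh
# After running ccextractor, check whether any captions came out.
srt='/tmp/episode.srt'
: > "$srt"                     # simulate an extraction that found nothing

if [ -s "$srt" ]; then
	echo "captions found in $srt"
else
	echo "no captions; removing empty $srt"
	rm -f "$srt"
fi
```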

Muxing it into matroska is simple, too.  Just pass it as a file argument and you’re done.
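Something like this, with mkvmerge from mkvtoolnix (filenames are placeholders, and the guard lets the sketch run safely when the inputs aren’t around):

```shell
#!/bin/sh
# Mux a video file and an SRT into matroska: every input file passed to
# mkvmerge is just appended as a track.
mux() {
	if command -v mkvmerge > /dev/null 2>&1 && [ -e "$2" ]; then
		mkvmerge -o "$1" "$2" "$3"
	else
		echo "skipping mux: mkvmerge or $2 missing"
	fi
}

mux episode.mkv episode.vob episode.srt
```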

As a sidenote: while bend, the application I wrote and use to rip DVDs, would be a major pain to set up for someone else, I’ve rewritten it recently so that it uses individual classes to access every object directly: DVD, DVD track, DVD VOB, Matroska file.  They are standalone classes written in PHP; if anyone wants to use them, feel free.  You’d also need my tiny class of shell functions as well, since they all make calls to it.

The DVDVOB one makes it simple to extract the subtitle stream.  In fact, all the classes make things relatively simple.  They have made writing my code so much simpler.

firefox "find as you type" steals window focus

I’m posting this one hoping that someone can help me out, because it’s one of the few remaining reasons I don’t use Firefox as my main browser. I still use Seamonkey as my default, but the Javascript parsing is soo much slower than everything else, it’d be nice to switch.

Firefox has this find as you type feature, where if you hit / and then type in some words, it’ll search and highlight it on the page. Great. Lots of browsers have that. Spanky. But the problem with Firefox begins with this little toolbar at the bottom of the browser that pops up as you are typing the text. It has a little dialog box titled Quick Find which fills in with whatever you were searching for. The main issue is that the toolbar will close itself automatically, and when it does, it steals focus in X back to Firefox.

That’s particularly annoying for me because, in many instances, what will happen is I will search for something in Firefox using quick find, get what I’m looking for, and then switch to another program or window before the default timer has expired. If I start typing in that other window, when Firefox’s bar closes, X focuses back on Firefox and part of my text goes in there instead. Kind of frustrating.

I’ve tinkered around with the about:config page and haven’t found anything, and every now and then I check Google to see if I can find anyone else who has discovered a workaround, but I haven’t found anything, so now I’m just trying to see if anyone else knows a solution.

I’d be happy with either disabling the toolbar completely or not having it go away, or whatever. The only part that bothers me is it stealing focus again.

For what it’s worth, I’m on XFCE 4.4. No idea if it’s an issue with other WMs.