I just read Seth Godin's ebook Who's There?. It's a bit old (2005) but has some really great insights into how blogging is changing the way we view communication and the world around us. The beginning is a bit boring, but if you hang in there you won't be disappointed.
2008
HOWTO: Count Files Recursively with Exclusion on Linux
Find all files in this directory, including the files in sub-directories, and exclude all files that start with a period (dot files) and any directories named .thumbs. Then pass the list of results to the wc command to get a total count:
find . ! -name ".*" ! -path "*.thumbs*" -type f | wc -l
HOWTO: Make iTunes Read Ogg Files
After downloading the only available torrent of Hang Drum music I could find, I was shocked to discover that iTunes wouldn't read the Ogg files it contained. I was so close to losing a ton of respect for Apple until I searched Google for a solution. Hooray for the xiph.org open-source community! Simply visit their site and download the QuickTime Components binary package. After opening the .dmg file (Windows users should be able to just download and run the .exe file), copy XiphQT.component to ~/Library/Components (user-only) or to /Library/Components (system-wide).
Update: Randy Cox noted in the comments that on Snow Leopard the correct path is /Library/Quicktime/. If iTunes is open, restart it and voilà! You've got .ogg support in iTunes!
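For reference, the copy step boils down to a single command. A minimal sketch, assuming the mounted .dmg shows up as a volume named XiphQT (the actual volume name may differ):

```shell
# Install the Xiph QuickTime component for the current user only.
# The source path assumes the mounted .dmg appears as /Volumes/XiphQT;
# adjust it to match the actual volume name on your system.
mkdir -p ~/Library/Components
cp -R /Volumes/XiphQT/XiphQT.component ~/Library/Components/
```

For a system-wide install, copy to /Library/Components instead (requires sudo).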
Google's Growing Visual Clutter
Google's latest "feature" is nothing short of annoying. I fell in love with Google Search for the clean, textual layout of the search results. The colored text I can deal with, but not visual buttons next to every single result! To make matters worse, Google doesn't provide a way to disable this feature, so your only two options are logging out of your Google account or installing a Greasemonkey extension.
Oh, and my rant doesn't end there. Another recently added feature, Google Suggest, has been more trouble than help. I can't even count how many times I've gone to Google something only to have a big list of suggestions instantly erase the original search query from my head. There are hacky ways to disable that too, but come on, Google! There should be options to disable this stuff!
Mounting HFS+ with Write Access in Debian
When I decided to reformat and install my Mac Mini with the latest testing version of Debian (lenny, at the time of this writing) I discovered that I couldn't mount my HFS+ OS X backup drive with write access:
erin:/# mount -t hfsplus /dev/sda /osx-backup
[ 630.769804] hfs: write access to a journaled filesystem is not supported, use the force option at your own risk, mounting read-only.
This warning puzzled me because I was able to mount fine before the reinstall and, since the external drive is to be used as the bootable backup for my MBP, anything with "at your own risk" was unacceptable.
I had already erased my previous Linux installation so I had no way of checking what might have previously given me write access to the HFS+ drive. A quick apt-cache search hfs revealed a bunch of packages related to the HFS filesystem. I installed the two that looked relevant to what I was trying to do:
hfsplus - Tools to access HFS+ formatted volumes
hfsutils - Tools for reading and writing Macintosh volumes
No dice. I still couldn't get write access without that warning. I tried loading the hfsplus module and then adding it to /etc/modules to see if that would make a difference. As I expected, it didn't. I was almost ready to give up, but there was another HFS package in the list that, even though it seemed unrelated to what I was trying to do, seemed worth a shot:
hfsprogs - mkfs and fsck for HFS and HFS+ file systems
It worked! I have no idea how or why (and I'm not interested enough to figure it out), but after installing the hfsprogs package I was able to mount my HFS+ partition with write access.
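For anyone retracing these steps, the working sequence boiled down to the following. A sketch of what I ran; the device name (/dev/sda) matches my setup and yours may differ:

```shell
# Install the HFS+ userland packages; hfsprogs was the one that
# made the difference for read-write mounting on this Debian box.
apt-get install hfsplus hfsutils hfsprogs

# Then mount the drive read-write (adjust the device to match yours).
mount -t hfsplus -o rw /dev/sda /osx-backup
```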
Update:
As Massimiliano and Matthias have confirmed in the comments below, the following solution seems to work with Ubuntu 8.04:
From Linux, after installing the tools suggested before, you must run:
mount -o force /dev/sdx /mnt/blabla
Otherwise, in my fstab, I have an entry like this:
UUID=489276e8-7f9b-3ae6-8c73-69b99ccaab9c /media/Leopard hfsplus defaults,force 0 0
Romance, sadness, humor, and fear all rolled into one
I don't know how to feel about this series of comics. The feeling is that of romance, sadness, humor, and fear all rolled into one. Coincidentally, I learned about linked lists in C class a few weeks ago (got a 94 on the assignment!), so I was able to fully appreciate the last strip.
Understanding the Linux Load Averages
I have been using Linux for several years now and although I have looked at the load averages from time to time (either using top or uptime), I never really understood what they meant. All I knew was that the three different numbers stood for averages over three different time spans (1, 5, and 15 minutes) and that under normal operation the numbers should stay under 1.00 (which I now know is only true for single-core CPUs).
Earlier this week at work I needed to figure out why a box was running slow. I was put in charge of determining the cause, whether it be excessive heat, low system resources, or something else. Here's what I saw for load averages when I ran the top command on the box:
load average: 2.86, 3.00, 2.89
I knew that looked high, but I had no idea how to explain what "normal" was and why. I quickly realized that I needed a better understanding of what I was looking at before I could confidently explain what was going on. A quick Google search turned up this very detailed article about Linux load averages, including a look at some of the C functions that actually do the calculations (this was particularly interesting to me because I'm currently learning C).
To keep this post shorter than the aforementioned article, I'll simply quote the two sentences that gave me a clear-as-day explanation of how to read Linux load averages:
The point of perfect utilization, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. If there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilizing its processors perfectly for the last 60 seconds.
The machine I was checking at work was a single-core Celeron machine. This meant with a continuous load of almost 3.00 the CPU was being stressed much higher than it should be. Theoretically, a dual-core machine would drop this load to around 1.50 and a quad-core would drop it to 0.75.
There is a lot more behind truly understanding the Linux load averages, but the most important thing to understand is that they do not represent CPU usage. Rather, they represent the number of processes either running on the CPU or waiting for their chance to use it. If you still can't get your brain away from thinking in terms of percentages, consider 1.00 to be 100% load for single-core CPUs, 2.00 to be 100% load for dual-core CPUs, and so on.
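To put that rule of thumb into practice, here's a minimal sketch (my own, not from the article) that reads /proc/loadavg and normalizes the one-minute average by the number of cores; a result near 1.00 means the machine is at that point of "perfect utilization":

```shell
# Normalize the 1-minute load average by the core count.
# /proc/loadavg's first three fields are the 1, 5, and 15 minute averages.
cores=$(nproc)
read one five fifteen rest < /proc/loadavg
awk -v l="$one" -v c="$cores" 'BEGIN { printf "1-min load per core: %.2f\n", l / c }'
```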
Update: John Gilmartin had some insightful feedback and shared a link to Understanding Load Averages where there's a nice graphical description for how load averages work.
RAAMAA License Plate
My brother-in-law snapped this picture of a NH license plate. Hey, that's my name!

A human being should be able to change a diaper, plan an invasion…
"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects." - Robert A. Heinlein
NetBeans for PHP
Sun Microsystems has added PHP support to their open-source NetBeans development IDE. I just tried the latest version (6.5) and I'm not impressed at all, at least with the OS X version: it's slow, and the Open File dialog takes a good 45 seconds (!) to load.
Creating a Bootable OS X Backup on Linux: Impossible?
I've had plans for a while now to set up a backup system using a Debian Linux server and rsync to back up my MacBook Pro laptop. At first glance, it seemed like it would be pretty straightforward. I've been able to make a bootable copy of my entire MBP using nothing but rsync (thanks to some very helpful directions by Mike Bombich, the creator of the popular, and free, Carbon Copy Cloner software). And by bootable copy I mean I could literally plug in the USB drive and boot my MBP from it (hold down the Alt/Option key while booting). Restoring a backup is as simple as running the rsync command again, but in the reverse direction. I know this solution works because I used it when I upgraded to a 320GB hard drive.
To start, I needed to create a big enough partition on the external USB drive using Disk Utility (formatted with Mac OS Extended (Journaled)). I then made a bootable copy of my MBP with one rsync command:
sudo rsync -aNHAXx --protect-args --fileflags --force-change
--rsync-path="/usr/local/bin/rsync" / /Volumes/OSXBackup
But my dream backup system was unattended: something that would periodically (a couple of times a day) run that rsync command over SSH in the background and magically keep an up-to-date bootable copy of my MBP on a remote server.
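The unattended part would be simple enough to wire up with cron. A hypothetical sketch; the schedule, script name, and log path here are illustrative, not my actual setup:

```shell
# Hypothetical crontab entry: run a backup script at 6:00 and 18:00
# every day, logging output. Assumes passwordless SSH keys are in place
# so the rsync-over-SSH step needs no interaction.
0 6,18 * * * /usr/local/bin/backup-mbp.sh >> /var/log/backup-mbp.log 2>&1
```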
I love Linux and I jump at any opportunity to use it for something new, especially in a heterogeneous network environment. So when I decided to set up a backup server, I naturally wanted to make use of my existing Debian Linux machine (which just so happens to be running on an older G4 Mac Mini).
So, after making a bootable copy of my MBP using the local method mentioned above, I plugged the drive into my Linux machine, created a mount point (/osx-backup), and added an entry to /etc/fstab to make sure it was mounted on boot (note the filesystem type is hfsplus):
/dev/sda /osx-backup hfsplus rw,user,auto 0 0
All that's left to do now is to run the same rsync command as earlier but this time specifying the remote path in the destination (root@myserver.example.com:/osx-backup/). This causes rsync to tunnel through SSH and run the sync. Unfortunately, this is where things started to fall apart.
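Concretely, the remote version of the command differs from the local one only in its destination. A sketch, assuming the same flags as the local run (the hostname is the placeholder from above):

```shell
# Same flags as the local backup, but with a remote destination,
# which makes rsync tunnel the transfer over SSH.
sudo rsync -aNHAXx --protect-args --fileflags --force-change \
    --rsync-path="/usr/local/bin/rsync" / root@myserver.example.com:/osx-backup/
```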
OS X uses certain file metadata that must be copied for the backup to be complete (again, we're talking about a true bootable copy that looks no different than the original). Several of the flags used in the rsync command above are required to maintain this metadata, and unfortunately Linux doesn't support all the system calls necessary to set it. In particular, here are the necessary flags that don't work when rsyncing an OS X partition to Linux:
-X (rsync: rsync_xal_set: lsetxattr() failed: Operation not supported (95))
-A (recv_acl_access: value out of range: 8000)
--fileflags (on remote machine: --fileflags: unknown option)
--force-change (on remote machine: --force-change: unknown option)
-N (on remote machine: -svlHogDtNpXrxe.iL: unknown option)
According to the man page for rsync on my MBP, the -N flag is used to preserve create times (crtimes) and the --fileflags option requires the chflags system call. When I compiled the newer rsync 3.0.3 on my MBP, I had to apply two patches to the source that were relevant to preserving Mac OS X metadata:
patch -p1 <patches/fileflags.diff
patch -p1 <patches/crtimes.diff
I thought that maybe if I downloaded the source to my Linux server, applied those same patches, and then recompiled rsync, that it would be able to use those options. Unfortunately, those patches require system-level function calls (such as chflags) that simply don't exist in Linux (the patched source wouldn't even compile).
So I tried removing all unsupported flags even though I knew lots of OS X metadata would be lost. After the sync finished, I tried booting from the backup drive to see if everything worked. It booted into OS X, but when I logged into my account lots of configuration was gone and several things didn't work. My Dock and Desktop were both reset and accessing my Documents directory gave me a "permission denied" error. Obviously that metadata is necessary for a viable bootable backup.
So, where to from here? Well, I obviously cannot use Linux to create a bootable backup of my OS X machine using rsync. I read of other possibilities (like mounting my Linux drive as an NFS share on the Mac and then using rsync on the Mac to sync to the NFS share) but they seemed like a lot more work than I was looking for. I liked the rsync solution because it could easily be tunneled over SSH (secure) and it was simple (one command). I can still use the rsync solution, but the backup server will need to be OS X. I'll be setting that up soon, so look for another post with those details.
WHM Whitelist to Exclude from Exim Sender Verify Callbacks
Sender verification is an important feature used by email servers to help prevent spam. When sender verification is enabled, the receiving email server checks to make sure the sender exists. Various email servers have different ways of handling this feature. Exim, for example, uses a mechanism called 'sender callouts' or 'callbacks'. (When the sending server does not accept a verification request, it does not comply with RFC 2821.)
However, in the event that the network route from the receiving email server to the originating email server is broken (or a firewall blocks the connection), the result can be a bit confusing. The receiving email server treats any failed verification the same, regardless of whether it could even connect to the originating server. This means the email never comes through to the recipient. After all, as far as the email server knows, it's spam.
One of my hosting clients was experiencing this "lost email" problem and a quick grep at /var/log/exim_mainlog confirmed the problem (hosts and IPs changed for obvious reasons):
2008-11-17 15:02:27 [30121] H=relay1.example.com (qsv-spam1.example.com) [67.26.151.59]:36752 I=[69.161.211.25]:25 sender verify defer for: could not connect to customer.example.com [163.112.75.15]: Connection timed out
2008-11-17 15:02:27 [30121] H=relay1.example.com (qsv-spam1.example.com) [67.26.151.59]:36752 I=[69.161.211.25]:25 F=<administrator@customer.example.com> temporarily rejected RCPT <raam@mydomain.com>: Could not complete sender verify callout
2008-11-17 15:02:27 [30120] H=relay1.example.com (qsv-spam1.example.com) [67.26.151.59]:36751 I=[69.161.211.25]:25 incomplete transaction (RSET) from <administrator@customer.example.com>
As you can see, the email server was unable to connect to customer.example.com to verify the existence of the sender (administrator@customer.example.com). This doesn't mean the sender failed verification; rather, the network connection from my server to the sending server could not be established.
Most of the stuff I found online related to solving this problem on a server running WHM (here and here) explains how to modify exim.conf to add special whitelist rules. Luckily, my server is running WHM 11.23.2, which has a whitelist option that makes it really easy to exclude a particular IP address from sender verification without any manual changes to exim.conf:
1. Click Service Configuration -> Exim Configuration Editor
2. Under Access Lists, find "Whitelist: Bypass all SMTP time recipient/sender/spam/relay checks" and click [EDIT]
3. Add the IP address of the sending server for which you wish to skip sender verification (as the note at the bottom explains, hosts cannot be used in this list)
4. Click Save
5. Click Save again near the bottom of the Exim Configuration Editor page
That's it! Now any emails from that IP that were failing to come through because of a sender verification failure will come through without a problem (again, you can watch /var/log/exim_mainlog to confirm).
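To confirm the fix, you can watch the log for further sender-verify deferrals as mail arrives; for example:

```shell
# Follow the Exim main log and show only sender-verify lines.
# --line-buffered keeps matches appearing immediately while tailing.
tail -f /var/log/exim_mainlog | grep --line-buffered "sender verify"
```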
Ubuntu Live-CD on G4 Mac Mini
I've been trying to create a new partition on the 250GB drive I installed in my G4 (PowerPC) Mac Mini but I could not for the life of me find a Live CD that would boot. Finally this helpful post pointed me to Ubuntu 6.06 (Dapper Drake). After downloading the 'Mac (PowerPC) desktop CD' and burning it, I was pleasantly surprised to see it boot the Mac Mini beautifully (I used the live-powerpc kernel at the boot: prompt). Apparently the later PowerPC distributions of Ubuntu don't come with the necessary ATI drivers for the G4 Mac Mini!
Modified Grid Focus WordPress Theme
I modified and applied the excellent Grid Focus WordPress theme by Derek Punsalan to my blog. His sense of style is very similar to my own and I think I have finally found a WordPress theme that I like enough to feel satisfied, at least for awhile.
There were a few things about the theme that didn't work for me, so I made the following modifications:
- Removed the third column to make space for a wider content column (at least 650px)
- Added widget support so I could use the Twitter Tools and Get Recent Comments widgets
- Added some CSS styles to the stylesheet and modified index.php to check whether a post belongs to the Asides category and style it accordingly, to support my use of Asides
- Shaded the left column to give it some definition and help it stand apart from the main content
You can download my modified version of the theme here. I'm still working out some of the kinks (like blockquotes not being styled to my liking) but I'll update the theme zip file after I make any changes.
Current Version: 2009-02-04
Older Versions
Motorcycle School Photos
A few months ago when I got my motorcycle license, the instructors took photos during the biking sessions. They told us to wait a few weeks and look for them on their website, but I totally forgot. Check out the whole set on Flickr (it's a set of two separate classes, so only the first half of the photos are from my class) or just see the photos I copied to my gallery.
Energy follows thought
"Energy follows thought. We move towards, not beyond, what we imagine. By expanding our deepest beliefs of what is possible, we change our core experience of life." -Jane Roberts
My failed attempt to hack the AT&T free iPhone WiFi
You may remember that AT&T began offering free wifi for iPhone users earlier this year. Shortly thereafter they pulled the service. Why? Because someone discovered the security applied to the system was extremely weak: simply changing the User Agent of your browser to make it look like you were using an iPhone browser allowed you to gain free WiFi access on your laptop. This could easily be done using the Firefox User Agent Switcher extension, or by simply firing up Safari, enabling Developer mode (Safari->Preferences->Advanced->Show Develop menu), and selecting the iPhone User Agent (Develop->User Agent->Mobile Safari 1.1.3 - iPhone).
With the new service, you connect your iPhone to the wireless network, launch the browser, and get redirected to a page that displays a single field requesting you to enter your iPhone phone number. After submitting your phone number, you receive a (free) text message containing a URL. Loading this URL from your iPhone grants you free wifi access to the Internet.
When I tried the User Agent hack mentioned above from my laptop, I expected to at least get the box prompting me for my iPhone phone number. But to my surprise, all I got was a mobile-formatted page with options to purchase service.
So I suspected they were checking the MAC address of the computer connected to the router to see whether it looked like an iPhone MAC address. Luckily, spoofing the MAC address of my wifi card is easy on OS X:
sudo ifconfig en1 lladdr 00:21:E9:52:6A:E3
BAM! Now as far as the AT&T router can tell, my requests are coming from my iPhone. This time when I connected, I got the form asking me for my iPhone phone number. I submitted the number and a few seconds later received an SMS with a link.
I hoped that simply visiting this URL from my laptop browser would grant me free wifi access, but unfortunately it did not. Instead, it gave me an error saying the page doesn't exist.
A commenter on the original LifeHacker post describing the User Agent hack left this comment about the new security features applied by AT&T:
AT&T has locked out non iPhones by using an encrypted log on tied to each iphone number. The key is transmitted to the iPhone over the AT&T cell network a minute before login.
By using the AT&T network to transmit the key, they have definitely made it more difficult to gain free access from your laptop. I'm sure it's still possible (perhaps by sniffing the wifi traffic between the iPhone and the router after a successful connection), but I'm not sure it's worth the time and effort.
I heard that an official AT&T tethering option for the iPhone will be coming soon, so that might make this a moot point (assuming they make it a free option). Still, it seems only fair that existing iPhone users should be able to access the free wifi via their laptops. Transmitting a password via SMS seems like a safe way to guarantee the person connecting to the wifi actually has an iPhone.
Evolve or Die
I have long accepted my limited social abilities and, for lack of any good reason other than convenience, avoided any situations that may expose me to new social interactions. Limited social interaction alone would not normally be such a bad thing, but when it leads to neglecting interpersonal communication, especially with those you love, the end result can be disastrous and detrimental to life itself.
The Kalabarian analysis of my name says the following about my weaknesses:
Often I am so fired up about my own projects or goals that I inadvertently run over or ignore other people’s feelings and interests. Being receptive and appreciative of others’ contributions, ideas, and feelings would go a long way in improving my relationships.
Weaknesses should not be something to accept and ignore, but rather a guide for what needs the most attention! From this day onward, I will make a conscious effort to improve my interactions with others and learn to value any opportunities to improve my interpersonal communication skills.
To evolve or die means to learn to meet new challenges as they arise and overcome them or to remain stubborn and inflexible. We need to apply this lesson to our own self-imposed limitations in life. If we accept those limitations and let them define us, we exponentially decrease our potential for growth. We should not learn to accept who we are, but rather learn to accept that we are limitless beings.
Edit: I should mention that in this context to evolve means to continuously adapt and face challenges in life. To die means to live in a box and accept your perceived limitations.
Three hours to empty the truck…
Three hours to empty the truck. That makes it a total of twelve hours moving (including the commute). I am so hungry.
Moving Day

I finally gave in and decided to rent a truck for moving. I generally refuse to use anything but my own truck for moving, but I realized it would cost me more in gas to drive back and forth to the new place (~40 miles one-way) than it would to just rent a truck. Now hopefully this truck is big enough for one trip...