SSH Client Keys

SSH client keys allow you to quickly log in to a remote server via SSH without typing your password. This is very useful if you log in to a remote *nix server on a regular basis, or if you want to automate scripts that need to connect remotely using SSH (with commands such as rsync or cvs over SSH).

I have used SSH keys for a while now, but whenever I set up a new server I seem to draw a blank when trying to remember how to configure them. Each time I end up searching Google for "SSH Client Keys" and clicking on the excellent O'Reilly page "Quick Logins with ssh Client Keys". I really don't like duplicating information that is already available on the web, but I felt it was necessary to explain a couple of points the O'Reilly page misses, particularly about the authorized_keys2 file on the server.

Because I've followed the setup procedure so many times, I usually only need to glance at the directions to remember how it's done. However, it was doing this that caused me much frustration today. I discovered that it is very important that the server-side ~/.ssh directory is chmod 0700 and that the files inside are not group- or world-writable, otherwise this whole process is pointless!

Before I review how to set up SSH client keys, let me give a brief overview of the files involved:

Client-side:
~/.ssh/id_rsa (private key, chmod 0600)
~/.ssh/id_rsa.pub (public key, chmod 0644)

Server-side:
~/.ssh/authorized_keys2 (holds a list of public keys, chmod 0600)

Now that you know what files are needed, let me explain how to go about creating them. The procedure for getting SSH keys set up is rather straightforward. First of all, if you've never used SSH keys before, you probably need to generate a public/private key pair on the client side (your workstation). From your home directory, run the following command:

$ ssh-keygen -t rsa

When prompted, accept the default options (including the blank passphrase) by simply pressing Enter until you're back at your command prompt. Leaving the passphrase blank is what makes password-less logins possible, but keep in mind that it also means anyone who obtains your private key can log in as you. If you did not already have a ~/.ssh directory, this command will create it and place two files inside: your private key (id_rsa) and the public version of it (id_rsa.pub) to use on remote servers.

Now that you have the client-side files you need, it's time to create the necessary server-side files and copy the contents of your public key file (id_rsa.pub) into authorized_keys2 on the server. The procedure outlined on the O'Reilly page assumes you don't already have any SSH keys set up on the remote server and simply replaces authorized_keys2 with the contents of id_rsa.pub. The two commands you are instructed to run are:

$ ssh server "mkdir .ssh; chmod 0700 .ssh"
$ scp .ssh/id_rsa.pub server:.ssh/authorized_keys2

While this is fine for those who are setting up keys for the first time on a new server/account, it may trip up those who already use them: if authorized_keys2 already exists, the scp command will overwrite it and destroy any keys already listed inside. The authorized_keys2 file is simply a text file listing public keys, one per line (each line being the contents of a client-side ~/.ssh/id_rsa.pub). If the file already exists on the server, the easiest thing to do is copy the contents of your ~/.ssh/id_rsa.pub file, SSH over to the server, open authorized_keys2, and paste your key at the bottom of the list.
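
Alternatively, a one-liner like this should append your key to the end of the file without clobbering any existing ones (assuming an sh-compatible shell on the server):

$ cat ~/.ssh/id_rsa.pub | ssh server "cat >> ~/.ssh/authorized_keys2"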

Now you should be able to type ssh server and log in automatically, without typing a password!
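
If you still get prompted for a password, double-check those server-side permissions (this is what tripped me up). On the server:

$ chmod 0700 ~/.ssh
$ chmod 0600 ~/.ssh/authorized_keys2

Running ssh -v server from the client will also print verbose debugging output that can help pinpoint the problem.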

Reversing WordPress Comments Order

After creating the Comment History page, I realized the Recent Comments Plugin was listing the comments in order from newer-to-older, instead of older-to-newer (which is how the comments on my post pages were ordered). I thought that I should probably change the ordering of the comments on my post pages to match the Comment History ordering.

The solution, I discovered, was rather simple. Just open the comments.php file in your theme's home directory, find this line:

<?php foreach ($comments as $comment) : ?>

and immediately above it add this:

<?php $comments = array_reverse($comments, true); ?>

Save the file and your comments will be ordered from newer-to-older. However, after making the change I realized my original older-to-newer ordering made more sense. Why? Because when someone is reading a post, the first comment they read might not make any sense unless they have read a previous comment.

So to make reading the post page more user-friendly, comments should be ordered from older-to-newer (the WordPress default). For a Comment History page however, ordering comments from newer-to-older makes more sense because the visitor is probably viewing the Comment History page to check for the most recent comments on a post they have been following.

Comment History with Get Recent Comments Plugin

My Dad and I have been going back and forth quite a bit in the comments on a recent post I wrote about Consumption. This filled up the Recent Comments list on the sidebar rather quickly and I wasn't able to see other recent comments. I realized a comment history or archive page, similar to my post archive page, would be very useful.

After looking around a bit, I found a really nice plugin by Krischan Jodies called Get Recent Comments. It has a ton of features and lots of configuration options. It has been updated as recently as last month and even supports the new widgets feature of WordPress 2.3 (it also works with older versions of WordPress as far back as 1.5).

The instructions included with the plugin explain how to add recent comments to your sidebar. They don't, however, mention anything about creating a comment history page. The instructions provide a snippet of PHP code which you are supposed to place in the sidebar.php file of your WordPress template. I thought: great, I simply need to create a new page in WordPress and add that snippet of code to it, using the runPHP plugin to execute the PHP on that page. This worked, partially. At the top of my comment history was this error:

Parse error: syntax error, unexpected $end in /home/raamdev/public_html/blog/wp-content/plugins/runPHP/runPHP.php(410) : eval()'d code on line 1

I thought perhaps it was because my runPHP plugin was outdated, so I upgraded it to the latest version (currently v3.2.1). I still received the error, so I decided to play around with the snippet of PHP code provided by the Get Recent Comments plugin. I was able to modify it slightly to get rid of the error as well as output some additional text. Here is the snippet of code I use to create my new Comment History page:

<?php
if (function_exists('get_recent_comments')) {
    // Only output when the Get Recent Comments plugin is active
    echo "(Showing 500 most recent comments.)";
    echo "<li><ul>" . get_recent_comments() . "</ul></li>";
}
?>

In the plugin options, I configured the plugin to group recent comments by post. This created a very readable Comment History page. After adding the ID of the new page to the exclude list in my header.php file to prevent the page from showing in the header (wp_list_pages('exclude=704&title_li=')), I added a 'View comment history' link to the bottom of the Recent Comments list on the sidebar.

The Get Recent Comments plugin is really powerful, and I'm a bit surprised that it doesn't include basic instructions for creating a comment history page. If you receive a decent amount of feedback from your visitors (in the form of comments), this is a great way to see all that feedback on a single page. If you have trackbacks and pings enabled, this plugin can even show those.

Using wget to run a PHP script

wget is usually used to download a file or web page via HTTP, so by default running wget http://www.example.com/myscript.php would simply create a local file called myscript.php containing the script's output. But I don't want that -- I want to run the script and optionally redirect its output somewhere else (to a log file or into an email for reporting purposes). Here is how it's done:

$ wget -O - -q http://www.example.com/myscript.php >> log.txt

According to the wget man page, the "-O -" option tells wget not to save the file locally, writing the result of the request to standard output instead. Also, wget normally produces its own output (a progress bar showing the status of the download and some other verbose information), but we don't care about that stuff, so we turn it off with the "-q" (quiet) option. Lastly, ">> log.txt" appends the output of the script to a local file called log.txt. This could also be a pipe to send the output as an email.
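
For example, assuming your system has a working mail command, something like this should email the script's output instead of logging it (the subject line and address are just placeholders):

$ wget -O - -q http://www.example.com/myscript.php | mail -s "myscript output" user@example.com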

There is an incredible amount of power behind wget and there are a lot of cool things you can use it for besides calling PHP scripts from the command line. Check out this LifeHacker article for a bunch of cool uses.

Volunteer computing with BOINC

If you have heard of SETI@home, you probably have an idea of how volunteer computing works. Basically, many different fields of research have huge amounts of data to analyze, and it is neither cost-effective nor possible to dedicate today's supercomputers to all of those fields at the same time. Since you rarely use 100% of your CPU's processing power, why not contribute the surplus to research? Software has been designed that allows you to do just that: share your extra CPU cycles and contribute to different projects. For example, with SETI@home you can contribute to the Search for Extraterrestrial Intelligence (SETI).

You may be wondering how this actually works, so let me give a very brief explanation. First you download and install software that allows you to join a particular project. The software then connects to a central server and downloads a "chunk" of data to process. While your computer is idle (or all the time, depending on how you configure the options), the software processes and analyzes that data in much the same way a supercomputer would. When your computer is done processing its small chunk, it sends the results back to the central server and requests another. In this manner, thousands and thousands of individual computers can act as one gigantic computer, each processing little chunks of a much larger body of data.

BOINC

BOINC (Berkeley Open Infrastructure for Network Computing) was designed to replace the original SETI@home network (which was full of bugs and holes that allowed some users to report fake results). BOINC supports not only the SETI@home project but new projects as well. There are dozens of active BOINC projects you can join, including Climateprediction.net, Rosetta@home (predicting and designing protein structures to fight diseases), Einstein@home (searching for spinning neutron stars), and Malaria Control.

The computing power of this kind of network is incredible, and even more importantly, it is all voluntarily provided! The fastest supercomputer in the world is currently IBM's BlueGene/L, which runs at 360 TFlop/s ("teraflops", or trillions of calculations per second). BlueGene/L uses 1.5 megawatts of power and its hardware takes up 2,500 square feet. Now compare that to the more than 430,000 active machines worldwide that make up the BOINC network, providing a whopping 663 TFlop/s!
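
To put that in perspective: 663 TFlop/s spread across 430,000 machines works out to an average of only about 1.5 GFlop/s per machine (663,000 GFlop/s ÷ 430,000 machines) -- ordinary desktop performance that simply adds up.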

Another very important result of this volunteer computing setup concerns power usage, and I'm not talking about CPU power but rather electric power. All of the personal computers running BOINC would probably be turned on and consuming power regardless of whether or not they were running BOINC! This means that by using a volunteer "quasi-supercomputing" platform we not only have greater potential computing power, but we save energy at the same time!

I envision a future where networks become so robust and fast that distributed applications and the "sharing" of CPU cycles is a normal, commonplace thing. Imagine for a second a world where you sit down at your computer and open a very CPU-intensive 3D modeling program -- so intensive, in fact, that it uses 100% of your PC's CPU power. But instead of the rendering software slowing down because of a lack of CPU power, it actually speeds up by grabbing CPU power from your neighbor's computer, which is running but not being used (perhaps your neighbor went to the grocery store). While your 3D software is rendering, you decide to stream a video over the Internet. Again, your computer uses CPU cycles from other idle computers in the neighborhood to provide you with all the power you need.

I installed the BOINC Manager on my MacBook Pro the other day and joined a couple of projects. I plan to install it on all the computers in my possession that run on a regular basis, linking all of them to my single BOINC account (this way I can monitor the combined work of my own mini-supercomputing network!). I can configure BOINC to only process data after my computer has been idle for a certain number of minutes, and to give other applications a higher priority than BOINC in case they suddenly need CPU power. Basically, my computers will run the same as they do now but will be contributing to scientific and/or medical research while they're idle!

Recursively Renaming Files – The One-Liner

A couple of months ago I wrote about a solution for recursively renaming multiple files on a Linux system. The problem with that solution was that the script needed to be saved as a file called rename and then made executable with chmod 755.

Today, while writing a script for my ASAP application, I found a much easier one-line solution which uses commonly installed command line tools:

$ for i in `find . -name "*.php5"` ; do mv -v $i ${i/.php5/.php}; done

This chain of commands finds everything whose name ends in .php5 and renames it to end in .php. The most obvious limitation is that the ${i/.php5/.php} substitution operates on the entire path and replaces the first occurrence of .php5 it finds. So if, for some wacky reason, a matching file lived inside a directory called /my.php5.files/, the directory part of the path would be substituted instead of the filename, and mv would try to move the file into a nonexistent /my.php.files/ directory. Directories whose own names end in .php5 will also be matched and renamed.

For my application, this one-liner worked fine as I simply added a warning to the top of my script.
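
It turns out find's -type f test restricts matches to regular files, which should take care of the directory problem, and piping into a while read loop copes with spaces in filenames better than the backtick version does. A sketch of what that might look like:

$ find . -type f -name "*.php5" | while IFS= read -r i ; do mv -v "$i" "${i%.php5}.php" ; done

The ${i%.php5} expansion strips the suffix from the end of the path rather than substituting the first match, so parent directories with .php5 in their names are left alone. If anyone knows an even cleaner way to do this, I would greatly appreciate the information!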

How I started programming

I learned about programming when I was 12, three years after I began building computers. I asked my Dad one day (at the time he was working at Digital as a technical writer) how the games and programs on the computer were created. He didn't know a whole lot about programming, but he knew of the BASIC programming language and told me I should get a book and learn it. So I bought a book and started using QBASIC on MS-DOS 6.22. From there I moved to Visual Basic (which I later realized was a big mistake). VB was very much like BASIC and making the transition was very easy. I coded "AOL apps" for a while (for those who remember: Punters, Mail Bombs, ChatRoom Busters, etc.) until AOL started cracking down on such things.

A few years later, when programming for money came into view, I discovered that I should really be familiar with C or C++. So I glanced through a couple of C programming books and wow, what a difference from BASIC! My mental understanding of how programming languages worked had been spoiled by the simplistic syntax of BASIC and VB. It took many, many books to finally get a basic understanding of C and C++. I also flipped through a couple of Java books around this time because I heard the syntax was similar to C. Besides, who wouldn't want to learn a programming language called Java?

HTML is something I have almost always known how to use (I can't even remember when I first learned it), and I never really thought of it as a programming language. When I started to realize how important and powerful dynamic web applications were becoming, I decided to investigate what it was that made the HTML dynamic (after all, if you view the source of a dynamic web page, it usually just looks like plain HTML!). I discovered, almost accidentally, the open-source programming language PHP (PHP: Hypertext Preprocessor) and quickly started learning it. I later learned about ASP (Microsoft's Active Server Pages) and JSP (JavaServer Pages) (wow, am I glad I found PHP first!). Since then I have also learned a lot about databases, including the principles of good database design. The most popular database used with PHP at the time was MySQL, so that is what I studied. I've also had some light exposure to MSSQL.

Currently, I am working at a software startup company called Aerva, Inc. in Cambridge, Massachusetts doing everything from software support and debugging to being the "company muscle" (I'm the only one with a truck). I have built, and currently manage, their Support Center using PHP & MySQL, although I have also had to write some bash and Perl (ugh!) scripts to interface with their software provisioning process. I continue to work on adding new features to the Support Center to help streamline regular processes while at the same time increasing my knowledge of various programming languages and the Linux operating system.

I code with PHP on a daily basis (on a MacBook Pro using MAMP and Eclipse) and I am currently working on several of my own web projects. I just finished working on a little application called ASAP - Automated Staging and Publishing, which allows me to automate the process of checking out a project from CVS and then rsync'ing it to a remote server for staging or publishing purposes.

Everyone says PHP is "easy" to learn, and although they are probably right, they fail to realize that its simplicity is also its weakness: writing good, secure PHP requires a strong understanding of the language and how best to use its many features. In addition to continuing to perfect my understanding of the PHP programming language and related OOP (Object-Oriented Programming) technologies, I wish to learn more about Java, XML, and AJAX.