Yet another supercomputer

SiCortex is about to launch their cluster-in-a-(lunch)box, with a promo video and everything. It seems pretty nice, but a few things worry me a bit…

Of course, a highly interconnected backplane and some smart shortest-path routing algorithms (probably not as good as Feynman’s) are much faster (and more reliable?) than gigabit Ethernet (than Myrinet too?). Of course, all-in-one-chip technology is much faster, safer and more economical than any HP or IBM 1U node money can buy.

There is also some eye candy, like a pretty nice external case, dynamic resource partitioning (like VMS), a native parallel filesystem, an MPI-optimized interconnect and so on… but do you remember the Cray-1? It was a wonderful vector machine, but in the end it was so complex and monolithic that everyone who got stuck with it eventually stopped using it.

Is assembling a 1024-node Linux cluster with PC nodes, gigabit Ethernet, PVFS, MPI and so on hard? Of course it is, but the day Intel stops selling PCs you can use AMD (and vice versa), and you won’t have to stop using the old machines until you have a whole bunch of new ones up and running, transparently integrated with your old cluster. If you do it right you can have a single Beowulf cluster running Alphas, Intel, AMD, Suns and so on; just sort out the paths and the rest is done.

I’m not saying it’s easier or cheaper (the costs of air conditioning, cabling and power can be huge), but being locked to a vendor is not my favourite state of mind… Maybe it would be better if they had smaller machines (say, 128 nodes) that could be assembled into a cluster while still allowing external hardware to be connected, with intelligent algorithms that understand the cost of migrating processes to external nodes (based on network bandwidth and latency). Maybe that would even ease their entry into existing clusters…

Middle Earth: Proxy

When updating the nodes I have to download the same packages several times (N times for N nodes), so a good idea is to have a proxy that downloads each package once and lets all the nodes fetch it from the local copy. For that we have good old Squid.

On the Master node:

$ sudo apt-get install squid

Then edit the config file (/etc/squid/squid.conf). It’s rather huge, but search for acl localhost and add the lines below:

acl cluster src 192.168.2.0/24
http_access allow cluster

assuming your cluster is on that subnet.

Now, on each node (the master included), set the environment variables (in .bashrc):

export http_proxy="http://master-node:3128/"
export ftp_proxy="http://master-node:3128/"
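
If you’d rather not rely on environment variables, apt can also be pointed at the proxy directly; a minimal sketch, assuming the default Squid port (3128) and the same master-node name used above:

# /etc/apt/apt.conf (or a file under /etc/apt/apt.conf.d/)
Acquire::http::Proxy "http://master-node:3128/";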

Also, it’s a good idea to increase the maximum cache object size from 4 MB to, say, 400 MB, because the idea is to cache .deb packages, not web pages. You can also limit the global size of the cache (to 1 GB, for instance) so old packages get deleted.

# Per object (400MB)
maximum_object_size 409600 KB
minimum_object_size 64 KB
# Global (1GB)
cache_dir ufs /var/spool/squid 1000 16 256

Restart squid and you’re ready to go:

$ sudo /etc/init.d/squid restart

Middle Earth: shared disk

To stop copying everything around all the time I needed a shared disk. The Parallel Virtual File System (PVFS) was my parallel FS of choice, but I also needed something quick to set up for tests, even if not so fast or reliable. For that, I chose NFS. Later I can install PVFS if I need to.

Well, installing NFS on Ubuntu is VERY simple!

Server:

Install the packages:

sudo apt-get install nfs-user-server nfs-common

Then edit the /etc/exports file on the server:

/scratch/global frodo(rw) sam(rw) merry(rw) pippin(rw)

Create the directory, with write permission for the users group:

sudo mkdir -p /scratch/global/
sudo chgrp users /scratch/global/
sudo chmod g+ws /scratch/global/

and start the service:

sudo /etc/init.d/nfs-user-server restart

Client:

Install the package:

sudo apt-get install nfs-common

Edit the /etc/fstab and add the mount point:

gandalf:/scratch/global /scratch/global nfs rw 0 0

Create the directory and mount it:

sudo mkdir /scratch/global/
sudo mount /scratch/global/

That’s just it… really.

Open MPI

Open MPI is the new trend for MPI applications. It promises to deliver a high-quality, MPI-1 and MPI-2 compliant implementation that will supersede all other implementations to date.

Of course, that is far too much to expect from new software, even from such a big project. Not only does it lack documentation and a step-by-step guide to using the system, but it’s not MPI-2 compliant yet and there are still many basic bugs unfixed.

But don’t think it’s bad, because it’s not. The architecture was quite well planned, the code is being carefully written as far as I could see, and it has many options for debugging the runtime and the MPI programs it launches. It also has a component system that lets you add new functionality without patching the main code, which is a great deal for software that aims to become a standard one day.

LAM is being deprecated because most of its team is now working on Open MPI, much like what happened with Mozilla and Firefox. But they make a statement on their pages that isn’t true: “Since it’s an MPI implementation, you should be able to simply recompile and re-link your applications to Open MPI — they should ‘just work.’”

Talking to a friend (the one who found code that didn’t compile straight away), I found out that MPICH2 is still far better for performance and MPI-2 compliance. Also, installing and running LAM here showed me that LAM is still more stable and easier to use than Open MPI.

Let time play its part and see what comes out of it…

Middle Earth: MPI

MPI stands for Message Passing Interface: a way to execute programs across the nodes of a cluster, using a message-passing library for the communication between them. It’s a very powerful library and is now the standard for parallel programs.

Normally I’d choose LAM MPI, as I always have in the past, but I wanted to test MPICH, another very famous MPI implementation.

But what I found was that the MPICH version for Ubuntu is rather old, the online documentation is completely different from what I had, and there was no documentation at all in any Ubuntu package I could find (for instance, my config file was Apache-like while the current one is XML, so I couldn’t even start the service).

Well, I guess the best always wins, and this is the third time I’ve chosen LAM over MPICH for exactly the same reason: installation and documentation.

Installing LAM MPI was very simple. On the master node (gandalf) I installed:

$ sudo apt-get install lam-runtime lam4c2 lam4-dev

And on the execution nodes, just the runtime:

$ sudo apt-get install lam-runtime lam4c2
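
To actually run something, the LAM runtime first has to be booted across the nodes with a host file; presumably that’s what MPEasy’s bw_start (below) does for you, but by hand it looks roughly like this, using my node names:

$ echo gandalf >  ~/lam_hosts   # one host per line...
$ echo frodo  >> ~/lam_hosts    # ...and so on for sam, merry and pippin
$ lamboot -v ~/lam_hosts        # boot the LAM daemons on every host
$ lamnodes                      # check they are all up
$ lamhalt                       # shut the LAM universe down again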

MPEasy

A while ago I developed a set of scripts to help run and sync a LAM MPI cluster when you don’t yet have a shared disk to use within the cluster (still my case), so it’s specially suited to home clusters and to the early days of a more serious cluster, before you’ve had time to set up a shared disk. 😉

So, installing MPEasy is easy: download the tarball, extract it into some directory and set the environment variables in your startup script.

On .bashrc:

export MPEASY=~/mpeasy
export PATH=$PATH:$MPEASY/bin

On .cshrc:

setenv MPEASY ~/mpeasy
setenv PATH $PATH:$MPEASY/bin

And put the node list, one host per line, in $MPEASY/conf/lam_hosts. After that, just running:

$ bw_start

should start your MPI cluster. After that you can run some MPI tests. Go to the $MPEASY/devel/test directory and compile hello.c:

$ mpicc -o hello hello.c
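
For reference, hello.c is just the classic MPI hello world; the actual file shipped with MPEasy may differ, but a minimal version looks something like this:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        /* Start MPI and find out who we are and how many nodes there are */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Hello from node %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }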

Then you need to sync the current devel directory to all nodes:

$ bw_sync

And run:

$ bw_run 10 $MPEASY/devel/test/hello

You should be able to do the same for all the other codes in there; just remember to sync before running, otherwise you’ll have an outdated version on the nodes and you’ll run into problems. On a shared-disk setup this wouldn’t be an issue, of course.

PI Monte Carlo – Distributed version

In April I published an entry explaining how to calculate PI using the most basic Monte Carlo algorithm, and now, using the Middle Earth cluster, I can do it in parallel.

Parallelizing Monte Carlo is a very simple task because of its random and independent nature, and this basic Monte Carlo is even simpler: I can run exactly the same routine as before on all nodes and, at the end, sum everything and divide by the number of nodes. To achieve that, I only had to change the main.cc file to use MPI; quite simple indeed.

The old main.cc just called the function and printed the value:

    area_disk(pi, max_iter, delta);
    cout << "PI = " << pi << endl;

But the new version needs to know whether it’s the main node or a computing node. Then all the computing nodes calculate the area, and the main node gathers and sums the results:

    /* Nodes, compute and return */
    if (myrank) {
        area_disk(tmp, max_iter, delta);
        MPI_Send( &tmp, 1, MPI_DOUBLE, 0, 17, MPI_COMM_WORLD );

    /* Main, receive all areas and sum */
    } else {
        for (int i=1; i < size; i++) {
            MPI_Recv( &tmp, 1, MPI_DOUBLE, i, 17, MPI_COMM_WORLD, &status );
            pi += tmp;
        }
        pi /= (size-1);
        cout << "PI = " << pi << endl;
    }

In MPI, myrank tells you this node’s number and size gives you the total number of nodes. In the most basic MPI program, if myrank is zero you’re the main node; otherwise you’re a computing node.

All the computing nodes calculate the area and MPI_Send the result to the main node. The main node waits for all the responses in its loop, adds each temporary result tmp to pi and, at the end, divides by the number of computing nodes.
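
For context, here is a minimal, self-contained sketch of how the whole main.cc could look. The area_disk() below is a hypothetical stand-in for the original routine (it just throws random points at the unit square), and max_iter/delta are made-up values; only the if/else block is taken from the real code above.

    #include <iostream>
    #include <cstdlib>
    #include <ctime>
    #include <mpi.h>
    using namespace std;

    /* Stand-in for the original area_disk(): estimate PI by counting how
     * many random points of the unit square fall inside the quarter disk. */
    void area_disk(double &pi, long max_iter, double delta)
    {
        long hits = 0;
        for (long i = 0; i < max_iter; i++) {
            double x = drand48(), y = drand48();
            if (x * x + y * y <= 1.0)
                hits++;
        }
        pi = 4.0 * (double)hits / (double)max_iter;
    }

    int main(int argc, char *argv[])
    {
        int myrank, size;
        double pi = 0.0, tmp = 0.0;
        long max_iter = 10000000;   /* made-up value */
        double delta = 0.0;         /* unused by this stand-in */
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Seed each node differently so they don't all draw the same points */
        srand48(time(0) + myrank);

        /* Nodes, compute and return */
        if (myrank) {
            area_disk(tmp, max_iter, delta);
            MPI_Send( &tmp, 1, MPI_DOUBLE, 0, 17, MPI_COMM_WORLD );

        /* Main, receive all areas and sum */
        } else {
            for (int i=1; i < size; i++) {
                MPI_Recv( &tmp, 1, MPI_DOUBLE, i, 17, MPI_COMM_WORLD, &status );
                pi += tmp;
            }
            pi /= (size-1);
            cout << "PI = " << pi << endl;
        }

        MPI_Finalize();
        return 0;
    }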

Benefits:

This Monte Carlo is extremely basic and very easy to parallelize. As the same routine runs on N computing nodes and there’s no dependency between them, you should get a speed-up of close to N times over the non-parallel version.

Unfortunately, this algorithm is so slow and inaccurate that even running on 9 computing nodes (i.e. roughly 9 times faster) it’s still wrong at the third digit.

The slowness is down to the algorithm’s naivety, but the inaccuracy is due to the lack of a really good standard random number generator: almost every machine yielded a result far from the first five digits of the C standard library’s M_PI macro, and so did the combined result. Besides, there are so many other, much faster ways of calculating PI that this would never be a good approach anyway!

The good thing is that it was just meant to show a distributed Monte Carlo algorithm working… 😉

Middle Earth: Moving freely

So, talking about the patch hack reminded me to say a word about an important thing when building clusters: moving around. If you have hundreds of nodes and have to update one config file, would you like to type your admin password hundreds of times?

So, the simple way of doing it in a controlled environment is to use passwordless SSH keys and passwordless sudo for certain tasks.

SSH keys: when you SSH to another computer you normally have to type a password, but there’s another way of authenticating, and that is a trusted DSA/RSA key. The key is created with the ssh-keygen tool:

$ ssh-keygen -t dsa -b 1024

It’ll ask for a passphrase, and that’s where you just hit ENTER. This creates two files in your ~/.ssh directory: id_dsa and id_dsa.pub. The public file should be copied to each node’s ~/.ssh directory and renamed authorized_keys. That’s it: SSH to the node and check that it no longer asks for a password.

$ ssh node mkdir .ssh  (type password)
$ scp .ssh/id_dsa.pub node:.ssh/  (type password)
$ ssh node mv ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys  (type password)
$ ssh node   (won't ask for a password)
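
If your OpenSSH installation ships the ssh-copy-id helper, the three copy steps above collapse into one (it appends the key to authorized_keys instead of renaming the file, which works just the same):

$ ssh-copy-id -i ~/.ssh/id_dsa.pub node   (type password, once)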

Sudo rules: sudo lets you execute things as root without being root, but root must allow that, and the way to allow it is to add you to the /etc/sudoers file. Ubuntu already puts you in sudoers if you provided your username during installation. If not, you can add yourself by editing the file with visudo:

$ sudo visudo

On Ubuntu’s sudoers the relevant line looks something like this:

%admin  ALL=(ALL) ALL

And the quick solution is to change it to this:

%admin  ALL=NOPASSWD: ALL

Saving and closing the editor (:x in vi) will update sudoers, and you’ll be able to run everything as root without typing a password. BEWARE! This approach is very, very, very insecure, so make sure all your machines are completely separated from your network, otherwise it will compromise the entire network.

Disclaimer: I can live with this because it doesn’t really open a new hole here: if someone breaks into one of the machines through some other security hole, they effectively get all the nodes anyway, since they all share exactly the same configuration, so there’s no point trying to make one node more secure than the others.
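
That said, if you only need passwordless sudo for a handful of commands, sudoers lets you narrow the rule down to them instead of allowing everything; a sketch, with an illustrative command list:

%admin  ALL=(ALL) NOPASSWD: /sbin/halt, /sbin/reboot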

So, with all that said, it’s very simple to shut down the cluster:

for node in `cat /etc/cluster`; do
    ssh $node sudo halt
done

no passwords, no words, just a quick halt.

Middle Earth: hacks

OK, OK, I admit… I tried to stay away from hacks and non-packaged things, but I was too lazy to go hunting for the best configuration manager when all I needed was to distribute my files around and execute the same command on all the nodes of the cluster, so I cheated!

At least it was a very small cheat, and I still want to do it the right way, someday… 😀

I needed multiple execution and multiple copy, so I created a file called /etc/cluster containing all my nodes’ names:

frodo
sam
merry
pippin

Then I made an extremely simple script (let’s call it cexe) to read it and execute a command on every node:

#!/bin/sh
# Run the given command on every node listed in /etc/cluster
CMD="$*"
for node in `cat /etc/cluster`; do
    ssh -t $node $CMD
done

I also did the same with scp, plus the joy of the day: patch! Yes, I quite liked it: a script (cpatch) to which you give the original path on the nodes and a patch file; it copies the patch to your home directory on each node and applies it to the original file, using the two simple scripts previously made to copy and run.
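
I won’t paste the real cpatch here, but a rough sketch of the idea, with the copy and run steps inlined and the file names purely illustrative, would be:

#!/bin/sh
# cpatch (sketch): copy a patch to every node and apply it to the original file
# Usage: cpatch /path/on/nodes/file ~/change.patch
FILE="$1"
PATCH="$2"
for node in `cat /etc/cluster`; do
    scp "$PATCH" $node:patch.tmp
    ssh -t $node "sudo patch $FILE patch.tmp && rm patch.tmp"
done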

Then it became easy to administer the cluster’s configuration:

$ diff -e /etc/mon/mon.cf /etc/mon/mon.cf.original > ~/mon.patch
$ cpatch /etc/mon/mon.cf ~/mon.patch
$ cexe /etc/init.d/mon restart

Ludicrously simple, isn’t it?! 😀

Of course, to do that I needed to create two things: passwordless ssh and passwordless sudo on each node. But that’s another story.

Middle Earth: Monitoring

My first choice was Ganglia, as it’s what OSCAR and Rocks use, but the Ubuntu version is very old (2.5.7 while the current one is 3.0.3), the configuration file is completely different, and the 2.5.7 man page is empty. Nagios and mon were my second choices, but mon seemed much simpler, and having used Nagios before I didn’t find it very straightforward to configure either.

So I ended up with mon. Mon is stable software, written in Perl and available from kernel.org. It’s so simple to configure and customize that I spent more time installing the packages.

So, on Ubuntu, what you need is:

$ sudo apt-get install mon fping

Edit /etc/mon/mon.cf and put this in it:

hostgroup cluster [cluster IPs separated by space]
watch cluster
        service Ping
                interval 1m
                monitor fping.monitor -r 5 -t 2000
                period wd {Mon-Sun} hr {0am-24pm}
                        alertafter 1
                        alertevery 4h
                        alert mail.alert [your email]

And restart the service:

$ sudo /etc/init.d/mon restart

If you run “monshow --full” you’ll see the status of your Ping check. If you want a better (though not that much better) interface, you can run monshow as a CGI, and for that you’ll need Apache.

$ sudo apt-get install apache

And then symlink monshow as a CGI in Apache’s cgi-bin directory:

$ sudo ln -s /usr/bin/monshow /usr/lib/cgi-bin/monshow.cgi

Then just point your browser at “http://[machine-ip]/cgi-bin/monshow.cgi” and it should show you some HTML with the status of your health checks. I changed monshow to always show me everything, so in the $CF variable I set both “full” and “show-disabled” to 1 (true).

Extending and customizing

You can check all the available monitors in /usr/lib/mon/mon.d/ and even create your own if you know a little bit of Perl: copy an existing monitor and edit it to your needs. Your monitor can also take arguments, just as fping.monitor has its own; it’s very simple.
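
They don’t even have to be Perl: any executable following mon’s convention (the hostgroup’s hosts arrive as arguments, a nonzero exit status means failure, and the first line of output is used as the summary) should do. A hypothetical sketch that complains when a node’s load average gets too high (the name and threshold are made up):

#!/bin/sh
# load.monitor (sketch): fail if any node's 1-minute load average exceeds LIMIT
LIMIT=8
failed=""
for host in "$@"; do
    load=`ssh $host cat /proc/loadavg | awk '{print $1}'`
    if [ `echo "$load > $LIMIT" | bc` -eq 1 ]; then
        failed="$failed $host"
    fi
done
if [ -n "$failed" ]; then
    echo "load too high on:$failed"
    exit 1
fi
exit 0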

Middle Earth: Ubuntu Installation

My machines were once computing nodes of a bigger cluster, so none of them had a CD or floppy drive, nor any easy way of adding one. They also had nothing on their hard drives except temporary files, and even those were barely used… they were almost diskless nodes booting over the network, so to speak.

I then had two choices: fit a CD drive and boot from it to install Ubuntu, or boot over the network with PXE. The second one was my choice.

First, you’ll need a PXE boot server so your boxes can connect and receive the boot files. This is very simple because the PXE protocol is, in a nutshell, a very simple DHCP plus an even simpler TFTP file server.

To set up the DHCP server you need to:

$ sudo apt-get install dhcp

and change /etc/dhcpd.conf to contain just this:

subnet 0.0.0.0 netmask 0.0.0.0 {
      range 192.168.1.10 192.168.1.20;
      option routers 192.168.1.254;
      filename "pxelinux.0";
}

and remember to set the IP range of your network and your router correctly. Restart the DHCP daemon:

$ sudo /etc/init.d/dhcp restart

Now you need to install the TFTP server. If you try to get the regular TFTP package you’ll fail, because it doesn’t implement the PXE boot correctly (it’s missing some basic features), so use TFTP HPA instead:

$ sudo apt-get install tftpd-hpa xinetd

and add a file called “tftp” to your /etc/xinetd.d/ directory with the following content:

service tftp
{
      socket_type             = dgram
      protocol                = udp
      wait                    = yes
      user                    = root
      server                  = /usr/sbin/in.tftpd
      server_args             = -s /srv/tftp
      disable                 = no
      per_source              = 11
      cps                             = 100 2
      flags                   = IPv4
}

and restart the service:

$ sudo /etc/init.d/xinetd restart

You’ll need to create the /srv/tftp directory to hold pxelinux.0 and all the other files needed to boot:

$ sudo mkdir /srv/tftp

OK, now we have both services working but no files! Here’s how to get them:

$ cd /srv/tftp
$ URL=http://archive.ubuntu.com/ubuntu
$ URL=$URL/dists/dapper/main/installer-i386
$ URL=$URL/current/images
$ sudo lftp -c "open $URL; mirror netboot/"
$ sudo mv netboot/* .
$ sudo rmdir netboot
$ sudo tar zxf netboot.tar.gz

With the files in place, boot your node and you should get a nice Ubuntu screen asking which installation to boot. Just type “server” and ENTER the realm of Ubuntu.

Some portions were taken from http://wiki.koeln.ccc.de/index.php/Ubuntu_PXE_Install, but I couldn’t make it work just by following those steps. You may have to look here and there to make your system work properly.