How close is nano-computing?

In September, Sunny Bains wrote Why Nano still macro?, and since then I’ve been thinking about it every once in a while.

Recently, a study at the University of California showed how to create a demodulator using nanotubes. So far there have been advances in memory devices (such as this and that) and also in batteries, but all of them, as Sunny points out, try to build small structures following the design of big things.

Quantum computation nowadays has exactly the same problem: quantum effects in a classical assembly, big, clumsy and very expensive. If it took a quantum effect (the transistor) to make classical computing cheap and widely available, what will it take to make quantum computers cheap? A superstring effect? Something messing around with the Calabi–Yau shape of the six additional dimensions?

Anyway, back to nanotech: building a nano-battery is cool, but using ATP as the primary energy source would be much cooler! Using the available nano-gears and nanotubes to make a machine is also cool, but building a single 2,3 Turing machine (recently proven to be universal) would be way better!

Once you have an extremely simple processor like that, a nano-modem, some storage and ATP as food, you can do whatever you want, for as long as you like, inside any living being on Earth. Add a few gears to make a propeller and you’re mobile! 😉

Of course it’s not that simple, but most of the time stating that something is viable means exactly the same as saying that it’s classic, as in boring, clumsy, expensive, brute force… well, you get the idea…

Nvidia helps crackers?

Their long-standing support for the minority is well appreciated by us Linux users, but now they’re indirectly supporting the bad guys as well! No need to panic, though: every major breakthrough comes with a proportional cost (e.g. nuclear physics).

According to The Register, this company is using NVidia GPUs to cut password-cracking times from months to days!

The new CUDA platform lets you use the GPU for numeric processing, giving it a big advantage over the overly generic (and overly complex) CPU.

Now, just between us, they can’t say they didn’t know it was going to happen, can they? No one ever said weak password schemes (even with strong, public encryption algorithms) were safe…

Recursive patents

IBM once had great innovators working for them, many holding Nobel prizes, but for a while they hadn’t had a great idea… until NOW!

It’s a genius idea that will revolutionize the whole patent scheme: they’re filing a patent on getting money out of patents.

Quoting The Register: “If Big Blue gets its way, Microsoft’s promises to Novell and Xandros not to sue over alleged infringements of its Windows patent portfolio ought to mean Redmond pays a kickback to IBM.”

If that doesn’t change the completely stupid and out-of-this-world patent system in the US, I don’t know what will…

LSF, Make and NFS

At work I use LSF, a very good job scheduler. To parallelize my jobs I use Makefiles (with the -j option), and inside every rule I run the command through the job scheduler. Some rules call other Makefiles, cascading the spawning of jobs even further. Sometimes I get 200+ jobs running in parallel.
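For the record, the layout looks roughly like this (a simplified sketch, not one of the real Makefiles: the chunk names are invented for illustration, and $(lsf_submit) and $(my_program) are the same variables you’ll see in the rules further down):

# top-level Makefile, run with something like make -j 50
all: chunk1.out chunk2.out more

chunk1.out:
    $(lsf_submit) $(my_program) chunk1.in > chunk1.out

chunk2.out:
    $(lsf_submit) $(my_program) chunk2.in > chunk2.out

# some rules cascade into other Makefiles, spawning even more jobs
more:
    $(MAKE) -f Makefile.more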

Our BlueArc shared disk is also very good, with access times quite often faster than my local disk, and yet, for almost two years I’ve seen some odd behaviour when putting it all together.

I’ve reported random failures in processes that had worked fine until then and, without any modification, worked fine ever after. But not long ago I finally figured out what the problem was… NFS refresh speed vs. LSF spawn speed when using Makefiles.

When your Makefile looks like this:

bar.gz:
    $(my_program) foo > bar
    gzip bar

there isn’t any problem, because as soon as bar is created, gzip can run and create the .gz file. Plain Makefile behaviour, nothing to worry about. But then, when I changed it to:

bar.gz:
    $(lsf_submit) $(my_program) foo > bar
    $(lsf_submit) gzip bar

things started to go crazy. Once every few months, one of my hundreds of Makefiles would just die with:

bar: No such file or directory
make: *** [bar.gz] Error 1

And what’s even weirder, the file WAS there!

During the period when these magical problems were happening (luckily I was streamlining the Makefiles every day, so I could just restart the whole thing and it would run as planned), I had another problem, quite common when using NFS: the stale NFS handle.

I have my CVS tree on the NFS filesystem, and when testing some Perl scripts between AMD Linux and Alpha OSF machines I used to get these errors (the NFS cache was being updated) and had to wait a bit, or just try again, in most cases.

It was then that I figured out what the big random problem was: the stale NFS handle! Because the Makefile’s commands were running on different computers, the NFS cache took a few milliseconds to update, and the LSF spawner, berserk for performance, started the new job way before NFS could reorganize itself. That’s why the file was there after all: it was on its way, and the Makefile crashed before it arrived.

The solution? Quite stupid:

bar.gz:
    $(lsf_submit) "$(my_program) foo > bar" && sleep 1
    $(lsf_submit) gzip bar

I’ve put it in every rule that has more than one command being spawned by LSF and have never had this problem again.

The smart reader will probably tell me that not only is it ugly, it doesn’t cover all cases at all, and you’re right, it doesn’t. A stale NFS handle can take more than one second to clear, single-command rules can break on the next hop, etc., but because there is some processing between them (rule calculations are quite costly; run make -d and you’ll see what I’m talking about) the probability is too low for today’s computers… maybe in ten years I’ll have to put sleep 1 in all rules… 😉
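If it ever does become a real problem, a slightly more defensive variant would be to poll for the file instead of sleeping blindly, something along these lines (just a sketch, using the same variables as above and an arbitrary five-second limit):

bar.gz:
    $(lsf_submit) "$(my_program) foo > bar"
    # wait (up to 5 seconds) until the local NFS cache shows the file, fail otherwise
    for i in 1 2 3 4 5; do test -e bar && break; sleep 1; done; test -e bar
    $(lsf_submit) gzip bar

The other obvious escape would be to glue the two commands into a single submission, so they run on the same host and never touch another machine’s NFS cache, but then you lose the per-step granularity in LSF.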

Officially supporting piracy

In an earlier post I said that Microsoft actively supports piracy to increase its market share, and some readers tagged it as conspiracy theory, or at least complete rubbish.

But this time it comes from the devil itself: Internet Explorer 7 (previously locked to owners of authentic copies) was released to the general (pirate) public.

It’s so obvious I can’t believe there are still people who buy the claim that they’re really fighting piracy.

Yet another supercomputer

SiCortex is about to launch their cluster-in-a-(lunch)-box, with a promo video and everything. It seems pretty nice, but some things worry me a bit…

Of course a highly interconnected backplane and some smart shortest-path routing algorithms (probably not as good as Feynman’s) are much faster (and more reliable?) than gigabit Ethernet (Myrinet too?). Of course, all-in-one-chip technology is much faster, safer and more economical than any HP or IBM 1U node money can buy.

There is also some eye candy, like a pretty nice external case, dynamic resource partitioning (like VMS), a native parallel filesystem, MPI-optimized interconnects and so on… but do you remember the Cray-1? Those were wonderful vector machines, but in the end they were so complex and monolithic that everyone got stuck with them and eventually stopped using them.

Is assembling a 1024-node Linux cluster with PC nodes, Gigabit, PVFS, MPI, etc. hard? Of course it is, but the day Intel stops selling PCs you can use AMD (and vice versa), and you won’t have to stop using the old machines until you have a whole bunch of new ones up and running, transparently integrated with your old cluster. If you do it right you can have a single Beowulf cluster running Alphas, Intel, AMD, Suns, etc.; just take care of the paths and the rest is done.

I’m not saying it’s easier, or cheaper (air conditioning, cabling and power costs can be huge), but being locked to a vendor is not my favourite state of mind… Maybe it would be better if they had smaller machines (say 128 nodes) that could be assembled into a cluster while still allowing external hardware to be connected, with intelligent algorithms that understand the cost of migrating processes to external nodes (based on network bandwidth and latency). Maybe that could even ease their entry into existing clusters…