Uno score keeper

With spring not coming any time soon, we had to improvise during the Easter break, and we ended up playing Uno every night. It’s a lot of fun, but finding a piece of clean paper and a pen that works around the house can take quite a while, so I wondered if there was an app for that. It turns out there wasn’t!

There were several apps to keep card game scores, but every one was specific to a single game, had ads, and wanted access to the Internet, so I decided it was worth writing one myself. Plus, it would finally get me to learn to write Android apps, something I had been putting off for years.

The App

(Screenshots: adding new players; card game scores.)

The app is not just a Uno score keeper; it’s actually pretty generic. You just keep adding points until someone passes the threshold, at which point the poor soul is declared the winner or the loser, depending on how you set up the game. Since we’re playing every night, even the 30 seconds I spent re-typing our names was adding up, so I made it save the last game in Android’s key-value store, and you can retrieve it via the “Last Game” button.
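For the curious: on Android, this kind of save/restore maps naturally onto SharedPreferences. A minimal sketch of the idea (the class, file and key names here are hypothetical, not the app’s actual code):

    // Minimal sketch: saving/restoring the last game's player names with
    // SharedPreferences. Class, file and key names are hypothetical.
    import android.content.Context;
    import android.content.SharedPreferences;
    import android.text.TextUtils;

    public class LastGameStore {
        private static final String PREFS = "last_game";     // hypothetical file name
        private static final String KEY_PLAYERS = "players"; // hypothetical key

        public static void save(Context ctx, String[] players) {
            SharedPreferences prefs = ctx.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
            prefs.edit().putString(KEY_PLAYERS, TextUtils.join(",", players)).commit();
        }

        public static String[] load(Context ctx) {
            String stored = ctx.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
                               .getString(KEY_PLAYERS, "");
            return stored.length() == 0 ? new String[0] : stored.split(",");
        }
    }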

It’s also surprisingly easy to use (I had no idea), and if you go back and forth inside the app, it clears the game and starts a new one with the same players, so you can play as many rounds as you want. I might add a button to restart (or leave the app) when there’s a winner, though.

I’m also thinking about printing the names in order at the end (from winner to loser), and some other small changes, but as it is, it’s good enough to advertise and see what people think.

If you end up using it, please let me know!

Download and Source Code

The app is open source (GPL), so rest assured there are no tricks or money involved. Feel free to download it from here, and get the source code on GitHub.

What I don’t miss about Java

Disclaimer: This is not a rant

I spent the last year working with Java, and it was not bad at all. But while Java has its moments and does shine in places, I always felt a bit out of place when using it. In fact, when I moved back to C++, unlike when I moved to Java, I felt that I actually wasn’t missing much…

Last year, while writing Java at work, I felt compelled more often than usual to write C++ programs at home. Even simple programs that would have been better served by a scripting language all came out in C++.

Recently, working full time with C++, I noticed I’m doing very little home development and definitely not doing any Java. So, what did I miss about C++ that I don’t miss about Java?

Expressiveness: While functional languages are much more expressive than C++, there are few languages less expressive than Java. Java encourages child-like programming, such as forcing you to call everything through methods rather than operators. By removing explicit pointers, operator overloading and other dangerous things from C++, you end up repeating yourself quite a lot, and it’s very hard to follow the logic afterwards, when all you have is bloated code.

While the Java designers tried to avoid pointers and operators, they couldn’t. We still have null references (throwing NullPointerExceptions) and the fake operators (like toString(), hashCode(), compareTo()) that can easily be overridden to change the expected behaviour in much the same way as C++ operators, just in “method” notation.
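As a toy illustration of my own (not from any real codebase), here is how overriding those “method operators” can subvert a container’s behaviour, much like a mischievous operator== in C++:

    import java.util.HashSet;
    import java.util.Set;

    // Toy example: a class whose "operators" lie, breaking HashSet semantics.
    class Sneaky {
        @Override public boolean equals(Object o) { return true; } // everything is "equal"
        @Override public int hashCode() { return 42; }             // everything collides

        public static void main(String[] args) {
            Set<Sneaky> set = new HashSet<Sneaky>();
            set.add(new Sneaky());
            set.add(new Sneaky());
            System.out.println(set.size()); // prints 1: the second add was a "duplicate"
        }
    }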

In the end, you can still do some of the bad things, just not all of them. So they took away the danger by taking away functionality, without a proper redesign of what C++ got wrong.

Abuse of Object Orientation: While in Ruby everything is an object, in Java almost everything can be. Every class silently derives from Object, but primitive types do not. So you have the wrapper objects (Integer et al.), which get automatically converted to and from the primitive types in subtle ways that are hard to predict and can have a huge performance impact (see auto-boxing).

It’s not just performance, though: the language design is, again, incomplete.
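A quick toy benchmark of my own illustrates the point (exact numbers will vary by JVM): the boxed loop allocates a new Integer on every iteration, while the primitive loop is plain arithmetic.

    // Toy benchmark: the boxed loop boxes/unboxes on every iteration,
    // allocating millions of temporary Integer objects.
    public class Boxing {
        public static void main(String[] args) {
            long start = System.nanoTime();
            Integer boxedSum = 0;                  // wrapper type
            for (int i = 0; i < 10000000; i++) {
                boxedSum += i;                     // unbox, add, box a new object
            }
            System.out.println("boxed:     " + (System.nanoTime() - start) / 1000000 + " ms");

            start = System.nanoTime();
            int primitiveSum = 0;                  // primitive type
            for (int i = 0; i < 10000000; i++) {
                primitiveSum += i;                 // plain arithmetic
            }
            System.out.println("primitive: " + (System.nanoTime() - start) / 1000000 + " ms");
        }
    }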

Most OO programmers (mainly Java ones) complain a lot about Perl OO. They say Perl (or Python for that matter) has no proper OO, since everything is a hash and there is no concept of protection.

While Java objects and members are strongly typed, and you have the concept of protection, it’s way too easy to transform Java OO into Perl OO with reflection.

Of course, with C++ you can cast things to void pointers, mess with memory and so on, but fetching objects by name and stripping away private protection in a “safe” way is simply wrong. It’s like giving loaded guns to children and telling them where the safety catch is.
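To make it concrete, here’s a toy example (mine, not from any library) of reflection politely ignoring private:

    import java.lang.reflect.Field;

    // Toy example: reflection turning Java OO into Perl OO.
    class Vault {
        private String secret = "s3cr3t";
    }

    public class BreakIn {
        public static void main(String[] args) throws Exception {
            Vault vault = new Vault();
            Field field = Vault.class.getDeclaredField("secret");
            field.setAccessible(true);            // waive the private protection
            System.out.println(field.get(vault)); // prints s3cr3t
            field.set(vault, "overwritten");      // and we can write to it, too
        }
    }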

Abuse of Design Patterns: Java developers are encouraged to use design patterns, to the point of stupidity. The first thing I learnt about design patterns is that their misuse is itself an anti-pattern.

Properties are important when requirements change often, not when they’re static. Factories are for when the objects created may differ or be customized, not for the construction of a single, never-changing object. Still, most libraries (all?) have Factories, Properties and so on, just for the sake of Design Patterns Compliance™.
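A caricature of my own of the ceremony I mean: a factory wrapping a constructor that could never produce anything else.

    // Caricature: Design Patterns Compliance (TM). The factory adds nothing
    // over "new Connection()", since there is only ever one kind of Connection.
    class Connection {
        // real connection logic would live here
    }

    class ConnectionFactory {
        public static Connection createConnection() {
            return new Connection();  // the only thing it can ever do
        }
    }

    public class Ceremony {
        public static void main(String[] args) {
            Connection direct = new Connection();                      // says what it does
            Connection blessed = ConnectionFactory.createConnection(); // same, with ritual
        }
    }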

In fact, one of the strengths of Java development is that everyone is encouraged to do things the same way. No Larry Wall style; all factory workers, each doing their share of the big picture. While this is good for big, fast projects at companies with high turnover (like consultancies), it’s horrible for start-ups or more creative development.

Half-implemented features: Well, templates are an issue. There is no real template mechanism in Java. With the so-called generics (like a cheap knock-off of the real medicine), the types are checked at compile time but erased at run-time, so what’s left is little more than syntax sugar over collections of Objects.

That generates a lot of misunderstanding, and a lot of bad code that compiles cleanly because the syntax looks obviously correct, as it would be if the types were actually checked all the way through.
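A small toy demonstration of erasure in action: a wrong type sneaks into a “typed” list, and the failure surfaces far from the scene of the crime.

    import java.util.ArrayList;
    import java.util.List;

    // Toy example: type erasure. The raw-type alias lets an Integer into a
    // List<String>; the failure only surfaces at the innocent-looking get().
    public class Erasure {
        public static void main(String[] args) {
            List<String> strings = new ArrayList<String>();
            List raw = strings;         // legal, with only a compiler warning
            raw.add(42);                // heap pollution, no error here
            String s = strings.get(0);  // ClassCastException at run-time
        }
    }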

Again, an incomplete design for the sake of backward compatibility with old code and old VMs.

Performance: Running in a JVM is already a bad start for performance, but a good compiler and a well-made JIT environment can win most of it back by intelligently removing unused code, re-optimizing the hottest code at run-time and using profiling results to improve branch prediction.

While the JVM does some of this, it also introduces several problems that throw the advantage away and put it back at the bottom of the class. Auto-boxing and generics create a lot of useless casts, which can be a huge performance hit. Very few Java programmers really care about this, and the compiler doesn’t do a good job of reducing the impact, or even of warning the programmer.

I often see Java developers scoff at performance issues. The phrase used most is “a programmer shouldn’t care about memory footprint or performance, only about business logic”. That, together with the fact that almost all universities now teach Java in undergraduate courses, frightens me a bit.

Strong dependency on IDEs: Borland made quite a lot of money out of C++ IDEs in the ’90s, but most C++ programmers I know still use Vim or Emacs. On the other hand, every Java programmer I know uses Eclipse, IntelliJ or something of the sort.

This is not just about ease of use (code completion, syntax colouring, hints, navigation); it’s about speeding up the development process by generating the boilerplate for you and automating refactoring.

IDEs are capable of writing complete pieces of code, refactoring and re-writing things (even behind your back). The programmers stop caring, and the code becomes bloated, unintelligible and forgotten. Not to mention the tendency of IDEs, and of people following IDE style, to apply certain patterns to everything, like using Properties where simple structures would suffice (see Abuse of Design Patterns, above).

False Guarantees: The big selling point of Java, besides cheap cross-platform development, is its apparent safety and ease of use. But it isn’t safe or easy, on so many levels…

The abuses and problems described above are only part of the story. The garbage collector is another…

A good garbage collector can help in the initial development of a program, and it does relieve lazy programmers of managing their own memory, but Java’s garbage collection has become a beast, with incomprehensible command-line options, undefined behaviour and a total lack of control over it. You’re rendered hostage to its desires.

Not to mention a memory manager that won’t adapt to the memory actually available. I mean, if you want to make memory management easy for programmers (having gone to all that trouble for a garbage collector), you could have gone a bit further, actually figured out the available memory, and used it politely.

Add to that the fact that pointers and operators are still there, just in disguise, and you have a language that is not that much simpler than C++, at a huge price in performance and weirdness.

Undocumented APIs: Java claims to be platform-independent, yet it has quite a few available (but undocumented) APIs for platform-specific functionality (like signals). Still, Sun (now Oracle) reserves the right to change them whenever it wishes, and there’s little you (or anyone) can do about it.

And that takes us to the final point:

Standards (or lack thereof): Sun did a nice job on many things (mostly hardware and OS), but they screwed up royally when it came to supporting software. There was no standard: IBM and even Microsoft created their own JVMs (which were better than Sun’s, by the way) without any final definition of the standard API. In the Java 1.1 days, it was possible to be platform-agnostic but VM-specific on the very same platform!

Conclusion: Java was meant to be an easy language, but it turns out that it’s deceitful enough to be just as bad as any other. And recent changes are making it worse.

Programmers are losing the ability to understand how the machine works, how their languages behave and, more importantly, to know the implications of their actions.

Why spend time understanding the fiddly details some people bolted onto Java when you can spend the same time understanding how machines actually work, and thereby be able to use any programming language you want?

Some argue that Java is the new Cobol and will disappear the same way… I tend to agree…

Humble Bundle

I’m not normally one to do reviews or ads, but this one is well worth doing. The Humble Bundle is an initiative hosted by Wolfire Studios, in which five other studios (2D Boy, Bit Blot, Cryptic Sea, Frictional Games and the recently joined Amanita Design) bundled their award-winning indie games together, alongside two charities (the EFF and Child’s Play), and you pay whatever you want, shared amongst them.

All the games work on Linux and Mac (as well as Windows), are of excellent quality (I loved them) and would cost around 80 bucks separately. The average price paid for the bundle is around $8.50, but some people have paid $1,000 already. Funny, though, that they now break the average down per platform: Linux users pay $14 on average, Windows users $7, with Mac in between. A clear message to the professional game studios out there, isn’t it?

As for the games, they’re the kind that are always fun to play and don’t try to be more than they should be. There are no state-of-the-art 3D graphics, blood, bullets and zillions of details, but they’re solid, consistent and plain fun. I already had World of Goo (from 2D Boy) and loved it. All the rest I discovered through the bundle, and I have to say I wasn’t expecting them to be that good. The only bad news is that you have just one more day to buy them, so hurry and get your bundle now while it’s still available.

The games

World of Goo: Maybe the most famous of them all; it’s even available for the Wii. It’s addictive and family-friendly, with many tricks and very clever levels to play. The concept is very simple: balls stick to other balls, and you have to reach the pipe to save them. But what they’ve done with that simple concept is a powerful and very clever combination of physical properties that gives the game an extra challenge. What most impressed me was the way physics is embedded in the game. Things have weight and momentum, sticks break if the momentum is too great, some balls weigh less than air and float, while others burn on contact with fire. A masterpiece.

Aquaria: I thought this would be the least interesting of them all, but I was wrong. Very wrong. The graphics and music are very nice and the physics of the game is well built, but the way the game builds up is the best part. It’s a mix of Ecco and Loom, where you’re a sea creature (a mermaid?) who has to sing songs to gain powers or interact with the game. The more you play, the more you discover and the more powerful you become. Really clever, and a bit more addictive than I was expecting… 😉

Gish: You are a ball of tar (not a Unix tarball, though) and have to go through danger-filled tunnels to find your tar girl (?). The story is silly, but the game is fun. You can make yourself slippery or sticky to interact with the maze and with elements that have simple physics, which adds to the fun. There are also enemies to make it more difficult. Sometimes it’s a bit annoying, when it depends more on luck (getting the timing of many things right in a row) than on logic or skill. The save system isn’t the best either: I was on the fourth level and asked for a reset (to restart that level), but it reset the whole thing and sent me back to the first level, which I’m not playing again. The music is great, though.

Lugaru HD: A bloody 3D kung-fu bunny, Lara Croft style. The background story exists more out of necessity than relevance. The idea is to go along skirmishing, cutting jugulars, sneaking up on and knocking down characters as you progress through the game. The 3D graphics are not particularly impressive and the camera is not innovative, but the game has a certain charm for those who enjoy a fight for the sake of fighting. Fun.

Penumbra: If you like being scared, this is your game. It’s rated 16+, and you can see very little while playing. But you can hear things growling and your own heart beating, and the best part is when you see something that scares the hell out of you, and in your despair you give away your hideout. The graphics are good, simple but well cared for. The effects (blurs, fades, night vision, fear) are very well done and in sync with the game and the story. The interface is pretty simple and impressively easy, making the game much more fun than the traditional FPSs I’ve played so far. The best part is that you don’t fight; you hide and run. It reminds me of Thief, where fighting is the last thing you want to do, with the difference that in Thief you could fight; in this one you’re a wimp: if you fight, you’ll most likely die.

Samorost 2: It’s a Flash game, and that’s about all I know. Flash is not particularly stable on any platform, and especially unstable on Linux, so I couldn’t get it to run on the first attempt. For me, and for most gamers I know, a game has to just work. This is why it’s so hard to play early open source games: you’re looking for a few minutes of fun, not for an evening of fiddling with your system. I’ve spent more time writing this paragraph than trying to play Samorost, and I’ll only try it again if I upgrade my Linux (in the hope that the Flash problem goes away by itself). A pity.

Well, that’s it. Go and get your Humble Bundle; it’s well worth it, and you help some other people in the process. Helping indie studios is very important to me. First, it levels the playing field and helps them grow. Second, they tend to be much more platform-independent, and decent games for Linux are scarce. Last, they tend to have the best ideas. Most game studios license one or two game engines and create dozens of similar games with them, hoping to get more value for their money, and they tend to stick with the ideas that currently sell instead of innovating.

By buying the bundle you are, at the very least, helping to make better games happen in the future.

Barrelfish

Minix seems to be inspiring more operating systems nowadays. Microsoft Research, together with ETH Zurich, is investing in a micro-kernel (they call it a multi-kernel, as there are slight differences) called Barrelfish.

Despite being Microsoft’s, it’s BSD-licensed. The mailing list looks pretty empty, the last snapshot is from half a year ago, and I couldn’t find an svn repository, but that’s still more than I would expect from Microsoft anyway.

Multi-kernel

The basic concept is actually very interesting. The idea is to take multi-core, hybrid machines to the extreme and still be able to run a single OS across them, pretty much the same way some cluster solutions do (OpenMPI, for instance), but on a single machine. The idea is far from revolutionary: it’s a natural evolution of the multi-core trend combined with cluster solutions (available for years) and a fancy OS design (the micro-kernel) that everyone learns about in CS degrees.

What’s the difference, then? For one thing, the idea is to abstract everything away. CPUs will be just another piece of hardware, like network or graphics cards. The OS will have the freedom to ask the GPU to do MP floating-point calculations, for instance, if it thinks that will cut the total execution time. It will also be able to accept different CPUs in the same machine, Intel and ARM for instance (like the Dell Latitude Z600), or different GPUs, nVidia and ATI, and still use all the hardware.

With Windows, Linux and Mac today, you either use the nVidia driver or the ATI one. You also don’t normally have hybrid-core machines, and you absolutely can’t recover if one of the cores fails. This is not the case with cluster solutions, and Barrelfish’s idea is to simulate precisely that. In theory, you could do power management (enabling and disabling cores), crash recovery when one of the cores fails but the others don’t, or plug-and-play graphics and network cards, and even different CPUs.

Imagine you have an ARM netbook that is great for browsing, but you want to play a game on it. You get your nVidia card and a coreOcta 10GHz USB4 and plug them in. The OS recognizes the new hardware, loads the drivers and lets you play your game. Battery life goes down, so once you’re done with the game, you just unplug the cards and continue browsing.

Scalability

So, how can Barrelfish be that malleable? The key is communication. Shared memory is great for threaded code within a single process, and acceptable for multi-process OSs with a small number of concurrent processes accessing the same region of memory. Most modern OSs can handle many concurrent processes, but those rarely access the same data at the same time.

Normally, processes are single-threaded or have a very small number of threads (dozens) running. More than that is so difficult to control that people usually fall back on other means, such as client/server, or just go out and buy more hardware. In clusters, there is no way to use shared memory. For one thing, accessing another computer’s memory over the network is just plain stupid, and even if you use shared memory within each node and client/server between different nodes, you’re bound to run into trouble. This is why MPI solutions are so popular.

In Barrelfish there’s no shared memory at all. Processes communicate with each other via messages and duplicate content (rather than share it). There is an obvious associated cost (memory and bus traffic), but the lock-free semantics are worth it. It also gives Barrelfish another freedom: to choose a communication protocol generic enough that each piece of hardware is completely independent of all the others, and plug’n’play becomes seamless.
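As a toy illustration of the principle (in Java, and nothing to do with Barrelfish’s actual implementation), the two threads below coordinate by passing messages over queues instead of locking shared state:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Toy illustration of message passing: the worker and the main thread
    // exchange messages over queues, so no lock guards shared state.
    public class Messages {
        public static void main(String[] args) throws InterruptedException {
            final BlockingQueue<String> toWorker = new ArrayBlockingQueue<String>(16);
            final BlockingQueue<String> fromWorker = new ArrayBlockingQueue<String>(16);

            Thread worker = new Thread(new Runnable() {
                public void run() {
                    try {
                        String request = toWorker.take();   // receive a copy
                        fromWorker.put("done: " + request); // reply with a new message
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            worker.start();

            toWorker.put("compute");               // send, don't share
            System.out.println(fromWorker.take()); // prints "done: compute"
            worker.join();
        }
    }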

Challenges

It all seems fantastic, but there’s a long road ahead. First, message passing scales much better than shared memory, but nowadays most machines don’t have enough cores to make it worthwhile. Message passing also introduces other problems that are not easily solved: bus traffic and storage requirements increase considerably, and messages are not that generic in nature.

Some companies are famous for not adhering to standards (Apple comes to mind), so a standard hardware IPC framework would be quite hard to achieve. And even with pure software IPC APIs, different companies will still use slightly modified APIs to suit their specific needs, and complexity will rise exponentially.

Another problem is where the hypervisor will live. Having a distributed control centre is cool and scales amazingly well, but its complexity scales too. In a hybrid-core machine, you have to run different instruction sets, in different orders, with different optimizations and communication. Choosing one core to deal with the scheduling and administration of the system is much easier, but leaves a single point of failure.

Finally, going the multi-hybrid-independent route is way too complex, even for a multi-year project with a lot of funding (M$) and clever people working on it. After all, if the micro-kernel were really that useful, Tanenbaum would have won the argument with Linus. But the future holds what the future holds, and reality (as well as hardware and some selfish vendors) can change. A multi-kernel might yet be possible, and even easier to implement, in the future.

This seems to be what the Barrelfish team is betting on, and I’m with them on that bet. Even if it fails miserably (as Minix did), some of its concepts could still be used in real-world operating systems (as Minix’s were), whatever that will mean in 10 years. Being serious about parallelism is the only way forward; sticking with 40-year-old concepts is definitely not.

I’m still hoping for non-deterministic computing, though, but that’s an even longer shot…

Linux is whatever you want it to be

Normally Linux Magazine has great articles: impartial, informative and highly technical. Unfortunately, not always. In a recent article, some perfectionist zealot stated that Ubuntu makes Linux look bad. I couldn’t disagree more.

Ubuntu is a fast-paced, fast-adapting Linux. I was one of the early adopters, and I have to say that most of the problems I had with the previous release have been fixed. Some bugs slipped through, of course, but they were reported and quickly fixed. Moreover, Ubuntu has support from hardware manufacturers, such as Dell, and that makes a big difference.

Linux is everything

Linux is excellent for embedded systems, great for network appliances, wonderful for desktops, irreplaceable as a development platform, marvellous on servers and the only choice for real clusters. It also sucks when you have to hunt down configuration by hand, it’s horrible for newbies, it breaks whenever a new release is out, and it takes longer to get new software (such as Firefox), but it also handles package dependencies well, something that neither Mac nor Windows has managed to do properly over the past decades.

Linux is as great as any piece of software could be, and as horrible as every operating system released since the beginning of time. Some Linux distributions are stable, others less so. Debian takes 10 years to release, and when it does, the software it contains is already 10 years old. Ubuntu tries to be a bit faster, and that obviously breaks a few things. If you’re fast enough at fixing them, the early adopters will be pleased that they helped the community.

“Unfortunately what most often comes is a system full of bugs, pain, anguish, wailing and gnashing of teeth – as many “early” adopters of Karmic Koala have discovered.”

As with any piece of software, open or closed, free or paid, it takes time to mature. A real software engineer should know better: a system is only fully tested when it reaches the community, the user base. Google has used its own users (your granny too!) as beta testers for years, and everyone seems to understand it.

Debian zealots hate Red Hat zealots, and both hate Ubuntu zealots, who probably hate other zealots everywhere else. It’s funny how greatly opinions on what Linux really is vary from one zealot clan to another. All of them have great knowledge of what Linux is made of, but few seem to understand what Linux really is. Linux, or better, GNU/Linux, is a big bunch of software tied together by so many different points of view that it’s impossible to state in fewer than a thousand words what it really is.

“Linux is meant to be stable, secure, reliable.”

NO, IT’S NOT! Linux is meant to be whatever you make of it; that’s the real beauty. If Canonical thought it was ready to launch, it’s because they judged that whatever bugs passed the safety net were safe enough for users to catch and report, which we did! If you’re not an expert, wait for the system to cool down. A non-expert won’t be an “early adopter” anyway, that’s for sure.

Idiosyncrasies

Each Linux has its own idiosyncrasies; that’s what makes it powerful, and painful. The way Ubuntu updates/upgrades itself is particular to Ubuntu. Debian, Red Hat, Suse, all of them do it differently, and that’s life. Get over it.

“As usual, some things which were broken in the previous release are now fixed, but things which were working are now broken.”

One truism after another. There is no new software without new bugs. There is no software without bugs at all. What was broken was known; what is new is unknown. How can someone fix something they don’t know about? When users eventually tested it, found it broken and reported it, they fixed it! Isn’t that simple?

“There’s gotta be a better way to do this.”

No, there isn’t. Ubuntu is like any other Linux: Like it? Use it. Don’t like it? Get another one. If you don’t like the way Ubuntu works, get over it, use another Linux and stop ranting.

Red Hat charges money, Debian has uber-stable, decade-old releases, Gentoo is for those who have a lot of time on their hands, etc. Each has its own particularities, and each is good for a different set of people.

Why Ubuntu?

I use Ubuntu because it’s easy to install, use and update. The rate of bugs is lower than on most other distros I’ve used, and the rate of updates is much faster and more stable than on some others. It’s a good balance for me. Is it perfect? Of course not! There are lots of things I don’t like about Ubuntu, but they won’t make me use Windows 7, that’s for sure!

I have friends who use Suse, Debian, Fedora and Gentoo, and they’re all as happy as I am: not too much, but not too little. Each distro has its problems and its solutions; you just have to choose the ones that are best for you.

Gtk example

Gtk, the graphical toolkit behind Gnome, is very simple to use. It doesn’t have an all-in-one IDE such as KDevelop, which is very powerful and complete, but it features a simple and functional interface designer called Glade. Once you have the widgets and signals done, filling in the blanks is easy.

As an example, I wrote a simple dice-throwing application, which took me about an hour from installing Glade to publishing it on the website. Basically, my route was to apt-get install glade, open it, create a few widgets, assign some callbacks (signals) and generate the C source code.

After that, the file src/callbacks.c contains stubs for all the signal handlers you have to implement. Adding just a bit of code, and browsing this tutorial for the function names, was enough to get it running.

Glade generates all the autoconf/automake files, so it was extremely easy to compile and run the mock window right at the beginning. The code I added afterwards was less than I would have written for a console-based application doing the same thing. Also, because of the code generation, I was afraid Glade would replace my already-modified callbacks.c when I changed the layout; I was really pleased to see it was smart enough not to mess with my changes.

My example is not particularly good-looking (I’m terrible at design), but that wasn’t the intention anyway. It’s been 7 years since I last built a graphical interface myself, and I had never done anything with Gtk before, so it shows how easy the library is to use.

Just bear in mind a few concepts of GUI design and you’ll have very few problems:

  1. Widget arrangement is normally not fixed by default (to allow window resizing), so work out how tables, frames, boxes and panes work (which is a pain), or use fixed positions and disallow window resizing (as I did),
  2. Widgets don’t do anything by themselves; you need to assign them callbacks. Most signals have meaningful names (resize, toggle, set focus, etc.), so it’s not difficult to find them and create callbacks for them,
  3. Side effects (numbers appearing at the press of a button, for instance) are not easily done without global variables, so don’t be picky about that at the start. Work your way towards a global context later, once the interface is stable and working (I didn’t even bother)

If you’re looking for a much better dice rolling program for Linux, consider using rolldice, probably available via your package manager.

The LLVM compilation infrastructure

I’ve been playing with LLVM (Low-Level Virtual Machine) lately and have produced a simple compiler for a simple language.

The LLVM compilation infrastructure (much more than a simple compiler or virtual machine) is a collection of libraries, tools and programs that lets you create simple, robust and very powerful compilers, virtual machines and run-time optimizations.

Like GCC, it’s roughly separated into three layers: the front-end, which parses the files and produces an intermediate representation (IR); the optimization layer, which acts on the language-independent IR; and the back-end, which turns the IR into something executable.

The main difference is that, unlike GCC, LLVM is extremely generic. While GCC struggles to fit a broader set of languages into its strongly C-oriented IR, LLVM was created with a very extensible IR, designed to represent a plethora of languages (procedural, object-oriented, functional, etcetera). This IR also carries information about possible optimizations, like GCC’s, but at a deeper level.

Another very important difference is that, in the back-end, not only are code generators for many platforms available, but also Just-In-Time compilers (somewhat like JavaScript engines), so you can run, change, re-compile and run again, without even quitting your program.

The middle layer is where the generic optimizations are done on the IR, so it’s language-independent (as all languages are converted to the IR). But that doesn’t mean optimizations are done only at that step. All first-class compilers optimize heavily from the moment they open the file until they finish writing the binary.

Front-end optimizations normally include dead-code removal and constant-expression folding, among others, while the most important back-end optimizations involve instruction replacement, aggressive register allocation and exploitation of hardware features (such as special registers and caches).

But LLVM goes beyond that: it optimizes at run-time, even after the program is installed on the user’s machine. LLVM keeps its information (and the IR) together with the binary. When the program is executed, it is profiled automatically and, when the computer is idle, the code is re-optimized and re-compiled. This optimization is per-user, and it means that two copies of the same software can end up quite different from each other, depending on each user’s use of it. Chris Lattner‘s paper about it is very enlightening.

There are quite a few very important people and projects already using LLVM, and although there is still a lot of work to do, the project is mature enough to be used in production environments or even completely replace other solutions.

If you are interested in compilers, I suggest you take a look at their website… It’s mind-opening, at the very least.

40 years and full of steam

Unix is turning 40, and the BBC folks have written a short article about it. What a joy to remember starting with Unix (AIX on an IBM machine) around 1994 and being overwhelmed by it.

At that time, the only Unix that ran well on a PC was SCO, and it cost a fortune, but there were others, not as mature, built on the same concepts. FreeBSD and Linux were the two that came into my sight, and I chose Linux because it was a bit more popular (and therefore easier to get help with).

The first versions I installed didn’t even have an X server, and I have to say I was happier than when using Windows. Partially because of the whole open-source-free-software-good-for-mankind thing, but mostly because Unix has a power that is utterly ignored by other operating systems. So much so that Microsoft used good bits of FreeBSD (whose license allows it), and Apple rebuilt its graphical environment on top of FreeBSD to make OS X. The GNU folks certainly helped my mood, as I could find all the power tools I had on AIX on Linux, most of the time even more powerful ones.

The graphical interface was lame, I have to say. But in a way that was good: it reminded me of the interface I had used on Irix (SGI’s Unix), and that was OK. With time it got better and better, and by 1999 I was working with it and using it at home full time.

The funny thing is that now I can’t use other operating systems for too long, as I start missing certain functionality and eventually get stuck, or at least extremely limited. Mac OS is said to be nice and tidy, with a full FreeBSD inside, but I still lacked agility on it, mainly when searching for and installing packages and configuring the system.

I suppose each OS is for a different type of person… Unix is for those who like to fine-tune their machines or who need its power (on servers as well), and Mac OS is for those who need something simple, where the biggest change they make is the background colour. As for the rest, I fail to see the point, really.

What’s new on Windows 7?

After the buzz around Windows 7, I decided to take a look at a video posted (apparently by Microsoft itself) on YouTube.

I was expecting to hear about the new operating system, only to find out that all that matters to MS is the window manager. No memory or CPU consumption reports, no filesystem or network configuration structure, nothing.

Anyway, I’ll have to talk about the interface, then… Now, is it just me, or are they copying what Gnome/Compiz is doing? That they’re copying Apple is obvious; even Gnome/Compiz is, to some extent.

First, window transparency and ALT-TAB with window thumbnails you can select with the mouse. Done before. Second, dragging icons to the taskbar, what’s new about THAT?!

But now, “something really cool we’re putting in Windows 7 is called ‘snap-to’ ”?!?!? If I recall correctly, the graphical interface from the PARC team already had tiled/cascaded window arrangements, and Microsoft undoubtedly used them in Windows 3, so how is that cool in any way?

Well, they had better have a much more stable environment and a much lower footprint; otherwise they won’t have anything really serious to show.

Vista is no more

It still hasn’t gone to meet its maker, but it was also not as bad as it could have been.

After Windows Vista was launched with more PR and DRM than any version before it, Microsoft hoped to continue its domination of the market. Maybe afraid of the steep rise of Linux on desktops (Ubuntu playing a great role in that) and other market pressures, they rushed Vista out with so many bugs and security flaws, so slow and with such a big memory and CPU footprint, that not many companies really wanted to change their whole infrastructure only to watch it sink a little later.

The Chinese government ditched it for XP because it was not stable enough to run the Olympics, only to find out that the alternative didn’t help at all.

All that crap helped Linux (especially Ubuntu) a lot in jumping into the desktop world. Big companies are shipping Linux on lots of desktops and laptops, all netbooks offer Linux as a primary option, and lay people now use Linux as they would use any other desktop OS. So, is it just because Vista is so bad? No, not at all. Linux got really user-friendly over the last five to ten years, and it’s now as easy as any other.

Vista is so bad that Microsoft had to keep supporting Windows XP; they’re rushing again with Windows 7, and they’ll probably (hopefully) make the same mistakes again. It got so bad that the Free Software Foundation’s BadVista campaign is officially closing down for good. For good as in: victory!

Yes, victory, because in one year they could show the world how bad Vista really is and how good the alternatives are. Of course, they were talking about Linux and all the free software around it, including the new gNewSense platform they’re building, but the victory is greater than that. The biggest message is that Windows is not the only solution for the desktop and, most of the time, it’s the worst one.

In conjunction with the DefectiveByDesign guys, they also showed how Vista (together with Sony, Apple, Warner et al.) can completely destroy your freedom, privacy and entertainment. They were so successful in their quest that they’re closing doors to spend their time (and donors’ money) on more important (and pressing) issues.

Now they’re closing down, but that doesn’t mean the problem is over. The idea is to stabilise the market. Converting all Windows and Mac users to Linux wouldn’t be right; after all, each person is different. The big challenge is for users who need (or want) a Mac to use a Mac; for those who need Windows, and can afford all the extra software to protect their computer (but not their privacy), to use Windows; and for developers, whose real environment is Unix, to get a good desktop and good development tools as well. It’s, at least, fair.

But the majority of users really just want a computer to browse the web, print some documents and send emails, and for that any of the three is good enough. All three are easy to install (or come pre-installed), all three have all the software you need, and most operations and configurations are easy or automatic. It’s becoming more a choice of style and design than anything else.

Now that Apple has got rid of all the DRM crap, and Spore was such a fiasco that EA is selling games without DRM, the word is getting out. It’s a matter of time before DRM becomes a minor problem, too. Would DefectiveByDesign retire then? I truly hope so.

As an exercise for the reader, go to the Google home page and search for the terms “windows vista”. You’ll see the BadVista website on the first page. If you search for “DRM”, you’ll see the DefectiveByDesign page as well. This is big: it means that lots and lots of websites point to those sites when talking about those subjects!

If you care enough, have a Google account and use the personalised Google search, you can search for those terms and press the up-arrow symbol on those sites to push them even higher in the rankings. Can we make both number one? I’ve done my part already.