LLVM Vectorizer

Now that I’m back working full-time with LLVM, it’s time to get some numbers about performance on ARM.

I’ve been digging into the new LLVM loop vectorizer and I have to say, I’m impressed. The code is well structured, extensible and, above all, sensible. There is lots of room for improvement, and the code is simple enough that you can do it without destroying the rest or having to re-design everything.

The main idea is that the loop vectorizer is a Loop Pass, which means that if you register this pass (automatically at -O3, or with the -loop-vectorize option), the Pass Manager will run its runOnLoop(Loop*) function on every loop it finds.

The three main components are:

  1. The Loop Vectorization Legality: Basically identifies whether it’s legal (not just possible) to vectorize. This includes checking that we’re dealing with an inner loop, that it’s big enough to be worth it, and making sure there aren’t any conditions that forbid vectorization, such as overlaps between reads and writes or instructions that don’t have a vector counterpart on a specific architecture. If nothing is found to be wrong, we proceed to the second phase:
  2. The Loop Vectorization Cost Model: This step evaluates both versions of the code: scalar and vector. Since each architecture has its own vector model, it’s not possible to create a common model for all platforms, and in most cases it’s the special behaviour that makes vectorization profitable (like 256-bit operations in AVX), so we need a bunch of cost model tables that we consult given an instruction and the types involved. Also, this model doesn’t know how the compiler will lower the scalar or vectorized instructions, so it’s mostly guesswork. If the vector cost (normalized to the vector size) is less than the scalar cost, we do:
  3. The Loop Vectorization: The vectorization proper, i.e. walking through the scalar basic blocks, changing the induction range and increment, creating the prologue and epilogue, promoting all types to vector types and changing all instructions to vector instructions, taking care to leave the interaction with the scalar registers intact. This last part is a dangerous one, since we can end up creating a lot of copies from scalar to vector registers, which is quite expensive and was not accounted for in the cost model (remember, the cost model is guesswork).

All that happens on a new loop place-holder, and if all is well at the end, we replace the original basic blocks with the new vectorized ones.

So, the question is, how good is this? Well, depending on the problems we’re dealing with, vectorizers can considerably speed up execution. Especially iterative algorithms with lots of loops, like matrix manipulation, linear algebra, cryptography, compression, etc. In more practical terms, anything to do with encoding and decoding media (watching or recording videos, pictures, audio), Internet telephony (compression and encryption of audio and video), and all kinds of scientific computing.

One important benchmark for that kind of workload is Linpack. Not only does Linpack have many examples of loops waiting to be vectorized, it’s also the benchmark that defines the Top500 list, which classifies the fastest computers in the world.


So, both GCC and Clang now have their vectorizers turned on by default at -O3, so comparing them is as simple as compiling the programs and watching them fly. But, since I’m also interested in the performance gain from the LLVM vectorizer alone, I also ran Clang with just -O3 and the vectorizer disabled.

On x86_64 Intel (Core i7-3632QM), I got these results:

Compiler    Opt    MFLOPS  Diff
   Clang    -O3      2413  0.0%
     GCC  -O3 vec    2421  0.3%
   Clang  -O3 vec    3346 38.6%

This is quite a statement! The GCC vectorizer has existed for a lot longer than LLVM’s and has been developed by many vectorization gurus, yet LLVM seems to easily beat GCC in this field. But, a word of warning: Linpack is by no means representative of all use cases and user-visible behaviour, and it’s very likely that GCC will beat LLVM in most other cases. Still, a reason to celebrate, I think.

This boost means that, in many cases, not only are the transformations legal and correct (or Linpack would have produced wrong results), but they also manage to generate faster code at no discernible cost. Of course, the theoretical limit is around a 4x boost (if you manage to replace every single scalar instruction with a vector one and the CPU behaves the same with regard to branch prediction, caches, etc.), so one could expect a somewhat higher number, something on the order of 2x.

It depends on the computational density we’re talking about. Linpack specifically tests the inner loops of matrix manipulation, so I’d expect a much higher ratio of improvement, something around 3x or even closer to 4x. VoIP calls, watching films and listening to MP3s are also good examples of densely packed computation, but since we usually run those applications on a multi-tasking operating system, you’ll rarely see improvements higher than 2x. General applications rarely spend that much time in inner loops (they’re mostly waiting for user input and then doing a bunch of unrelated operations, hardly vectorizable).

Another important aspect of vectorization is that it saves a lot of battery juice. For MP3 decoding it doesn’t really matter whether you finish in 10 seconds or 5, as long as the music doesn’t stop to buffer. But taking 5 seconds instead of 10 means that for the other 5 seconds the CPU can reduce its voltage and save battery. This is especially important on mobile devices.

What about ARM code?

Now that we know the vectorizer works well, and the cost model is reasonably accurate, how does it compare on ARM CPUs?

It seems the grass is not so green on this side, at least not at the moment. I have reports that ARM also reached a 40% boost similar to Intel’s, but what I saw was a different picture altogether.

On a Samsung Chromebook (Cortex-A15) I got:

Compiler   Opt    MFLOPS   Diff
  Clang    -O3       796   0.0%
    GCC  -O3 vec     736  -8.5%
  Clang  -O3 vec     773  -2.9%

The performance regression can be explained by the amount of scalar code intermixed with vector code inside the inner loops as a result of shuffles (movement of data within the vector registers and between scalar and vector registers) not being lowered correctly. This most likely happens because the LLVM back-end relies a lot on pattern-matching for instruction selection (a good thing), but the vectorizers might not be producing the shuffles in the right pattern, as expected by each back-end.

This can be fixed by tweaking the cost model to penalize shuffles, but it’d be good to see if those shuffles aren’t just mismatched against the patterns that the back-end is expecting. We will investigate and report back.


I also got results for single precision floating point, which show a greater improvement on both Intel and ARM.

On x86_64 Intel (Core i7-3632QM), I got these results:

Compiler   Opt   MFLOPS  Diff
   Clang   -O3     2530   0.0%
     GCC -O3 vec   3484  37.7%
   Clang -O3 vec   3996  57.9%

On a Samsung Chromebook (Cortex-A15) I got:

Compiler   Opt   MFLOPS  Diff
   Clang   -O3      867   0.0%
     GCC -O3 vec    788  -9.1%
   Clang -O3 vec   1324  52.7%

Which goes to show that the vectorizer is, indeed, working well for ARM, but the costs of using the VFP/NEON pipeline can outweigh the benefits. Remember that NEON vectors are only 128 bits wide and VFP registers only 64 bits, and NEON has no double precision floating point operations, so the CPU can only do one double precision floating point operation at a time, and the theoretical maximum depends on the speed of the soft-fp libraries.

So, in the future, what we need to work on is the cost model, to make sure we don’t regress in performance, and to find better algorithms for lowering vector code (both by making sure we match the patterns the back-end expects, and by simply finding better ways of vectorizing the same loops).


Without further benchmarks it’s hard to come to a final conclusion, but it’s looking good, that’s for sure. Since Linpack is part of the standard LLVM test-suite benchmarks, fixing this and running it regularly on ARM will at least avoid any further regressions… Now it’s time to get our hands dirty!


K-means clustering

Clustering algorithms can be used with many types of data, as long as you have a means to distribute them in a space where there is a concept of distance. Vectors are obvious choices, but not everything can be represented as N-dimensional points. Another way to plot data, one much closer to real data, is to allow a large number of binary axes, like tags. You can then cluster by the number of tags the entries share, with the distance (only meaningful relative to others) being the proportion of shared tags against non-shared ones.

An example of tag clustering can be seen on Google News; an example of clustering in Euclidean spaces can be seen in the image above (full code here). The clustering code is very small, and the result is very impressive for such simple code. But the devil is in the details…

Each group of red dots is generated randomly from a given central point (draw N randomly distributed points inside a circle of radius R centred at C). Each centre is randomly placed, and sometimes the groups collide (as you can see in the image), but that’s part of the challenge. To find the groups, and their centres, I throw random points (with no knowledge of the groups’ centres) and iterate until I find all groups.

The iteration is very simple, and consists of two steps:

  1. Assignment Step: For each point, assign it to the nearest mean. This is why you need the concept of distance, and that’s the tricky part. With Cartesian coordinates, it’s simple.
  2. Update Step: Calculate the real mean of all points assigned to each mean point, and move the mean there. This is basically moving the supposed (randomly guessed) mean to its rightful place.

On the second iteration, the means that were randomly selected at first are now closer to a set of points. Not necessarily points in the same cluster, but the cluster that has more points assigned to a given mean will slowly steal it from the others, since it’ll have more weight when updating it in step 2.

If all goes right, the means will slowly move towards the centre of each group and you can stop when the means don’t move too much after the update step.

Many problems arise in this simplified version, for sure. For instance, if a mean sits exactly between two groups, both pull it towards their centres with equally strong forces, the mean never moves, and the algorithm thinks it has found its group when in fact it has found two. Or a group may be so large that it ends up with two or more means belonging to it, splitting it into many groups.

To overcome these deficiencies, some advanced forms of k-means take the shape of the group into account during the update step, sometimes called soft k-means. Other heuristics can be added as further steps to make sure there aren’t two means too close to each other (relative to their groups’ sizes), or big gaps between points of the same group, but that kind of heuristic tends to be quadratic or worse in execution time, since it examines every point of a group in relation to the other points of the same group.

All in all, still impressive performance for such a simple algorithm. Next in line, I’ll try clustering data distributed among many binary axes and see how k-means behaves.

FreeCell puzzles solver API

This is a little pet project I did a while ago: a FreeCell puzzle solver API.

The idea is to provide a basic validation engine and board management (pretty much like my old chess validation), so people can write FreeCell solvers on top of it. It has basic board setup (of multiple sizes), move legality checks, and a basic Solver class, which you must derive from to create your own solvers.

There’s even a BruteForceSolver that can solve a few small boards and gives you an idea of how to create your own solvers. However, the API is not yet simple enough for children to start playing with it, and that’s the real goal of this project: to get kids interested in solving complex optimisation problems in an easy way.

FreeCell is a perfect game for it. Most boards can be solved (only a handful have been proven, by exhaustion, to be unsolvable), some moves can be rolled back, and you can freely bring cards that have already been placed on the foundations (their final destination) back into the game.

It’s out of the scope of this project to produce a full-featured graphic interface for kids, but making the API easy enough that they understand the concepts without dragging themselves into the idiosyncrasies of C++ is important.

Compiler optimisations

The reason I did this was to make some of the optimisations compiler engineers have to do more appealing to non-compiler engineers, or to children with a taste for complex problems. But what does this have to do with compilers? The analogy is a bit far-fetched and somewhat reversed, but it’s interesting nevertheless and it was worth a shot.

Programming languages are transformed into graphs inside the compiler, which should represent the intentions of the original programmer. These graphs are often optimised multiple times until you end up with a stream of instructions representing the source code in the machine language.

Ignore for now the optimisations on those graphs, and focus on the final part: selecting machine instructions that can represent that final graph in the machine language (assembly). This selection can pick any assembly instruction at will, but it has to put them in a very specific order to preserve the semantics (not just the syntax) of the original program. Since many instructions have side effects, pipeline interactions, and special flags set or cleared, it’s not trivial to produce correct code without checking and re-checking all conditions every time. This is a known complex optimisation problem and can be responsible for changes in speed or code size of orders of magnitude.

What does it have to do with the FreeCell puzzle? Well, in FreeCell, you have a number of cards and you have to put them in a specific order, just like assembly instructions. But in this case, the analogy is reversed: the “sequence” is trivial, but the “instructions” are hard to get.

There are other similarities. For example, you have four free cells, and they can only hold one value at a time. They are similar to registers, and manipulating them gives you a good taste of how hard register scheduling is when building the assembly result. But in this case, it’s much harder to spill (move data back to memory, or in this case, cards back to the cascades), since there are strict rules on how to move cards around.

Reusing cards from the foundations is similar to expanding single instructions into a sequence of them in order to circumvent pipeline stalls. In real compilers you could expand a multiply+add (very useful for digital signal processing) into two instructions: multiply and add, if that gives you some advantage on special cases on special chips. In Freecell, you can use a 9 on top of a 10, to move an 8 from another cascade and free up a card that you need to clean up your freecells (registers).

I’m sure you can find many more similarities, even if you have to skew the rules a bit (or completely reverse them), but that’s not the point. The point is to interest people into complex optimisation techniques without the hassle of learning a whole new section of computer science, especially if that section puts fear in most people in the first place.

C++ class sizes

In a recent struggle to pin down class sizes in C++, I thought it would be good to share some of my findings, since there isn’t much about it on the net. The empty class size is all over the place, neatly put in Stroustrup’s FAQ, but the other weird cases were less common to find.

For obvious reasons, the C++ standard is pretty vague about class sizes and memory layout. This is more of an ABI issue than a language standard, but the problem is that even the Itanium C++ ABI is a bit vague on that point.

The C++ standard allows the memory layout to be defined by the ABI, positioning class members where they fit most naturally on the target, but there is one specific clause: an object cannot have zero size. While most layout decisions are target-specific, this is strictly a language decision. If objects were allowed to be zero bytes in size, an array of 1000 zero-sized objects would occupy no space at all and it would be impossible to tell one from the other.

Empty classes

First, a background on empty classes. An empty class is somewhat useless in most cases, but there is one case in which it plays an important role: type safety.

If you want to group objects that have nothing in common (to use in an algorithm), or to force a specific class of template parameters, or even to differentiate between exceptions, even if they carry no data (just to catch them differently), you can use an empty class and it does the job pretty well. Thus, using and deriving from empty classes is a somewhat common enterprise in C++.

Empty class size

So, what happens when you declare an empty type?

class Empty {};

What size will objects of this type have? It cannot be zero, as the standard forbids it, so it has to be something. The best choice is one byte, since no architecture will benefit from having less (please, don’t think of bit-fields!) and if alignment is a problem, most (all?) compilers will adjust the byte to a word-aligned block in memory.

Obviously, an empty class that derives from another empty class also has to follow the same rule:

class Empty {};

class NowWhat : public Empty {};

In this case, both Empty and NowWhat have size 1. But if NowWhat has data in it, will the extra byte still be there? Neither the standard nor the ABI says it shouldn’t, but nothing says it should either. What’s important here is the reason why you need the extra byte: the addresses of different objects must be different.

When the derived class already has data, its objects will already be at different locations when laid out in memory, so there is no need for the extra byte.

class Empty {};

class NowWhat : public Empty {
	int a;
};
Now, Empty still has 1 byte, but NowWhat has 4 (the size of an int here), not 5. Some people consider this an optimization; I just think it’s following what the standard requires.

The reverse case is much simpler. If the derived class is the empty one (for whatever reason), the size is already non-zero, so the classes below both have the same size (4):

class Full {
	int a;
};

class Empty : public Full {};

Multiple Inheritance

Multiple inheritance brings a bit of colour to this problem. The C++ standard requires that two distinct objects always occupy different memory locations (so they compare different by their pointers, and non-empty zero-sized arrays are not allowed). So what is the size of the Derived class below?

class Empty {};

class M1 : public Empty {};

class M2 : public Empty {};

class Derived : public M1, M2 {};

Before, we could make the NowWhat class with only one byte because that was sufficient to tell objects apart in an array, but what about comparisons? If you cast a Derived object to Empty via M1 you get a different (logical) object than if you do it via M2, so they must compare different. For that, you need to add another byte, and return the first address when converting via one class and the second when via the other.

Empty members

With single and multiple inheritance nailed down, let’s think of a more complicated case.

class Empty {};

class Another : public Empty {};

class NotSoMuch : public Empty {
	Another a;
};

What’s happening here is that the first member of the derived class is itself of an empty class type, one that also derives from the same base class as the enclosing type.

See, this last bit is important, because if you replace Another a; with a member of another empty class type (say Empty2), the size of the derived class will still be 1. But in the case above, it’s not. The size of NotSoMuch is actually 2.

The field Another a; has size one, like any empty class object, so what’s the second byte for? The second byte is padding, at the beginning of the class, to avoid type conflicts.

Type conflicts

When deriving from empty classes, the ABI (2.4-II-3) states that you should always try to put the derived members at offset zero and, in case of a type conflict, move down (alignment-wise) until there are no more conflicts. The question is: what is a type conflict in this case? In the multiple inheritance example the type conflict was clear (the diamond inheritance), but here, not so much.

Since both the class and its first member can be converted to Empty (as pointers, for instance), you can have two Empty pointers that, when compared, must return different addresses. In the code below, n is a pointer to the derived object, which is then converted to an Empty pointer and stored in en.

	NotSoMuch *n = new NotSoMuch();
	Empty *en = (Empty*)n;
	Empty *ea = (Empty*)&n->a;

If the member Another a; were at offset zero of the class, the pointer ea would hold the same address as en, and a comparison for equality would return true.

The type conflict in this case is that, while both en and ea are pointers to an Empty type, the former gets there via NotSoMuch and the latter via Another. According to the C++ standard, they must be different, thus, return different addresses.

Again, if the empty class member is not the first element, none of that happens. In the example below, no extra anti-conflict byte is needed, because the two similar (but different) Empty pointers are now separated by the 4 bytes of the int, thus not violating the standard.

class Empty {};

class Another : public Empty {};

class NotSoMuch : public Empty {
	int i;
	Another a;
};


Template code is also susceptible to this problem, of course, and the iostream library is full of such examples. The problem is not so much the abuse of empty classes (that doesn’t happen much in the STL); it’s that non-virtual classes that have only member functions and no data fields count as empty classes. And since templates are a good replacement for most virtual inheritance (and its inherent run-time complexity and slowness), basic libraries have to make extensive use of templates to avoid slowing down every single C++ program.

It’s true that the iostream library is particularly heavy and cumbersome, but it could be much worse.

The additional problem with templates (for the compiler, at least) is figuring out whether there is a type conflict. While templates are fully expanded by the time the core C++ compiler kicks in, it’s complicated to carry the source-level information about the inheritance graph to every single template variation and specialization. Not to mention the new standard, where C++ is getting a variadic templates mechanism.

I hope this is one more reason for you to avoid C-style pointer casts (especially via void*) and to use only C++ static_cast and dynamic_cast. Even reinterpret_cast is dangerous, but at least you can grep for them in your source files and identify them one by one.

Zero-cost exception handling in C++

C++ exception handling is a controversial subject. The intricate concepts are difficult to grasp, even for seasoned programmers, but in complexity, nothing compares to its implementation. I can’t remember any real-world (commercial or otherwise) product that makes heavy use of exception handling. Even the STL itself throws only a couple of them, and only upon terminal failures.

Java exceptions, on the other hand, are much more common. Containers throw them at will (e.g. OutOfBounds), creating your own exceptions is very easy, and developers are encouraged to use them. But still, some C++ projects’ coding standards discourage (like LLVM’s) or specifically disallow (like Google’s) the use of exceptions. The Android NDK, for example, doesn’t even support exceptions (nor plans to).

Some of the cons commonly listed are the encouragement of bad style (throwing at will, like in Java), compilation problems with old code and all the gotchas that exception handling can bring to your code, making it a nightmare to understand the real flow of your program and to figure out the critical DON’Ts of exception handling (like throwing from inside a destructor). For a deeper discussion on that subject, read Sutter’s book, for the implementation details of exception handling from a compiler point of view, keep reading.

Three distant worlds

Exception handling is not just a language construct, like if; it’s an intricate interaction between three distant worlds. All the complications that exception handling brings have to be implemented by a three-layer infrastructure: the user code, the run-time library and the compiler. The communication between them has to be absolutely perfect; one bit different and the control flow goes south, you start executing random code and blow up your program for good.

Not only are these three very different places, but the code is written by three different kinds of people, at different times. The only formal document that binds them together is the ABI (e.g. the Itanium C++ ABI), which is, to say the least, vague. Luckily, the people who write the run-time libraries are close enough to the people writing the compiler (and hopefully, but not necessarily, the same ones writing the ABI).

In the end, if the compiler implementation is correct, the user only has to care about the rules imposed by the C++ standard. But that’s not too comforting: the C++ standard is complex and frequently driven by previous implementations rather than a fixed and clear rule of thumb. Most users don’t precisely understand why you can’t throw from inside a destructor or why throwing a newly created object calls terminate in their code; they just avoid it at all costs.

Dirty and expensive

The easy exception handling implementation is called SetJump/LongJump. It literally sets jumps between your code and the exception handling code (created by the compiler). If an exception is thrown, the flow jumps directly to the exception handling code. This is very easy, but extremely expensive. Usually, if you follow good practices, you’d expect your program to spend most of its time doing real computation and only occasionally handling exceptions. But if for every action you create a bunch of exception handling data, you’re wasting a lot of resources checking for something that might not even happen during the whole run of your program.

When an exception happens, you need to walk back through the call graph and check whether any of those functions (including the one that just threw) catches that exception. This is called unwinding the stack, and in a SetJump/LongJump implementation it is done by checking, via a series of SetJump checks, whether any LongJump was called. The problem is that those checks are done even if an exception is never thrown at all.


So, how is it possible to only execute exception handling code when an exception really happen? Simple, let the compiler/library do it for you. 😉

The whole idea of zero-cost exception handling is to create a set of functions (like __cxa_throw, __cxa_begin_catch) that deal with the exception handling flow by sending it down a different path, the EH path. Under normal circumstances, your code flow will be identical with or without exceptions. The flow can never reach the EH blocks, and it’s only because the compiler knows those blocks are on the EH path that they don’t get discarded as unreachable code, since there is no explicit flow between user code and EH code.

Only the run-time library knows how to get there, and that’s helped by a table, normally in a read-only data section, specific to exception handling (ex. the .eh_frame ELF section), created by the compiler when reading the user’s exception flow. This is the core interaction between user, compiler and library codes.

When an exception is thrown via __cxa_allocate_exception + __cxa_throw, the first function allocates space and prepares the object to be thrown, and the second starts reading the exception handling table. This table contains a list of the exceptions that a particular piece of code is catching (often sorted by address, so it’s easy to do a binary search) and the code that will catch the exception.

The code that catches exceptions is called the personality routine. Making the personality routine a library call enables you to use multiple zero-cost EH mechanisms in the same program. Normally, the table entry contains a pointer to the routine and some data it’ll use while unwinding, but some ABIs (like ARM’s EHABI) can add specially crafted tables for their purposes (like having the unwind code as 16-bit instructions embedded in the data section).

All in all, the general flow is the same: the personality routine will find the EH block (also called landing pads, unreachable from normal code) and jump there, where the catching happens. That code will get the exception allocated before the throw and make it available to the user code (inside the catch block) to do what the user requested. If there is no entry in the EH table that catches that exception, the stack will be unwound again and the library will read the caller’s EH table, until it reaches a table that does catch it or terminate if none does.


So far so good, but what happens to the local variables and temporaries (stack-based data)? If they have non-trivial destructors, those must be called, which can chain a lot of cleaning up. Clean-up code lives partly in the unreachable blocks (for cleaning up during EH) and partly in normal code, for automatic destruction at the end of a lexical block. Although they do similar things, their purposes are completely different.

Normal-flow clean-up blocks are always called and always clean up all objects. But EH clean-up code has to clean up only the objects that have been constructed so far, in the current lexical block and in every parent up to the root block of the function. If you can imagine deeply nested blocks, creating objects at will, with a throw inside the increment part of a for loop, where the object being incremented has a non-trivial destructor, you get the picture.

What happens if you throw an exception while another is being caught? Here is where complexity met reality and the boundaries were settled. If you are unwinding one exception, with its flow being handled by the run-time library, and another exception is thrown, you can’t distinguish which exception you are handling. Since there is space for only one exception being unwound at a time, whenever a second exception is thrown, you have to terminate.

Of course, you could argue for creating multiple lanes for exception handling, where every exception is handled separately, so you know that one exception was caused while throwing another. That may be easy to implement in Java (or other interpreted/bytecode languages), because the VM/interpreter has total control of the language, but as stated earlier, exception handling in C++ is just a contract between three parties: the compiler, the run-time library and the user, and often these three parties are completely different groups in completely different space-time realities.


This is just a very brief, shallow demonstration of how complex exception handling can be. Many other pitfalls appear, especially when dealing with specific user code running on a specific hardware platform with a specific binary configuration for the object file. It’s impossible to test every single combination (including room temperature, or annual rainfall forecast) to make an implementation final, and weird bugs creep up every once in a while.

Enhancing the ABI to allow that is possible, but the complexity of doing so, together with the very scarce use of exception handling in C++, makes it a very unlikely candidate to be changed in the near future. It'd have to be done on so many levels (C++ standard, ABIs, compilers, run-time libraries) that it's best left alone.

Probabilistic processor and the end of locks

Every now and then, when I’m bored with normal work, I tend to my personal projects. For some reason, that tends to be mid-year, every year. So, here we go…

Old ideas, new style

This time it was an old project: to break the parallelization barrier without the use of locks. I knew simple boolean logic on regular Turing machines wouldn't do, as it relies on the fact that the tape is written by only one head at a time. If you have more than one head writing to the same place, the behaviour is undefined. I've looked into several other technologies…

Some were just variations of Turing machines, like molecular, ternary and atomtronic computers. Others were simply interesting toys, like the programmable water, but there were two promising classes: quantum computers (taking a bit too long to impress) and non-deterministic Turing machines (NTM), which could get around some concurrency problems quite well.

Quantum computers are nice, but they're more like vector computing on steroids. You can operate on several qubits at once, maybe even use non-locality to speed up communications, but until all that stops requiring the size of a small building, we need something different from yet another n-core frying-pan Intel design. That may be non-deterministic machines…

Non-deterministic Turing machines are exactly like their deterministic counterparts, but the decision process can take different paths given the same input. So, instead of

if (a == 10) do_something;

you have

if (probability(a) > threshold) do_something;

The problem with that is that you still can't ignore different processes writing to the same data: if a process happens to read it (even by chance), the data it gets is still being concurrently changed and is therefore still undefined. In other words, even if the state change is probabilistic, the data is still deterministic.

So, how can you make sure that, no matter how many processes are writing to a piece of data at random times, you still have meaningful data?

Well, if the reading part is not interested in one particular value of that data, but, say, only in the probability of that data being a certain value, then it doesn't matter in which order the writes are performed, as long as they're all writing about the same thing. That can be viewed as a non-deterministic register, rather than a non-deterministic Turing machine. You can still have a deterministic machine reading and writing non-deterministic registers, and you still won't need locks for those values.

Probabilistic registers

Suppose you want to calculate an integral. You can use several very good deterministic algorithms (like Romberg's method), but if you create producers to get the values and consumers to reduce them to a sum, you'll need a synchronized queue (ie. locks) to make sure the consumer gets ALL the values. If you lose some values, you might get a completely wrong result.

If instead you create random generators, getting the value of some (multi-dimensional) function from random inputs, and accumulate them into a non-deterministic register (say, by updating the average of the last values, like recharging a capacitor that keeps discharging), then when the consumer process reads the value from it, it'll get an average of the previously written values. Because the writes are all random, if you keep only the average of the last N writes, you know how much data is being filled in and what the average value for that block is.

In essence, you're not storing any particular value, but enough information to re-create the original data. Since specific values are not important in this case, the order in which they appear (or whether they appear at all) is completely irrelevant.

In a way, it's similar to quantum mechanics, where you have the distribution of probabilities but not the values themselves, and with a few tricks you can extract all the information from those distributions. It's also similar to what Monte Carlo methods do: taking the probabilistic side of computations to get a rough image of the problem more quickly than by brute force, when the complexity of the problem is non-polynomial and the problem size is big enough to need a few billion years to calculate.

Here are some examples of a non-det register.
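To make this concrete, here is a minimal single-threaded C sketch of such a probabilistic register, assuming an exponential moving average as the "discharging capacitor"; the names, the alpha constant and the f(x) = x*x integrand are my own illustrations, not part of any real design:

```c
#include <stdlib.h>

/* A probabilistic register: it stores no particular value, only a
 * running average of recent writes, like a capacitor that keeps
 * discharging. The update is order-insensitive, so concurrent
 * writers would not need a lock for the value to stay meaningful. */
typedef struct {
    double average; /* exponential moving average of recent writes */
    double alpha;   /* weight of each new write (the "discharge" rate) */
} ProbRegister;

void preg_write(ProbRegister *r, double value) {
    r->average += r->alpha * (value - r->average);
}

double preg_read(const ProbRegister *r) {
    return r->average;
}

/* Monte Carlo integration of f(x) = x*x over [0,1] (exact value: 1/3).
 * Producers write random samples; the consumer just reads the average. */
double integrate(long samples) {
    ProbRegister reg = { 0.0, 0.001 };
    srand(42); /* fixed seed so the run is repeatable */
    for (long i = 0; i < samples; i++) {
        double x = (double)rand() / RAND_MAX; /* random producer */
        preg_write(&reg, x * x);              /* accumulate into register */
    }
    return preg_read(&reg);
}
```

The point is in preg_write: because an update only nudges an average, lost or reordered writes degrade precision, not correctness, which is exactly why the consumer needs no lock.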

A new logic

Obviously, current deterministic algorithms cannot run on such a machine, as they're all based on the fact that specific data is stored in a specific place. If you compress a file and decompress it again, you just can't work with probable values: your original file would come out completely undecipherable. Boolean logic can't operate on such registers either; an AND between probabilities or a XOR of two averages doesn't make any sense. You need a different logic, something that can make sense of probable inputs and analog streams. Fortunately, Bayesian logic is just the thing for such machines.

But the success of Bayesian logic in computing has not been enormous. It's not simple to create algorithms using a network of probabilities, and very few mundane algorithms can benefit from it enough to be worth the trouble (Optimor Labs is a good example). But with the advent of this new machine, a different logic has to be applied and new algorithms must be developed to make use of it.

Probably by mixing deterministic with non-deterministic hardware and creating a few instructions that use the few non-deterministic registers (or accumulators) available. Most likely they would be used for inter-process communication in the beginning, but as you increase the number of those registers you could start building complex Bayesian networks, or even neural networks, in the registers themselves. That would open the door to much more complex algorithms, probably most of them unpredictable or found by trial and error (a genetic approach?), but with time people would begin to think that way, and that would be the dawn of non-deterministic computing.

Eureka moment

The probabilistic register was a Eureka moment: when I finally implemented it, I saw that my simple Monte Carlo and my neuron model really were three lines of code each. But that didn't last long…

A few days ago I read the news that DARPA has developed a fully featured probabilistic processor, taking random computation to the masses, even using Bayesian logic in place of boolean logic! The obvious place for a "public trial" was spam filtering (though I think the US military is not that interested in filtering spam…), but according to them, the sky is the limit. I have to agree…

Anyway, I don't have 5 spare years nor $18mi in the bank to implement that project like they did, so in a way I'm glad it turned out to be a good idea after all. I just hope they make good use of it (ie. not for Carnivore 2.0), and that they realize that not only the military has uses for it, or anti-spam filters, for that matter. We (they?) should develop a whole new set of algorithms to work with that logic and make good use of hardware that is not a simple progression of current technology (to keep Moore's law going), but could be a game changer in the next decades.

I could re-focus on defining the basic blocks for that processor’s new logic, but I’m sure DARPA has much better personnel to do that. So, what’s next, then? I’ll tell you next year…


Minix seems to be inspiring more operating systems nowadays. Microsoft Research is investing in a micro-kernel (they call it a multi-kernel, as there are slight differences) called Barrelfish.

Despite being from Microsoft, it's BSD licensed. The mailing list looks pretty empty, the last snapshot is half a year old and I couldn't find an svn repository, but that's still more than I would expect from Microsoft anyway.


The basic concept is actually very interesting. The idea is to take multi-core hybrid machines to the extreme and still be able to run a single OS on them, pretty much the same way some cluster solutions do (OpenMPI, for instance), but on a single machine. The idea is far from revolutionary: it's a natural evolution of the multi-core trend combined with current cluster solutions (available for years) and a fancy OS design (micro-kernel) that everyone learns in CS degrees.

What's the difference, then? For one thing, the idea is to abstract everything away. CPUs will be just another piece of hardware, like network or graphics cards. The OS will have the freedom to ask the GPU to do MP floating-point calculations, for instance, if it feels that will benefit the total execution time. It'll also be able to accept different CPUs in the same machine, Intel and ARM for instance (like the Dell Latitude z600), or have different GPUs, nVidia and ATI, and still use all the hardware.

With Windows, Linux and Mac today, you either use the nVidia driver or the ATI one. You also normally don't have hybrid-core machines, and you absolutely can't recover if one of the cores fails. This is not the case with cluster solutions, and Barrelfish's idea is to simulate precisely that. In theory, you could do energy control (enabling and disabling cores), crash-recovery when one core fails but not the others, or plug and play graphics or network cards, and even different CPUs.

Imagine you have an ARM netbook that is great for browsing, but you want to play a game on it. You get your nVidia card and a coreOcta 10Ghz USB4 and plug them in. The OS recognizes the new hardware, loads the drivers and lets you play your game. Battery life goes down, so once you're done with the game, you just unplug the cards and continue browsing.


So, how is it possible for Barrelfish to be that malleable? The key is communication. Shared memory is great for single-process threaded code and acceptable for multi-process OSs with a small number of concurrent processes accessing the same region of memory. Most modern OSs can handle many concurrent processes, but they rarely access the same data at the same time.

Normally, processes are single threaded or have a very small number of threads (dozens) running. More than that is so difficult to control that people usually fall back to other means, such as client/server, or just go out and buy more hardware. In clusters, there is no way to use shared memory. For one, accessing memory on another computer via the network is just plain stupid, but even if you use shared memory within each node and client/server among different nodes, you're bound to have trouble. This is why MPI solutions are so popular.

In Barrelfish there's no shared memory at all. Every process communicates with the others via messages and duplicates content (rather than sharing it). There is an obvious associated cost (memory and bus traffic), but the lock-free semantics are worth it. It also gives Barrelfish another freedom: to choose a communication protocol generic enough that each piece of hardware is completely independent of all the others, and plug'n'play becomes seamless.
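A minimal sketch of what copy-based (share-nothing) message passing looks like, assuming a simple single-producer ring buffer; the names here are illustrative, not Barrelfish's actual API:

```c
#include <string.h>

#define QUEUE_CAP 8   /* slots in the channel */
#define MSG_SIZE  64  /* maximum payload per message */

/* The channel owns its own storage: sending copies the payload in,
 * receiving copies it out, so sender and receiver never share memory. */
typedef struct {
    unsigned char slots[QUEUE_CAP][MSG_SIZE];
    size_t len[QUEUE_CAP];
    int head, tail, count;
} Channel;

int channel_send(Channel *ch, const void *msg, size_t n) {
    if (ch->count == QUEUE_CAP || n > MSG_SIZE)
        return -1;                        /* full, or message too big */
    memcpy(ch->slots[ch->tail], msg, n);  /* duplicate, don't share */
    ch->len[ch->tail] = n;
    ch->tail = (ch->tail + 1) % QUEUE_CAP;
    ch->count++;
    return 0;
}

int channel_recv(Channel *ch, void *out, size_t max) {
    if (ch->count == 0 || ch->len[ch->head] > max)
        return -1;                        /* empty, or buffer too small */
    memcpy(out, ch->slots[ch->head], ch->len[ch->head]);
    ch->head = (ch->head + 1) % QUEUE_CAP;
    ch->count--;
    return 0;
}
```

Once a message is sent, the sender can modify or free its own copy without affecting the receiver's view: that's the lock-free semantics you pay for with the extra memcpy.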


It all seems fantastic, but there's a long road ahead. First, message passing scales much better than shared memory, but nowadays there aren't enough cores in most machines to make it worthwhile. Message passing also introduces other problems that are not easily solvable: bus traffic and storage requirements increase considerably, and messages are not that generic in nature.

Some companies are famous for not adhering to standards (Apple comes to mind), and a standard hardware IPC framework would be quite hard to achieve. Also, even with pure software IPC APIs, different companies will still use slightly modified APIs to suit their specific needs, and complexity will rise exponentially.

Another problem is where the hypervisor will live. Having a distributed control centre is cool and scales amazingly well, but its complexity also scales. In a hybrid-core machine, you have to run different instructions, in different orders, with different optimizations and communication. Choosing one core to deal with the scheduling and administration of the system is much easier, but leaves a single point of failure.

Finally, going the multi-hybrid-independent route is way too complex, even for a multi-year project with lots of funding (M$) and clever people working on it. After all, if the micro-kernel were really that useful, Tanenbaum would have won the argument with Linus. But the future holds what the future holds, and reality (as well as hardware and some selfish vendors) can change. Multi-kernel might be possible, and even easier to implement, in the future.

This seems to be what the Barrelfish team is betting on, and I'm with them on that bet. Even if it fails miserably (as Minix did), some of its concepts could still end up in real-world operating systems (as Minix's did), whatever that'll mean in 10 years. Being serious about parallelism is the only way forward; sticking with 40-year-old concepts is definitely not.

I’m still aspiring for non-deterministic computing, though, but that’s an even longer shot…

SLC 0.2.0

My pet compiler is now sufficiently stable for me to advertise it as a product. It should deal well with the most common cases if you follow the syntax, and there are some tests to assure minimum functionality.

The language is very simple, called "State Language". It has only global variables and static states. The first state is executed first; all the rest run only if you call goto state;. If you exit a state without branching, the program quits. This behaviour is consistent with the State Pattern, which is why it's implemented this way. You can solve any computable problem using state machines, so this new language should be able to tackle them all.
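To make the execution model concrete, here is a sketch of that behaviour in C; the state names and the counter are made up for illustration, this is not the State Language's real syntax:

```c
/* Each state is a function that returns the id of the next state;
 * QUIT models "exiting a state without branching", which ends the run. */
typedef enum { STATE_LOOP, STATE_DONE, QUIT } StateId;

static int counter; /* the language only has global variables */

StateId state_loop(void) {
    counter++;
    return counter < 3 ? STATE_LOOP  /* goto loop; */
                       : STATE_DONE; /* goto done; */
}

StateId state_done(void) {
    return QUIT; /* no branch taken: the program quits */
}

/* The first state is executed first; then we just follow the gotos. */
int run(void) {
    StateId s = STATE_LOOP;
    while (s != QUIT) {
        switch (s) {
        case STATE_LOOP: s = state_loop(); break;
        case STATE_DONE: s = state_done(); break;
        default: break;
        }
    }
    return counter;
}
```

The dispatch loop in run() is the whole interpreter: states never call each other directly, they only name their successor.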

The expressions are very simple: only binary operations, no precedence. Grouping is done with brackets and only the four basic arithmetic operations can be used. This is intentional, as I don't want the expression evaluator to be so complex that the code becomes difficult to understand.

Like all the code I write in my spare time, this one has an educational purpose. I learn by writing, and hopefully teach by providing the source, comments and posts, and by making it available on the internet so people can find it through search engines.

It should work on any platform you can compile it for (currently only Linux and Mac binaries are provided), but the binaries are still huge (several megabytes) because they contain all the LLVM libraries statically linked in.

I'm still working on it and will update the status here at every .0 release. I hope to have binary operations for if statements, string printing and all PHIs calculated by the next release.

The LLVM compilation infrastructure

I’ve been playing with LLVM (Low-Level Virtual Machine) lately and have produced a simple compiler for a simple language.

The LLVM compilation infrastructure (much more than a simple compiler or virtual machine) is a collection of libraries, methods and programs that allows one to create simple, robust and very powerful compilers, virtual machines and run-time optimizations.

Like GCC, it's roughly separated into three layers: the front-end, which parses the files and produces the intermediate representation (IR); the independent optimization layer, which acts on the language-independent IR; and the back-end, which turns the IR into something executable.

The main difference is that, unlike GCC, LLVM is extremely generic. While GCC struggles to fit broader languages into its strongly C-oriented IR, LLVM was created with a very extensible IR, with a lot of information on how to represent a plethora of languages (procedural, object-oriented, functional etcetera). This IR also carries information about possible optimizations, like GCC's, but at a deeper level.

Another very important difference is that the back-end provides not only code generators for many platforms, but also Just-In-Time compilers (somewhat like JavaScript engines), so you can run, change, re-compile and run again, without even quitting your program.

The middle layer is where the generic optimizations are done on the IR, so it's language-independent (as all languages are converted to IR). But that doesn't mean optimizations are done only at that step: all first-class compilers optimize heavily from the moment they open the file until they finish writing the binary.

Parser optimizations normally include dead code removal and constant expression folding, among others, while the most important optimizations in the back-end involve instruction replacement, aggressive register allocation and abuse of hardware features (such as special registers and caches).
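Here's a toy version of the constant folding just mentioned, evaluating a tree of constant expressions at compile time so no code has to be emitted for it; this is a sketch of the idea, not how LLVM actually represents expressions:

```c
#include <stddef.h>

/* A tiny expression tree: leaves hold constants, inner nodes operators. */
typedef enum { CONST, ADD, MUL } Kind;
typedef struct Expr {
    Kind kind;
    int value;          /* used when kind == CONST */
    struct Expr *l, *r; /* used for ADD and MUL */
} Expr;

/* A front-end can replace the whole tree with a single CONST node
 * carrying this value, so later stages never see the arithmetic. */
int fold(const Expr *e) {
    switch (e->kind) {
    case CONST: return e->value;
    case ADD:   return fold(e->l) + fold(e->r);
    case MUL:   return fold(e->l) * fold(e->r);
    }
    return 0; /* unreachable for well-formed trees */
}
```

So (2 + 3) * 4 in the source becomes a plain 20 before any back-end work happens.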

But LLVM goes beyond that: it can optimize at run-time, even after the program is installed on the user's machine. LLVM can keep information (and the IR) together with the binary. When the program is executed, it profiles itself automatically and, when the computer is idle, it optimizes the code and re-compiles it. This optimization is per-user, which means two copies of the same software can become quite different from each other, depending on each user's use of it. Chris Lattner's paper about it is very enlightening.

There are quite a few very important people and projects already using LLVM, and although there is still a lot of work to do, the project is mature enough to be used in production environments or even completely replace other solutions.

If you are interested in compilers, I suggest you take a look at their website… It's, at the very least, mind-opening.

Object Orientation in C: Structure polymorphism

Listening to a podcast about the internals of GCC, I learnt that, in order to support object-oriented languages in a common AST (abstract syntax tree), GCC does polymorphism in quite a peculiar way.

There is a page that describes how to do function polymorphism in C, but not structure polymorphism as it happens in GCC, by means of a union, so I decided this was a good post to write…


Like structs, unions let you group a list of things together but, unlike structs, those things share the same space. So, if you create a struct with an int and a double, the size of the structure is the sum of both sizes. In a union, the size is the size of the biggest element, and all elements sit in the same area of memory, accessed from the first byte of the union.

Its usage is somewhat limited and can be quite dangerous, so you won't find many C programs, and will rarely find any C++ programs, using it. One use is to (unsafely) convert numbers (double, long, int) to their byte representation by accessing them as an array of chars. But the use we'll see now is how to entangle several structures together to achieve real polymorphism.
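Both the layout difference and the byte-conversion trick fit in a few lines (illustrative names; note that reading a union member other than the last one written is formally unsafe, which is the whole point of the trick):

```c
/* struct: members laid out one after another, so its size is at least
 * the sum of the members' sizes (plus any padding). */
struct Pair { int i; double d; };

/* union: members overlap from the first byte, so its size is just that
 * of the biggest member. The bytes[] view enables the (unsafe) trick
 * of reading a number's byte representation. */
union Overlay { double d; unsigned char bytes[sizeof(double)]; };

/* Write a double, then read it back byte by byte through the overlay. */
unsigned char first_byte(double v) {
    union Overlay o;
    o.d = v;
    return o.bytes[0];
}
```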


In object-oriented polymorphism, you can have a list of different objects sharing a common interface, accessed through that interface's structure. But in C you don't have classes and you can't build structure inheritance, so to achieve the same effect you need to put them all in the same box, while at the same time defining a generic interface to access their members.

So, if you define your structures like:

struct Interface {
    int foo;
    void (*bar)();
};
struct OneClass {
    int foo;
    void (*bar)();
};
struct TwoClass {
    int foo;
    void (*bar)();
};

and implement the methods (here represented by function pointers) like:

void one_class_bar () {
    printf("OneClass.bar()\n");
}
void two_class_bar () {
    printf("TwoClass.bar()\n");
}

and associate the created functions with the objects (you could use a Factory for that), you have three different classes, still not connected. The next step is to connect them via the union:

typedef union {
    struct Interface i;
    struct OneClass o;
    struct TwoClass t;
} Object;

and you have just created a generic object that can hold either OneClass or TwoClass and be accessed via the Interface. Later on, when reading from a list of Objects, you can access them through the interface (if you build your classes with the members in the same order) and it'll call the correct method (or use the correct variable):

Object list[2];
/* Setting */
list[0] = (Object) one;
list[1] = (Object) two;
/* Using */
list[0].i.bar();
list[1].i.bar();

Note that when iterating over the list, we access the Object via the Interface (list[0].i) and not via OneClass or TwoClass. Although the result would be the same (as they share the same portion of memory and would thus execute the same method), this is conceptually correct and compatible with object-oriented polymorphism.

The code above produces the following output:

$ ./a.out
OneClass.bar()
TwoClass.bar()

You can get the whole example here. I haven't checked the GCC code, but I believe they've done it in a much better and more stable way, of course; the idea, though, is probably the same.

Disclaimer: This is just a proof-of-concept. It's not nice, they (the GCC programmers) were not proud of it (at least in the podcast), and I wouldn't recommend anyone use it in production.