C++ class sizes
January 28th, 2011 under Algorithms, Devel, rengolin. [ Comments: none ]

In a recent struggle to pin down class sizes in C++, I thought it would be good to share some of my findings, since there isn’t much about it on the net. The empty class size is covered all over the place, neatly put in Stroustrup’s FAQ, but the weirder cases were much harder to find.

For obvious reasons, the C++ standard is pretty vague about class sizes and memory layout. This is more of an ABI issue than a language standard issue, but the problem is that even the Itanium C++ ABI is a bit vague on that point.

The C++ standard allows the memory layout to be defined by the ABI, positioning the class members wherever they fit most naturally on the target, but there is one specific clause: an object cannot have zero size. While most layout decisions are target specific, this one is strictly a language decision. If objects were allowed to be zero bytes in size, an array of 1000 zero-sized objects would occupy no space at all, and it would be impossible to distinguish one element from another.

Empty classes

First, a background on empty classes. An empty class is somewhat useless in most cases, but there is one case in which it plays an important role: type safety.

If you want to group objects that have nothing in common (to use in an algorithm), or to force a specific class of template parameters, or even to differentiate between exceptions so you can catch them differently, even when they don’t carry any data, an empty class does the job pretty well. Thus, using and deriving from empty classes is a fairly common enterprise in C++.

Empty class size

So, what happens when you declare an empty type?

class Empty {
};

What size will objects of this type have? It cannot be zero, since the standard forbids that, so it has to be something. The best choice is one byte, since no architecture will benefit from anything less (please, don’t think of bit-fields!), and if alignment is a problem, most (all?) compilers will pad the byte up to a word-aligned block in memory.

Obviously, an empty class that derives from another empty class also has to follow the same rule:

class Empty {
};

class NowWhat : public Empty {
};

In this case, both Empty and NowWhat have size 1. But if NowWhat has data in it, will the extra byte still be there? Neither the standard nor the ABI says that it shouldn’t, but nothing says it should, either. What’s important here is the reason why you need the extra byte in the first place: the addresses of different objects must be different.

When the derived class already has data, its objects will already be at different locations when laid out in memory, so there is no need for the extra byte.

class Empty {
};

class NowWhat : public Empty {
	int a;
};

Now, Empty still has 1 byte, but NowWhat has 4, not 5. Some people consider this an optimisation (the so-called empty base optimisation); I just think it’s following the rules the standard requires.

The reverse case is much simpler. If the derived class is the empty one (for whatever reason), the size is already non-zero, so the two classes below have the same size (4):

class Full {
	int a;
};

class Empty : public Full {
};

Multiple Inheritance

Multiple inheritance brings a bit of colour to this problem. The C++ standard requires that two distinct objects always occupy different memory locations (so their pointers compare differently), and it doesn’t allow zero-sized objects. So what is the size of the Derived class below?

class Empty {
};

class M1 : public Empty {
};

class M2 : public Empty {
};

class Derived : public M1, public M2 {
};

Before, we could make the NowWhat class with only one byte, because that was sufficient to discriminate it in an array, but what about comparisons? If you cast a Derived object to Empty via M1, you get a different (logical) object than if you do it via M2, so the two must compare differently. For that, you need to add another byte, so the conversion yields the first address via one base class and the second via the other.

Empty members

With single and multiple inheritance nailed down, let’s think of a more complicated case.

class Empty {
};

class Another : public Empty {
};

class NotSoMuch : public Empty {
public:
	Another a;
};

What’s happening here is that the first member of the derived class is itself of an empty class type, one that also derives from the same base as the enclosing class.

See, this last bit is important: if you replace Another a; with a member of an unrelated empty class type (say, Empty2), the size of the derived class will still be 1. But in the case above, that is not true: the size of NotSoMuch is actually 2.

The field Another a; has size one, like any empty class object, so what’s the second byte for? The second byte is padding, at the beginning of the class, to avoid type conflicts.

Type conflicts

When deriving from empty classes, the ABI (2.4-II-3) states that you should always try to put the derived members at offset zero and, in case of a type conflict, move down (alignment-wise) until there are no more conflicts. The question is: what is a type conflict in this case? With multiple inheritance the conflict was clear (the diamond), but here, not so much.

Since both the class and its first member can be converted to Empty (as pointers, for instance), you can end up with two Empty objects that, when compared, must have different addresses. In the code below, n is a pointer to the derived object; it is itself converted to an Empty pointer and stored in en.

	NotSoMuch *n = new NotSoMuch();
	Empty *en = (Empty*)n;
	Empty *ea = (Empty*)&n->a;

If the member Another a; were at offset zero of the class, the pointer ea would hold the same address as en, and a comparison for equality would wrongly return true.

The type conflict in this case is that, while both en and ea are pointers to Empty, the former gets there via NotSoMuch and the latter via Another. According to the C++ standard they are distinct objects and thus must have different addresses.

Again, if the empty class member is not the first element, none of that happens. In the example below, the two similar (but different) Empty subobjects are already separated by the 4-byte int, so no leading padding byte is needed. The member a still occupies a byte of its own, though, followed by tail padding up to the int’s alignment, so the total size on a typical target is 8, not 9.

class Empty {
};

class Another : public Empty {
};

class NotSoMuch : public Empty {
public:
	int i;
	Another a;
};


Template code is also susceptible to this problem, of course, and the iostream library is full of such examples. The problem is not so much abuse of empty classes (that doesn’t happen much in the STL); it’s that non-virtual classes that only have member functions and no data members count as empty classes. And since templates are a good replacement for most virtual inheritance (and its inherent run-time complexity and slowness), basic libraries have to make extensive use of templates to avoid slowing down every single C++ program.

It’s true that the iostream library is particularly heavy and cumbersome, but it could be much worse.

The additional problem with templates (for the compiler, at least) is figuring out whether there is a type conflict. While templates are already expanded by the time the core C++ compiler kicks in, it is complicated to carry the inheritance-graph information from the source code to every single template variation and specialisation. Not to mention the new standard, where C++ is getting a variadic templates mechanism.

I hope this is one more reason for you to avoid C-style pointer casts (especially via void*) and use only C++ static_cast and dynamic_cast. Even reinterpret_cast is dangerous, but at least you can grep for the named casts in your source files and inspect them one by one.

The Group
January 23rd, 2011 under Life, rengolin, Stories, World. [ Comments: none ]

As a postal worker, Mark had plenty of time to wonder about things. Being in the post was not the most boring job ever, but it wasn’t so complex that it would put his brain cells to work that much. A bit of letter sorting and route planning was more than he needed to perform his job well and, even though he had a few neurons to spare, that didn’t actually help with his boss’s appraisal.

Not that Mark’s boss didn’t welcome a bit of thinking; it’s just that sometimes too much thinking can do more harm than good. Nevertheless, Mark had held that job for a few years now, and had no plans to actually make a change. He had no family to care for, no massive debt to pay off, and wasn’t particularly good looking enough to actually have a girlfriend.

But all that averageness didn’t help Mark stop thinking about those things. Things that would make him lose his job. Things that always made him awkward when talking to women. Things that nobody else could understand, and that nobody cared about, for that matter. Probably the very reason why he was thinking about them again this morning…


Between delivering some spam to a semi-detached family house and dropping a small box at a bungalow with lots of rubbish on the pavement, he thought about how hard it is to do what people expect of you. Why do we have to deliver spam to half the country? Why can’t he just skip the spam, since nobody wants it anyway, and just deliver the good stuff? Would they really know if he’d delivered the spam in the first place?

For a few minutes that day, people walking down the pavement were somewhat annoyed by the presence of a motionless postman holding a few flyers. He was thinking… If they had actually been paying attention, people on that street, that day, would have seen a perfectly regular postman sorting through the delivery quota in his bag with anger, until all the flyers were in his hand. He opened the green bin of that bungalow and dropped them all in.

To be honest, one mother coming down the high street, immediately after dropping her daughter at school (and the usual chat with other parents), actually saw all of that happen. But her head was so full of problems: her daughter’s performance in school wasn’t that good, and her husband, if you can call that a husband, wasn’t being particularly nice that day. She dismissed the whole scene as just another common madness of the world.

Mark was anxious, waiting for someone to say something, to reprimand him or to cheer his bravery, but nothing really happened. It was exactly the same village it had been just a few minutes before. A very radical move on his part had done no damage whatsoever to the course of mankind. It was at that moment that he decided to do it every day.

For 3 years he put all the flyers in random bins (there weren’t that many, but he managed to hide some in other random places, too). To no surprise, absolutely nothing happened to anyone. Local businesses were still working, Tesco was still full of people buying the same chicken wings on sale, and the brand new chip shop had a very good clientèle, despite all their spam going to the bin every day.

With great power…

His success was a bit disappointing. Not only had he managed to keep it up for so long, but nobody ever cared. By now, people were actually used to seeing him dropping flyers, no matter how extravagant his moves around the green bins were. People would even greet him good morning while he was doing it. But he wasn’t a normal fellow, and his sense of righteousness put him on track to reform society. Small changes for a small man but, nevertheless, changes.

He decided to do the right thing wherever a wrong thing was expected. He delivered letters to doctors on the same day, even when a second-class stamp was used. He’d slack off during most of the afternoon to deliver the big packages during the evening, when everybody was at home. He even delivered letters to people he knew while shopping, and one day he replied to a letter himself.

It was a letter to a marriage lawyer’s firm, in the postbox next to the school. The letter was a bit crumpled and written in a very shaky hand. He knew exactly who it was from, and why. He replied:

Dear Mrs. Wife,

Your husband is a crook. He gambles away the unemployment benefit, he hits your daughter and he has affairs with more people than I’d dare to say.

You don’t need a lawyer, you need to slap him in the face and throw him out of your house.

The postman

If that ever helped, nobody knows, but how much better it made him feel is inexplicable. The good feeling was taking over his life. He was less tense, had a few dates with the bakery attendant and even sent a letter to his mother. But all that feeling was stopped dead by a call from his boss. Apparently there had been complaints that the postal service was a bit erratic and some letters were not reaching their destinations.

Mark’s boss reassured him that he trusted Mark, but wanted him to know that there would be some investigations and questions for all members of staff. As it turned out, another postman, unhappy with his work, had stopped delivering anything and had gone to the pub for the last few days. After a weekend delivering more letters than usual, everything went back to normal.

Happiness is ethereal

During the next few months, Mark managed to have a sound relationship with Emma (the bakery attendant) and they were actually happy. After the year’s end, Mark got a raise and could now afford a cable TV subscription. He didn’t get the sports pack, since Emma wanted the entertainment one, but all was fine as long as she was there, with him.

However, as it couldn’t be otherwise, Mark started to wonder… He was really happier now than he had been some years ago. The whole city seemed to have accepted his behaviour, no matter how odd. Even Emma ignored the issue after Mark told her about it during one of their first dates. It really wasn’t that important. How was that possible?

Could he, then, do whatever he wanted? To what extent could he bend the laws imposed by the people before they started noticing, and doing something about it? How can some people do so little and go to jail, while he, with such a radical take on life, gets completely ignored? What would he have to do to be noticed?

In whatever group you are, Mark realised, as long as you don’t interfere with its natural course, you will be ignored. He had learnt from one of the documentary channels that this is true of every animal. Man is no more than any other animal. Society is no more than any other group. Not only can you do whatever you want, as long as it doesn’t interfere with the group, but everything you do will be completely ignored and, when you die, forgotten.

Obviously, Mark’s new take on life put some dents in his relationship, but he managed to suppress his thoughts while Emma was around. He wouldn’t want to lose her, not after so much trouble to get her. He also agreed not to talk weird while her friends came over, and that took their relationship to a marriage, and life went on as you know it.

To be honest, I never heard of a postman named Mark but, according to his own theories, he could very well have existed and you’ll never know it…

Dream Machine (take 2)
January 18th, 2011 under Computers, Gadgets, Hardware, rengolin, Technology, Thoughts. [ Comments: none ]

More than three years ago I wrote about the desktop I really wanted… Now it’s time to review that and make some new speculations…

Back Then

The key issues I raised back then were wireless technology, box size, noise, temperature and the interface.

Wireless power hasn’t progressed as much as I’d like, but all the rest (including wireless graphics cards) is already at full steam. So, apart from power, you don’t need any cables. Also, batteries are getting a bit better (not as fast as I’d like, either), so there is another stop-gap for wireless power.

Box size has reduced dramatically since 2007. All the tablets are almost full computers and with Intel and ARM battling for the mid-size form-factor, we’ll see radical improvements with lower power consumption, smaller sizes, much cooler CPUs and consequently, no noisy fans. Another thing that is bound to reduce temperature and noise is the speed in which solid-state drives are catching up with magnetic ones.

But with regard to the interface, I have to admit I was a bit too retro. Who needs 3D glasses, or pointer hats to drive the cursor on the screen? Why does anyone need a cursor in the first place? Well, that brings me to my second dream machine.

Form Factor

I love keyboards. Typing for (int i=0; i<10; i++) { a[i] = i*M_PI; } is way easier than trying to dictate it and hoping the software gets the brackets, increments and semi-colons right. Even if the dictation software were super-smart, I would still feel silly dictating that. Unless I can just think and have the computer create the code the way I want it, there is no better interface than the keyboard.

Having a full-size keyboard also lets you spare some space for the rest of the machine. Transparent CPUs, GPUs and storage are still not available (nor do I think they will be in the next three years), so putting it all into the monitor is a no-go. Flat keyboards (like the Mac ones) are a bit odd and bad for ergonomics, so a simple ergonomic keyboard with the basic hardware inside would do. No mouse, of course, nor any other device except the keyboard.

A flat transparent screen, of some organic LED or electronic paper, with the camera built-in in the centre of the screen, just behind it. So, on VoIP conversations, you look straight into the eyes of the interlocutor. Also, transparent speakers are part of the screen, half-right and half-left are screen + speakers, with transparent wiring as well. All of that, wireless of course. It should be extra-light, so just a single arm to hold the monitor, not attached to the keyboard. You should be able to control the transparency of the screen, to change between VoIP and video modes.


CPUs and GPUs are so 10's. The best way to go forward is to have multi-purpose chips, that can turn themselves (or their parts) on and off at will, that can execute serial or vector code (or both) when required. So, a 16/32 core machine, with heavily pipelined CPU/GPUs, on multiple buses (not necessarily all active at the same time, or for the same communication purpose), could deal with on-demand gaming, video streaming, real-time ray-tracing and multi-threaded compilation without wasting too much power.

On a direct comparison, any of those CPU/GPU dies would have a fraction of the performance of a traditional mono-block chip, but with their inherent parallelism, and with the OS and drivers written on that assumption, a lot of power can be extracted from them. Also, with so many chips, you can selectively use only as much as you need for each task. So, a game would use more GPUs than CPUs, probably with one or two CPUs to handle interface and sound. When programming, one or two CPUs can handle the IDE while the others compile your code in the background. As all of this is on demand, even during a game you could have a variable number of chips working as GPUs, depending on the depth of the world being rendered.

Memory and disk are getting cheaper by the second. I wouldn't be surprised if in three years 128GB of memory and 10TB of solid-state disk are the new minimum. All that, fitting nicely alongside the CPU/GPU bus, avoiding too many hops (northbridge + PCI + SATA + etc.) to get the data in and out, would also speed up the storage and retrieval of information. You could probably do a 1-second boot from scratch, removing the need for sleep: just pure hibernation.

Network: again, wireless, of course. It's been a reality for a while now, but I don't expect it to improve considerably in the next 3 years. I assume broadband will increase a few percent, 4G will fail to deliver what it promises once the number of active clients reaches a few hundred, and the TV spectrum requires more bureaucracy than the world can handle. The cloud will have to wait a bit longer to get where hard drives are today.


A few designs have revolutionised interfaces in the last three years. I consider the pointer-less interface (decent touch screens, camera awareness) and the brain interface the two most important ones. Touch-screens are interesting, but they are cumbersome, as your limbs get in the way of the screen you're trying to interact with. The Wii-mote was a pioneer, but the MS Kinect broke the barrier of usability. It's still in its early stages but, as such, it's a great revolution, and because of the unusual openness of Microsoft about it, I expect it to baffle even the most open-minded.

On the other hand, brain interfaces only began to be usable this year (and not that much so). The combination of a Kinect, a camera that tracks your eyes, and a brain interface to control interactions with the items on the screen should be enough to work efficiently and effectively.

People already follow the mouse with their eyes; it's easy to teach people to make the pointer follow their eyes instead. But to remove uncertainties, and to get rid of the annoying cursor once and for all, you need a 3D camera that takes into account your position relative to the screen and the position of other people (who could also interact with the screen in a multi-look interface and think together to achieve goals). That has applications from games to XP programming.

Voice control could also be used for more natural commands such as "shut up" or "play some jazz, will ya?". Nothing too complex, as that's another field that has been crawling along for decades and hasn't had a decent sprint since it started…


The cost of such a machine wouldn't be too high, as the components are cheaper than today's complex motherboard designs, with multiple interconnection standards, different manufacturing processes and tests (very expensive!). The parts themselves would maybe be a bit expensive, but in such volumes (and standardised production) the cost would be greatly reduced.

For the environment, not so much. If mankind continues with its ridiculous necessity of changing computers every year, a computer like that would fill up the landfills. The integration of the parts is so dense (e.g. monitor + cameras + speakers in one package) that it would be impossible to recycle it more cheaply than sending it to the sun to burn (not so bad an alternative).

But in life we have to choose what's really important. A nice computer that keeps you in a chair for the majority of your life is more important than some pandas and bumble bees, right?

Computer Science vs Software Engineering
January 13th, 2011 under Corporate, rengolin, Science, Technology. [ Comments: none ]

The difference between science and engineering is pretty obvious. Physics is science, mechanics is engineering. Mathematics is (ahem) science, and building bridges is engineering. Right?

Well, after several years in science, and far more time in the software engineering I was hoping to tell my kids about when they grow up, it seems that people’s beliefs about the difference, if there is any, are much more exacerbated than their own logic seems to imply.


General beliefs that science is more abstract fall apart really quickly when you compare maths to physics. There are many areas of maths (statistics, for example) that are much more grounded in the real world than many parts of physics (like string theory and a good part of cosmology). Nevertheless, most scientists will turn their noses up at anything that resembles engineering.

From different points of view (biology, chemistry, physics and maths), I could see that there isn’t a consensus on what people really consider the less elaborate task, not even among the same groups of scientists. But when faced with a rejection by one of their colleagues, the rest usually agree with it. I came to the conclusion that the psychology of belonging to a group was more important than personal beliefs or preferences. One would expect that from young schoolgirls, not from professors and graduate students. But regardless of the group behaviour, there is still that feeling that tasks such as engineering (whatever that is) are mundane, mechanical and contribute less to the greater good than science.

Real World

On the other side of the table, the real world, there are people doing real work. It generally consists of less thinking, more acting and getting things done. You tend to use tables and calculators rather than whiteboards and dialogue, your decisions are much more based on gut feeling and experience than on over-zealously examining every single corner case, and the result of your work is generally more compact and useful to the everyday person.

From that perspective, (what we're calling) engineers have a good deal of prejudice towards (what we're calling) scientists. For instance, the book Real World Haskell is a great pun by people with one foot on each side of this battle (though leaning towards the more abstract end of it). In the commercial world you don't have time to analyse every single detail: you have a deadline, so you do what you can with it and buy insurance for the rest.

Engineers also produce better results than scientists. Their programs are better structured, more robust and more efficient. Their bridges, rockets, gadgets and medicines are far more tested, bullet-proofed and safe than anything a scientist could ever hope to produce. It is a misconception that software engineers have the same experience as an academic with the same time spent coding, just as it is a misconception that engineers could as easily develop prototypes that would revolutionise their industry.

But even in engineering, there are tasks and tasks. Even while loathing scientists, those engineers that perform the more elaborate tasks (such as massive bridges, ultra-resistant synthetic materials, operating systems) consider themselves above the mundane crowd of lesser engineers (building 2-bed flats in the outskirts of Slough). So, even here, the more abstract, less fundamental jobs are held in higher regard than the ones more essential and critical to society.

Is it true, then, that the more abstract and less mundane a task is, the better?


Since the first thoughts on general-purpose computing, there has been this separation between the intangible generic abstraction and the mundane, mechanical, real-world machine. Leibniz developed the binary numeral system, compared the human brain to a machine and even had some ideas on how to build one, someday, but he ended up creating some general-purpose multipliers (following Pascal’s design for the adder).

Leibniz would have been thrilled by the 21st century. Lots of people in the 20th with the same mindset (such as Alan Turing) did so much more, mainly because of the availability of modern building techniques (perfected over centuries by engineers). Babbage is another example: he developed his difference engine for years and, when he failed (more through arrogance than anything else), his analytical engine (far more elegant and abstract) took over his entire soul for another decade. When he realised he couldn’t build it in that century, he perfected his first design (reducing the size 3 times) and made a great specialist machine… for engineers.

Mathematicians and physicists had to do horrible things (such as astrology and alchemy) to keep their pockets full and, in their spare time, do a bit of real science. But in this century that is less important. Nowadays, even if you're not a climate scientist, you can get a good budget for very little real applicability (check NASA's funded projects, for example). The number of people working on string theory or trying to prove the Riemann hypothesis is a clear demonstration of that.

But computing is still not there yet. We're still doing astrology and alchemy for a living and hoping to learn the more profound implications of computing in our spare time. Well, some of us at least. And that brings me to my point…

There is no computer science… yet

The beginning of science was marked by philosophy and dialogue. 2000 years later, mankind was still doing alchemy, trying to prove the Sun was the centre of the solar system (and failing). Only 200 years after that did people really start doing real science, cleansing themselves of private funding and focusing on real science. But computer science is far from that point…

Most computer science courses I've seen teach a few algorithms, an object-oriented language (such as Java) and a few courses on current technologies (such as databases, web development and concurrency). Very few of them really teach about Turing machines, group theory, complex systems, other forms of formal logic or alternatives to the current models. Moreover, the number of people doing real science in computing (judging by what appears on arXiv or on news aggregation sites such as Ars Technica or Slashdot) is probably smaller than the number of people working on string theory or wanting a one-way trip to Mars.

So, what do PhDs do in computer science? Well, novel techniques on some old-school algorithms are always a good choice, but the recent favourites have been breaking the security of the banking system or re-writing the same application we all already have, but for the cloud. Even the more interesting dissertations, like memory models in concurrent systems or energy-efficient gate designs, are at most commercial applications.

After all, PhDs can get a lot more money in industry than by remaining at the universities, and doing your PhD towards some commercial application can guarantee you a more senior starting position in such companies than something completely abstract. So, now, to be honestly blunt, we are all doing alchemy.

Interesting engineering

Still, that's not to say there aren't interesting jobs in software engineering. I'm lucky to be able to work with compilers (especially because it also involves the amazing LLVM), and there are other jobs in the industry as interesting as mine. But all of them are just the higher engineering, the less mundane rocket science (which has nothing of science in it). All in all, software engineering is a very boring job.

You cannot code freely, ignore the temporary bugs, or ask the user to be nice and provide controlled input. You need a massive test infrastructure, quality control, standards (which are always tedious) and well-documented interfaces. All that gets in the way of real innovation; it makes any attempt at innovation in a real company a mere exercise in futility and a mild source of fun.

This is not exclusive to the software industry, of course. In the pharmaceutical industry there is very little innovation. They do develop new drugs, but using the same old methods. They do need to get new, more powerful medicines out of the door quickly, but the massive amount of tests and regulation they have to follow is overwhelming (this is why they avoid doing it right as much as possible, so don't trust them!). Nevertheless, there are very interesting positions in that industry as well.

When, then?

Good question. People are afraid of going outside their area of expertise; they feel exposed and ridiculed, and quickly retreat to their comfort zone. The best thing that can happen to a scientist, in my opinion, is to be proven wrong. For me, there is nothing worse than being wrong and not knowing it. Not many people are like that, and the fear of failure is what keeps the industry (all of them) in the real world, with real concerns (which is good, actually).

So, as long as industry drives innovation in computing, there will be no computer science. As long as the most gifted software engineers are mere employees of the big corporations, they won't try, to avoid failure, as that could cost them their jobs. I've been to a few companies, and heard about many others, that have a real innovation centre, computer laboratory or research department, and there isn't a single one of them that is actually bold enough to change computing at its core.

That is something IBM, Lucent and Bell Labs did in the past, but probably don't do any more these days. It is a good twist of irony, but the company that gets closest to software science today is Microsoft, at its campus in Cambridge. What happened to those great software teams of the 70's? Could those companies really afford real science, or were they just betting their petty cash in case someone got lucky?

I can't answer those questions, nor whether it will ever be possible to have real science in the software industry. But I do plead with all software people to think about this when they teach at university. Please, teach those kids how to think, to defy the current models, to challenge the universality of the Turing machine, to create a new mathematics and prove Gödel wrong. I know you won't try (out of hubris and self-respect), but they will, and they will fail, and after so many failures something new can come up and make the difference.

There is nothing worse than being wrong and not knowing it…

