Miguel has finally seen the light :-)

I had the privilege of meeting Gnome’s founder, Miguel de Icaza, at the first GUADEC back in… oh my, 2000, has it been that long already. I was still co-leading Gtk-- at the time. Miguel is a brilliant fellow, full of energy and spirit. But at the time I disagreed with most of his views. So it is with some amusement that I caught the following bit in a recent interview:

derStandard.at: What would – in retrospect – be the one piece of software that you most regret having written in plain C because C# was not around at that time?

Miguel de Icaza: Everything that I ever wrote for the desktop.

And just before that:

I would not waste time in the 99% of the application that copes with high-level issues doing it in a low-level language.

You don’t say.

Gee, Miguel, C++ was there at the time too. It’s far from being as high-level as C#, but it’s still better than C. (That said, believe it or not, I very much hope that Mono will grow to be a commonly used platform for desktop apps on Linux, because C# is just so much better than C++ – but please drop GTK as a graphics toolkit, that can only be a temporary solution. Debugging through two levels of completely different object models will always be a nightmare.)

Poisonous users, a live example

It’s ironic that a couple of weeks after this silly display of zealotry, this interesting video on how to deal with poisonous users would be highlighted on Slashdot. We’re quite fortunate with Rosegarden that we almost never had to deal with this problem. I can only think of one occurrence, but the guy actually turned into a valuable contributor once we explained the problem to him.

The thread linked above on the merits of KDE’s new file manager, Dolphin, has pretty much all of the standard features of the clueless user who can’t tolerate having to change his ways. His point is that his current way of working is the best one and must not be altered, and more specifically must not be simplified or “dumbed down”. I’m pretty sure there’s a strong correlation between how much a user wants his environment to be configurable and how unproductive he actually is. I.e., the more you care about fine-tuning your tools, the less you actually use them. The extreme being, you guessed it, PC tuners :-). View it as a form of procrastination if you will; I’ve yet to see someone who’s adamant about “having choice” (which really means wanting to use his pet apps and considering that anything else is crap) produce anything useful.

That won’t prevent him from demanding to see the data from usability tests, which he’ll of course dismiss if they contradict his own usage patterns (which are completely geekish, but he can’t realise that).

On a side note, you have to be impressed by Aaron’s tactful and patient behavior all throughout the thread.

Free software as a political paradox

This Slashdot story prompted me to write a post about free software and politics, which I had hinted at in a footnote of my first post.

Before carrying on, for the sake of clarity I should state that in France I’m center-left; in the US I’m a dangerous leftist.

The author of the Slashdot story is puzzled to see that right-wing people are more likely to use free software than left-wing ones. Given that free software is generally considered a ‘left’ value, one would have thought it would be the opposite (and, as some comments explain, I believe it is the case in France). However, there are two ways to see the problem:

You can consider the software industry to be the perfect example of capitalism and free enterprise in action, and therefore free software, aiming to destroy it, to be anti-capitalist (thus left-oriented).

Or, you can consider that free software, being the empowerment of the individual, is an even better illustration of free enterprise against the state-like monopoly of the software industry. One of the basic postulates behind right-wing politics is that the free market always finds the best solution, eventually, a solution which state-driven economics (i.e. socialism) cannot hope to find. And that’s exactly what free software postulates: give free rein to developers and they will create software that the industry (which is a state-like structure) can’t possibly produce. And there you have why libertarians such as ESR, who believe in minimal government, also support free software.

At this point it’s hard to resist indulging in an obvious statement: the little I’ve learned about economics clearly shows that the free market (or, the combination of everybody’s selfish motives) does reach an equilibrium, only the worst one. And you have a perfect example of this in free software: people prefer to start their own project rather than collaborate with an existing one. The very philosophy of free software gives them a perfect reason to do so: “let the users decide”. Only the users don’t have perfect market knowledge; they don’t evaluate different pieces of software rationally, but emotionally. Because they liked what the author said in an interview, because they like the looks, or some specific feature, but not because it’s really better written or designed (they may sometimes argue so but honestly, have they really looked at the code?). So sometimes there is indeed stabilisation over a few pieces of software, each fitting its own niche. Most of the time, however, you get a big old duplication of efforts, with zealots on each side arguing about how their own favorite is really the best one and the other side is just a bunch of morons.

So there. If the free market really worked so well, we wouldn’t have two incomplete desktop frameworks and Microsoft would be long gone. Instead, it seems that the State as a government system is still the best working solution: letting devs do what they like whenever possible, and constraining them to do what’s needed the rest of the time. And that’s why I’d rather pay taxes than let the market decide whether a school or a road should be built.

On software testing

A long time ago, in a galaxy… err, no, scratch that. A long time ago (circa 1999-2000), I used to work for a company which made software debugging and testing tools. One of the main products was a C++ tool akin to Purify. You’d apply it to your code and it would show you mem leaks, uninitialized pointers, etc… all those pesky bugs which make C/C++ development so enjoyable.

At some point I was working on a GUI in C++, and during an informal meeting with the main boss, he told me that I should be using our C++ tool on my code. I wasn’t. Why? Two reasons. One, because I was reasonably confident about my code. That sounds preposterous and conceited, but about 99% of the mem allocations in my code were done in a “managed” way. That is, the framework I was using (ok, that was Qt, version 2.x at the time) was handling them for me (yay for object trees).

Upon explaining this to him, he proceeded to show me how wrong I was. He sat down at the keyboard, downloaded the product’s latest version, untarred it, and, if I recall correctly, first wasted a few minutes trying to install it correctly. Then he tried to apply it to my code. “Applying” meant recompiling my code with the tool, since it pre-processed the code (‘instrumenting’ was the exact term) to stuff it with calls to its own libraries before passing it to the actual compiler, then linking with its own libs. I don’t recall exactly whether we couldn’t even get past that stage, or whether the resulting binary was dumping core too quickly to yield any relevant information (if you’ve used this kind of tool, you know that even system libs are likely to trip it), but it ended up with him finally giving up and concluding with “well, you really should use it”.

And that was a perfect demonstration of the second and most important reason why I wasn’t using our tool: it was too darn complicated to use. And from that I shall derive what I think is the single most important feature of any software testing tool, no matter the language or the framework: ease of use. Using it should be a no-brainer.

Compare the above with a similar tool which we happily use with Rosegarden from time to time: valgrind (valgrind appeared about a couple of years after the events above). Using valgrind amounts to typing valgrind program-to-check on the command line, and it will present you with a bunch of error reports, most of them relevant (unlike that tool above, which would spout a bunch of false alarms). No rebuild, no relinking with exotic libs; just run your usual build (compiled with debug info so you get useful stack traces for each error found), and there you go. Sure, it runs way slower than usual, but waiting is much less of a problem than spending brain cycles on getting the tool to work.

Software development is hard and complicated. So it’s pretty obvious that any tool you use in order to make it less hard and complicated absolutely must not itself be hard and complicated. And yet it seems most such tools completely forget about this, putting cleverness above anything else. “Look at how our tool could find this obscure bug!” Yes, but look at what it took to find it…

So what, you’ll say, TANSTAAFL. This is software dev, not a walk in the park. True, but the unavoidable consequence is that no matter how effective it is, the tool won’t be used. Nobody will use your debugging tool if it takes hours to get working. A regression test suite will soon be left to rot if it’s too hard to run and maintain. You can have a team whose sole purpose is to bear the brunt of dealing with all test-related issues, but that causes a whole new bunch of problems (communication, training, etc…). A test team should do high-level tests, not functional, fine-grained ones. Those really are for the guy writing the code, and the only way to get him to do them is to make him want to do them. And a developer won’t be willing to do something unless it clearly makes his job less complicated.

Why do we use IDEs for Java programming? Because that squiggly red line under the bit of code I’ve just typed, telling me there’s something wrong, is helpful. I don’t even have to try building the code to get that error. The gain is obvious. That’s how any test tool should be: so simple to use that I will want to use it, without being constrained by some PHB.

Unfortunately there’s still no way around the work needed to build a test suite, for instance. But at the very least, build it so that running it is as easy as possible (and that’s hard too 🙂). Actually, in that case the suite may be based on a test tool, but it becomes a test tool itself, so, sorry, it’s up to you to make it easy to use (another lesson I learned the hard way).