on software testing

A long time ago, in a galaxy… err, no, scratch that. A long time ago (circa 1999-2000), I used to work for a company which made software debugging and testing tools. One of the main products was a C++ tool akin to Purify. You’d apply it to your code and it would show you memory leaks, uninitialized pointers, etc… all those pesky bugs which make C/C++ development so enjoyable.

At some point I was working on a GUI in C++, and during an informal meeting with the main boss, he told me that I should be using our C++ tool on my code. I wasn’t. Why ? Two reasons. One, because I was reasonably confident about my code. That sounds preposterous and conceited, but about 99% of the memory allocations in my code were done in a “managed” way. That is, the framework I was using (ok, that was Qt, version 2.x at the time) was handling them for me (yay for object trees).

Upon explaining this to him, he proceeded to show me how wrong I was. He sat down at the keyboard, downloaded the product’s latest version, untarred it, and, if I recall correctly, first wasted a few minutes trying to install it correctly. Then he tried to apply it to my code. “Applying” meant recompiling my code with the tool, since it pre-processed the code (‘instrumenting’ was the exact term) to stuff it with calls to its own libraries before passing it to the actual compiler, then linked the result with its own libs. I don’t recall exactly whether we couldn’t even get past that stage, or whether the resulting binary was dumping core too quickly to yield any relevant information (if you’ve used this kind of tool, you know that even system libs are likely to trip it), but it ended up with him finally giving up and concluding with “well, you really should use it”.

And that was a perfect demonstration of the second and most important reason why I wasn’t using our tool : it was too darn complicated to use. And from that I shall derive what I think is the single most important feature of any software testing tool, no matter the language or the framework : ease of use. Using it should be a no-brainer.

Compare the above with a similar tool which we happily use with rosegarden from time to time : valgrind (which appeared a couple of years after the events above). Using valgrind amounts to typing valgrind program-to-check from the command line, and it will present you with a bunch of error reports, most of them relevant (unlike the tool above, which would spout a bunch of false alarms). No rebuild, no relink with exotic libs, just run your usual build (compiled with debug info, so you get useful stack traces for each error found), and there you go. Sure, it runs way slower than usual, but waiting is much less of a problem than spending brain cycles on getting the tool to work.

Software development is hard and complicated. So it’s pretty obvious that any tool you use in order to make it less hard and complicated absolutely must not itself be hard and complicated. And yet it seems most such tools completely forget about this, putting cleverness above anything else. “Look at how our tool could find this obscure bug!” Yes, but look at what it took to find it…

So what, you’ll say, TANSTAAFL. This is software dev, not a walk in the park. True, but the unavoidable consequence is that no matter how effective it is, the tool won’t be used. Nobody will use your debugging tool if it takes hours to get working. A regression test suite will soon be left to rot if it’s too hard to run and maintain. You can have a team whose sole purpose is to bear the brunt of dealing with all test-related issues, but that causes a whole new bunch of problems (communication, training, etc…). A test team should do high-level tests, not functional, fine-grained ones. Those really are for the guy writing the code, and the only way to get him to do them is to make him want to do them. And a developer won’t be willing to do something unless it clearly makes his job less complicated.

Why do we use IDEs for Java programming ? Because that squiggly red line under that bit of code I’ve just typed there telling me there’s something wrong is helpful. I don’t even have to try building the code to get that error. The gain is obvious. That’s how any test tool should be : so simple to use that I will want to use it, without being constrained by some PHB.

Unfortunately there’s still no way around the work needed to build a test suite, for instance. But at the very least build it so that running it is as easy as possible (and that’s hard too 🙂 ). Actually in that case, the suite may be based on a test tool, but it becomes a test tool itself so, sorry, it’s up to you to make it easy to use (another lesson I learned the hard way).

Nabaztag ? Not just yet.

It seems this cute little device is enjoying growing attention from geeks all around. I’ve just taken a long look at it and was very much interested : open API (with a 3rd party Ruby module to boot), a large and dynamic community… then I came across this deal-killer : you can’t pilot the nabaztag directly, all requests have to go through Violet’s server. Even if the thing has an embedded web server (like all these devices do), you can’t reach it through your LAN.

The usual reaction would be to wonder what the hell went through their minds when they took this decision, but I suppose they had a good reason to do things that way. At least I hope so ; otherwise it boggles the mind that the designers of a rather clever device would make such a blunder. And sure enough they paid the price for it, since it seems that last Christmas their servers keeled over under the registration requests of the new rabbits.

I’ll still keep an eye on it, hopefully they’ll change this.

Ajax sux

I’m reading this inflammatory, yet spot-on post by Bruce Eckel, and while its main topic is why Java failed on the web (because applets are fugly and Java is a PITA to install – no grand revelations here), the part which I really want to “me-too” on is Bruce’s account of Ajax apps (“The Web is a Mess”). A few days ago I had an interesting discussion with Cédric on why I still wasn’t using gmail (I do have an account but I hardly use it). Aside from all the migration problems (which can certainly be solved), the main reason is still that, as an application, gmail’s shortcomings far outweigh its advantages.

I want my mailer to be a full-fledged, well integrated desktop application, not something which desperately tries to look like one but never will be. I want my mailer to look like the other apps on my desktop. I want to be able to drag’n drop attachments to and from messages. I want to be able to locally grep my messages. I want keyboard shortcuts to work without having to wonder whether they’ll conflict with another shell application which shouldn’t even be there in the first place.

No matter how you look at it, Ajax is a hack. It’s pushing the boundaries of a technology (html+javascript) in order to achieve something it wasn’t designed for. As such, I firmly believe that even though it does fill a need, it’s a dead-end. It begs to be replaced by something more appropriate, which unfortunately remains to be designed. Ajax is still CGI on steroids, and CGIs got exposure not because they were any good, but because at the time the alternative to creating a page with two text fields and a “submit” button was hundreds of lines of C code using Motif.

Or maybe that replacement is already there.

While I hardly use gmail, I do use flickr a lot. As a web application, flickr is great. As an application, it’s passable. But flickr has understood that the best way to expand an app is to open it up, so they’ve provided a nice API which plenty of “real” applications use. So for instance I never use flickr’s upload pages to add photos to my stream ; instead I use flickr’s own uploader from my macbook, or kflickr from my linux box.

Another solution which I haven’t tried yet is flock, which IMHO is what the “web 2.0” really should be (rather than one of the silliest buzzwords ever devised).

As for gmail, the solution wouldn’t be a web API, there’s already POP and IMAP (the latter gmail unfortunately still doesn’t support).

Web APIs are probably the sane alternative to Ajax, however they leave the burden of app development to 3rd parties, and the service provider will probably want to have more control than this. So it boils down to the same age-old problem all over again : remote applications.

X11 solved this a couple of decades ago, but in a way which just doesn’t work over a WAN connection. If you want to “send” an application over the wire, you need to describe it more concisely than through heaps of drawLine() directives. In the same article, Bruce Eckel evokes Macromedia’s Flash reinvention, Flex. Having briefly worked with it, I concur that it’s a pretty good solution and I hope it will expand. Many would probably think that the best solution remains to be designed (or is probably running in some lab somewhere), but it would take some serious corporate muscle for it to get any kind of recognition (if only to write the dev tools in the first place).

Bottom line : the web is likely to remain a mess for some time, and the sooner the Ajax balloon deflates, the better.

Steve Jobs on DRMs

A while ago I blogged about how DRMs are going the way of the dodo (not a moment too soon) and on the misconception that Apple would be their main proponent. Steve Jobs just confirmed it by explaining that indeed, adding DRMs to iTunes wasn’t exactly his idea (which isn’t news to anyone who has done a bit of research on the subject).

I wouldn’t be surprised if there were a bunch of #ifdef MAJORS_ARE_STILL_CLUELESS peppered all over iTunes’s shop code.

On how GUI toolkits need more than a good language to be great

A few days ago I was pointed to this very interesting post from an Apple engineer who had jumped off the Java bandwagon after getting a peek of Cocoa.

One could think that, Java being more advanced than Objective C (though perhaps not in all areas), such a move would be foolish, but I’m pretty sure there were more than a few nods among his readers on this one.

Building a good GUI toolkit is a craft of its own, and requires much more than just a good language. I like Java (even though I find it somewhat boring), but it’s a shame that Swing is so clumsy a tool to build apps with. The toolkit I know best is Qt, and even though it’s based on probably the most intricate language in common use today (C++), I still think it kicks Swing’s butt in terms of productivity.

I suppose anyone who has used Swing long enough has his own list of its biggest annoyances. Mine is limited to layouts and the pervasive use of listeners (or more precisely, the need to implement an interface or derive a class just to handle a frickin’ UI event). I guess this is because Qt is particularly more convenient than Swing in these specific areas. Layouts are hard to do, that’s understood. The ones offered with Swing have the fundamental problem that none of them is really good out of the box ; they always require extensive tweaking. The obvious use is never the right one. Listeners, however, stem from another design mistake : preferring a “clean” API over a convenient one, or more generally, enforcing a principle to the point that it becomes a dogma and is no longer connected to reality. I can’t think of a better way to turn something that’s meant to help into just the opposite.

Swing declares that everything should fit some OO design. Qt recognizes that listeners are a feature in their own right, and implements them at the language level instead (at the price of a meta-compiler). Note that Qt also has event handling based on overriding virtual methods, as in Swing. However this is limited to events generated by the graphics system (widget hide/expose, mouse button or key press), because these need to be processed quickly. But user interactions on widgets (a checkbox has been toggled, an item in a list has been selected, etc…) go through Qt’s signal/slot mechanism, which is much more practical. It has two big advantages : it requires much less code, and it makes loose coupling between the object emitting the event and the object processing it much easier. It somehow lets C++ take a tiny step into the realm of dynamically-typed languages like Python or Ruby. When you write

connect(myButton, SIGNAL(clicked()), thehandler, SLOT(wasClicked()))

no check is made at compile time that ‘thehandler’ actually has a ‘wasClicked()’ method. This might sound strange, but it’s actually quite convenient in several ways. For one, it opens up a whole realm of runtime fiddling. It also means that at this stage, ‘thehandler’ only needs to be declared as a QObject. No need to #include its whole class declaration, thus fewer dependencies.

So there you have it : a toolkit committing a cardinal sin against the language it’s based on, adding something which OO designers would probably frown upon, all for the sake of convenience. And it works.

On the contrary, by enforcing a design paradigm consistently throughout an API, disregarding the practical problems this inevitably raises, you turn it into something which will be more easily read (or written) by the machine than by the programmer.

e-paper is coming ? (really ?)

After dozens of “real soon now” false alarms, maybe e-paper is finally coming around. One of the consequences should be that book publishers will join the MPAA and RIAA (and their siblings from other countries) in the Great Big Fight Against Piracy, as ebook readers become a common product.

Unlike music and movies, where the scarcity wall was broken by the advent of Net connections and machines with the capacity to handle such large files, thus enabling people to distribute them easily, ebooks are still waiting for a device to display them. Distributing them is a long-solved problem, given how small the files are, but people still rarely do it. What they need is something which is as practical to handle as a bunch of paper sheets, with a good enough resolution that it’s comfortable to read for long periods of time.

Of course, one could argue that the other main reason for people not being more interested in ebooks is that people read less than they listen to music, watch movies or play videogames. Who knows, maybe ebooks will spur a return of reading as a leisure activity :-).

social networking galore

Yesterday, at the suggestion of a former colleague, I opened an account on linkedin. I used to have one on Orkut, but I closed it a while ago (after months of not using it at all) just so that it would stop bothering me with mail alerts telling me someone had sent me an invitation/message/note/whatever, especially since all of these were in Portuguese or Spanish, neither of which I speak. Anyway, linkedin’s business orientation could prove useful these days when I’m looking for a job, but it’s also rather well done (unlike Orkut, which Google probably bought on an impulse because it was hanging next to the cashier’s desk at the Startuporama supermarket – “social network ? oooh, gotta have one of these”) and quite populated. Given that this is yet another “winner-takes-all” kind of market, I think they could take the lead in this segment (the non-business-oriented one going to myspace, of course).

It would be a good thing if some of all these sites could merge and cut down the redundancy, but before that happens, this blog entry from Guy Kawasaki got me thinking : do I even have the time (ok, the will) to take such extensive care of my linkedin profile ? Clearly not, and most probably neither do a lot of people. Hence I’m going to risk predicting the future (something I’m always very wary of) : before long, all these fun little companies which you can pay to build your website and generally make yourself web-visible will also offer services to create a spiffy profile for you on any of those social networks. Said networks are, after all, little self-contained versions of the WWW, profiles being homepages and connections being the equivalent of hyperlinks.


So DRMs are dying. Gee, how surprising.

More and more music vendors are giving them up or at least seriously considering the possibility (EMI, VirginMusic, Yahoo, the FNAC here in France). All those years to realize a basic principle of marketing : convenience of service sells. DRMs are the digital equivalent of a public swimming pool owner laying shards of broken glass all around the pool and forcing customers to purchase special protective shoes to access it, all in order to limit fraudulent bathing.

Actually they are also the only way the market can “create” scarcity where there is inherently none, because the market as we know it doesn’t work well on abundance yet, but that’s a whole other subject.

Also, regarding the notion that Apple would be the main proponent of DRMs : this isn’t the case. iTunes has DRMs only because it was the only way the music majors would grant it access to their catalogues, but Jobs is quite aware of the futility of the whole thing. Quoted from this interview in Rolling Stone magazine :

When we first went to talk to these record companies — you know, it was a while ago. It took us 18 months. And at first we said: None of this technology that you’re talking about’s gonna work. We have Ph.D.’s here, that know the stuff cold, and we don’t believe it’s possible to protect digital content

So the iTunes DRM scheme is basically Jobs saying to the majors “yeah yeah, we’re protecting these songs”, in the same way a developer tells his PHB “yeah yeah, we’re <buzzword>-compliant” (which really means “I’m giving you this biscuit so you stop nagging me about things you don’t understand, now go away”). But notice that Apple has pretty much never lifted a finger against the DRM-unlocking programs circulating for iTunes, and its DRM has always been pretty trivial to work around (just burn the song to an audio CD and rip it). I wouldn’t be surprised if iTunes dropped DRMs as soon as the music majors stop imposing them.

[add. jan. 23rd 2007 : the MIDEM is confirming the trend]

[add. feb. 7th 2007 : more in this other post]

3rd post : the iphone

‘kay, gotta talk about something a bit more current.

Beyond the huge buzz this baby has created, it’s been a long time since a device generated this kind of geek lust. Pretty much everybody I know wants one. That doesn’t mean much ; everybody I knew back then wanted a BeBox too. OK, I’m screwing around here, Be never had a chance from the start. If a large company had introduced the BeBox, that would have been another story.

Anyway, some random thoughts :

  • the price : doesn’t seem that excessive to me, that’s what I paid for my Treo 600 2 years ago
  • said Treo looked desperately out of fashion, obsolete, OLD, the minute I saw it featured along with the Blackberry and other smartphones against the iphone during the keynote. Seriously, never before have I seen a piece of hardware go from “a bit used, but still current” to “paperweight” in an instant, right in front of my eyes. Creepy.
  • the touchscreen. I’ll have to try it before I get an opinion on this one. The Treo keyboard is adequate, I don’t think I’ve ever used a touchscreen as a keyboard for any kind of prolonged use before. You might ask, do people really go into hour-long texting/mailing sessions often anyway ? It seems the answer is affirmative. I trust Apple to have given this one some careful thought, though.

One more thing… Enough with the ‘iphone shuffle’ jokes already :-).