We started discussing a full rewrite of Rosegarden as early as 1997, I think. We considered several options for both language and toolkit: GTK+, Qt. Our first language of choice was actually Objective C, which unfortunately didn't have any suitable graphics toolkit, so we had to give up on it pretty quickly. Nice language, but dead. So we went for C++ and GTK+, which was nicely shaping up.
C++ is better. The only reason why, given the choice, you would rightly choose C over C++ is a crappy C++ compiler on your platform of choice. Yes, there are better OO languages (easier to use, more powerful, more elegant, yada yada), but none which can deliver the same performance (although Java is getting there) while being as widely supported. C++ works, period.
Why we didn't choose Qt at that time: the license and the look. Probably the single worst mistake we made, and one of the main reasons why, after all this time, we still haven't released anything. The license debate over Qt is probably one of the most appalling things in the history of Open Source, even though the outcome (Qt being GPLed) is nice. Now the current complaint one can read on Slashdot and Linux Today is that GTK+ lets you write proprietary software without having to pay anything. How stupid can you get.
So we went for GTK+... But calling C from C++ is no fun. It's like driving a Ferrari on a trip with forest trails every 10 km: you have to shift down to 2nd gear and drive very carefully whenever you're on one of them.
While Chris was struggling with MICO, I tried prototyping the RG GUI with Gtk-- (at the time, the version was 0.3). It was lacking a lot of stuff, so I started adding some. Another big mistake. That's how I ended up helping to maintain the thing, and while I did that, Rosegarden development was dead in the water. I kept telling Chris and Rich, "just one more feature in Gtk-- and I'll return to RG".
Another thing that made this return next to impossible was that the code base was such a mess, even though it was rather small. Trying to get into it was way too hard. Way too many libs, mostly. That's the key thing: if you don't have much time to work on your project, and can't work on it regularly, you can't afford to use overly complicated tools, because each time you get back to it, you need to get your bearings very quickly. If you have two hours of spare time and have to spend one of them either maintaining your tools (e.g. upgrading libs) or remembering how this works and what that is, you're fucked.
So use the most complete set of tools possible. If you have to write your own tools (or help writing them), you stand a huge chance of getting caught up in that part of the development and losing sight of your initial goal.
Being able to say "I've put a large amount of work into this code, I really like what I've done, yet the solution you've hacked together in one afternoon is better, so we'll use that instead". Or, "I've put a large amount of work into this code, but it seems there is no need for it, so I'll remove it". And being able to recognize when to say it :-).
KDE has won, at least on technical grounds. It may not be obvious from a user's perspective, especially to the RMS-fanatic college-kid types, but from an application developer's standpoint, it's quite clear cut. When I read Miguel's famous "Unix Sucks" paper, I agreed with many parts of it (mostly where he exposes the problems of Unix), all the more because I had just switched to KDE2 and was seeing how KDE had implemented almost every single item he was promoting. I'd like to quote what I consider to be the single most crucial part of this paper:
Here is a common problem: people focus on their strengths and ignore their flaws when it comes to anything that is dear to them. Even worse, when comparing with a competing entity, they focus on the competitor's weaknesses and ignore its strengths.
The problem with this way of looking at things is that eventually the competition will catch up with you, and by the time you realize it, they already have your features and you have none of theirs.
This is why it is very important to keep a self-critical approach and try to improve things before it is too late.
A direct example of this is a comparison of the available KDE apps with the Gnome ones. Konqueror doesn't have the visual polish of Nautilus, but it has a comparable modular architecture and applies it on a much larger scale than Nautilus does (see the wealth of IO slaves and KParts available). Yet it was developed by only 4 people, only one of whom, David Faure, was working on it full time, as far as I know. Compare that with the 13 full-time Nautilus developers.
Likewise, people compare KOffice to Gnumeric + AbiWord + Dia + etc., or to Open Office. The interesting point is that relatively few developers (fewer than a dozen, AFAIK) have been able to provide KDE with a genuine office suite, instead of grabbing an external project and adding KDE support to it. Regardless of the respective merits of both approaches, it illustrates how powerful KDE is as a development platform.
Common complaint: Gnome/KDE is a blatant copy of Windows. So? Windows is still useful to millions of people every day, like it or not. Yes, one can certainly do better, but "better" is mostly in the eye of the end-user. A "better" interface which doesn't look like anything you're used to, and thus requires re-learning everything, is not "better". A computer is a tool which should first and foremost be useful. Not "beautiful", or "conformant to every single item of the Foo standard or policy", just useful. Policies are meant to increase usefulness and efficiency; they don't exist for their own sake, so following them should only go so far.
I recently saw a post on kde-devel from someone who was planning to write OCaml KDE bindings... I used to think that the added difficulty of binding C++ to other languages was a price worth paying to avoid tying yourself down to C. Now I'm beginning to think that, since C++ defines an API much more finely than C (OO, constness, overloading, public/private), it might actually be easier to wrap than C. The main problem we had with Gtk-- was that generating the bindings automatically was impossible, because a C++ function declaration needs much more information than a C one provides. Parsing the GTK+ headers wouldn't help, and even the API description format which the Python bindings were using wasn't enough. And even if it had been, those descriptions would still have had to be maintained alongside the headers. So most of the wrapping was done manually, with lots of looking things up in the GTK+ source code, and lots of mistakes.
But C++ gives much more information in its headers, usually more than any other language needs. So automatic generation is possible, since there is much less need for human guesswork.
Nevertheless, the multi-language issue is secondary. It's better to have the best possible base platform than to dumb it down trying to make it available to other languages, because "other languages" will never be the main source of applications. For one thing, even when generated automatically, maintaining bindings isn't a light job (documentation, debugging), so there's always the problem of staying up to date. For another, developing an application in an exotic language has a big downside: it means either that you're going to have to distribute binaries, or that people will have to install that language's parts (libs, interpreter) and the bindings to run your app. It also means the added memory overhead of the language's runtime and of the bindings.
"Code reuse" is probably the single most overused phrase in programming-related discussions, especially by clueless people. "Why didn't KDE reuse Gecko instead of writing khtml?" Duh. Think "easier said than done" here.
I use KDE because right now it's the best development platform available on Linux: powerful, easy to use (i.e. to program with), widespread, supported, documented. However, having recently looked at the JDK 1.3 demos, Java is much closer to being a credible alternative than it used to be. Chances are it will be my next development platform.
Back in 1986, my father bought me an Apple //gs to replace our aging Apple //e. This machine had a large ROM holding a fairly extensive toolkit, akin to the one the Mac had. It mostly dealt with memory management, 2D graphics (QuickDraw), and widgets. This toolkit was despised by a lot of Apple ][ hackers, who considered it useless bloat. Indeed, the machine was slow as a pig, but that was mostly due to a crippling hardware design. Some software (demos or cracked stuff) in circulation proudly displayed a "Say No to Tools!" logo on startup... which puts a nice little spin on the recurrent "this is useless bloat" claims one can read about C++, Java, component frameworks, etc.
On this same machine I once wrote a short C program which computed all prime numbers from 1 to 10^6, using a very simple algorithm (given an integer N, divide by all previously found prime numbers which are not greater than the square root of N). That program took around 10 hours to complete. The same program now takes around 1 second on my P-III 600.