Table 4 is wrong! Visual C++ 7.1 is given undeserved bad marks on template conformance.
During preparation of the manuscript from my draft, some elements from the Watcom column migrated into
the Visual C++ 7.1 column. It was my responsibility, as the author, to verify the correctness of the
manuscript prior to final publishing, and I'm afraid I fluffed. Apologies to the Visual C++ 7.1 team, since
VC++ 7.1 has very good language support indeed.
The correct version of the table, in PDF form, is available here.
Speed optimisation settings too conservative
I've had feedback from several compiler vendors suggesting that the optimisation settings I used were too conservative.
For example, CodeWarrior shows markedly better performance when using -opt full instead of -opt speed,
which was used in the test. Similarly, Watcom is much better when -oxt is used rather than -ot.
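For concreteness, the difference between the settings amounts to something like the following invocations. The flag spellings are those given above; the driver names (mwcc for CodeWarrior, wcl386 for Watcom) and the exact command-line syntax are my assumptions, not verified against the build scripts used for the article:

```shell
# CodeWarrior: the setting used in the tests vs the vendor-suggested one
mwcc -opt speed bench.cpp   # as tested: optimise for speed (conservative)
mwcc -opt full  bench.cpp   # suggested: full optimisation, markedly faster code

# Watcom: the setting used in the tests vs the suggested one
wcl386 -ot  bench.cpp       # as tested: optimise for time
wcl386 -oxt bench.cpp       # suggested: maximal optimisations plus time
```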
The rationale for the tests is explained in the next section, but it's fair to say that some of the settings I used were
simply incorrect choices, rather than applications of the ethos described. However, since I'm doing a speed-specific article
(see below) in the May 2004 issue of Dr Dobb's, I'll defer providing quantitative assessments until then.
Speed of generated code
I've had feedback from a couple of readers questioning my choice of
compiler flags in some of the speed-of-generated-code tests. Let me explain
my rationale:
1. The stock advice is that optimisation for speed is meaningful for
test/demonstration purposes, but that real applications should optimise for
size since the better cache performance outweighs any specific localised
advantages in the larger speed-optimised code. It was on this premise that I
used the corresponding test-for-size executables in the test-for-speed measurements.
2. Being a library kind of chap, I am interested in portability, so I took
the specific decision to target the P5, rather than anything more "modern". I
entirely take the point (expressed by a few people representing different
compiler vendors) that this does not represent the best performance
achievable with a given compiler.
3. I chose not to go for whole-program optimisation, because I deemed it an
uneven playing field. I recognise that this argument has at least two sides.
I can certainly see the restrictions imposed by these decisions, and
even the arguable invalidity of #3. In an ideal world, I would have
liked to run the various performance-test applications under the
following three conditions:
A. optimised for size
B. optimised for speed, but without processor-specific optimisations. I'd
have to give some thought to whether things such as Intel's runtime
dispatching between multiple processor-specific code paths could be used
here, since that does allow for backwards compatibility.
C. the absolute maximum speed optimisation that each compiler can possibly
deliver for the testing host system (e.g. P7).
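To make conditions A-C concrete, here is a rough mapping onto GCC's flag spellings. GCC's flags are used purely for illustration; the equivalent options differ per compiler, and these are not the flags used in the published tests:

```shell
# Condition A: optimise for size
gcc -Os bench.c -o bench_size

# Condition B: optimise for speed, without processor-specific code generation
gcc -O2 bench.c -o bench_speed

# Condition C: maximum speed the compiler can deliver for the testing host
gcc -O3 -march=native bench.c -o bench_max
```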
Naturally, in the space of an article that covered more than just speed of
generated code - which, as it was, had to be substantially cut during the
editorial process - it would be difficult to achieve all this.
This detailed optimisation-specific study will appear in the May 2004
issue of Dr Dobb's Journal. If any compiler vendors wish to contact
me to offer input and advice, I'll be doing the research for it in January.
It may not be necessary, but I'd like to address the accusations, precipitated by this article, of being either a stooge or an
assassin. Let me state clearly: I have no axe to grind for or against any of the vendors. I use each of these compilers on a
regular basis; that's why they featured. I like them all, but I am not employed by any of the vendors. I think the efforts put
into all their developments are commendable (even writing simple C++ analysis software is hard enough!), but none is beyond
all criticism.
As I said in the article, my tool of choice is Visual Studio 97 (VC++ 5), but that's to do with the IDE, not the compiler.
My personal favourites are Intel and CodeWarrior, and I'm fond of the Digital Mars compiler as well, but recognise it's
lacking in certain facilities. I have previously not liked Watcom, Borland and GCC, and have (like most of us, if we're
honest) had lots of criticisms to make of Visual C++. These latter four all performed better in one way or another than
I had expected, and I did nothing to hide that. In fact, VC++ 7.1 is now ranked in my estimation equal to CodeWarrior and
Intel, so I'm pretty embarrassed by the typo in Table 4. Conversely, my three "preferred"
compilers all failed to live up to my expectations in one way or another. So show me the bias!
Finally, as mentioned in the article, I examined issues of interest to me, especially in my guise as the author of the STLSoft
libraries, and you should be mindful of this when reading the results and my discussions of them.