It takes some pretty big balls to claim to be “the de facto world standard for measuring PC computing speed since 1984”, but that’s exactly what Landmark Research International Corporation did. They even used the phrase “de facto” more than once in their documentation and program.
Why so cocky? Maybe it’s because they were the first benchmark on the scene; there’s no denying they had a lot of visibility. They did many things right: their display updated in real time, so you could see the performance delta when you hit the turbo button. They used a more sophisticated metric than a single IMUL and IDIV. They also benchmarked the FPU, which was extremely valuable in deciding whether to spend that extra $500 before trying to run AutoCAD or Lotus 1-2-3. These features made Landmark Speed Test a staple at computer shows in the late 1980s, where nearly every clone vendor ran it on their systems.
However, Landmark System Speed Test did one thing terribly wrong: they attempted to give meaning to their metric by reporting it as “AT MHz Speed”. That is, if your computer were an IBM AT, this is the clock speed it would be running at. From their documentation:
The CPU and FPU speeds show performance relative to an IBM AT with a 6 MHz 80287 math coprocessor chip installed. For example, if the display shows CPU speed to be 22.53 MHz, it means an IBM AT would have to run at 22.53 MHz to have the same computing performance as this computer.
Benchmark scores are completely arbitrary; they can be reported as a plain integer, a float, gibbon farts per second, or whatever. They’re made-up numbers that help you compare different systems. Labeling their score as “6 MHz 80286 IBM AT relative MHz speed” just confused consumers; many of them mistook the benchmark score for the machine’s actual clock speed. Besides, if you had never used an IBM AT, you had no frame of reference for the comparison in the first place.
They attempted to justify that bad 1984 decision in their 1991 documentation:
...it compared the speed of the computer it ran on to the speed of the first and only de facto standards that ever existed for personal computers: the IBM PC and the IBM AT. IBM never changed the architecture of either computer, but replaced them with the PS/2 line in 1987 and took the PC and AT out of production. Since IBM will never change the PC or AT, those will always be stable standards of comparison for PC performance, and Landmark will be the performance measurement standard.
This sounds reasonable until you realize that the point of running benchmarks is to compare relative performance between systems, not performance relative to a 1984 computer that nobody was targeting for fast software anymore.
Another significant misstep was that Landmark changed their metric code between releases. The FPU scores of the shareware and registered versions of v2.0 differ slightly when run on the same machine, as do the CPU scores of versions 2.0 and 6.0. They did not publicize this, obviously, because it would have hurt their reputation. I believe the scores differ because the metrics were not written in assembler but in C, just like the rest of the program: version 2.0 was compiled with Microsoft C and version 6.0 with Turbo-C, and when they changed compilers, the generated metric code changed because each compiler performs its own code-generation optimizations.
I have not yet disassembled Landmark v2.0 to verify what their metric code is, but when I do I will post it here.
You can download two versions of Landmark System Speed Test on the Other Benchmarks page.