Misnomers in Computer Science

One thing you notice if you work in our field for very long is that the terms change as quickly as the practices and the technology. Once upon a time, memory came at a premium. Now it is cheap. Small-memory-footprint algorithms were once valuable. Now, not so much. Open-source software was once for academics and scientists. Now all the top companies use it.

Terminology changes as fast as the technology, but sometimes not fast enough. Take RISC and CISC: according to one questionable definition, Reduced Instruction Set Computing is so called because of the reduced number of instructions the CPU supports. A better definition, in my opinion, points out that removing complex memory-fetching instructions reduces the complexity of the CPU design. After all, RISC CPUs add instructions with every generation, and we still call them RISC.

CISC (where the C stands for Complex) computing predates RISC and includes high-level instructions that perform complex operations to fulfill common developer needs, such as moving sprites around the screen or computing a high-level math function. But starting with the 486, CISC CPUs have pushed the clumsier legacy instructions into microcode, implementing the older instruction set on top of a RISC-like core. That’s right! CISC has been RISC all along, or at least since 1989.
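To see what that distinction means in practice, here is a small illustration. The function is ordinary C; the assembly sequences in the comments are hand-written, ARM- and x86-flavored sketches of what a compiler might emit, not actual compiler output.

```c
#include <stdint.h>

/* Illustrative only: the comments sketch the kinds of instruction
 * sequences a load/store (RISC-style) machine and a memory-operand
 * (CISC-style) machine might emit for the same statement. */
void increment(int32_t *counts, int i)
{
    counts[i] += 1;
    /* RISC-style (load/store): arithmetic never touches memory directly.
     *   ldr  r2, [r0, r1, lsl #2]   ; load counts[i] into a register
     *   add  r2, r2, #1             ; add in the register
     *   str  r2, [r0, r1, lsl #2]   ; store the result back
     *
     * CISC-style (x86): a single instruction reads, adds, and writes
     * memory in one go.
     *   add  dword ptr [rdi + rsi*4], 1
     */
}
```

The CISC version looks simpler in the listing, but a modern x86 core breaks that one instruction into load, add, and store micro-ops internally anyway, which is the sense in which it has been RISC all along.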

These days the term CISC means Intel x86 or x64, while RISC just means anything else. The terms no longer have technical value to chip developers. The same thing happened with the terms System V and Berkeley Unix: they just don’t mean anything outside of their historical context. To test whether I’m right, try to name one difference between the two supposed flavors of operating system. The Unix family of operating systems has diverged so much that no single standard for file placement, application behavior, and kernel functionality describes any real operating system anymore.

Another example is the seemingly innocuous term driver. Historically, the term driver was chosen for software that interfaced with a peripheral. The software drove the peripheral to come to life and start doing something. This is the same sense of driver that is used in the term test driver, which drives software to wake up and do something, so that the correct behavior of that software can be validated.
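For concreteness, here is a minimal test driver in C. The clamp function is a made-up stand-in for whatever code is under test; the point is that the driver is the active party, feeding inputs and demanding outputs.

```c
#include <stdio.h>

/* Hypothetical function under test. */
static int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

/* The test driver: it actively drives the code under test,
 * supplying inputs and checking the outputs. */
int main(void)
{
    struct { int value, lo, hi, expected; } cases[] = {
        {  5, 0, 10,  5 },
        { -3, 0, 10,  0 },
        { 42, 0, 10, 10 },
    };
    int failures = 0;

    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        int got = clamp(cases[i].value, cases[i].lo, cases[i].hi);
        if (got != cases[i].expected) {
            printf("case %zu: expected %d, got %d\n",
                   i, cases[i].expected, got);
            failures++;
        }
    }
    printf("%d failure(s)\n", failures);
    return failures != 0;
}
```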

But today a device driver is integrated into the operating system, providing applications with a standard API for requesting that some task be performed by the hardware. The device driver has taken on a passive role. It no longer drives anything.
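A rough sketch of that passive role, using invented names rather than any real kernel interface: the driver fills in a table of callbacks and then simply waits for the operating system to call it on an application’s behalf.

```c
#include <stddef.h>
#include <stdio.h>

/* A simplified, made-up dispatch table in the style an OS might use
 * (think of Linux's file_operations, but this is not any real kernel API). */
struct device_ops {
    int    (*open)(void);
    size_t (*read)(char *buf, size_t len);
    void   (*close)(void);
};

/* The "driver": nothing here runs on its own initiative. Each function
 * waits to be invoked by the OS on behalf of an application. */
static int    demo_open(void)                  { return 0; }
static size_t demo_read(char *buf, size_t len) { if (len) buf[0] = 'x'; return len ? 1 : 0; }
static void   demo_close(void)                 { }

static const struct device_ops demo_driver = {
    .open  = demo_open,
    .read  = demo_read,
    .close = demo_close,
};

/* Stand-in for the OS dispatching an application's read() request. */
int main(void)
{
    char buf[8];
    demo_driver.open();
    size_t n = demo_driver.read(buf, sizeof buf);
    printf("driver returned %zu byte(s)\n", n);
    demo_driver.close();
    return 0;
}
```

In this arrangement the driver is driven, not driving.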

While reading about DDS (the Data Distribution Service) and distributed simulation, I am struck by how easy it would be to use this kind of framework for a distributed control system. It turns out that DDS is being used for exactly that purpose. And not just DDS: other software platforms, like HLA (the High Level Architecture), are bridging the gap from simulation to realization. We now live in a world where you can mix and match real and simulated components as you please.
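Here is a toy sketch of why that mixing works, deliberately avoiding the actual DDS or HLA APIs: a controller consumes samples from a source it cannot distinguish as real or simulated, so either can be plugged in.

```c
#include <stdio.h>

/* Conceptual sketch only: not the DDS or HLA API, just an illustration
 * of why decoupling producers from consumers makes real and simulated
 * components interchangeable. */
typedef double (*temperature_source)(void);

/* A real sensor would read hardware here; we fake it with a constant. */
static double real_sensor(void)      { return 21.5; }
/* A simulated plant model producing the same kind of sample. */
static double simulated_sensor(void) { return 20.7; }

/* The controller consumes temperature samples. It cannot tell, and does
 * not care, whether the producer is real hardware or a simulation. */
static void control_step(temperature_source read_temperature)
{
    double t = read_temperature();
    printf("measured %.1f C, heater %s\n", t, t < 21.0 ? "on" : "off");
}

int main(void)
{
    control_step(real_sensor);       /* deployed configuration */
    control_step(simulated_sensor);  /* test-bench configuration */
    return 0;
}
```

Swap the function pointer for a subscription to a published topic and you have the essence of the distributed case.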

So is the line between simulation and control system going away? Time will tell.