The Quirky History of Desktop Math Coprocessors
Computer History
Summary
- Math coprocessors boosted CPU performance with specialized math processing chips.
- Early models like Intel 8087 allowed modest desktops to handle complex tasks.
- Third-party companies like Cyrix competed with Intel, offering specialized coprocessors.
These days, between your CPU and GPU, you pretty much expect that your computer can do any type of math you throw at it with aplomb. However, in the early days of personal computing, sometimes you needed to throw an entire extra chip at the problem—a math coprocessor.
What Is a Math Coprocessor?
A math coprocessor (more properly known as an FPU or Floating Point Unit) is a specialized microchip that enhances the mathematical processing performance and precision of the CPU it’s designed to work with. For example, the Intel 80387SX is the math coprocessor for the 80386SX CPU.
If you bought an 80386SX computer back in the day, and later realized you needed to do more serious work on it that required advanced math, you could buy the 80387SX, pop it in a socket on your motherboard, and get a massive performance boost for those specific types of floating point math operations.
“Floating point” math deals with numbers that have fractional parts, stored as a significand and an exponent so the same format can represent both huge and tiny values. This is as opposed to “integer” math, which only works with whole numbers. Floating point numbers are essential for science and engineering work, and these days floating point math shows up in all kinds of software, especially video games. This is where terms like gigaflop or teraflop come from: a “FLOP” is a “floating point operation.”
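To make the integer-versus-floating-point distinction concrete, here is a minimal modern-day sketch in Python (chosen purely for illustration; software of the coprocessor era would have been written in assembly or C):

```python
# Integer math is exact: whole numbers in, whole numbers out.
print(7 * 6)      # prints 42, exactly

# Floating point math trades exactness for range. Values are stored as
# a binary significand and exponent, so many decimal fractions can
# only be approximated.
print(0.1 + 0.2)  # prints 0.30000000000000004, not exactly 0.3

# Each multiply or add on floating point values is one "FLOP".
# A teraflop machine performs a trillion of these per second.
total = 0.0
for _ in range(1_000):
    total += 1.5 * 2.0  # one multiply + one add = two FLOPs
print(total)            # prints 3000.0
```

On a CPU without a coprocessor, operations like these had to be emulated in software with integer instructions, which is exactly why dropping an FPU into the spare socket gave such a dramatic speedup.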
Apart from floating point math, coprocessors might also be used to do signal processing, or to handle IO (input/output) functions between the various components of the computer. The main general purpose CPU in the computer can do all of this, but it may not be particularly fast or efficient at these tasks.
The Birth of the Desktop Math Coprocessor
While having specialized processors handle different types of computer math wasn’t a new idea in the world of mainframe computers and minicomputers, it didn’t really make a debut in home computing until the late 70s and early 80s. This makes sense, since until that time, there was barely any home computing market to speak of.
One of the most iconic early math coprocessors was the Intel 8087, released in 1980 as an optional add-on for the Intel 8086 and 8088 processors—chips used in IBM’s first personal computers. For home users, this meant that even a modest desktop computer could handle tasks that previously required larger, more expensive systems.
Other manufacturers quickly followed suit. Motorola introduced the 68881 for its 68000-series processors, which powered early Apple Macintosh and Amiga computers.
The Rise of Third-Party Math Coprocessors
That open coprocessor socket in home computers was too appealing for some companies to ignore. Cyrix, for example, made its debut with the Cyrix FasMath 83D87 and 83S87. These offered strong competition to Intel’s official coprocessors, and it was just the start of a lengthy history of Cyrix rubbing Intel the wrong way. Cyrix would soon start offering full CPUs of its own, and I even had the chance to experience the company’s Pentium Pro 200 competitor, the 6x86MX. It was not good.
There were also exotic coprocessors that did very specialized things, like the Weitek Abacus FPU. This chip was supported by software like Autodesk’s AutoCAD and other professional CAD and 3D applications. This was long before the GPU existed, but here we have an extra chip you plug in to help accelerate specific graphics-related math!
By the late 1990s, the era of the coprocessor was over. Today, CPUs and GPUs do all their own floating point math, and it’s all neatly integrated into single processor packages. Then again, my current Windows laptop reports 24 logical CPUs. So I guess having multiple complete processor cores in one computer is still a sort of “co” processing.