How do computers perform complex mathematical operations?
One small step at a time (but very, very quickly)…
By Peter Dunn

Computers perform dazzlingly complex tasks, but the microprocessor chips inside them are capable of only very basic mathematical operations, such as adding and comparing binary numbers. Using this limited toolset to perform calculus and other advanced mathematics was a signal achievement of the early days of electronic computing. Institute Professor and former dean of engineering Joel Moses was part of that revolutionary effort, and he explains that the solution lay in breaking large, complex problems down into smaller, simpler ones.
Complex math requires handling two types of operations: numerical ones, which involve specific numerical values, and symbolic ones, such as those in algebra and calculus, which involve symbols like “x” and “y.”
Moses notes that numerical operations can be broken into addition, subtraction, multiplication, and division, which are bread-and-butter tasks for a microprocessor. To accommodate a wider range of numerical values without overwhelming memory and processing resources, computers use the floating-point system, replacing common numbers (say, 1,300,000) with floating-point values (say, 1.3 × 10⁶). This approach usually produces only an approximation of the result, but with some care it renders values extremely close to the “correct” answer. (One notable exception: some Intel Pentium chips in the early 1990s had a floating-point bug that in rare cases caused incorrect calculations.)
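To make the idea concrete, here is a minimal sketch in Python (the article itself contains no code, and the variable names are illustrative). Real hardware floating point works in base 2 under the IEEE 754 standard, but a base-10 version shows the same trade-off: a significand and an exponent stand in for a long string of digits, and results are usually very close approximations rather than exact values.

```python
# A floating-point value is stored as a significand and an exponent
# rather than as a long string of digits.
significand = 1.3
exponent = 6
value = significand * 10 ** exponent
print(value)  # 1300000.0

# Because the significand holds only limited precision, arithmetic
# usually yields a close approximation, not an exact answer:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```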
Symbolic operations are a bigger challenge. “The first problem,” explains Moses, “is how to represent symbols using only the 0s and 1s available in a binary computer. This is done with coding, where ‘x’ is represented by one number and ‘y’ by another.” The computer’s hardware and software treat these as codes rather than numerical values. More complex expressions are represented by decomposing them into simpler parts, which are connected by pointers in the computer’s memory. This allows expressions such as “x + 2y” to be represented and processed.
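One way to picture this decomposition is sketched below in Python (a hypothetical illustration, not the article’s own representation). Nested tuples play the role of the pointer-linked parts in memory: symbols are coded as names, constants as numbers, and each compound expression points to its simpler pieces.

```python
# "x + 2y" decomposed into simple linked parts: an operator plus
# references to two subexpressions (the references stand in for
# pointers in memory).
expr = ("+", "x", ("*", 2, "y"))

def to_string(e):
    """Walk the linked structure and render it in ordinary notation."""
    if isinstance(e, tuple):
        op, left, right = e
        return f"({to_string(left)} {op} {to_string(right)})"
    return str(e)  # a coded symbol or a constant

print(to_string(expr))  # (x + (2 * y))
```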
For example, differentiation can be reduced to steps that differentiate simpler expressions, whose results are in turn represented as sums, products, constants, and variables. The end result is a procedure that implements a complete differentiation algorithm using only computer-friendly numbers and functions. Other operations, such as symbolic integration, can be even more complex, but the basic concept is usually the same: reduce the complex problem to simpler ones, and compute.
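Using the tuple representation sketched above, a minimal differentiator might look like the following (an illustrative sketch echoing the classic symbolic-differentiation example in Structure and Interpretation of Computer Programs, not Moses’s actual code). Each rule reduces the derivative of an expression to derivatives of its simpler parts, until only constants and variables remain.

```python
def deriv(e, var):
    """Differentiate expression e with respect to the symbol var."""
    if isinstance(e, (int, float)):  # d(c)/dx = 0 for a constant c
        return 0
    if isinstance(e, str):           # d(x)/dx = 1; d(y)/dx = 0
        return 1 if e == var else 0
    op, a, b = e
    if op == "+":                    # sum rule: d(a + b) = da + db
        return ("+", deriv(a, var), deriv(b, var))
    if op == "*":                    # product rule: d(a*b) = a*db + b*da
        return ("+", ("*", a, deriv(b, var)), ("*", b, deriv(a, var)))
    raise ValueError(f"unknown operator: {op}")

print(deriv(("+", "x", ("*", 2, "y")), "x"))
# ('+', 1, ('+', ('*', 2, 0), ('*', 'y', 0)))  -- i.e., 1, unsimplified
```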
While these procedures can be lengthy and sometimes time-consuming, today’s microprocessors are so powerful that they make short work of them, performing billions of operations per second.
More information on this topic can be found in the textbook Structure and Interpretation of Computer Programs, available online from MIT Press.
Posted: November 16, 2010