How did people in the olden days create software without any programming software?
The olden days are a little older than you might think…

By Sarah Jensen
From the simplest to the most sophisticated, all computer programs rely on very simple instructions to perform basic functions: comparing two values, adding two numbers, moving items from one place to another. In modern systems, such instructions are generated by a compiler from a program in a high-level language, but early machines were so limited in memory and processing power that every instruction had to be spelled out completely, and mathematicians took up pencil and paper to manually work out formulas for configuring the machines — even before there were machines to configure.
“If you really want to look at the olden days, you want to start with Charles Babbage,” says Armando Solar-Lezama, assistant professor in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Babbage designed the Analytical Engine, a mechanical contraption outfitted with gears and levers that could be programmed to perform complicated computations. His collaborator, Ada Lovelace (daughter of the poet Lord Byron), also recognized the machine’s potential, and in 1843 published what’s considered to be the first computer program: a lengthy algorithm created specifically for computing Bernoulli numbers on Babbage’s machine, had it ever actually been built.
By the early 20th century, though, working computing machines did exist, built from plug boards and cables that connected the machine’s modules to one another. “They had giant switchboards for entering tables of values,” says Solar-Lezama. “Each row had a switch with 10 positions, one for each digit. The operator flipped the switches and reconfigured the plugs in order to set the values in the table.”
Before long, programmers realized it was possible to wire the machine in such a way that each row of switches would be interpreted as an instruction in a program. The machine could be reprogrammed by flipping switches rather than having to rewire it every time — not that writing such a program was easy. Even in later machines that used punched tapes or cards in place of switchboards, instructions had to be spelled out in detail. “If you wanted a program to multiply 5 + 7 by 3 + 2,” says Solar-Lezama, “you had to write a long sequence of instructions to compute 5+7 and put that result in one place. Then you’d write another instruction to compute 3+2, put that result in another place, and then write the instruction to compute the product of those two results.”
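The style of programming Solar-Lezama describes can be sketched in a few lines of Python. This is a toy illustration, not any historical machine’s actual instruction set: each arithmetic step is a separate instruction that reads and writes numbered memory cells, so even (5 + 7) × (3 + 2) takes three spelled-out steps.

```python
def run(program, memory):
    """Execute a list of (op, dest, a, b) instructions against numbered memory cells."""
    for op, dest, a, b in program:
        if op == "ADD":
            memory[dest] = memory[a] + memory[b]
        elif op == "MUL":
            memory[dest] = memory[a] * memory[b]
    return memory

# Compute (5 + 7) * (3 + 2), one painstaking step at a time.
memory = {0: 5, 1: 7, 2: 3, 3: 2, 4: 0, 5: 0, 6: 0}
program = [
    ("ADD", 4, 0, 1),  # cell 4 <- 5 + 7, "put that result in one place"
    ("ADD", 5, 2, 3),  # cell 5 <- 3 + 2, "put that result in another place"
    ("MUL", 6, 4, 5),  # cell 6 <- the product of those two results
]
run(program, memory)
print(memory[6])  # 60
```

Note that the programmer, not the machine, had to decide which numbered cell held each intermediate result and keep track of them all by hand.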
That painstaking process became a thing of the past in the late 1950s with Fortran, the first automated programming language. “Fortran allowed you to use actual formulas that anyone could understand,” says Solar-Lezama. Instead of a long series of instructions, programmers could simply write recognizable equations using symbolic names in place of memory addresses. “Instead of telling the computer to take the value in memory address 02739, you could tell it to use the value X,” he explains.
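The idea behind that convenience can be sketched in Python. This is an assumed toy model, not Fortran’s actual mechanism: a compiler keeps a symbol table mapping each name to a numbered memory cell, so the programmer writes X while the machine still works with addresses.

```python
memory = [0] * 10                # the machine still only has numbered cells
symbol_table = {"X": 2, "Y": 5}  # the compiler's name -> address mapping (addresses chosen arbitrarily)

def store(name, value):
    """Write a value to the cell the symbol table assigns to this name."""
    memory[symbol_table[name]] = value

def load(name):
    """Read the value from the cell the symbol table assigns to this name."""
    return memory[symbol_table[name]]

store("X", 12)
store("Y", 5)
print(load("X") * load("Y"))  # 60 -- written with names, executed with addresses
```

The translation from names to addresses happens once, automatically, which is exactly the bookkeeping early programmers had to do by hand.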
Thanks to 16-year-old Edrick from Jakarta for this question.
Posted: April 3, 2012