How did people in the olden days create software without any programming software?

The olden days are a little older than you might think…

By Sarah Jensen

From the simplest to the most sophisticated, all computer programs rely on very simple instructions to perform basic functions: comparing two values, adding two numbers, moving items from one place to another. In modern systems, such instructions are generated by a compiler from a program in a high-level language, but early machines were so limited in memory and processing power that every instruction had to be spelled out completely, and mathematicians took up pencil and paper to manually work out formulas for configuring the machines — even before there were machines to configure.

“If you really want to look at the olden days, you want to start with Charles Babbage,” says Armando Solar-Lezama, assistant professor in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Babbage designed an analytical engine — a mechanical contraption outfitted with gears and levers — that could be programmed to perform complicated computations. His collaborator, Ada Lovelace (daughter of the poet Lord Byron), also recognized the machine’s potential, and in 1843 published what is considered to be the first computer program: a lengthy algorithm for computing Bernoulli numbers on Babbage’s machine — had it ever actually been built.

By the mid-20th century, though, working computers existed, consisting of plug boards and cables connecting modules of the machine to one another. “They had giant switchboards for entering tables of values,” says Solar-Lezama. “Each row had a switch with 10 positions, one for each digit. The operator flipped the switches and reconfigured the plugs in order to set the values in the table.”

Before long, programmers realized it was possible to wire the machine in such a way that each row of switches would be interpreted as an instruction in a program. The machine could then be reprogrammed by flipping switches rather than rewired every time — not that writing such a program was easy. Even in later machines that used punched tapes or cards in place of switchboards, instructions had to be spelled out in detail. “If you wanted a program to multiply 5 + 7 by 3 + 2,” says Solar-Lezama, “you had to write a long sequence of instructions to compute 5 + 7 and put that result in one place. Then you’d write another instruction to compute 3 + 2, put that result in another place, and then write the instruction to compute the product of those two results.”
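The instruction-by-instruction style Solar-Lezama describes can be sketched in modern terms. The snippet below is not any real machine's instruction set; it uses Python, with a small dictionary standing in for numbered memory locations, to show how every intermediate result had to be computed and stored in an explicitly chosen place:

```python
# A sketch of early-machine-style programming: one operation per step,
# each result parked in an explicitly numbered memory location.
memory = {}

# Step 1: compute 5 + 7 and put the result in location 0.
memory[0] = 5 + 7

# Step 2: compute 3 + 2 and put the result in location 1.
memory[1] = 3 + 2

# Step 3: multiply the two stored results; put the product in location 2.
memory[2] = memory[0] * memory[1]

print(memory[2])  # 60
```

Three separate steps, and the programmer, not the machine, keeps track of which location holds what.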

That painstaking process became a thing of the past in the late 1950s with Fortran, the first automated programming language. “Fortran allowed you to use actual formulas that anyone could understand,” says Solar-Lezama. Instead of a long series of instructions, programmers could simply write recognizable equations and refer to memory locations by meaningful names. “Instead of telling the computer to take the value in memory address 02739, you could tell it to use the value X,” he explains.
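For contrast, here is the same (5 + 7) × (3 + 2) computation in the formula style Fortran introduced, with named values instead of numbered locations; this is a sketch in Python rather than actual Fortran, but the idea is the same:

```python
# Formula style: the programmer writes a recognizable equation with
# named values; the compiler or interpreter decides where each value
# actually lives in memory.
x = 5 + 7
y = 3 + 2
product = x * y

print(product)  # 60
```

The bookkeeping of memory addresses, previously the programmer's job, is now handled by the language implementation.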

Today’s programming software can take programs written at a very high level and compile them into sequences of billions of instructions that a computer can understand. But programmers are still faced with the task of specifying their computation with the right degree of detail and precision. “Essentially, programming has always been about figuring out the right strategy for a machine to perform the computation that you want,” says Solar-Lezama. Many principles of today’s programming languages, such as Ruby or JavaScript, can be traced directly to the work of early programmers like Ada Lovelace, he continues. “It’s fascinating that people were thinking about the same programming issues before computers even existed.”

Thanks to 16-year-old Edrick from Jakarta for this question.

Posted: April 3, 2012

