Peculiar Books Reviewed: David A. Mindell's "Digital Apollo"

This is the first of a series of monthly book reviews intended to make the case for expanding the canon of Software Engineering texts. Don't get me wrong, books like Code Complete or The Mythical Man-Month are venerable and valuable, but I contend that the corpus should be more inclusive of interdisciplinary studies. What does this mean? I believe we can create better software more rapidly by studying the works of other fields and learning, as much as possible, from their mistakes and triumphs. If this sounds suspiciously like a liberal arts reading list for engineering, then your suspicions are accurate. By way of justification, I will simply note that the best engineers I have ever had the privilege of working with were, respectively, a military historian and a philosopher.

The US space program is a treasure trove of insight into engineering at the extremes of human ability. It is a field that concerns itself deeply with human-machine interaction. Spacecraft are not fully automated, nor are they under the total control of their human operators (the astronauts "in the can" and the ground control crew). Rather, they are sophisticated semi-autonomous machines, the machine performing background tasks and translating human commands into sensible actions. The balance between human and machine is not immediately obvious, as David A. Mindell explores in Digital Apollo: Human and Machine in Spaceflight. Mindell's book is concerned with the interaction between the test pilots (later, astronauts) and the rocketry and guidance-control engineers of N.A.C.A. (later, NASA) and the MIT Instrumentation Laboratory, and with their struggle to design extremely reliable aircraft (later, spacecraft) in the presence of environmental unknowns and human fallibility. The goal was a craft "stable, but not too stable": autonomous enough not to overwhelm the pilot, yet unstable enough to remain responsive to the pilot's commands.

For Apollo, NASA and its contractors built a "man-machine" system that combined the power of a computer and its software with the reliability and judgment of a human pilot. Keeping the astronauts "in the loop," overtly and visibly in command with their hands on the stick, was no simple matter of machismo and professional dignity (though it was that too). It was a well-articulated technical philosophy.

Mindell traces the history of this philosophy through the X-15 project--a rocket-powered plane which left and re-entered the atmosphere variably under full human and full computer control, but successfully only in hybrid operation--the Mercury project--relatively short "spam in a can" orbital and sub-orbital flights with extensive ground-based observation and secondary computer control--and the Gemini project--a series of moon-trip-length flights and computer-aided orbital rendezvous--up through the last Apollo flight, Apollo 17. Project Gemini was particularly influential in solidifying this philosophy.

Unlike Mercury, where the craft reentered the atmosphere in an open-loop, ballistic fashion, Gemini would be steered by the pilot right down to the point of landing.

Re-entry is tricky: the wrong angle will send you bouncing back into space. Sufficient instrumentation allowed the human pilots to control this angle manually. The rendezvous missions proved more difficult.

Intuitive piloting alone proved inadequate for rendezvous. Following Grissom and Young's successful demonstration of manual maneuvering on Gemini III, on Gemini IV astronaut Jim McDivitt attempted to rendezvous with a spent booster. He envisioned the task as "flying formation essentially in space," but quickly found that his aviation skills would not serve him (...) Orbital dynamics created a strange brew of velocity, speed and range between two objects and called for a new kind of piloting. Catching up to a spacecraft ahead, for example, might actually require flying slow, to change orbit. (...) A successful rendezvous would require (...) numbers, equations and calculations. It would require simulators, training devices and electronics.
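To make the "flying slow to catch up" point concrete, here is a small illustrative sketch (mine, not the book's; the constants and the two altitudes are assumptions chosen for the example) using Kepler's third law for circular orbits: a lower orbit has a shorter period, so a trailing craft that brakes and drops lower actually gains on its target.

```python
# Illustrative only: why braking (dropping to a lower orbit) helps a chaser
# catch a target ahead of it. Circular-orbit period via Kepler's third law.
import math

MU_EARTH = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean Earth radius, m


def circular_period(altitude_m: float) -> float:
    """Period (seconds) of a circular orbit at the given altitude."""
    a = R_EARTH + altitude_m                  # orbital radius = semi-major axis
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH)


def angular_rate(altitude_m: float) -> float:
    """Mean angular rate (degrees per minute) at the given altitude."""
    return 360.0 / (circular_period(altitude_m) / 60.0)


if __name__ == "__main__":
    # Hypothetical chaser below a hypothetical target, altitudes in metres.
    for name, alt in (("chaser, 250 km", 250_000.0), ("target, 300 km", 300_000.0)):
        print(f"{name}: period {circular_period(alt) / 60:.1f} min, "
              f"rate {angular_rate(alt):.3f} deg/min")
    # The lower orbit completes a revolution sooner, so the chaser gains on
    # the target every pass -- the counterintuitive "fly slow to catch up."
```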

The Gemini computer--the first digital onboard flight computer--translated the pilot's intuitive commands into the proper orbital responses. Mercury's extensive instrumentation and Gemini's onboard flight control would be synthesized in the Apollo project, the craft of which could be largely automated but kept the human crew "in the loop": aware of craft operations, in control, and able to respond creatively with deep knowledge of the running system in times of accident. Wernher von Braun, creator of the Saturn V launch vehicle, had imagined future space travelers as mere passengers in fully automated machines. Indeed, Neil Armstrong was deeply disappointed that the Saturn V could not be flown off the platform by astronauts, but previous simulations had demonstrated that human reflexes were too slow. These passenger astronauts could not service the craft as it flew, nor would they be aware of its operations; ground control would be the sole human element "in the loop."

While the X-15 had suggested this was folly--the solely computer-controlled craft was capable only in situations its designers had anticipated, having a tendency to skip off the atmosphere--the Apollo 13 accident and the highly trained astronauts' role in their survival demonstrated this unequivocally: had Apollo 13 carried mere passengers, ground control could only have sat helplessly as they asphyxiated on a pre-proved flight path.

Digital Apollo is a detailed study of a complex organization's struggle to find the right balance between abstraction--via automation--and skilled human oversight to create a more functional system in a complex environment. It's here that Mindell's work finds its applicability to the craft of creating software: designing good systems with informed human interaction in mind is fiendishly difficult to get right.

Simply swamping the human with information and requests is unfeasible, but setting the machine off as an automaton without oversight is, while possible, only justifiable until an accident occurs, seemingly without warning and with no clear path toward resolution. It is essential, if we are to tackle complex new frontiers, to get this balance right. It's the humans' knowledge of the system, their understanding of when to trust the machine and when to silence it--as Mindell notes in his opening chapter on the Apollo 11 landing--that leads to a more capable machine. As we rely increasingly on semi-automated systems, the lessons learned in the Space Race have great bearing on the designers and implementors of these systems. We should not seek to cut humanity out but to keep our hands on the stick, as it were, and trust to our genius and our learning to go further than we might either alone or as mere passengers of machines. Mindell does a fine job detailing how NASA succeeded in striking this balance in the service of landing on the Moon. We software engineers can learn a thing or two for our own moonshots.
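As a toy illustration of that balance in software terms (a sketch under my own assumptions; every name here is hypothetical and nothing in it comes from Mindell), consider an automation loop that handles routine work silently, escalates only anomalies, and treats the operator's decision as final:

```python
# Illustrative sketch of "hands on the stick" in a semi-automated system:
# automation handles the nominal case without operator noise, surfaces only
# anomalies, and the human's decision -- not the machine's -- is final.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Task:
    name: str
    anomaly_score: float  # 0.0 = routine, 1.0 = clearly abnormal


def run_semi_autonomously(
    tasks: List[Task],
    automate: Callable[[Task], None],
    ask_operator: Callable[[Task], bool],
    escalation_threshold: float = 0.7,
) -> None:
    """Automate routine tasks; keep the human in the loop for anomalies."""
    for task in tasks:
        if task.anomaly_score < escalation_threshold:
            automate(task)                 # nominal: don't swamp the operator
        elif ask_operator(task):           # anomalous: the human decides
            automate(task)
        else:
            print(f"operator held '{task.name}'; automation stands down")


if __name__ == "__main__":
    demo = [Task("routine telemetry sync", 0.1),
            Task("unexpected pressure drop", 0.9)]
    run_semi_autonomously(
        demo,
        automate=lambda t: print(f"automation handled '{t.name}'"),
        ask_operator=lambda t: input(f"proceed with '{t.name}'? [y/N] ").lower() == "y",
    )
```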

  1. David A. Mindell, Digital Apollo: Human and Machine in Spaceflight (MIT Press, 2011), 5.
  2. Mindell, Digital Apollo, 83.
  3. Mindell, Digital Apollo, 86.
