
Book Review: The Dream Machine

The full title of this 2001 book by M. Mitchell Waldrop is The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal. I enjoyed reading this book a lot, especially because I read a hardcover version of it, a welcome change from the mostly digital books and papers I have been reading lately.

I learnt quite a lot about computers from this book. I will split the learnings into two categories. First, there are all the interesting stories, tidbits, and trivia about the history of computing that are exciting to know. But this book also relates the human aspects of computing that are often overlooked by computer science practitioners. I will start with the cool stories and then conclude with the deeper lessons about computing, because I came away from this book with a lot more appreciation for what computer science really is.

When I started learning how to program, I took it for granted that I could write code, save it in the computer, and then run that code to get some desired behavior from the machine. It was not always like that. The stored-program idea, that the program a computer runs could itself be stored in memory, was an important breakthrough. Before that, machines were designed to do only one thing, and to program a machine for a different task you had to rewire it or make some other physical change. The ENIAC, for instance, was programmed by changing wires.
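
To make the idea concrete, here is a toy sketch (my own, not from the book; the four-instruction set is invented for illustration) of a stored-program machine. The program is just data sitting in memory, so running a different program means storing a different list, not changing any wires:

    # A toy stored-program machine: instructions live in memory as plain data.
    # The instruction set (LOAD, ADD, PRINT, HALT) is made up for illustration.
    def run(memory):
        acc = 0  # accumulator register
        pc = 0   # program counter: index of the next instruction in memory
        while True:
            op, arg = memory[pc]  # fetch the instruction from memory
            pc += 1
            if op == "LOAD":      # put a constant into the accumulator
                acc = arg
            elif op == "ADD":     # add a constant to the accumulator
                acc += arg
            elif op == "PRINT":   # output the accumulator
                print(acc)
            elif op == "HALT":
                return

    # "Reprogramming" the machine is just storing a different list in memory.
    program = [("LOAD", 2), ("ADD", 3), ("PRINT", None), ("HALT", None)]
    run(program)  # prints 5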

This idea and several others that brought computing to where it is now came from many different sources. Written in an engaging style with lots of analogies to make concepts more understandable, the book shows that those inventions were not made in a vacuum by giving some context to their development. CPU time-sharing, for instance, was invented because MIT had only one IBM 704 computer, and when multiple people needed to work on it, researchers (like John McCarthy) had to wait to run their jobs and were getting impatient. Time-sharing made the CPU do multiple tasks ‘at the same time’ by switching between them so quickly that, to a human, the tasks appear to run simultaneously.
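
Here is a rough sketch of that switching idea (my own illustration, not the book's; real time-sharing systems rely on hardware timer interrupts, whereas this uses cooperative Python generators): each job runs one small slice of work at a time, and a round-robin scheduler rotates between them so every user sees steady progress.

    from collections import deque

    # A toy "job" that needs several small slices of CPU time.
    def job(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # hand the CPU back to the scheduler

    # Round-robin scheduler: run one slice of a job, then move on to the next.
    def time_share(jobs):
        queue = deque(jobs)
        while queue:
            current = queue.popleft()
            try:
                next(current)          # run one time slice
                queue.append(current)  # not finished: back of the line
            except StopIteration:
                pass                   # job finished

    time_share([job("mccarthy", 3), job("batch-run", 3)])
    # The output interleaves the two jobs, so both appear to run at once.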

And oh, I learnt why the internet protocols (and some other proposals people make today) are called RFCs (Requests for Comments) (see pp. 280, 292). The book also tells how networking was not a sexy side of computing right from the start; people like John McCarthy and Steve Jobs did not like networking at first. But of course it is very important: it connects people to each other, it is what the Internet is built on, and just imagine what using a computer without the Internet would be like.

I also learnt how artificial intelligence has been an important part of the history of computing right from the start. You could call it the Holy Grail of computer science. AI research goes back well before the invention of personal computers. We have always wanted to model and create intelligence like our own, it appears. Computation is not just about writing commands for a computer to execute; it is about modelling behaviour. It is the language for modelling human behaviour, just as mathematics is the language for modelling the physical world around us. So it seems to me that computer science is related to the cognitive sciences and psychology the way mathematics is related to physics. Like the human mind, the computer is an information processor, and Information Theory, I learnt from this book, appeared to apply to the human mind just as it did to the physical communication channels used in telecommunications and computers. There is a lot of information in this paragraph.
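
To make the Information Theory point a little more concrete: Shannon's entropy measures information in bits no matter what produced the message, which is why it seemed to apply to minds as readily as to telephone wires. A quick illustration (mine, not the book's):

    import math
    from collections import Counter

    def entropy(message):
        """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
        counts = Counter(message)
        total = len(message)
        return sum(-(c / total) * math.log2(c / total) for c in counts.values())

    print(entropy("aaaa"))  # 0.0 bits: perfectly predictable, no information
    print(entropy("abab"))  # 1.0 bit per symbol: one yes/no choice each time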

The Dartmouth 1956 summer research conference on Artificial Intelligence was when the term Artificial Intelligence started gaining prominence. At the time, a paradigm for studying human behavior was starting to die off: behaviorism, whose premise was that the human is a black box, so its proponents experimented by studying human ‘output’ (behavior) given certain inputs. The revolution going on then was that psychologists recognized that “there were rules that could generate behavior, and that behavior wasn’t just the accumulation of reinforced responses” (p. 144), and so they strongly rejected behaviorism. The approaches to artificial intelligence presented at this conference, like the heuristic approach (the Logic Theorist by Newell and Simon) and the logical-deduction approach (the Advice Taker by McCarthy), accordingly tried to create intelligent systems based on well-defined rules.
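
For a flavor of what ‘intelligence from well-defined rules’ looked like, here is a minimal forward-chaining deduction sketch in the spirit of those systems (the facts and rules are invented; McCarthy's Advice Taker was a proposal on paper, not this code):

    # Tiny forward-chaining deduction: apply if-then rules to known facts
    # until nothing new can be derived. The facts and rules are made up.
    rules = [
        ({"human"}, "mortal"),                    # if human, then mortal
        ({"mortal", "philosopher"}, "quotable"),  # if mortal and philosopher...
    ]
    facts = {"human", "philosopher"}

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))  # ['human', 'mortal', 'philosopher', 'quotable']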

Well, this is not part of the book, but reinforcement learning has become a prominent and successful part of how natural language processing is done today. OpenAI uses reinforcement learning from human feedback (RLHF) to train large language models, and that has made them wildly successful. Most of today's most successful AI models are black boxes, and I thought that was exactly what a behaviorist would build, isn't it? So when I read the part of the book about the rejection of behaviorism in favor of rules-based AI, I thought: ah, we have come full circle! I immediately checked online to see what Noam Chomsky (who practically led the demolition of the behaviorist movement in psychology) thought about deep learning and reinforcement learning, and it was entertaining (he doesn't like it and doesn't think it's science). Interesting reads here: this podcast, and this response by Peter Norvig.

The main thread running through the book is the vision of ‘human-machine symbiosis’ that J.C.R. Licklider (Lick) and others had, and how Lick, while working at ARPA, directed funding to research labs across the US to achieve that vision. A passage about Bob Taylor captures the vision well:

That was what people like Vannevar Bush, J.C.R. Licklider, Wes Clark, and Doug Engelbart had always perceived so well, he thought. The real significance of computing was to be found not in this gadget or that gadget, but in how the technology was woven into the fabric of human life - how computers could change the way people thought, the way they created, the way they communicated, the way they worked together, the way they organized themselves, even the way they apportioned power and responsibility. That was what resonated so deeply in Taylor’s mind. … if Pake could be made to understand what computing was really about, then there could be a tremendous opportunity here to make the dreams into something real. (p. 329)

I saw a parallel to this, in a more primitive, first-principles way, in the movie “Godzilla x Kong: The New Empire”, when the scientists augmented Kong’s right hand with a new metal arm, and Kong was then able to fight more effectively against the final boss ape (the Skar King), who wielded a chain with a hook at the end that would slash any arm of flesh that caught it. Making gadgets is all well and good, but what can a gadget help Kong do? What human process can it augment and make better?

The biggest lesson from this book for me is about the nature of changing the world: it is all about people. We all want to change the world, but I learnt that the way to go about it is to change how people do things. It is less about making things and more about impacting people. Change how people pay for goods and services, how they access information, how they interact with each other, how they think about something like the weather, the universe, or governments; make it easier for folks to move from place to place; and so on. It could be a really simple idea, and it often is. But once it gets accepted and implemented on a large scale, you have just changed the world. It sounds very obvious (what is the world without people?), but I admittedly never thought of it that way.