Marvin Minsky
Photograph by Louis Fabian Bachrach

Marvin Minsky passed away on January 24, 2016.

Many years ago, when I was a student casting about for what I wanted to do, I wandered into one of Marvin's classes. Magic happened. I was awed and inspired. I left that class saying to myself, “I want to do what he does.”

I have been awed and inspired ever since. Marvin became my teacher, mentor, colleague, and friend. I will miss him at a level beyond description.

Marvin's impact was enormous. People came to MIT's Artificial Intelligence Laboratory from everywhere to benefit from his wisdom and to enjoy his deep insights, lightning-fast analyses, and clever jokes. They all understood they were witnessing an exciting scientific revolution. They all wanted to be part of it.

Marvin received important recognition from all over the planet. In 2014, he was awarded a BBVA Foundation Frontiers of Knowledge Award. My nomination letter, which follows, enumerates some of his achievements.


Minsky is Toshiba Professor of Media Arts and Sciences, Emeritus, and Professor of Electrical Engineering and Computer Science, Emeritus, at the Massachusetts Institute of Technology. His research includes seminal contributions to cognitive psychology, neural networks, automata theory, symbolic mathematics, and especially artificial intelligence, including work on learning, knowledge representation, commonsense reasoning, computer vision, and robot manipulation. He has also made important contributions to graphics and microscope technology.

Minsky was born in New York City. He attended the Fieldston School, followed by the Bronx High School of Science, in New York City, followed by Phillips Academy, in Andover, Massachusetts. After spending time in the United States Navy toward the end of World War II, he continued his education, earning a BA in Mathematics from Harvard College, followed by a PhD in Mathematics from Princeton University.

During his undergraduate years at Harvard, he interacted with the distinguished mathematician Andrew Gleason and the eminent psychologist George Miller. Minsky impressed Gleason with some fixed-point theorems in topology, which first established his depth in mathematics and foreshadowed his election to the National Academy of Sciences.

While at Princeton, he built a learning machine out of tubes and motors, which established his passion for building and foreshadowed his election to the National Academy of Engineering. John Tukey and John von Neumann served on his thesis committee there.

When he finished his PhD work, von Neumann, Norbert Wiener, and Claude Shannon supported his election to the select group of Junior Fellows at Harvard. As a Junior Fellow, Minsky invented the confocal scanning microscope, a microscope suited to thick, light-scattering specimens. Light travels from the source through a beam splitter, is focused to a point inside the specimen, bounces back to the beam splitter, and passes from there into the viewing optics. Because only a single point is imaged at a time, the specimen is moved point by point to build up a complete image.

Minsky's invention disappeared from view for many years because the lasers and computer power needed to make it really useful had not yet become available. About ten years after the original patent expired, it started to become a standard tool in biology and materials science.

Minsky's work on Artificial Intelligence through symbol manipulation dates from the field's earliest days in the 1950s and 1960s. Many consider his 1961 paper, “Steps Toward Artificial Intelligence,” to be the call to arms for a generation of researchers. That paper established symbol manipulation, divided into heuristic search, pattern recognition, learning, planning, and induction, as central to any attempt to understand intelligence.

In the late 1950s, Minsky, along with John McCarthy, founded the MIT Artificial Intelligence Group within the Research Laboratory of Electronics. The AI Group became part of Project MAC in 1963 and a separate Artificial Intelligence Laboratory in 1970.

Students and staff flocked to Minsky's group and then to the new Artificial Intelligence Laboratory to meet the challenge of understanding intelligence and endowing machines with it. Work in the new laboratory included not only attempts to model various aspects of human perception and intelligence but also efforts to build practical robots. Minsky himself designed and built mechanical hands with tactile sensors and a fourteen-degree-of-freedom arm. He exploited the fact that the force and torque associated with any single point of contact along an arm can be determined by a sophisticated force-sensing wrist.

Beginning in the 1960s, Minsky argued that space exploration, undersea mining, and nuclear safety would all be vastly simpler with manipulators driven locally by intelligent computers or remotely by human operators. Early on, he foresaw that microsurgery could be done by surgeons working at one end of a telepresence system at their own comfortable scale, while the other end does the work required at the uncomfortable scale where, say, tiny nerve bundles are knitted together or clogged blood vessels are reamed out.

In the late 1960s, Minsky began to work on perceptrons, which are simple computational devices that are meant to capture some of the characteristics of neural behavior. Minsky and Seymour Papert showed just what perceptrons could and could not do, thus raising the sophistication of research on neurally inspired mechanisms to a new level. Renewed interest in neurally inspired mechanisms, twenty years later, led to a reprinting of their classic book, Perceptrons, with a new chapter treating contemporary developments, along with a dedication to Frank Rosenblatt, whose early work stimulated much perceptron research, including that of Minsky and Papert.
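The kind of limitation Minsky and Papert made precise is easy to see in miniature. The toy Python sketch below is my illustration, not anything from their book: a single linear threshold unit trained with the classic perceptron rule learns a linearly separable function such as AND, but no choice of weights lets it compute XOR, the two-input case of the parity functions they analyzed.

```python
# Toy perceptron sketch (an illustration, not Minsky and Papert's analysis):
# a single linear threshold unit trained with the perceptron learning rule.

def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of ((x1, x2), target) with targets 0 or 1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(AND)
print([predict(w, b, *x) for x, _ in AND])   # [0, 0, 0, 1] -- AND is learned

w, b = train_perceptron(XOR)
print([predict(w, b, *x) for x, _ in XOR])   # never matches [0, 1, 1, 0]
```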

Taken together, “Steps Toward Artificial Intelligence,” his early work on symbol manipulation and perceptrons, the founding of the MIT Artificial Intelligence Laboratory, and the work of his earliest students firmly established Minsky as one of the founders of Artificial Intelligence.

Minsky and Papert continued their collaboration into the 1970s and early 1980s, synergistically bringing together Minsky's computational ideas with Papert's understanding of developmental psychology. They worked both together and individually to develop theories of intelligence and radical approaches to childhood education centered on Logo, Papert's educational programming language.

Minsky's best-known work from the mid-1970s introduced a family of ideas that he labeled the Theory of Frames. He emphasized two key concepts in his famous, often-reprinted paper, “A Framework for Representing Knowledge.” First, objects and situations can be represented as sets of slots and slot-filling values; second, many slots ordinarily can be filled by inheritance from the default descriptions embedded in a class hierarchy. A frame describing a birthday party, for example, would have slots for the person celebrated, that person's age, the location, and a list of the gifts presented. When published, the Theory of Frames not only offered a fresh way to think about human thinking but also had a major impact on Artificial Intelligence as an emerging engineering discipline: the popular expert-system shells developed during the following decade all offered tools for developing, manipulating, and displaying frames.
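To make the two key concepts concrete, here is a small sketch of my own, not Minsky's notation: each frame is a bundle of slots, and a slot left unfilled is inherited from the default descriptions supplied by more general frames up the class hierarchy, as in the birthday-party example above.

```python
# Illustrative sketch of frame-style representation (not Minsky's notation):
# frames are bundles of slots; unfilled slots inherit defaults from parents.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots

    def get(self, slot):
        """Return the slot's filler, falling back on inherited defaults."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

# A generic party frame supplies commonsense defaults ...
party = Frame("party", location="a home", refreshments="food and drink")

# ... a birthday-party frame adds defaults of its own ...
birthday_party = Frame("birthday-party", parent=party,
                       refreshments="cake", gifts="expected")

# ... and a particular party fills only the slots actually known.
some_party = Frame("some-party", parent=birthday_party,
                   person_celebrated="Marvin", age=88)

print(some_party.get("age"))           # 88        (filled directly)
print(some_party.get("refreshments"))  # cake      (inherited default)
print(some_party.get("location"))      # a home    (inherited default)
```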

A few years later, in “K-lines: A Theory of Memory” (1979), Minsky addressed four key questions: How is information represented? How is it stored? How is it retrieved? How is it used? His answer was that knowledge lines help us solve a problem by activating those parts of our brains that put us back in a mental state much like the one we were in when we thought about a similar problem before. An elementary physics problem, for example, might take a student into a mental state partially populated with previous applications of Newton's laws, the conservation of energy, force diagrams, and the role of friction.
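The following toy sketch is my illustration of that answer, not Minsky's formulation: solving a problem leaves behind a K-line recording which mental agents were active, and attaching that K-line later restores a similar partial mental state.

```python
# Toy sketch of the K-line idea (an illustration, not Minsky's formulation).

class Mind:
    def __init__(self):
        self.active_agents = set()   # the current partial mental state
        self.k_lines = {}            # problem label -> remembered agent set

    def activate(self, *agents):
        self.active_agents.update(agents)

    def remember(self, label):
        """Record which agents were active when this problem was solved."""
        self.k_lines[label] = set(self.active_agents)

    def attach(self, label):
        """Re-activate the agents attached to a remembered K-line."""
        self.active_agents |= self.k_lines.get(label, set())

mind = Mind()
mind.activate("newtons-laws", "force-diagrams", "friction")
mind.remember("block-on-incline")        # a solved physics problem

mind.active_agents.clear()               # some time later, a fresh start ...
mind.attach("block-on-incline")          # ... a similar problem comes along
print(mind.active_agents)                # the earlier mental state returns
```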

In 1985, frames, K-lines, and many other ideas came together in Minsky's book, The Society of Mind. As its name suggests, the book is not about a single idea. Instead it is a statement that intelligence emerges from the cooperative behavior of myriad little agents, no one of which is intelligent by itself. Throughout the book, Minsky presents example after example of these little agents at work, some supporting natural language understanding, some solving problems, others accumulating new ideas, and still others acting as critics.

In 2006, Minsky published a second seminal book, The Emotion Machine, which is full of ideas about consciousness, emotions, levels of thinking, and common sense. Multiplicity is a dominant theme. In the book, Minsky wrote that our resourceful intelligence arises from many ways of thinking, such as search, analogy, divide and conquer, elevation, reformulation, contradiction, simulation, logical reasoning, and impersonation. These ways of thinking are spread across many levels of mental activity, such as instinctive reactions, learned reactions, deliberative thinking, reflective thinking, self-reflective thinking, and self-conscious emotions. The upper levels of mental activity enable many ways of modeling self, such as physical, emotional, intellectual, professional, spiritual, social, political, economic, and familial. Concepts such as awareness and consciousness seem complex largely because such words do not label single, tightly bounded processes, but rather many different ways of thinking, spread across many levels of mental activity, involving many ways of modeling self; awareness and consciousness are suitcase words so big you can stuff anything into them.

24 January 2016