Frank Rosenblatt
From Wikipedia, the free encyclopedia
Frank Rosenblatt (1928–1971) was a New York City-born computer scientist who completed the Mark I Perceptron computer at Cornell in 1960. This was the first computer that could learn new skills by trial and error, using a type of neural network intended to simulate human thought processes.
Rosenblatt’s perceptrons were initially simulated on an IBM 704 computer at Cornell Aeronautical Laboratory in 1957. By the study of neural networks such as the Perceptron, Rosenblatt hoped that "the fundamental laws of organization which are common to all information handling systems, machines and men included, may eventually be understood."
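The learning procedure behind these simulations can be sketched as follows. This is an illustrative modern reconstruction of the classic perceptron learning rule, not Rosenblatt's IBM 704 code; the AND task and all names are chosen here for demonstration. Each misclassified example nudges the weights toward the correct answer until every example is classified correctly.

```python
# Illustrative sketch of the perceptron learning rule (not Rosenblatt's
# original IBM 704 program): a single threshold unit adjusts its weights
# after each mistake until all training examples are classified correctly.

def step(x):
    return 1 if x >= 0 else 0

def train(samples, lr=1.0, epochs=50):
    """Train one threshold unit on (inputs, target) pairs."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for (x1, x2), target in samples:
            y = step(w1 * x1 + w2 * x2 + b)
            err = target - y          # -1, 0, or +1
            if err:
                mistakes += 1
                w1 += lr * err * x1   # nudge weights toward the target
                w2 += lr * err * x2
                b += lr * err
        if mistakes == 0:             # converged: every sample is correct
            break
    return w1, w2, b

# Logical AND is linearly separable, so the rule is guaranteed to converge.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train(AND)
```

For linearly separable data such as AND, the perceptron convergence theorem guarantees this loop terminates with a correct classifier.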
Rosenblatt was a colorful character at Cornell in the early 1960s. A handsome bachelor, he drove a classic MGA sports car and was often seen with his cat named Tobermory. He enjoyed mixing with undergraduates, and for several years taught an interdisciplinary undergraduate honors course entitled "Theory of Brain Mechanisms" that drew students equally from Cornell's Engineering and Liberal Arts colleges.
This course was a melange of ideas drawn from a huge variety of sources: results from experimental brain surgery on conscious epileptic patients, experiments measuring the activity of individual neurons in the visual cortex of cats, studies of the loss of particular kinds of mental function following trauma to specific areas of the brain, and various analog and digital electronic circuits that modeled details of neuronal behavior (e.g. the perceptron itself, as a machine).
There were also some breathtaking speculations, based on what was known about brain behavior at the time (well before CAT or PET scans were available). One calculation held that, given the number of neuronal connections in the human brain, the cortex had enough storage capacity to hold a complete "photographic" record of its perceptual inputs, stored at the 16 frames-per-second rate of flicker fusion, for about two hundred years.
In 1962 Rosenblatt published much of the content of this honors course in the book "Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms" (Spartan Books, 1962), which he used thereafter as the textbook for the course.
Rosenblatt's bitter rival and professional nemesis was Marvin Minsky of MIT. Minsky despised Rosenblatt, hated the concept of the perceptron, and wrote several polemics against him. For years Minsky crusaded against Rosenblatt on a nasty and personal level, including contacting every group that funded Rosenblatt's research to denounce him as a charlatan, hoping to ruin Rosenblatt professionally and to cut off all funding for his research in neural nets. Throughout this persecution, Rosenblatt never responded in kind.
Finally, in 1969 Minsky and Papert published the book "Perceptrons", containing a mathematical proof that the simplest single-layer perceptrons are incapable of computing the "exclusive or" (XOR) function, together with the conjecture (incorrect, as it later turned out) that more complex multi-layer perceptrons would suffer from similar fundamental limitations.
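The single-layer result can be sketched briefly. This is my own illustration, not the proof as it appears in the book: the four inequalities a single threshold unit would need to satisfy for XOR are mutually contradictory, while a two-layer arrangement of the same units computes XOR easily.

```python
# Why no single threshold unit step(w1*x1 + w2*x2 + b) computes XOR:
#   (0,0) -> 0 :           b < 0
#   (0,1) -> 1 :      w2 + b >= 0
#   (1,0) -> 1 :      w1 + b >= 0
#   (1,1) -> 0 : w1 + w2 + b < 0
# Summing the middle two gives w1 + w2 >= -2b, while the last gives
# w1 + w2 < -b; together these force b > 0, contradicting the first line.
# A two-layer network escapes this by composing linear threshold units.

def step(x):
    return 1 if x >= 0 else 0

def two_layer_xor(x1, x2):
    """XOR as (x1 OR x2) AND NOT (x1 AND x2), built from threshold units."""
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1: OR
    h2 = step(1.5 - x1 - x2)    # hidden unit 2: NAND
    return step(h1 + h2 - 1.5)  # output unit: AND of the two
```

Checking all four inputs confirms `two_layer_xor` reproduces the XOR truth table, which is why the conjecture about multi-layer networks did not hold.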
Minsky's book was certainly a block to the funding of research in neural networks for more than ten years. The book was widely interpreted as showing that neural networks are fundamentally limited and fatally flawed. So Minsky had vanquished his rival, and Rosenblatt was not able to answer this final attack on his work and reputation: he died in a boating accident in 1971, two years after Minsky's book was published.
Ironically, after Rosenblatt's death Minsky became interested in neural networks in the 1980s, and published research showing greatly enhanced learning abilities of multi-layer perceptrons, thus renewing interest in (and assuming credit for) research into the behavior of multi-layer (or "hidden-layer") perceptrons that had been interrupted by Rosenblatt's death in 1971.