Strong AI vs. Weak AI
In the philosophy of artificial intelligence, the strong AI vs. weak AI debate concerns whether some forms of artificial intelligence can truly reason and solve problems. Strong AI holds that it is possible for machines to become sapient, or self-aware, though they may or may not exhibit human-like thought processes. Weak AI makes no such claim, and some of its proponents deny the possibility outright. The term strong AI was originally coined by John Searle, who writes:
- "...according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind"[1]
The term "artificial intelligence" would equate to the same concept as what we call "strong AI" based on the literal meanings of "artificial" and "intelligence". However, initial research into artificial intelligence was focused on narrow fields such as pattern recognition and automated scheduling, in hopes that they would eventually allow for an understanding of true intelligence. The term "artificial intelligence" thus came to encompass these narrower fields ("weak AI") as well as the idea of strong AI. Some suggest that the term synthetic intelligence would be a more accurate name for what is expected of strong AI.[1]
Weak AI
In contrast to strong AI, weak AI refers to the use of software to study or accomplish specific problem-solving or reasoning tasks that do not encompass (or in some cases are completely outside of) the full range of human cognitive abilities. An example of weak AI software is a chess program such as Deep Blue. Unlike strong AI, a weak AI does not achieve self-awareness or demonstrate a wide range of human-level cognitive abilities, and at its best is merely a capable, narrowly focused problem-solver.
Some argue that weak AI programs cannot be called "intelligent" because they cannot really think. In response to claims that weak AI software such as Deep Blue is not really thinking, Drew McDermott, now a computer science professor at Yale, wrote:
- "Saying Deep Blue doesn't really think about chess is like saying an airplane doesn't really fly because it doesn't flap its wings." [2]
He argued that Deep Blue does possess intelligence; what it lacks is breadth of intelligence.
Others note that Deep Blue is merely a powerful heuristic search over a game tree, and that claims of its "thinking" about chess are similar to claims of a single cell's "thinking" about protein synthesis: both are unaware of anything at all, and both merely follow a program encoded within them. Many of these critics are proponents of weak AI, claiming that machines can never be truly intelligent. Strong AI proponents counter that true self-awareness and thought as we know it may require a particular kind of "program" designed to observe and take into account the processes of one's own brain. Some evolutionary psychologists point out that humans may have developed just such a program, honed especially for social interaction or perhaps even deception.
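To make the "heuristic search" characterization concrete, the following is a minimal sketch of depth-limited minimax search with alpha-beta pruning and a heuristic evaluation function, the family of algorithms that chess programs such as Deep Blue are built around. The toy game and evaluation function here are illustrative placeholders, not anything from Deep Blue itself:

```python
# Depth-limited minimax with alpha-beta pruning, the kind of heuristic
# game-tree search chess programs are built on. The toy game below (a Nim
# variant) and its evaluation function are illustrative placeholders.

def minimax(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state, maximizing)      # heuristic score at the search horizon
    if maximizing:
        best = float("-inf")
        for m in ms:
            best = max(best, minimax(apply_move(state, m), depth - 1,
                                     alpha, beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:                   # prune: this line is already refuted
                break
        return best
    best = float("inf")
    for m in ms:
        best = min(best, minimax(apply_move(state, m), depth - 1,
                                 alpha, beta, True, moves, apply_move, evaluate))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

# Toy game: players alternately remove 1-3 stones; whoever takes the last stone wins.
moves = lambda n: [m for m in (1, 2, 3) if m <= n]
apply_move = lambda n, m: n - m
evaluate = lambda n, maximizing: 0 if n else (-1 if maximizing else 1)

print(minimax(10, 10, float("-inf"), float("inf"), True, moves, apply_move, evaluate))  # -> 1
```

Everything such a program "knows" lives in the move generator and the evaluation function; nothing in the search loop is aware of chess, stones, or anything else, which is precisely the critics' point.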
Strong AI
A machine falls under strong AI if it approaches or surpasses human intelligence: if it can perform typically human tasks, apply a wide range of background knowledge, and possess some degree of self-consciousness.
Since human-bound measures of intelligence, such as IQ, cannot easily be applied to machines, one proposal for a more readily quantifiable definition of artificial intelligence is:
Intelligence is the possession of a model of reality and the ability to use this model to conceive and plan actions and to predict their outcomes. The higher the complexity and precision of the model, the plans, and the predictions, and the less time needed, the higher the intelligence.[3]
Artificial General Intelligence
Artificial General Intelligence research aims to create AI that can replicate human intelligence completely, often called an Artificial General Intelligence (AGI) to distinguish it from less ambitious AI projects. As yet, researchers have devoted little attention to AGI, many claiming intelligence is too complex to be completely replicated.[citation needed] Some small groups of computer scientists are doing AGI research, however. Organizations pursuing AGI include Adaptive AI, the Artificial General Intelligence Research Institute (AGIRI) and the Singularity Institute for Artificial Intelligence. One recent addition is Numenta, a project based on the theories of Jeff Hawkins, the creator of the PalmPilot. While Numenta takes a computational approach to general intelligence, Hawkins is also the founder of the Redwood Neuroscience Institute, which explores conscious thought from a biological perspective.
By most measures, actual progress towards strong AI has been limited. No system can pass a full Turing test for an unlimited amount of time, although some AI systems can at least fool some people initially (see the Loebner Prize winners). Few active AI researchers are prepared to publicly predict whether, or when, such systems will be developed, perhaps because of the failure of bold, unfulfilled predictions in past years of AI research. There is also the problem of the AI effect, whereby any achievement by a machine, once accomplished, tends to be dismissed as not a sign of true intelligence.
Synthetic intelligence
Proponents of the term synthetic intelligence (SI) hold that the term artificial intelligence is an oxymoron, since "artificial" implies that the intelligence is not real.[4] In Searle, Subsymbolic Functionalism and Synthetic Intelligence, Diane Law of the Department of Computer Sciences at the University of Texas at Austin compares the distinction to that between artificial and synthetic rubies, noting that only the synthetic ruby is truly a ruby.[5]
Proponents of the term suggest that SI identifies the entity or construct under discussion as truly intelligent and yet intentionally created by humans; in other words, they are referring to strong AI, as opposed to the limited achievements of modern weak or "good old-fashioned" artificial intelligence.
In an effort to advance the use of the term, the definition of SI is sometimes extended to subjects whose intelligence arises as an emergent quality of the convergence of random, man-made technologies. While this has the intended effect of giving the term a larger group of subjects, the second definition presents a problem: human sentience, like any other biological and naturally occurring intelligence, also arises out of the natural processes of species evolution and an individual's experiences. This raises the question of where the boundary lies between synthetic, or artificial, intelligence and "naturally occurring" intelligence. As discussion of this eventuality is currently limited to fiction and theory, it is largely unimportant.
Artificial consciousness
When discussing the possibility of Strong AI, issues arise about the nature of the 'Mind-Body Distinction' and the role of symbolic computation. John Searle and most others involved in this debate address whether a machine that works solely through the transformation of encoded data could be a mind, not the wider issue of monism versus dualism (i.e., whether a machine of any type, including biological machines, could contain a mind).
Searle states in his Chinese room argument that information processors carry encoded data which describe other things. The encoded data itself is meaningless without a cross reference to the things it describes. This leads Searle to assert that there is no meaning or understanding in an information processor itself. As a result, Searle claims that even a machine that passed the Turing test would not necessarily be conscious in the human sense.
Some philosophers hold that if weak AI is possible, then strong AI must also be possible. Daniel C. Dennett argues in Consciousness Explained that if there is no magic spark or soul, then Man is just a machine, and he asks why the Man-machine should have a privileged position over all other possible machines when it comes to intelligence or "mind". In the same work, he proposes his Multiple Drafts Model of consciousness. Simon Blackburn, in his introduction to philosophy, Think, points out that someone might appear intelligent, yet there is no way of telling whether that intelligence is real (i.e., a "mind"). However, if the discussion is limited to strong AI rather than artificial consciousness, it may be possible to identify features of human minds that do not occur in information-processing computers.
Many strong AI proponents believe the mind is subject to the Church-Turing thesis. This belief strikes some as counter-intuitive, even problematic, because an information processor can be constructed out of balls and wood: although such a device would be very slow and failure-prone, it could do anything that a modern computer can do. If the mind is Turing-computable, then, at least in principle, a device made of rolling balls and wooden channels could contain a conscious mind.
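The substrate independence at issue here can be made concrete. Below is a minimal sketch of a Turing machine interpreter in Python; the example rule table (a unary incrementer) is an illustrative assumption. The point is that the rule table, not the medium that realizes it (transistors, or rolling balls in wooden channels), determines the computation:

```python
# A minimal Turing machine interpreter. Whatever physical medium realizes
# the tape and the rule lookup, the computation performed is the same.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))             # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            used = sorted(cells)
            return "".join(cells[i] for i in range(used[0], used[-1] + 1))
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("no halt within the step budget")

# Example machine: append one '1' to a unary number (i.e., increment it).
rules = {
    ("start", "1"): ("1", "R", "start"),      # scan right across the 1s
    ("start", "_"): ("1", "R", "halt"),       # write a final 1 and halt
}

print(run_turing_machine(rules, "111"))       # -> 1111
```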
Roger Penrose attacked the applicability of the Church-Turing thesis directly by drawing attention to the halting problem, arguing that certain types of computation cannot be performed by information systems yet are alleged to be performed by human minds. However, this is arguably not a computability issue (which involves potentially infinite computations) but an issue of simulation: the making of copies of the same computations using different technologies.
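For reference, the undecidability of the halting problem follows from a short diagonal argument, a standard result sketched here (the names H and D are the conventional ones, not anything from Penrose's writings):

```latex
% Standard diagonal argument. Suppose a total program H decides halting:
% H(p, x) = 1 if program p halts on input x, and H(p, x) = 0 otherwise.
% Define a program D that, on input p, loops forever if H(p, p) = 1 and
% halts if H(p, p) = 0. Then
\[
  D(D)\ \text{halts} \iff H(D, D) = 0 \iff D(D)\ \text{does not halt},
\]
% a contradiction, so no such H can exist.
```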
The massively parallel pattern matching of the brain's neural systems produces an immediacy of perception and awareness. Notions like "seeing" (awareness of what is in view), "consciousness" (self-referential feelings), and "emotions" (the mentally induced physicality of feeling) are emergent higher-level concepts. Searle's Chinese Room interpretation of symbolic processing does not account for the "semantic mapping" whereby symbolic computations link to the physicality of biological systems: the brain does not feel, but it does do the feeling.
Ultimately, the truth of strong AI depends upon whether information-processing machines can include all the properties of minds, such as consciousness. Weak AI, however, is independent of the strong AI problem, and there can be no doubt that many features of modern computers, such as multiplication or database searching, would have been considered "intelligent" only a century ago.
Simulated human brain model
This is seen by many as the quickest means of achieving strong AI, since it does not require a complete understanding of how intelligence works. In essence, a very powerful computer would simulate a human brain, often in the form of a network of neurons. For example, given a map of all (or most) of the neurons in a functional human brain, together with a good understanding of how a single neuron works, a computer program could simulate the working brain over time. Given some method of communication, this simulated brain might then be shown to be fully intelligent. The exact form of the simulation varies: instead of neurons, a simulation might use groups of neurons, or, alternatively, individual molecules might be simulated. It is also unclear which portions of the human brain would need to be modelled: humans can still function while missing portions of their brains, and some areas of the brain are associated with activities (such as breathing) that might not be necessary for thought.
This approach would require three things:
- Hardware. An extremely powerful computer would be required for such a model. Futurist Ray Kurzweil estimates 1 million MIPS; if his predictions are correct, this much computing power will be available for $1,000 by 2020. At least one special-purpose petaflops computer has already been built (the Riken MDGRAPE-3), and there are nine current computing projects (such as BlueGene/P) to build more general-purpose petaflops computers, all of which should be completed by 2008, if not sooner.[6] However, most other estimates of the brain's computational equivalent have been considerably higher, ranging from 100 million MIPS to 100 billion MIPS. Furthermore, the overhead introduced by modelling the biological details of neural behaviour might require a simulator with computational power much greater than that of the brain itself.
- Software. Software to simulate the function of a brain would be required. This assumes that the human mind is the central nervous system and is governed by physical laws. Constructing the simulation would require a great deal of knowledge about the physical and functional operation of the human brain, and might require detailed information about a particular human brain's structure. Information would be required both about the function of the different types of neurons and about how they are connected. Note that the particular form of the software dictates the hardware necessary to run it. For example, an extremely detailed simulation including molecules or small groups of molecules would require enormously more processing power than one that models neurons using a simple equation (a minimal example of such an equation-based neuron model is sketched after this list), and a more accurate model of a neuron would be expected to be much more computationally expensive than a simple one. The more neurons in the simulation, the more processing power it would require.
- Understanding. Finally, this approach requires sufficient understanding of the brain to model it mathematically. This could be achieved either by understanding the central nervous system or by mapping and copying it. Neuroimaging technologies are improving rapidly, and Kurzweil predicts that a map of sufficient quality will become available on a similar timescale to the required computing power. However, the simulation would also have to capture the detailed cellular behaviour of neurons and glial cells, which is presently understood only in the broadest of outlines.
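To illustrate what "modelling neurons using a simple equation" (mentioned under Software above) can look like in practice, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest standard single-neuron models. All parameter values are illustrative assumptions, not measurements of any particular neuron:

```python
# A leaky integrate-and-fire neuron: the membrane voltage v decays toward
# rest while integrating input current, and when v crosses a threshold the
# neuron "spikes" and resets. Governing equation, stepped with Euler's method:
#   dv/dt = (-(v - V_REST) + R * I) / TAU
# All constants below are illustrative assumptions.

V_REST, V_THRESHOLD, V_RESET = -65.0, -50.0, -70.0   # millivolts
TAU = 10.0    # membrane time constant, ms
R = 10.0      # membrane resistance, megaohms
DT = 0.1      # integration step, ms

def simulate(input_current_na, duration_ms):
    """Return the spike times (in ms) for a constant input current (in nA)."""
    v, spikes = V_REST, []
    for step in range(int(duration_ms / DT)):
        v += (-(v - V_REST) + R * input_current_na) * (DT / TAU)
        if v >= V_THRESHOLD:          # threshold crossed: record a spike
            spikes.append(round(step * DT, 1))
            v = V_RESET               # and reset the membrane voltage
    return spikes

print(simulate(2.0, 100.0))           # a constant 2 nA drive yields a regular spike train
```

A whole-brain simulation at even this crude level of detail would have to couple on the order of 10^11 such units through on the order of 10^14 synapses, which is where the hardware estimates above come from.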
Once such a model were built, it could easily be altered, and would thus be open to trial-and-error experimentation. This would be likely to lead to great advances in understanding, allowing the model's intelligence to be improved or its motivations altered.
The Blue Brain project aims to use one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to simulate a single neocortical column consisting of approximately 60,000 neurons and 5 km of interconnecting synapses. The eventual goal of the project is to use supercomputers to simulate an entire brain.
The brain gets its power from performing many operations in parallel; a computer gets its power from performing individual operations very quickly.
The human brain has roughly 100 billion (10^11) neurons operating simultaneously, connected by roughly 100 trillion (10^14) synapses.[2] Although estimates of the brain's processing power put it at around 10^14 neuron updates per second,[3] it is expected that the first unoptimized simulations of a human brain will require a computer capable of 10^18 FLOPS. By comparison, a general-purpose CPU (circa 2006) operates at a few GFLOPS (10^9 FLOPS), and each FLOP may require as many as 20,000 logic operations.
However, a neuron is estimated to spike about 200 times per second, which gives an upper limit on its number of operations.[citation needed] Signals between neurons are transmitted at a maximum speed of 150 metres per second. A modern 2 GHz processor operates at 2 billion cycles per second, roughly 10,000,000 times faster than a human neuron, and signals in electronic computers travel at roughly half the speed of light, faster than signals in the human brain by a factor of about 1,000,000.[citation needed] On the other hand, the brain consumes about 20 W of power, whereas supercomputers may use as much as 1 MW, on the order of 100,000 times more. (Note: the Landauer limit is 3.5×10^20 operations per second per watt at room temperature.)
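The ratios quoted in the two paragraphs above follow from simple arithmetic on the stated figures; the short sketch below reproduces them (the input numbers are this article's estimates, not independent measurements):

```python
# Reproducing the brain-vs-computer ratios quoted above from the stated
# figures. The inputs are the article's estimates, not measurements.

brain_flops_needed = 1e18          # unoptimized whole-brain simulation, FLOPS
cpu_flops = 1e9                    # one general-purpose CPU circa 2006, ~1 GFLOPS
neuron_spikes_per_s = 200          # upper bound on a neuron's "clock rate"
cpu_cycles_per_s = 2e9             # a 2 GHz processor
nerve_signal_m_per_s = 150         # fastest neural conduction velocity
wire_signal_m_per_s = 3e8 / 2      # roughly half the speed of light
brain_power_w = 20
supercomputer_power_w = 1e6        # 1 MW

print(brain_flops_needed / cpu_flops)                # 1e9 CPUs' worth of FLOPS
print(cpu_cycles_per_s / neuron_spikes_per_s)        # 1e7: cycle rate vs spike rate
print(wire_signal_m_per_s / nerve_signal_m_per_s)    # 1e6: signal-speed ratio
print(supercomputer_power_w / brain_power_w)         # 5e4: power ratio (order 1e5)
```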
Neuro-silicon interfaces have also been proposed.[7][8]
Independent intelligent behavior
In contrast to human-brain simulation, the direct approach attempts to achieve AI without imitating nature. Supporters of this approach often draw an analogy: early attempts to construct flying machines modelled them after birds, but modern aircraft do not look like birds.
The main question of the direct approach is: "What is AI?". The most famous definition of AI is the operational one proposed by Alan Turing in his "Turing test" proposal. There have been very few attempts to create such a definition since (some of them are in the AI Project). John McCarthy stated in his work What is AI? that we still do not have a solid definition of intelligence.
See also
- History of artificial intelligence
- Technological singularity (also known as "the Singularity")
References
- ^ Searle, John. "Minds, Brains and Programs." The Behavioral and Brain Sciences, vol. 3, 1980.
- ^ "Nervous system, human." Encyclopædia Britannica. 9 Jan. 2007.
- ^ Russell, Stuart and Norvig, Peter. Artificial Intelligence: A Modern Approach. Prentice Hall, 1995.
- Goertzel, Ben; Pennachin, Cassio (Eds.) Artificial General Intelligence. Springer: 2006. ISBN 3-540-23733-X.
- Kurzweil, Ray. The Singularity Is Near. ISBN 0-670-03384-7.
- Singularity Institute for Artificial Intelligence
- Expanding Frontiers of Humanoid Robots
- McDermott, Drew. "How Intelligent is Deep Blue?" New York Times, May 14, 1997.
- Hofstadter, Douglas. Gödel, Escher, Bach: An Eternal Golden Braid.
- Hofstadter, Douglas and Dennett, Daniel C. (eds.). The Mind's I.
External links
- World Lectures from Tokyo, hosted by Rolf Pfeifer (English speakers: click the second link, "Film z wykladu", to download the .avi files).
- Artificial General Intelligence Research Institute
- The Genesis Group at MIT's CSAIL — Modern research on the computations that underlay human intelligence
- Essentials of general intelligence, article at Adaptive AI.
- Problems with Thinking Robots
- www.cs.utoronto.ca
- www.otterbein.edu lecture notes
- www.senapps.com
- www.eng.warwick.ac.uk
- www.thetechzone.com article: looks at new modelling ideas such as fuzzy logic and quantum-inspired parallel distributed networks.