What is Artificial Intelligence?
Artificial intelligence (AI) is the ability of an artificial mechanism to exhibit intelligent behavior. Artificial intelligence is also the name of the field in which artificial mechanisms that exhibit intelligence are developed and studied. The term invites philosophical speculation about what constitutes the mind or intelligence. Such questions can be set aside here, however; the endeavor to construct and understand increasingly sophisticated mechanisms stands on its own.
While research in all aspects of AI is vigorous, there is concern that both the progress and expectations of AI have been overstated. AI programs are primitive when compared to the kinds of intuitive reasoning and induction of which the human brain is capable. Artificial intelligence has shown great promise in the area of EXPERT SYSTEMS, or knowledge-based expert programs, which, although powerful when answering questions within a specific domain, are nevertheless incapable of any type of adaptable, or truly intelligent, reasoning.
Examples of artificially intelligent systems include computer programs that perform medical diagnoses, mineral prospecting, legal reasoning, speech understanding, vision interpretation, natural-language processing, problem solving, and learning. Most of these systems are far from being perfected. Most have proved valuable, however, either as research vehicles or in specific, practical applications.
Characteristics of Artificial Intelligence
No generally accepted theories have yet emerged within the field of AI, in part because AI is a very young science. However, it is assumed that on the highest level, an AI system must receive input from its environment, determine an action or response, and deliver an output to its environment. A mechanism for interpreting the input is needed. This leads to research in speech understanding, vision, and natural language (see VOICE RECOGNITION, PATTERN RECOGNITION). The interpretation must be represented in some form that can be manipulated by the machine. For this problem, techniques of knowledge representation are invoked. The interpretation, together with knowledge obtained previously, is internally manipulated by a mechanism or algorithm to arrive at an internal representation of the response or action. This requires techniques of expert reasoning, common-sense reasoning, problem solving, planning, signal interpretation, and learning. Finally, the system must construct a response that will be effective in its environment. This requires techniques of natural-language generation.
History of Artificial Intelligence
The term artificial intelligence was coined in 1956, when a group of interested scientists met for an initial summer workshop. Those attending included Allen Newell, Herbert SIMON, Marvin Minsky, Oliver Selfridge, and John McCarthy. Early work in AI consisted of attempts to simulate the neural networks of the brain with numerically modeled nerve cells called perceptrons. Success was very limited due to the great complexity of the problem (but interest was revived in the 1980s and continued into the 1990s, because of advances in computer technology). In the late 1950s and early 1960s, Newell, Simon, and J. C. Shaw offered their Logic Theorist computer program and introduced symbolic processing (see COGNITIVE PSYCHOLOGY). Instead of building systems based on numbers, they attempted to build systems that manipulated symbols. Their approach was powerful and is fundamental to most work in AI to this day. In it, knowledge is expressed as rules, for example, "If x is a bird, then x can fly." If such an AI system determines or is told that a robin is a bird, then it can infer that the robin can fly.
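The bird rule above can be sketched as a tiny symbolic-inference program. This is an illustrative toy, not a reconstruction of any historical system; the encodings of rules and facts are assumptions made for the example.

```python
# A minimal sketch of symbolic rule-based inference, in the spirit of
# "If x is a bird, then x can fly." Representations are illustrative.

# Rules map a known property to a property that can be inferred from it.
rules = {"bird": "can_fly"}

# Facts the system has determined or been told: (entity, property) pairs.
facts = {("robin", "bird")}

def infer(facts, rules):
    """Apply every rule to every matching fact until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (entity, prop) in list(derived):
            if prop in rules and (entity, rules[prop]) not in derived:
                derived.add((entity, rules[prop]))
                changed = True
    return derived

print(infer(facts, rules))  # the robin is inferred to fly
```

Told only that a robin is a bird, the program derives the new fact that the robin can fly, which is the manipulation of symbols, not numbers, that the text describes.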
Scores of AI systems have been built as a means for uncovering and facing the problems of producing intelligent behavior. In the 1950s a checkers-playing program capable of championship-level play was developed. In the 1960s a program was developed that could prove theorems in Euclidean geometry. Another program was capable of solving analogy problems such as those given on standard intelligence tests. In the late 1960s a program was developed that could create a betting strategy for the card game poker.
Explosive growth occurred in the 1970s. Progress was made in scene analysis, that is, the interpretation of visual input. A method was developed for representing actions in a less ambiguous way, advancing the capabilities of natural-language-understanding programs. A rudimentary speech recognition system, capable of identifying spoken words, was also developed. The first knowledge-based expert program was written in 1967. Called Dendral, it could predict the structures of unknown chemical compounds based on routine analyses. More sophisticated rule-based expert systems were subsequently developed, notably the Mycin program. It uses rules derived from the medical domain to reason backward (deduce) from a list of symptoms to a particular disease. Many expert systems of similar design have been constructed. In the field of strategy, chess-playing programs were devised by the 1990s that could compete successfully at the level of grand masters of the game.
Recent Trends in Artificial Intelligence
A large number of problems in the AI field have been associated with robotics (see AUTOMATA, THEORY OF; ROBOT). In addition to the mechanical problems of getting a machine to make very precise or delicate movements, there is the problem of determining the sequence of movements. Much work in this area involves problem solving and planning.
One of the most useful ideas that have emerged from AI research is that facts and rules (declarative knowledge) can be represented separately from decision-making algorithms (procedural knowledge). This realization has had a profound effect both on the way that scientists approach problems and on the engineering techniques used to produce AI systems. By adopting a particular procedural element, called an inference engine, development of an AI system is reduced to obtaining and codifying sufficient rules and facts from the problem domain. This codification process is called knowledge engineering. Reducing system development to knowledge engineering has opened the door to non-AI practitioners. In addition, business and industry have been recruiting AI scientists to build expert systems.
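The separation described above can be illustrated with a minimal sketch: one generic inference engine (the procedural part) is driven by rule sets that are plain data (the declarative part), so moving to a new problem domain means writing new rules, not a new program. The domains and rules below are invented for illustration.

```python
# Sketch of the declarative/procedural split: a reusable inference
# engine (procedural knowledge) driven by rule sets that are plain
# data (declarative knowledge). Domains and rules are invented.

def forward_chain(facts, rules):
    """Generic engine: repeatedly fire any rule whose premises all hold."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# "Knowledge engineering" for a toy medical domain: only rules are written.
medical_rules = [
    (["fever", "rash"], "suspect_measles"),
]

# A toy prospecting domain: the same engine, a different rule set.
geology_rules = [
    (["quartz_veins", "sulfide_odor"], "suspect_ore_body"),
]

print(forward_chain({"fever", "rash"}, medical_rules))
print(forward_chain({"quartz_veins"}, geology_rules))
```

The engine never changes between the two runs; only the declarative rule sets do, which is exactly what reduces system development to knowledge engineering.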
An impediment to building even more useful systems is the problem of input, in particular, the feeding of raw data into an AI system. To this end, much effort is currently being devoted to speech recognition, character recognition, machine vision, and natural-language processing. A second problem is in obtaining knowledge. It has proved arduous to extract knowledge from an expert and then code it for use by the machine. To this end, efforts are currently being devoted to learning and knowledge acquisition.
Following the idea of representing knowledge declaratively, the field of logic programming emerged, most notably with the computer language PROLOG. PROLOG is essentially an inference engine that searches declared facts and rules to confirm or deny a hypothesis. A drawback of PROLOG is that its built-in inference strategy cannot be altered by the programmer. In the 1980s the Japanese government began building powerful computers with hardware that makes logical inferences in the manner of PROLOG. They refer to such machines as fifth-generation computers.
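Confirming or denying a hypothesis in the manner of PROLOG can be sketched as backward chaining: the engine tries to prove a goal by matching it against declared facts, or by finding a rule whose conclusion matches the goal and recursively proving that rule's premises. This toy handles only fixed terms, with none of PROLOG's unification of variables; all names are illustrative.

```python
# A rough sketch of PROLOG-style backward chaining over ground terms.
# Facts are atoms; each rule pairs a conclusion with its premises.

facts = {"bird(robin)"}
rules = [("flies(robin)", ["bird(robin)"])]  # (conclusion, premises)

def prove(goal, facts, rules):
    """Confirm a hypothesis from facts, or via a rule whose premises hold."""
    if goal in facts:
        return True
    for conclusion, premises in rules:
        if conclusion == goal and all(prove(p, facts, rules) for p in premises):
            return True
    return False

print(prove("flies(robin)", facts, rules))  # hypothesis confirmed
```

Asked whether the robin flies, the engine works backward from the hypothesis to the declared fact, the reverse of the forward, data-driven firing of rules.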
Hopes for breakthroughs in artificial intelligence hinge on a number of factors: the growing number of scientists involved in AI, the continuing identification of useful techniques, and advances in computer science, including parallel processing.