An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system: a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems. ANNs are self-learning mechanisms which do not require the traditional skills of a programmer; like all tools, however, they demand that the network architect learn how to use them. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurones, and the same is true of ANNs. The brain basically learns from experience, and it is this feature we seek to emulate in neural networks.

ANNs can be used as operation models because they can handle strong non-linearities, a large number of parameters, and missing information. The applications of neural networks are diverse, and include language processing, character recognition, image and data compression, pattern recognition, signal processing and servo control in robotics. ANNs have also found applications in machining (e.g. in planning, in setting of tool and machine parameters, and in monitoring and control).
Definition: An Artificial Neural Network (ANN) is an information processing paradigm, inspired by the way biological nervous systems process information, in which a large number of highly interconnected processing elements (neurones) work in unison to solve specific problems.
The basic attributes of neural networks may be divided into two categories: architecture and neurodynamics.
Architecture: Defines the network structure i.e., the number of artificial neurons in the network and their interconnectivity.
Neurodynamics: Defines their properties i.e., how the neural network learns, recalls, associates, and continuously compares new information with existing knowledge.
The Artificial Neural Network (Neural Net or just ANN for short) is a collection of simple processors connected together. Each processor can only perform a very straightforward mathematical task, but a large network of them has much greater capabilities and can do many things which one on its own can't. Figure shows the basic idea.
Neural network simulations appear to be a recent development. However, this field was established before the advent of computers, and has survived at least one major setback and several eras. In the early 1940s, Warren McCulloch and Walter Pitts published a seminal paper titled "A Logical Calculus of the Ideas Immanent in Nervous Activity". In it, they proposed a mathematical model of a neuron, which could perform computations. This artificial neuron, or neurode (some call them neurones), was a simple device which could receive input from other such devices.
The neurode's output was either a 1 or a 0, reflecting the all-or-none theory of biological neurons. When the total input reached a certain critical level, the neurode would send its output to other neurodes with which it was connected. This method is called threshold logic. This theory was so influential that this type of neurode is called the McCulloch-Pitts neuron. Some modern neural networks use neurodes which are essentially extensions of the McCulloch-Pitts neuron.
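Threshold logic can be sketched in a few lines of code. The following is a minimal illustration, not from the original paper; the function name `mcculloch_pitts` and the particular weights and thresholds are illustrative choices.

```python
# A minimal sketch of a McCulloch-Pitts neuron (threshold logic).
# The weights and threshold values below are illustrative, not canonical.

def mcculloch_pitts(inputs, weights, threshold):
    """Fire (output 1) if the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights, a threshold of 2 realises logical AND of two inputs,
# and a threshold of 1 realises logical OR.
print(mcculloch_pitts([1, 1], [1, 1], 2))  # AND(1,1) -> 1
print(mcculloch_pitts([1, 0], [1, 1], 2))  # AND(1,0) -> 0
print(mcculloch_pitts([1, 0], [1, 1], 1))  # OR(1,0)  -> 1
```

Choosing different weights and thresholds lets the same device compute different logical functions, which is the sense in which the McCulloch-Pitts neuron "could perform computations".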
The concept of neural networks has been around since the early 1950s, but was mostly dormant until the mid 1980s. One of the first neural networks developed was the perceptron. Created by a psychologist named Frank Rosenblatt in 1958, the perceptron was a very simple system which used interconnected neurodes to analyze data, usually visual patterns. Rosenblatt published a series of papers which generated a great deal of interest in the perceptron. Many people researched and developed further the perceptron model, even implementing it in hardware. The perceptron was widely and unrealistically praised by researchers. Rosenblatt and other scientists claimed that eventually, with enough complexity and speed, the perceptron would be able to solve almost any problem.
This was far from the truth. In 1969, Marvin Minsky and Seymour Papert published an influential book titled "Perceptrons". In it, they proved several theorems which showed that the perceptron could never solve a class of simple problems, and hinted at several other serious, fundamental flaws in the model. After "Perceptrons", scientists working on neural network type devices found it almost impossible to receive funding.
Neural Network: An Analog to the Brain
The most basic element of the human brain is a specific type of cell which, unlike the rest of the body, doesn't appear to regenerate. Because this type of cell is the only part of the body that isn't slowly replaced, it is assumed that these cells are what provides us with our abilities to remember, think, and apply previous experiences to our every action. A neuron basically consists of three sections: cell body, dendrites and the axon, each with separate but complementary functions. Dendrites receive the signals from other cells at connection points called synapses, and from there, the signals are passed on to the cell body, where they are averaged with other signals to produce an output. If the average of these signals is sufficiently large, the cell "fires," producing a pulse down its axon which is passed on to succeeding cells. Therefore, each neuron receives an "electronic" input from other neurons at its dendrites. It is imperative to note that the neuron fires only if the total signal received at the cell body exceeds a certain level.
The Artificial Neurons used in an ANN are the electronic equivalents of the neuron. A set of inputs are applied, each representing the output of another neuron. Each input is multiplied by a corresponding weight analogous to a synaptic strength, and all the weighted inputs are then summed to determine the activation level of the neuron.
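The weighted-sum activation described above can be sketched as follows. This is a minimal illustration; the function names and the example weights are assumptions chosen for the demonstration.

```python
# A sketch of the artificial neuron described above: each input is
# multiplied by a synaptic weight, the products are summed to give the
# activation level, and the activation is compared with a threshold.

def neuron_activation(inputs, weights):
    """Return the activation level: the sum of the weighted inputs."""
    return sum(x * w for x, w in zip(inputs, weights))

def neuron_output(inputs, weights, threshold=0.5):
    """Fire (1) if the activation reaches the threshold, else output 0."""
    return 1 if neuron_activation(inputs, weights) >= threshold else 0

# Two active inputs with weights 0.5 and 0.25 give activation 0.75.
print(neuron_activation([1, 0, 1], [0.5, 0.9, 0.25]))  # 0.75
print(neuron_output([1, 0, 1], [0.5, 0.9, 0.25]))      # 1 (0.75 >= 0.5)
```

Each weight plays the role of a synaptic strength: increasing a weight increases the influence of the corresponding input neuron on the activation level.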
When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.
An Engineering Approach
An artificial neuron is a device with many inputs and one output. The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong in the taught list of input patterns, the firing rule is used to determine whether to fire or not.
The firing rule is an important concept in neural networks and accounts for their high flexibility. A firing rule determines how one calculates whether a neuron should fire for any input pattern. It relates to all the input patterns, not only the ones on which the node was trained.
A simple firing rule can be implemented by using the Hamming distance technique. The rule goes as follows:
Take a collection of training patterns for a node, some of which cause it to fire (the 1-taught set of patterns) and others which prevent it from doing so (the 0-taught set). Then the patterns not in the collection cause the node to fire if, on comparison, they have more input elements in common with the 'nearest' pattern in the 1-taught set than with the 'nearest' pattern in the 0-taught set. If there is a tie, then the pattern remains in the undefined state.
For example, a 3-input neuron is taught to output 1 when the input (X1, X2 and X3) is 111 or 101 and to output 0 when the input is 000 or 001. Then, before applying the firing rule, the truth table is:

X1:   0    0    0    0    1    1    1    1
X2:   0    0    1    1    0    0    1    1
X3:   0    1    0    1    0    1    0    1
OUT:  0    0   0/1  0/1  0/1   1   0/1   1
As an example of the way the firing rule is applied, take the pattern 010. It differs from 000 in 1 element, from 001 in 2 elements, from 101 in 3 elements and from 111 in 2 elements. Therefore, the 'nearest' pattern is 000, which belongs in the 0-taught set. Thus the firing rule requires that the neuron should not fire when the input is 010. On the other hand, 011 is equally distant from two taught patterns that have different outputs, and thus its output stays undefined (0/1).
By applying the firing rule to every column, the following truth table is obtained:

X1:   0    0    0    0    1    1    1    1
X2:   0    0    1    1    0    0    1    1
X3:   0    1    0    1    0    1    0    1
OUT:  0    0    0   0/1  0/1   1    1    1
The difference between the two truth tables is called the generalisation of the neuron. Therefore the firing rule gives the neuron a sense of similarity and enables it to respond 'sensibly' to patterns not seen during training.
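The Hamming-distance firing rule described above can be sketched in code. This is an illustrative implementation of the rule as stated in the text; the function names `hamming` and `firing_rule` are assumptions.

```python
# A sketch of the Hamming-distance firing rule: for an untaught pattern,
# fire if it is closer to the 1-taught set than to the 0-taught set, stay
# silent if the reverse holds, and report "0/1" (undefined) on a tie.

def hamming(a, b):
    """Number of positions at which two bit-strings differ."""
    return sum(x != y for x, y in zip(a, b))

def firing_rule(pattern, one_taught, zero_taught):
    d1 = min(hamming(pattern, p) for p in one_taught)  # nearest 1-taught
    d0 = min(hamming(pattern, p) for p in zero_taught) # nearest 0-taught
    if d1 < d0:
        return 1
    if d0 < d1:
        return 0
    return "0/1"  # tie: output remains undefined

one_taught = ["111", "101"]   # patterns taught to fire
zero_taught = ["000", "001"]  # patterns taught not to fire

# Reproduces the post-generalisation truth table from the example.
for p in ["000", "001", "010", "011", "100", "101", "110", "111"]:
    print(p, firing_rule(p, one_taught, zero_taught))
```

Running this over all eight input patterns reproduces the second truth table in the example: 010 resolves to 0, 110 to 1, and the two tied patterns 011 and 100 remain undefined.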