Architecture of Neural Networks

Feed-forward networks:

Feed-forward ANNs (figure) allow signals to travel one way only, from input to output. There is no feedback (no loops): the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs, and they are used extensively in pattern recognition. This type of organisation is also referred to as bottom-up or top-down.

[Figure: a feed-forward network]

Feedback networks:

Feedback networks (figure) can have signals travelling in both directions by introducing loops into the network. Feedback networks are very powerful and can become extremely complicated. They are dynamic: their 'state' changes continuously until they reach an equilibrium point, where they remain until the input changes and a new equilibrium must be found. Feedback architectures are also referred to as interactive or recurrent, although the latter term is often used to denote feedback connections in single-layer organisations.

[Figure: a feedback network]
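As a rough illustration of this settling behaviour, the sketch below repeatedly applies the same weighted update to a small state vector until it stops changing, i.e. until an equilibrium point is reached. The unit count, random weights and tanh update rule are illustrative assumptions, not details taken from any particular feedback network.

    import numpy as np

    rng = np.random.default_rng(0)

    n = 5                                     # number of units (illustrative)
    W = rng.normal(scale=0.15, size=(n, n))   # feedback (loop) weights between the units
    u = rng.normal(size=n)                    # a fixed external input pattern
    x = np.zeros(n)                           # initial state of the units

    for step in range(1000):
        x_new = np.tanh(W @ x + u)            # each unit's new state depends on every other unit
        converged = np.max(np.abs(x_new - x)) < 1e-6
        x = x_new
        if converged:                         # the state has stopped changing: an equilibrium point
            print(f"settled after {step + 1} updates")
            break

    print("equilibrium state:", np.round(x, 3))

If the external input u were changed, the loop would have to run again to find a new equilibrium, which is exactly the behaviour described above.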

Network Layers

The commonest type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units.

The activity of the input units represents the raw information that is fed into the network. The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.

The behaviour of the output units depends on the activity of the hidden units and the weights between the hidden and output units.

This simple type of network is interesting because the hidden units are free to construct their own representations of the input. The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents.
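A minimal sketch of this input-to-hidden-to-output flow is given below. The layer sizes, random weights and sigmoid activation are illustrative assumptions rather than anything prescribed by the article; the point is simply that each hidden activity is a weighted combination of the input activities, and each output activity is a weighted combination of the hidden activities.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)

    n_in, n_hidden, n_out = 4, 3, 2            # illustrative layer sizes

    W_ih = rng.normal(size=(n_hidden, n_in))   # weights: input -> hidden
    W_ho = rng.normal(size=(n_out, n_hidden))  # weights: hidden -> output

    x = np.array([0.2, 0.9, 0.1, 0.5])         # activity of the input units (the raw information)

    hidden = sigmoid(W_ih @ x)                 # hidden activity: inputs and input-hidden weights
    output = sigmoid(W_ho @ hidden)            # output activity: hidden units and hidden-output weights

    print("hidden activities:", np.round(hidden, 3))
    print("output activities:", np.round(output, 3))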

We also distinguish single-layer and multi-layer architectures. The single-layer organisation, in which all units are connected to one another, constitutes the most general case and has more potential computational power than hierarchically structured multi-layer organisations. In multi-layer networks, units are often numbered by layer instead of following a global numbering.

[Figure: a multi-layer network]

Learning in Artificial Neural Networks

The memorization of patterns and the subsequent response of the network can be categorized into two general paradigms:

Associative mapping, in which the network learns to produce a particular pattern on the set of output units whenever another particular pattern is applied to the set of input units. Associative mapping can generally be broken down into two mechanisms:

  • Auto-association: an input pattern is associated with itself, so the states of the input and output units coincide. This is used to provide pattern completion, i.e. to produce a whole pattern whenever a portion of it, or a distorted version of it, is presented (see the sketch below).
  • Hetero-association: the network stores pairs of patterns, building an association between two different sets of patterns. It is related to two recall mechanisms:
    • Nearest-neighbour recall, where the output pattern produced corresponds to the stored input pattern closest to the pattern presented, and
    • Interpolative recall, where the output pattern is a similarity-dependent interpolation of the stored patterns corresponding to the pattern presented.

Yet another paradigm, which is a variant of associative mapping, is classification, i.e. when there is a fixed set of categories into which the input patterns are to be classified.
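As a rough sketch of auto-association and pattern completion, the fragment below uses a Hopfield-style network (a standard technique chosen here purely for illustration; the patterns, network size and update rule are assumptions, not taken from the article) to store two binary patterns with a Hebbian-style rule and then recover one of them from a distorted copy.

    import numpy as np

    # Store binary (+1/-1) patterns with a Hebbian-style outer-product rule.
    patterns = np.array([
        [ 1,  1, -1, -1,  1, -1],
        [-1,  1,  1, -1, -1,  1],
    ])
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)                       # no self-connections

    def recall(x, steps=10):
        # Auto-associative recall: iterate until the pattern completes itself.
        x = x.copy()
        for _ in range(steps):
            x = np.sign(W @ x)
        return x

    probe = patterns[0].copy()
    probe[0] = -probe[0]                         # distort one unit of a stored pattern
    print("distorted:", probe)
    print("recalled :", recall(probe))           # settles back to patterns[0]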

Regularity detection, in which units learn to respond to particular properties of the input patterns. Whereas in associative mapping the network stores the relationships among patterns, in regularity detection the response of each unit has a particular 'meaning'. This type of learning mechanism is essential for feature discovery and knowledge representation.

Every neural network possesses knowledge, which is contained in the values of the connection weights. Modifying the knowledge stored in the network as a function of experience implies a learning rule for changing the values of the weights.

Information is stored in the weight matrix W of a neural network. Learning is the determination of the weights. According to the way learning is performed, we can distinguish two major categories of neural networks:

  • Fixed networks, in which the weights cannot be changed, i.e. dW/dt = 0. In such networks, the weights are fixed a priori according to the problem to be solved.
  • Adaptive networks, which are able to change their weights, i.e. dW/dt ≠ 0.

All learning methods used for adaptive neural networks can be classified into two major categories:

  • Supervised learning, which incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be. Paradigms of supervised learning include error-correction learning, reinforcement learning and stochastic learning. An important issue concerning supervised learning is the problem of error convergence, i.e. the minimization of the error between the desired and computed unit values. The aim is to determine a set of weights which minimizes the error. One well-known method, common to many learning paradigms, is least mean square (LMS) convergence (see the sketch after this list).
  • Unsupervised learning, which uses no external teacher and is based only upon local information. It is also referred to as self-organisation, in the sense that it self-organises the data presented to the network and detects their emergent collective properties. Paradigms of unsupervised learning include Hebbian learning and competitive learning.
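To make the distinction concrete, here is a minimal illustrative sketch: a supervised LMS (error-correction) update nudges a single unit's weights to reduce the error between the desired and computed outputs, while an unsupervised Hebbian update relies only on locally available activities. The data, learning rates and single-unit model are assumptions made for the example, not details from the article.

    import numpy as np

    rng = np.random.default_rng(0)

    X = rng.normal(size=(100, 3))                 # input patterns
    d = X @ np.array([0.5, -1.0, 2.0])            # desired responses supplied by the "teacher"

    # Supervised learning: LMS (least mean square) error-correction rule.
    w = np.zeros(3)
    eta = 0.05                                    # learning rate (illustrative)
    for x, target in zip(X, d):
        y = w @ x                                 # computed output of the unit
        w += eta * (target - y) * x               # shrink the error between desired and computed values
    print("LMS weights:", np.round(w, 2))         # moves towards [0.5, -1.0, 2.0]

    # Unsupervised learning: a plain Hebbian rule, driven only by local activity.
    w_hebb = rng.normal(scale=0.1, size=3)
    for x in X:
        y = w_hebb @ x
        w_hebb += 0.01 * y * x                    # strengthen weights where input and output are co-active
    print("Hebbian weights:", np.round(w_hebb, 2))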

We say that a neural network learns off-line if the learning phase and the operation phase are distinct. A neural network learns on-line if it learns and operates at the same time. Usually, supervised learning is performed off-line, whereas unsupervised learning is performed on-line.

Characteristics of ANNs

Artificial Neural Networks are characterized by

  • Collective and synergistic computation (or neurocomputing)
    • Program is executed collectively and synergistically.
    • Operations are decentralized.
  • Robustness
    • Operation is insensitive to scattered failures.
    • Operation is insensitive to partial inputs or outputs with inaccuracies.
  • Learning
    • Network makes associations automatically.
    • Program is created by the network during learning.
    • Network adapts with or without a teacher; no programmer intervention.
  • Asynchronous operation
    • Biological neural nets have no explicit clock to synchronize their operation; a number of ANN implementations, however, do require a clock.

Applications in Mechanical Engineering

Applications in Robotics

There are many complex applications for neural nets in robotics, including control of drive mechanisms and manipulators, vision and other sensing systems, and intelligent power supplies. However, to illustrate a very simple "neural brain" for a robot, consider a mobile vehicle with two bump sensors on the front. We could connect these up to the sort of network shown in the figure below.

[Figure: a controller for a simple robot]

Such a network could be trained using back-propagation to manoeuvre out of the way of obstacles in the robot's path. The training set would comprise inputs corresponding to the various situations in which the robot might find itself, and the targets would be the required outputs to the motors that allow the robot to avoid obstacles. Some extra circuitry in the form of timers would probably also be required (unless the bump sensors were long "whiskers"), because to get out of corners we would want to turn off the left or right motor for a short time to allow the vehicle to turn, and then switch it back on. Neurons which produce a waveform or "spiky" output can overcome this problem.
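A sketch of what such training might look like is given below. The network size, the bump-sensor-to-motor training patterns and the learning rate are all illustrative assumptions; the point is simply that back-propagation adjusts the weights until each input (a bump-sensor state) produces its target output (a motor command).

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Illustrative training set: bump-sensor inputs -> required motor outputs
    # (a bump on the left stops the right motor so the vehicle swings away, and vice versa).
    X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)  # [left bump, right bump]
    T = np.array([[1, 1], [1, 0], [0, 1], [1, 0]], dtype=float)  # [left motor, right motor]

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(scale=0.5, size=(2, 3)), np.zeros(3)     # input -> hidden weights
    W2, b2 = rng.normal(scale=0.5, size=(3, 2)), np.zeros(2)     # hidden -> output weights
    eta = 0.5                                                    # learning rate (illustrative)

    for epoch in range(10000):
        H = sigmoid(X @ W1 + b1)                 # hidden activities
        Y = sigmoid(H @ W2 + b2)                 # motor outputs
        dY = (Y - T) * Y * (1 - Y)               # back-propagate the output error
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= eta * H.T @ dY
        b2 -= eta * dY.sum(axis=0)
        W1 -= eta * X.T @ dH
        b1 -= eta * dH.sum(axis=0)

    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)))     # should reproduce the targets T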

An Artificial Neural Network System for Diagnosing Gas Turbine Engine Fuel Faults

The US Army Ordnance Center & School and Pacific Northwest Laboratory are developing a turbine engine diagnostic system for the M1A1 Abrams tank. The system employs Artificial Neural Network (ANN) technology to perform diagnosis and prognosis of the tank's AGT-1500 gas turbine engine. The work covers the design and prototype development of the ANN component of the diagnostic system, referred to as "TEDANN" (Turbine Engine Diagnostic Artificial Neural Networks).

Power Generation & Transmission

GNOCIS (Generic NOx Control Intelligent System) was developed by Power Technology for use as an on-line advisory or closed-loop supervisory system for NOx emissions. The purpose of the software is to adapt to long-term changes in plant condition, enabling better optimization of the plant's operating mode. Trials on a 500 MWe unit at Kingsnorth power station were claimed to identify major annual efficiency savings from reduced carbon-in-ash, worth more than £100k per annum, while keeping NOx emissions within prescribed limits. GNOCIS has since been applied to a range of boiler sizes, with several closed-loop applications and substantial efficiency gains.

In a separate project, jointly funded by BCURA (British Coal Utilisation Research Association) and the Department of Trade and Industry, a hybrid neural-network-based controller was developed, in collaboration with James Proctor Ltd, for a 3.7 MWth (MW thermal) chain-grate stoker-fired shell boiler at Her Majesty's Prison Garth, Leyland. This demonstrated 10% lower NOx emissions without sacrificing carbon-in-ash losses, together with a 10% reduction in CO emissions in steady state, and gains also when load-following. The code was implemented in Matlab, and expert knowledge played a key role in integrating the neural network module into an efficient control-loop structure (http://www.dti.gov.uk/ent/coal).

Process Industries

A manufacturing area where neural network control has been successfully applied for some time is steel rolling mills. Having developed prototype neural-network models for strip temperature and rolling force at Hoesch's hot strip mill in Dortmund in 1993, Siemens has since applied this technology at 40 rolling mills world-wide. Claimed gains include 30% better accuracy in rolling-force modelling, with the prediction improvements worth US$200k per annum in material costs.

Transport Industries

Aircraft icing is a major hazard for which weather forecasters must advise pilots. The Experimental Forecast Facility at the Aviation Weather Centre in Kansas City, Missouri, is currently evaluating NNICE, a neural network-based icing intensity predictive forecast tool.

Also in the US, at NASA's Dryden Flight Research Centre at Edwards, a joint programme with Boeing is testing neural network damage-recovery control systems for military and commercial aircraft. The purpose of the research is to add a 'significant margin of safety' to fly-by-wire control when the aircraft sustains major equipment or systems failure, ranging from the inability to use flaps to encountering extreme icing. Example aircraft where this approach can be applied are the Boeing 777 and the current test plane, an F-15 with canards and pitch/yaw vectoring nozzles.

At Long Beach airport, inductive loops are used to identify aeroplanes at specific locations on the runways, using Loop Technology (LOT). The potential for using low-cost surface sensors to avoid incursion incidents relies on neural networks to classify loop induction signatures for accurate aircraft type identification.

In the UK, vibration-analysis monitoring of jet engines is the focus of a research project involving Rolls-Royce and the Department of Engineering at Oxford University. This has produced a diagnostic system, Quince, which combines the outputs from neural networks with template matching and with statistical and signal-processing methods, processing them with a small set of rules. The software is designed for the pass-off tests of jet engines, has a tracking facility to suggest the most likely fault, and centres on the use of novelty detection to identify unusual vibration signatures. According to the web site, Quince is now being licensed to Rolls-Royce under the terms of a licensing agreement signed in May 1998.

A second condition-monitoring application between Rolls-Royce and Oxford University involves predicting a thermocouple reading of the exhaust-gas temperature in aero-derivative gas turbines with power outputs of 3-50 MW. High prediction errors are indicative of developing faults, and it is claimed on the web site that the model is capable of identifying real faults several hours before they are detected by the control-system logic that shuts down the engine.
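The underlying idea can be sketched very roughly as below; the model, numbers and threshold are invented for illustration and are not taken from the Rolls-Royce project. A predictor trained on healthy-engine data estimates the expected exhaust-gas temperature from other measurements, and a steadily growing residual between measured and predicted values is flagged as a possible developing fault well before any hard shut-down limit is reached.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for a model trained on healthy-engine data: a fixed map from two
    # other measurements to the expected exhaust-gas temperature (invented numbers).
    def expected_egt(shaft_speed, fuel_flow):
        return 300.0 + 0.8 * shaft_speed + 5.0 * fuel_flow

    threshold = 15.0                                   # residual (deg C) above which a fault is flagged

    for hour in range(12):
        shaft_speed = 100.0 + rng.normal()
        fuel_flow = 20.0 + rng.normal()
        drift = 2.0 * hour                             # a slowly developing fault raises the measured EGT
        measured_egt = expected_egt(shaft_speed, fuel_flow) + rng.normal() + drift
        residual = measured_egt - expected_egt(shaft_speed, fuel_flow)
        if abs(residual) > threshold:
            print(f"hour {hour}: residual {residual:+.1f} degC -> possible developing fault")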

A third collaborative application listed on the web site performs comprehensive whole-engine data analysis and interpretation, with attached confidence levels, by fusing diverse sensor readings (performance parameters, vibration spectra and oil-debris information) to produce 'reliable indications of departures from normality'. The aim is real-time in-flight monitoring for the new Rolls-Royce Trent 900 engine. Technically, the project combines standard observers, i.e. Kalman filters, with more advanced signal-processing techniques and neural networks, as well as other elements of computational intelligence.

Conclusion

Neural networks do not perform miracles. But if used sensibly they can produce some amazing results.

The development of true neural networks is a fairly recent event, and one that has met with success. Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced. A number of scientists argue that consciousness is a 'mechanical' property and that 'conscious' neural networks are a realistic possibility.

The future of neural networks is wide open, and may lead to many answers and/or new questions, such as:

  • Is it possible to create a conscious machine?
  • What rights do these computers have?
  • How does the human mind work?
  • What does it mean to be human?
