Adaptive Logic Networks - Online Article

Introduction

Adaptive Logic Networks technology has recently emerged as an effective alternative to artificial neural networks for machine learning tasks. This technical overview describes the advantages of Adaptive Logic Networks technology and its diverse applications in analysis, prediction, and control. To motivate the discussion of Adaptive Logic Networks, a brief background on the goals and benefits of machine learning, or neurocomputing, is presented first.

Neurocomputing

Programmed computing has dominated information processing for the last 50 years. When information-processing functions are complex, it is appealing to consider the idea of training a system instead of programming it. This is especially true for problems in which a large number of variables must be considered as part of the decision process.

Traditional computer applications use the programmed computing approach. Solutions are devised by designing algorithms that solve the problem and then implementing them in software. The objective is to deliver quality applications that solve specific user needs on time and within budget. Programmed computing works well when there is a well-understood process or set of rules for solving the given problem.

Providing solutions to novel problems when the algorithm is unknown can involve a costly and time-consuming development cycle. Quality software development can require a rigorous cycle of design, validation, and incremental improvement, making it an expensive and lengthy process.

An alternative approach to programmed computing, particularly well suited to problems in areas such as sensor processing, pattern recognition, data analysis (e.g., data mining), prediction, and control, is neurocomputing. Neurocomputing refers to systems that learn the relationships between data through a process of training. Neural networks are the primary information-processing structure used in neurocomputing. The benefits of neurocomputing include, and are often measured in terms of:

  1. A reduction in the time it takes to solve the problem when compared to the programmed computing approach;
  2. A reduction in the quantity of software needed to solve the problem; and
  3. A practical way to solve problems that may be too complex for programmed computing.

Typical applications of neurocomputing technology are often grouped into one of three domains: analysis, prediction, and control.

Analysis

Data analysis applications are used to discover relationships and recognize patterns within data. Data mining and pattern classification are typical analysis applications.

Data Mining

Corporate America is accumulating vast quantities of data describing its operations and storing this information in "data warehouses." Understanding the relationships in this data makes it possible to build applications that can forecast sales, predict a competitor's bid, identify new markets, and detect fraud.

Pattern Classification

Patterns in data can be detected and classified based on a sequence of input measurements. Applications include optical character recognition, sensor data classification, face recognition, trend analysis, and signal detection.

Prediction

Prediction, or forecasting, is the ability of a system to predict future values and outcomes based on current input values. Applications include predictive maintenance and load forecasting.

Predictive Maintenance

Based on data gathered over time on the health of a piece of machinery (including breakdowns), a predictor is used to schedule machine maintenance before the next breakdown occurs.

Load Forecasting

Historical load data is used to create a model that can forecast future load values. Successful applications include electrical power load forecasting and telecommunications switch load forecasting.

Control

The control of machines or processes often requires high-speed computations and function inversion (the ability of the model to provide the required input given a desired output). Applications include automotive control systems and computer-controlled prostheses.

Vehicle Active Suspension Systems

Computer controlled active suspension systems allow a vehicle to adaptively adjust the firmness of the suspension system and improve handling.

Walking Aids

Computer assisted walking aids for spinal cord injured persons are used to control walking gait by detecting the user’s intended action.

An Effective Alternative to Neural Networks

Although neurocomputing benefits are many, critics have justifiably cited the "black-box" solution approach as the primary reason for not using the technology in many practical or safety-critical applications. An immensely successful neurocomputing technology that does not suffer from the "black-box" criticism is Dendronic Decisions’ Adaptive Logic Networks technology.

Adaptive Logic Networks Technical Overview

What It Is

An Adaptive Logic Network is a form of neurocomputing capable of modeling complex non-linear systems using piecewise-linear surfaces.

The inputs to an Adaptive Logic Network may be data from large corporate databases, observations recorded by a scientist, or real-time measurements from a manufacturing process. The outputs of an Adaptive Logic Network may be used for analysis, prediction, or real-time control of machines and processes.

Linear extrapolation techniques are often the basis of traditional data modeling tools used for prediction, but they may not be able to adequately deal with data from the real world, which is often non-linear, noisy, and contains contradictory values. Adaptive Logic Networks are a non-linear data modeling technology that overcomes these limitations.
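To make the piecewise-linear idea concrete, here is a minimal Python sketch (an illustration only, not a Dendronic Decisions API) of how taking the maximum of several linear pieces produces a piecewise-linear approximation of a non-linear curve:

```python
# Approximating y = x^2 on [-1, 1] with a MAX of linear pieces.  Each
# piece is the tangent line at a knot; the max of the pieces is a
# convex piecewise-linear approximation of the curve.

def make_tangent(x0):
    """Tangent line to y = x^2 at x0: y = 2*x0*x - x0**2."""
    return lambda x: 2 * x0 * x - x0 * x0

pieces = [make_tangent(x0) for x0 in (-1.0, -0.5, 0.0, 0.5, 1.0)]

def pwl(x):
    """The piecewise-linear model: the max over all linear pieces."""
    return max(p(x) for p in pieces)

# Exact at the knots, close in between; more pieces mean a tighter fit.
for x in (-1.0, -0.25, 0.0, 0.6, 1.0):
    print(f"x={x:+.2f}  true={x*x:.4f}  model={pwl(x):.4f}")
```

A max alone can only produce convex shapes; combining max and min operators, as described below, yields arbitrary piecewise-linear surfaces.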

How It Works

Adaptive Logic Networks learn relationships and patterns by using a supervised learning algorithm that examines data in a training set consisting of examples of inputs and their associated outputs. During the learning phase, an Adaptive Logic Network modifies its internal structure to reflect the relationship between the inputs and the outputs in the training set. The accuracy of an Adaptive Logic Network is checked after the learning cycle is complete by using a separate set of inputs and outputs called the validation set.
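The train-then-validate workflow can be sketched as follows; the single linear piece and the plain gradient-descent loop below are simplified stand-ins for the actual ALN training algorithm:

```python
import random

random.seed(0)

# Toy data: noisy samples of y = 3x + 1.
data = [(x, 3 * x + 1 + random.gauss(0, 0.05))
        for x in [i / 20 for i in range(40)]]
train, valid = data[::2], data[1::2]   # hold out a separate validation set

# Fit a single linear piece by gradient descent (illustrative only; a
# real ALN grows a whole tree of such pieces during training).
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    for x, y in train:
        err = (w * x + b) - y        # model error on this example
        w -= lr * err * x            # adjust slope
        b -= lr * err                # adjust intercept

# Check accuracy on data the model never saw during training.
mse = sum(((w * x + b) - y) ** 2 for x, y in valid) / len(valid)
print(f"learned w={w:.2f} b={b:.2f}  validation MSE={mse:.4f}")
```

Holding back a validation set in this way guards against a model that merely memorizes the training data.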

Reinforcement learning is a recent addition to the algorithms developed for Adaptive Logic Networks. This type of learning is used when the desired output for a given input is not known during a sequence of actions that is taking place. During the reinforcement learning process, the only feedback given to the system is a rough indicator of performance, such as "good", "bad", "too slow", or "too fast." This type of feedback is similar to the way humans learn.

The internal structure of an Adaptive Logic Network is very simple: it is composed of one or more linear surfaces joined by simple operators. Fortunately, ordinary computers perform linear calculations and simple comparison operations very quickly. This typically eliminates the need for special hardware to solve real-world problems.
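A sketch of such a structure, assuming the simple operators are MIN and MAX as in standard ALN formulations (a hypothetical representation, not the product's internals); note that evaluation needs only multiply-adds and comparisons:

```python
# Linear surfaces sit at the leaves; MIN/MAX operators sit at the
# internal nodes.  Evaluation is just dot products and comparisons.

def linear(weights, bias):
    return ("leaf", weights, bias)

def evaluate(node, x):
    if node[0] == "leaf":
        _, w, b = node
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    _, op, children = node
    values = [evaluate(c, x) for c in children]
    return max(values) if op == "max" else min(values)

# A non-convex example in one variable: a "hat" min(x + 1, -x + 1),
# clipped below at zero by an outer MAX.
tree = ("node", "max", [
    ("node", "min", [linear([1.0], 1.0), linear([-1.0], 1.0)]),
    linear([0.0], 0.0),
])

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(x, evaluate(tree, [x]))
```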

Seven Key Features of Adaptive Logic Networks

Safety

Safety and mission-critical applications require that system responses be understood for all possible inputs. Since an Adaptive Logic Network model is composed of linear surfaces that are well understood mathematically, and since the Dendronic Learning Engine allows us to control how fast the output changes with changes in any input, proofs about the accuracy of an Adaptive Logic Network model for all inputs are feasible without requiring exhaustive testing.
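The kind of guarantee this permits can be illustrated with a toy model (illustrative only, not the DLE's proof machinery): if every weight in every linear piece is bounded, a worst-case bound on how much the output can change follows algebraically, with no exhaustive testing, because MAX and MIN never increase slopes.

```python
# Two linear pieces, each as (weights, bias); the model is their MAX.
pieces = [([0.8, -0.3], 1.0), ([0.5, 0.2], 0.0)]

def model(x):
    return max(sum(w * xi for w, xi in zip(ws, x)) + b for ws, b in pieces)

# Provable bound: |model(x) - model(x2)| <= sum_i W_i * |x_i - x2_i|,
# where W_i is the largest |weight| on input i over all pieces.
W = [max(abs(ws[i]) for ws, _ in pieces) for i in range(2)]

x, x2 = [1.0, 2.0], [1.1, 1.8]
bound = sum(Wi * abs(a - b) for Wi, a, b in zip(W, x, x2))
actual = abs(model(x) - model(x2))
print(f"actual change {actual:.3f} <= proved bound {bound:.3f}")
```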

Speed

Adaptive Logic Networks are very fast because evaluating a trained network typically involves only simple comparison operations and a limited number of linear surface calculations. This speedup is analogous to the alpha-beta pruning algorithm used in game-tree search. Decision trees are very fast, and ALNs can be converted into ALN decision trees, or DTREEs. (Note: these are not optimized in the present version of the Dendronic Learning Engine, or DLE.)
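The alpha-beta-style cutoff can be sketched as follows (hypothetical code, not the DLE's optimized DTREE implementation): while computing a MIN over MAX nodes, a MAX node can stop evaluating its linear pieces as soon as its running value reaches the MIN's current best, since that subtree can no longer be selected.

```python
def eval_max(leaves, x, cutoff, stats):
    """MAX over linear leaves, stopping early once >= cutoff."""
    best = float("-inf")
    for w, b in leaves:
        stats["evals"] += 1
        best = max(best, sum(wi * xi for wi, xi in zip(w, x)) + b)
        if cutoff is not None and best >= cutoff:
            break  # the enclosing MIN will never select this subtree
    return best

def eval_min(max_nodes, x):
    """MIN over MAX nodes, passing the running minimum as a cutoff."""
    stats = {"evals": 0}
    best = None
    for leaves in max_nodes:
        v = eval_max(leaves, x, best, stats)
        best = v if best is None else min(best, v)
    return best, stats["evals"]

net = [
    [([1.0], 0.0), ([2.0], -1.0), ([0.5], 0.5)],   # first MAX node
    [([-1.0], 5.0), ([-2.0], 8.0), ([0.5], 4.0)],  # second MAX node
]
value, leaf_evals = eval_min(net, [0.0])
print(value, "computed with", leaf_evals, "of 6 leaf evaluations")
```

Here the second MAX node is abandoned after a single leaf, since its first piece already exceeds the running minimum.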

Scalability

An Adaptive Logic Network can handle complex problems with many input variables. Instead of adapting a fixed internal architecture, an Adaptive Logic Network's architecture can grow dynamically and efficiently in response to the complexity of the training data. The structure of an Adaptive Logic Network efficiently represents the relationship between your problem's inputs and outputs. The speedup described under Speed above also scales very well.

Broad Domain Applicability

Adaptive Logic Networks have successful applications in many problem domains. An investment in Adaptive Logic Network technology can pay for itself repeatedly, greatly reducing the complexity of the tool set required to build your applications involving machine intelligence.

Embedding Expert Knowledge

Learning systems often have difficulties when there is a lack of historical data for training, or when the data contains too much noise. Adaptive Logic Networks can compensate for these problems during the learning phase by constraining their internal structure. These constraints are often based on physical laws and rules of thumb that dictate that certain relationships in the data must hold. Capturing common sense or even expert knowledge of a problem domain can compensate for sparse and noisy data, often resulting in a faster learning phase. Rules are often of the form: the greater this input is, the greater the output must be, all other inputs being equal.
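One way such a rule can be enforced, sketched below with a single linear piece (hypothetical code, not the DLE API), is to clip the weight on the constrained input to be non-negative after every update. Because MIN and MAX preserve monotonicity, a network built from such pieces respects the rule everywhere, no matter how sparse or noisy the data.

```python
import random

random.seed(1)

# Sparse, noisy data drawn from an increasing function, y = 2x.
data = [(x, 2 * x + random.gauss(0, 0.5)) for x in (0.0, 0.1, 0.9, 1.0)]

# Fit one linear piece, projecting the weight back onto w >= 0 after
# every update so "greater x implies greater output" always holds.
w, b, lr = -1.0, 0.0, 0.05
for _ in range(500):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err
        w = max(w, 0.0)      # project onto the monotonicity constraint

print(f"constrained fit: w={w:.2f} (guaranteed >= 0), b={b:.2f}")
```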

Function Inversion

The ability of a model to provide the required input given a desired output can be very useful in the real-time control of machines and processes. The process of exchanging the role of one input variable and the output variable, or function inversion, is facilitated by Adaptive Logic Networks because the comparison operations that combine linear surfaces in an Adaptive Logic Network preserve a mathematical property called monotonicity. Provided that the output of an ALN is monotonic in one of the inputs, the internal structure of the ALN can be rearranged so that the output of the network effectively trades places with that input.
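Monotonicity is what makes inversion possible: an increasing function can always be solved for its input. The toy sketch below inverts a monotonic piecewise-linear model by bisection (illustrative only; an ALN instead rearranges its internal tree, but it relies on the same property).

```python
def f(x):
    """An increasing piecewise-linear model: max of two rising lines."""
    return max(0.5 * x, 2.0 * x - 3.0)

def invert(f, y, lo, hi, tol=1e-9):
    """Find x in [lo, hi] with f(x) = y, assuming f is increasing."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid     # target lies to the right
        else:
            hi = mid     # target lies to the left
    return (lo + hi) / 2

x = invert(f, 1.0, 0.0, 10.0)
print(f"f({x:.6f}) = {f(x):.6f}")
```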

Ease of Understanding

The learning phase of an Adaptive Logic Network is controlled by parameters directly related to the properties of the data (weights on variables correspond to rates of change of the output with respect to the inputs). Users need only be familiar with their data, not with the way the learning algorithms work. The user can concentrate on solving the problem, rather than on becoming an expert in the underlying technology.
