Artificial neural networks (ANNs) are computational models used in machine learning, computer science, and other research disciplines. They are loosely inspired by biological neural networks: a network consists of simple processing elements analogous to neurons in the brain. ANNs are becoming increasingly popular because they work well for many tasks (e.g. classification or segmentation).

Neural network models differ in how they process signals. The essential distinction is whether feedback is present: based on that, we can distinguish two broad types of ANNs – feedforward networks and feedback networks. Two other interesting and popular types are SOM (Self-Organizing Map) networks and time delay neural networks.


Feedforward Neural Networks

In general, these are networks in which there are no loops, so there is no feedback. Signals are transmitted from the input layer through the hidden layers (if present) to the output layer, travelling in one direction only: from input to output. Since there is no feedback, the output of a layer never affects that same layer.

The construction of these networks is relatively simple: the units are arranged in layers, and each unit connects only to units in the next layer. There are no connections within the same layer, no connections back to previous layers, and no connections that skip a layer.
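This layered, one-directional flow can be sketched in a few lines of NumPy. The layer sizes and random weights below are purely illustrative, not part of any particular model:

```python
import numpy as np

def feedforward(x, weights, biases):
    """Propagate an input through the layers in one direction only.

    Each layer's output feeds the next layer; there are no loops,
    no connections within a layer, and no skipped layers.
    """
    activation = x
    for W, b in zip(weights, biases):
        activation = np.tanh(W @ activation + b)  # one layer feeds the next
    return activation

# Hypothetical 3-2-1 network: input -> hidden -> output.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(2, 3)), rng.normal(size=(1, 2))]
biases = [np.zeros(2), np.zeros(1)]
output = feedforward(np.array([0.5, -0.2, 0.1]), weights, biases)
```

Note that the loop visits each layer exactly once, which is precisely the "no feedback" property described above.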


Feedback Networks

In feedback networks, the signal can travel in both directions, which is why they are sometimes called bi-directional networks. In such networks, reverse connections are possible: the output of a neuron can be sent back to neurons in previous layers.

Feedback networks are very powerful and can become very complicated. They are also dynamic: their state changes continuously until an equilibrium point is found. The network remains in balance until new data is entered, at which point a new equilibrium must be found. Feedback networks are often referred to as interactive or recurrent.
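The feedback loop can be sketched with a simple recurrent update rule (an Elman-style step with hypothetical dimensions and random weights, not a full trained model): the previous state is fed back into the unit at every step, so the state keeps evolving as inputs arrive.

```python
import numpy as np

def rnn_step(x, h, W_xh, W_hh, b):
    """One update of a recurrent unit: the previous state h feeds back in."""
    return np.tanh(W_xh @ x + W_hh @ h + b)

rng = np.random.default_rng(1)
W_xh = rng.normal(size=(4, 3))  # input-to-hidden weights (illustrative sizes)
W_hh = rng.normal(size=(4, 4))  # hidden-to-hidden feedback weights
b = np.zeros(4)

h = np.zeros(4)  # initial state
for x in rng.normal(size=(5, 3)):   # a sequence of 5 inputs
    h = rnn_step(x, h, W_xh, W_hh, b)  # state keeps changing over time
```

The `W_hh @ h` term is the reverse connection: each step's output is routed back into the network as part of the next step's input.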

Feedback networks are very well suited to recognizing speech or handwriting. In handwriting recognition, for example, performance can improve when the network takes the following letters and the surrounding context into account.


SOM (Self-Organizing Map)

In SOM-type networks, the organization of the map emerges from the data itself, without the developer's involvement: SOM networks rely on unsupervised training. They provide a topology-preserving mapping from a high-dimensional input space to the map units. The map units, or neurons, form a two-dimensional grid, so the mapping proceeds from a multidimensional space onto a plane. Topology preservation means that the mapping keeps the relative distances between points: points located close to each other in the input space are mapped to nearby map units in the SOM.

SOM can therefore serve as a cluster analysis tool for high-dimensional data. SOM can also generalize: the network can recognize or characterize input data it has never encountered before. A new input is assimilated with the map unit to which it is mapped.


Time Delay Neural Network (TDNN)

In TDNN networks, the input data is sequential, and each unit sees the current input together with a fixed, manually chosen number of delayed (past) inputs. All neurons detecting a given feature share the same weights, so they detect that same feature at different positions in the sequence. TDNN units recognize such position sequences and usually form part of a larger pattern recognition system. Combining past and present inputs makes the TDNN approach distinctive.

TDNNs have several advantages. Thanks to the reduced number of weights from weight sharing, they require fewer examples in the training set and the learning process is faster. Another important property is invariance under time or space translation.
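Weight sharing across delays can be sketched as a one-dimensional convolution over the time axis. The signal and kernel below are purely illustrative; the point is that one shared set of weights is reused at every time step:

```python
import numpy as np

def tdnn_layer(sequence, kernel):
    """Slide one shared set of weights over the time axis.

    Every output combines the current input with a window of delayed
    (past) inputs; because the same weights are reused at every
    position, the layer detects the same feature regardless of when
    it occurs (translation invariance in time).
    """
    delay = len(kernel)  # size of the current-plus-delayed window
    return np.array([
        np.dot(kernel, sequence[t:t + delay])
        for t in range(len(sequence) - delay + 1)
    ])

signal = np.array([0.0, 1.0, 0.0, 0.0, 1.0, 0.0])
kernel = np.array([1.0, -1.0])  # responds to a falling edge, wherever it occurs
features = tdnn_layer(signal, kernel)
```

The two pulses in `signal` produce the same detector response at two different positions, which illustrates why a TDNN needs fewer weights and fewer training examples than a network with a separate weight for every time step.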


These examples do not fully cover the topic of neural network architectures. Different models are used depending on the application, and artificial neural networks remain a relatively young field of computational science in which new solutions and approaches are still emerging.

