Nowadays, the computer is the most common gadget in every household. With advances in programming and new software, it is now possible to simulate and design almost any system in the virtual world.
Artificial neural networks are models of the brain and nervous system. The difference between a conventional system and a neural network lies in how data are processed: a neural network takes in data, processes it, and produces an output, much like the functioning of the brain.
A network is composed of many artificial neurons linked in a particular network architecture, which transforms the input signals into meaningful outputs.
The Origins
Neural networks are largely inspired by biological systems. Animals react to external conditions and adapt to their environment; these changes form patterns of learning and are the basis of learned behavior in animals. The concept had its origins in the study of learned behavior in pigeons.
An appropriate model produces similar responses to artificial stimuli, and it is relatively easy to mimic the behavior and functionality of neurons. Information is transmitted at the synapses: electrical signals pass along the axon of the pre-synaptic neuron and trigger the release of neurotransmitter at the synapse. The integration of both excitatory and inhibitory signals determines whether the signal is transmitted onward.
Functions
• Improving the precision of robotic movements.
• Recognizing facial features.
• Predicting responses to artificial stimuli.
Basics of Artificial Neural Networks
There are two fundamental components of biological neural networks: neurons and synapses. In an artificial network, neurons are treated as nodes and synapses as weights.
In biological systems, learning at the neural level occurs by changing synaptic strengths, eliminating some synapses, and creating new ones.
Learning helps animals adapt better to their surroundings, which increases their chances of survival. Learning processes also help to optimize the use of resources.
Learning follows Hebb's rule: synchronous activation increases the strength of a synapse, while asynchronous activation decreases it. Maintaining synaptic strength costs energy, so biological systems need to use these processes optimally to minimize energy consumption.
The principle of energy minimization suggests the use of mathematical optimization techniques to work out how the weights of the synaptic connections between neurons can be varied optimally. The weight settings determine the nature of the network.
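To make Hebb's rule concrete, here is a minimal sketch of a Hebbian weight update in Python. The function name, the learning rate eta, and the toy activity values are illustrative choices for this article, not part of any standard library.

import numpy as np

def hebbian_update(w, pre, post, eta=0.01):
    # Hebb's rule: synchronous (correlated) activity strengthens a
    # connection; activity of opposite sign weakens it.
    return w + eta * np.outer(post, pre)

# Toy example: two pre-synaptic units driving one post-synaptic unit.
w = np.zeros((1, 2))
pre = np.array([1.0, 0.0])    # only the first input is active
post = np.array([1.0])        # the output fires at the same time
for _ in range(10):
    w = hebbian_update(w, pre, post)
print(w)   # the co-active connection has grown; the silent one is still 0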
The McCulloch-Pitts model
Neurons receive and process electrical signals to give a signal output. The simplest artificial neuron model resembles this: spike trains are represented as rates, and synaptic strengths are depicted as synaptic weights. An excitatory input contributes the positive product of the incoming signal rate and its synaptic weight; an inhibitory input contributes the negative product of the spike rate and its synaptic weight.
The output is a function of the input vector and the synaptic weights:

y = f(x, w)

where y is the output, x is the vector of inputs, and w is the vector of synaptic weights.
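The sketch below shows one way such a unit, y = f(x, w), could look in Python; the threshold value and the input rates are illustrative, and the function name is our own.

import numpy as np

def mp_neuron(x, w, threshold=0.5):
    # Weighted sum of the inputs: excitatory inputs (positive weights)
    # add to the activation, inhibitory inputs (negative weights) subtract.
    activation = np.dot(w, x)
    # The unit fires only if the integrated signal reaches the threshold.
    return 1 if activation >= threshold else 0

x = np.array([1.0, 1.0, 1.0])     # incoming spike rates
w = np.array([0.4, 0.3, -0.2])    # two excitatory synapses, one inhibitory
print(mp_neuron(x, w))            # 0.4 + 0.3 - 0.2 = 0.5 -> fires (prints 1)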
Multi-Layer Perceptron (MLP) networks are built up by first describing a single neuron, extending this to multiple neurons, and then stacking neurons to form the different layers of the network. These layers are cascaded to form the final network architecture. In a typical study, the transfer function is chosen by the researcher, and the weights w and biases b are adjusted to meet some learning goal, that is, to shape the input-to-output relation into a specific pattern. Each MLP therefore consists of at least three layers: one input layer, one output layer, and one or more hidden layers.
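A minimal sketch of such a cascade in Python might look as follows. The 3-4-1 layout, the random initialization, and the choice of tanh as the transfer function are all arbitrary choices of ours, exactly as the text notes the researcher is free to make.

import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b, f=np.tanh):
    # One layer: weights w and bias b, followed by the transfer function f.
    return f(w @ x + b)

# Randomly initialized weights (w) and biases (b) for a 3-4-1 network:
# one input layer, one hidden layer, one output layer.
w1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input -> hidden
w2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # hidden -> output

x = np.array([0.5, -0.1, 0.8])   # input vector
hidden = layer(x, w1, b1)        # hidden layer activations
y = layer(hidden, w2, b2)        # cascading the layers gives the output
print(y)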
Feed-forward networks offer unidirectional information transfer with a distributed representation of the data. The data are processed in parallel, and backpropagation is used for training. Backpropagation requires a training set with input and output parameters: small random weights are used in the beginning and are then adjusted using the error values. Feed-forward networks are not exact representations of biological networks, but they are good for demonstration and learning.
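The following sketch trains a tiny feed-forward network with backpropagation, as described above: a training set of input/output pairs, small random initial weights, and repeated adjustment of the weights from the error. The XOR task, the network size, the learning rate, and the iteration count are illustrative choices.

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training set with input and output parameters: the XOR problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Small random weights are used in the beginning.
w1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
w2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)   # hidden -> output
eta = 0.5

for _ in range(10000):
    # Forward pass through the cascaded layers.
    h = sigmoid(X @ w1 + b1)
    y = sigmoid(h @ w2 + b2)
    # Backward pass: propagate the output error back toward the input.
    d_out = (y - T) * y * (1 - y)            # error signal at the output
    d_hid = (d_out @ w2.T) * h * (1 - h)     # error signal at the hidden layer
    # Adjust the weights using the error values.
    w2 -= eta * h.T @ d_out;  b2 -= eta * d_out.sum(axis=0)
    w1 -= eta * X.T @ d_hid;  b1 -= eta * d_hid.sum(axis=0)

print(np.round(y, 2))   # approaches the targets [0, 1, 1, 0]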
Recurrent networks
Unlike in an FFN, information can flow in all directions in this network architecture, and the network has temporal dynamics: its state evolves over time. Learning techniques other than backpropagation, such as Hebbian learning, artificial evolution, and reinforcement learning, can be used. Recurrent networks are better models of biological systems than FFNs.
Elman nets
Elman nets are FFNs with partial recurrency: they incorporate a memory, or sense of time, into the traditional FFN.
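A rough sketch of a single Elman time step follows; the layer sizes, random weights, and input sequence are illustrative. The memory comes from "context" units that hold a copy of the previous hidden state.

import numpy as np

rng = np.random.default_rng(2)
n_in, n_hidden = 3, 5

w_in = rng.normal(size=(n_hidden, n_in))       # input -> hidden
w_ctx = rng.normal(size=(n_hidden, n_hidden))  # context -> hidden

def elman_step(x, context):
    # The hidden layer sees the new input plus a copy of its own
    # previous state (the context units): this is the sense of time.
    h = np.tanh(w_in @ x + w_ctx @ context)
    return h, h.copy()   # the new state becomes the next step's context

context = np.zeros(n_hidden)   # empty memory at the first time step
for x in [rng.normal(size=n_in) for _ in range(4)]:
    h, context = elman_step(x, context)
print(h)   # the final state depends on the whole input sequence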
More sophisticated network architectures, such as Bayesian networks and radial basis function networks, as well as hybrid approaches, are now widely used to address further research questions. Hopfield networks are based on the Hebb rule and are widely used when the task involves recalling associated images. Central pattern generators are used to create rhythmic patterns such as the heartbeat and locomotion.
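To make the Hopfield idea concrete, here is a small sketch in which the weights are set with the Hebb rule and a corrupted pattern is recalled; the stored patterns and the network size are toy examples.

import numpy as np

# Two patterns to store, using +1/-1 units.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])

# Hebbian storage: units that are active together get positive weights.
w = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(w, 0)   # no self-connections

def recall(state, steps=10):
    # Repeatedly update every unit from its weighted input.
    state = state.copy()
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state

noisy = np.array([1, -1, -1, -1, 1, -1])   # first pattern, one unit flipped
print(recall(noisy))                        # recovers [ 1 -1  1 -1  1 -1]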