A SHORT INTRODUCTION TO NEURAL NETWORKS
WHAT IS AN ARTIFICIAL NEURAL NETWORK?
Artificial neural networks were first developed in the 1950s. They are an attempt to simulate the network of neurons that makes up a human brain, so that a computer can learn things and make decisions in a humanlike manner. Artificial neural networks are software simulations: they are created by programming regular computers to behave as though they are interconnected brain cells.
A typical neural network has anything from a few dozen to hundreds, thousands, or even millions of artificial neurons, called units, arranged in a series of layers, each of which connects to the layers on either side.
Some of them, known as input units, are designed to receive various forms of information from the outside world that the network will attempt to learn about, recognize, or otherwise process.
Other units sit on the opposite side of the network and signal how it responds to the information it's learned; those are known as output units. In between the input units and output units are one or more layers of hidden units, which, together, form the majority of the artificial brain.

Most neural networks are fully connected, which means each hidden unit and each output unit is connected to every unit in the layers on either side. Each connection between one unit and another is represented by a number called a weight, which can be either positive (if one unit excites another) or negative (if one unit suppresses or inhibits another). The higher the weight, the more influence one unit has on another. (This corresponds to the way actual brain cells trigger one another across tiny gaps called synapses.)
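To make this structure concrete, here is a minimal sketch in Python with NumPy; the layer sizes, weight values, and example input are illustrative assumptions rather than anything taken from a specific system.

    import numpy as np

    # A minimal sketch of a small, fully connected network:
    # 3 input units, 4 hidden units, 2 output units.
    rng = np.random.default_rng(0)

    # One weight per connection. A positive weight excites the next unit,
    # a negative weight inhibits it, and a larger magnitude means more influence.
    w_input_hidden = rng.normal(size=(3, 4))    # every input unit -> every hidden unit
    w_hidden_output = rng.normal(size=(4, 2))   # every hidden unit -> every output unit

    # The signal arriving at one hidden unit is the weighted sum of the
    # activations of all the input units connected to it.
    inputs = np.array([1.0, 0.0, 0.5])
    signal_into_first_hidden_unit = inputs @ w_input_hidden[:, 0]
    print(signal_into_first_hidden_unit)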
Most ANNs contain some form of 'learning rule' that modifies the weights of the connections according to the input patterns the network is presented with. In a sense, ANNs learn by example just as their biological counterparts do: a child learns to recognize dogs from examples of dogs.
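As one simple illustration of a learning rule, the following sketch applies the classic delta rule to a single unit; the toy input, target, and learning rate are invented for the example.

    import numpy as np

    # Delta rule for a single unit: nudge each weight in proportion to the
    # error and to the input that fed into it. The input, target, and
    # learning rate below are made-up values for illustration.
    inputs = np.array([0.5, -1.0, 0.25])
    target = 1.0
    weights = np.zeros(3)
    learning_rate = 0.1

    for _ in range(20):
        output = weights @ inputs                  # the unit's current response
        error = target - output                    # how far off it is
        weights += learning_rate * error * inputs  # adjust each weight a little
    print(weights @ inputs)  # the response moves toward the target of 1.0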
HOW DOES AN ANN LEARN?
Learning in a neural network is an iterative process of “going and returning” through the layers of neurons. The “going” is a forward propagation of information, and the “return” is a backpropagation of information.
The first phase, forward propagation, occurs when the network is exposed to the training data: the input signal is passed forward through the layers until the output units produce a result.
For a neural network to learn, there has to be an element of feedback involved—just as children learn by being told what they're doing right or wrong. The feedback process is called backpropagation. Backpropagation involves comparing the output a network produces with the output it was meant to produce, and using the difference between them to modify the weights of the connections between the units in the network, working from the output units through the hidden units to the input units—going backward, in other words.
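The following sketch puts the two phases together for a tiny network learning the XOR pattern; the layer sizes, learning rate, and number of passes are illustrative choices, not a prescribed recipe.

    import numpy as np

    # Forward propagation and backpropagation for a tiny network
    # (2 inputs -> 4 hidden units -> 1 output) learning the XOR pattern.
    rng = np.random.default_rng(42)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden weights and biases
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights and biases
    lr = 2.0

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for epoch in range(10000):
        # Forward propagation: the signal flows input -> hidden -> output.
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Compare what the network produced with what it was meant to produce.
        error = output - y

        # Backpropagation: push the error back through the network and use it
        # to adjust the weights, working from the output layer toward the input.
        grad_output = error * output * (1 - output)
        grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)

        W2 -= lr * hidden.T @ grad_output
        b2 -= lr * grad_output.sum(axis=0)
        W1 -= lr * X.T @ grad_hidden
        b1 -= lr * grad_hidden.sum(axis=0)

    print(output.round(3))  # typically close to the XOR targets 0, 1, 1, 0

Each pass through the loop is one “going and return”: a forward propagation followed by a backpropagation that nudges every weight toward producing the desired output.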

Once the network has been trained with enough learning examples, it reaches a point where you can present it with an entirely new set of inputs it's never seen before and see how it responds.
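As a concrete stand-in for that final step, the short example below uses scikit-learn's MLPClassifier (rather than a hand-written network) with a handful of invented training points, then asks the trained network about an input it has never seen.

    from sklearn.neural_network import MLPClassifier

    # Train on a few labelled points (invented for this example), then ask
    # the network to classify an input it has never seen. Its answer depends
    # entirely on what the training examples taught it.
    X_train = [[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8]]
    y_train = [0, 0, 1, 1]

    net = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)

    print(net.predict([[0.95, 0.9]]))  # a previously unseen input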

CRITICISMS OF ANNS
A common criticism of neural networks is that they require too much training for real-world operation. Human beings can learn abstract relationships in a few trials, whereas popular neural network applications lack a mechanism for learning abstractions through explicit, verbal definition and work best when there are thousands, millions, or even billions of training examples. In problems where data are limited, standard neural networks are often not an ideal solution.
D-NETWORKS: DIVERA'S SOLUTION
Divera's D-networks are an advanced response to these shortcomings of neural networks. They are not data-hungry: they can learn from a few examples and generalise successfully. D-networks are not dense networks; their initial architecture is designed to enable the system to learn from few examples. D-networks were developed to build decision engines for simulated customers, and they are unique in the current landscape of artificial intelligence.