Neural network architectures

Last revised by Edward Chmiel on 23 Jun 2019

Artificial neural networks can be broadly divided into two main classes of architecture: feedforward and recurrent neural networks.

Feedforward neural networks are most readily conceptualised in terms of 'layers'. The first layer of the network is simply the inputs of each sample, and each neuron in each successive layer is connected to a set of neurons in the preceding layer.

To compute the function represented by the network, we calculate the activation of each neuron by applying a non-linear activation function (typically a sigmoid function) to the weighted sum of the activations of the connected neurons in the preceding layer. These weights represent the information stored by the neural network and are the parameters that we update during training. The activations of the final layer are the output of the network.
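
As a minimal sketch of this computation (assuming NumPy; the layer sizes, weights and inputs below are purely illustrative, not part of the original article), a forward pass through a small two-layer network might look like:

import numpy as np

def sigmoid(z):
    # standard logistic sigmoid, a common non-linear activation function
    return 1.0 / (1.0 + np.exp(-z))

def forward_layer(activations, weights, biases):
    # each neuron's activation is the sigmoid of the weighted sum of the
    # preceding layer's activations, plus a per-neuron bias term
    return sigmoid(weights @ activations + biases)

# illustrative sizes: 4 inputs -> 3 hidden neurons -> 1 output neuron
rng = np.random.default_rng(0)
x = rng.normal(size=4)                      # first layer: the raw inputs
w1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
w2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

hidden = forward_layer(x, w1, b1)           # hidden layer activations
output = forward_layer(hidden, w2, b2)      # final layer: the network's output

During training, it is the weight matrices and bias vectors (w1, b1, w2, b2 here) that are updated; the activation function and connectivity pattern stay fixed.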

The different choices of how we connect neurons in successive layers to the preceding layers strongly influence the capabilities of the network and constitute what we normally refer to as the 'architecture' of the network. Common architectures include fully connected neural networks and convolutional neural networks.
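
A rough sketch of this difference in connectivity (again assuming NumPy, with shapes chosen purely for illustration): a fully connected layer gives every neuron its own weight for every input, whereas a convolutional layer applies one small, shared set of weights across local windows of the input.

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=8)                      # a 1D input of 8 values

# fully connected: every output neuron sees every input value
w_full = rng.normal(size=(4, 8))            # 4 neurons x 8 inputs = 32 weights
fully_connected = w_full @ x                # 4 outputs

# convolutional: each output sees only a small local window of the input,
# and all windows share the same 3 kernel weights
kernel = rng.normal(size=3)
convolved = np.convolve(x, kernel, mode="valid")   # 6 outputs from 3 shared weights

The weight sharing in the convolutional case is what makes such networks well suited to inputs, such as images, where the same local pattern may appear anywhere.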
