This thesis provides a brief overview of the basic principles underpinning neural networks. The Fundamental Element, often simply called a node, is the building block of conventional neural networks. Many nodes in parallel form a layer, and layers are in turn cascaded to form a network.
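As a sketch only (the names, shapes, and activation function are illustrative and not taken from the thesis), a node can be expressed as a weighted sum passed through a nonlinearity, and a layer as many such nodes evaluated together:

    import numpy as np

    def node(inputs, weights, bias):
        # one fundamental element: weighted sum, bias, nonlinearity
        return np.tanh(np.dot(inputs, weights) + bias)

    def layer(inputs, weight_matrix, biases):
        # many nodes in parallel: one row of weights per node
        return np.tanh(weight_matrix @ inputs + biases)

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)                                # input vector
    W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # layer of 4 nodes
    W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)  # layer of 2 nodes
    single = node(x, W1[0], b1[0])            # one node of the first layer
    output = layer(layer(x, W1, b1), W2, b2)  # cascaded layers form a network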
Neural networks are trained on a training data set containing data vectors and, in the case of supervised learning, corresponding labels. The data vectors are fed into the network, and an error is calculated from the label and the network output. This error is used to adjust the network weights in an iterative process called error backpropagation.
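A minimal sketch of such a training loop, assuming a tiny two-layer network, a mean-squared error, plain gradient descent, and a toy XOR data set (all illustrative choices, not the setup used in this thesis):

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # data vectors
    y = np.array([[0], [1], [1], [0]], dtype=float)              # labels (XOR)

    W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)
    lr = 0.5  # learning rate (illustrative value)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        # forward pass: feed the data vectors into the network
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # error from the labels and the network output
        err = out - y

        # backward pass: propagate the error and adjust the weights
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)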
Convolution is a method of extracting salient characteristics from incoming data into a feature map. It is used in general image processing, but it also helps Convolutional Neural Networks reduce their computational load.
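As an illustrative sketch, a 2D convolution slides a small kernel over the input and writes the responses into a feature map; the toy image and edge-detecting kernel below are assumed examples:

    import numpy as np

    def convolve2d(image, kernel):
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        feature_map = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                # response of the kernel at this position
                feature_map[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return feature_map

    image = np.zeros((6, 6))
    image[:, 3:] = 1.0                       # toy image with a vertical edge
    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], float)  # kernel sensitive to vertical edges
    print(convolve2d(image, sobel_x))        # feature map highlighting the edge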
Spiking Neural Networks (SNNs) are inspired by biological neural structures. They encode information in the timing of spikes and form connections through plasticity.
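One common, simple spiking model is the leaky integrate-and-fire neuron; the sketch below assumes that model, with illustrative time step, leak, and threshold values:

    import numpy as np

    def lif_neuron(input_current, dt=1.0, tau=10.0, threshold=1.0):
        # membrane potential integrates the input, leaks toward zero,
        # and emits a timed spike when it crosses the threshold
        v, spikes = 0.0, []
        for t, i_in in enumerate(input_current):
            v += dt * (-v / tau + i_in)      # leaky integration
            if v >= threshold:
                spikes.append(t * dt)        # record the spike time
                v = 0.0                      # reset after firing
        return spikes

    current = np.full(100, 0.15)             # constant input current
    print(lif_neuron(current))               # resulting spike train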
At the hardware level, neural networks benefit from existing Single Instruction, Multiple Data (SIMD) hardware blocks that accelerate vector operations. Specialized chips, tailored to neural network needs, offer large numbers of Fused Multiply-Add units and fast memory access. Within each layer, neural networks are straightforward to compute in parallel, resulting in high throughput and low latency on dedicated hardware. SNNs, being relative newcomers to the commercial market, currently lack a comparably strong hardware and software ecosystem.
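As an illustrative sketch of why such hardware helps: a layer's computation reduces to a matrix-vector product built from multiply-accumulate operations, which SIMD and Fused Multiply-Add units execute in parallel. The explicit loop and the single vectorized call below compute the same result:

    import numpy as np

    rng = np.random.default_rng(2)
    W = rng.normal(size=(256, 128))          # weights of one layer
    x = rng.normal(size=128)                 # input vector

    # scalar view: one multiply-accumulate per weight
    y_loop = np.zeros(256)
    for i in range(256):
        for j in range(128):
            y_loop[i] += W[i, j] * x[j]      # fused multiply-add pattern

    # vectorized view: the whole layer as one parallel matrix-vector product
    y_vec = W @ x
    assert np.allclose(y_loop, y_vec)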