Figure 2: Neural networks. | Nature Physics


From: Learning phase transitions by confusion


a, A single artificial neuron, with n inputs labelled x1 through xn and a single output y. The output of the neuron is computed by applying the activation function f to the weighted input \(a = \sum_{i=1}^{n} w_i x_i = \mathbf{w} \cdot \mathbf{x}\). b, A neural network, consisting of many artificial neurons arranged in layers. In this particular architecture, called a feedforward network, neurons within the same layer are not connected. Between the first (input) layer and the last (output) layer we use a single hidden layer (a shallow network, as opposed to a deep network with many hidden layers). The neurons in the first layer have no inputs; instead, their outputs are fixed to the values of the input data, so they serve as dummy neurons. The entire network can be regarded as a highly nonlinear function g(x; W) that takes the input data x and feeds them forward to produce the output. The goal of a neural-network-based approach is to optimize the choice of the weights W such that the network approximates the desired function.
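The computation described in the caption can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the choice of tanh as the activation function f and the layer sizes are assumptions for the example, as the caption does not specify them.

```python
import numpy as np

def neuron(x, w, f=np.tanh):
    """Single artificial neuron (panel a): weighted input a = w . x,
    output y = f(a). tanh is an assumed activation for illustration."""
    return f(np.dot(w, x))

def shallow_network(x, W1, W2, f=np.tanh):
    """Feedforward network g(x; W) with one hidden layer (panel b).
    The first layer simply passes the input data x through
    (the 'dummy' neurons of the caption)."""
    hidden = f(W1 @ x)     # hidden-layer outputs
    return f(W2 @ hidden)  # output-layer outputs

# Illustrative sizes: 3 inputs, 4 hidden neurons, 2 outputs.
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))
y = shallow_network(x, W1, W2)
```

Optimizing the weights W1 and W2 (for example by gradient descent on a loss function) is what the caption refers to as making the network approximate the desired function.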
