Feedforward Neural Networks#

Hide code cell content
# pip install packages that are not in Pyodide
%pip install ipympl==0.9.3
%pip install seaborn==0.12.2

# Import the necessary libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from mude_tools import neuralnetplotter, draw_neural_net
from cycler import cycler
import seaborn as sns

%matplotlib widget

# Set the color scheme
sns.set_theme()
colors = [
    "#0076C2",
    "#EC6842",
    "#A50034",
    "#009B77",
    "#FFB81C",
    "#E03C31",
    "#6CC24A",
    "#EF60A3",
    "#0C2340",
    "#00B8C8",
    "#6F1D77",
]
plt.rcParams["axes.prop_cycle"] = cycler(color=colors)

Introduction#

Recall that in the previous pages we have applied linear models with basis functions

\[ y(x,\mathbf{w}) = \sum_{j=0}^M w_j \phi_j(x) = \mathbf{w}^T \boldsymbol{\phi} (x). \]

Here \(\mathbf{w}\) are the flexible parameters, and \(\boldsymbol{\phi}\) the basis functions.

Because a linear model is linear in its parameters \(\mathbf{w}\), we could solve for \(\bar{\mathbf{w}}\) directly

\[ \bar{\mathbf{w}} = \left(\boldsymbol{\Phi}^T \boldsymbol{\Phi} \right)^{-1} \boldsymbol{\Phi}^T \mathbf{t}, \]

where \(\boldsymbol{\Phi}\) is the collection of basis functions evaluated at all data points. The basis functions need to be chosen a priori for this approach. When the phenomenon to be modeled is complex, relying on pre-defined basis functions might not give sufficient accuracy. We can overcome this issue with a more flexible model. Aside from the pragmatic strategy of increasing the number of basis functions, we can also achieve more flexibility by replacing the basis functions with parametric functions. In this page we will dive into one variant of this concept, namely neural networks. The underlying problem stays the same: we are trying to learn a process based on a limited number of noisy observations \(\mathcal{D}=\{\mathbf{X}, \mathbf{t}\}\). Following decision theory, we need to minimize the mean squared error loss function

\[ E_D = \frac{1}{N} \sum_{n=1}^N \left(t_n - y(x_n, \mathbf{w}) \right)^2 \]

where \(y(x, \mathbf{w})\) now comes from a neural network. As always we set up our ground truth and observation models:

Hide code cell source
# The true function relating t to x
def f_truth(x, freq=2, **kwargs):
    # Return a sine with a frequency of f
    return np.sin(x * freq)


# The data generation function
def f_data(epsilon=0.7, N=100, **kwargs):
    # Apply a seed if one is given
    if "seed" in kwargs:
        np.random.seed(kwargs["seed"])

    # Get the minimum and maximum
    xmin = kwargs.get("xmin", 0)
    xmax = kwargs.get("xmax", 2 * np.pi)

    # Generate N evenly spaced observation locations
    x = np.linspace(xmin, xmax, N)

    # Generate N noisy observations (1 at each location)
    t = f_truth(x, **kwargs) + np.random.normal(0, epsilon, N)

    # Return both the locations and the observations
    return x, t

Neural network architecture#

A neural network consists of neurons connected by weights, with information flowing from the input neurons towards the output neurons. In supervised learning, the states of the input and output neurons are known during training. Additional neurons placed between the inputs and outputs form the so-called hidden layers. Neurons are organized into layers, where all neurons of one layer depend on (at least) the neurons of the previous layer.

The state of a neuron is determined by a linear combination of states \(z\) from the previous layer with their connecting weights \(w\)

\[ a^{(l)}_{j} = \sum_{i}^{D} w_{ji}^{(l)} z_{i}^{(l-1)} + w_{j0}^{(l)} \]

where \(w_{j0}^{(l)}\) are so-called biases, allowing the model to have an offset. Make sure not to confuse this quantity with the model bias from the bias-variance tradeoff discussion. This linear combination of states is followed by a nonlinear transformation with an activation function \(h(\cdot)\):

\[ z^{(l)}_{j} = h(a^{(l)}_{j}). \]
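To make these two equations concrete, here is a minimal sketch of a single layer, with arbitrary example values for the previous-layer states, weights and biases (none of these come from a trained network) and a tanh activation:

# Arbitrary example: 3 neurons in the previous layer, 5 in the current layer
z_prev = np.array([0.2, -1.0, 0.5])   # states z_i^(l-1)
W = np.random.normal(size=(5, 3))     # weights w_ji^(l)
b = np.random.normal(size=5)          # biases w_j0^(l)

a = W @ z_prev + b                    # pre-activations a_j^(l)
z = np.tanh(a)                        # activated states z_j^(l) = h(a_j^(l))
print(z)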

In the plot below, we can see the identity (or linear), sigmoid, hyperbolic tangent (tanh), and rectified linear unit (relu) activation functions applied on an arbitrary state \(z\) in the \([-4,4]\) range.

Hide code cell source
x = np.linspace(-4, 4, 25)

# Compute activation functions
identity = x
sigmoid = 1 / (1 + np.exp(-x))
tanh = np.tanh(x)
relu = np.maximum(0, x)

# Plot figure
fig, ax = plt.subplots(figsize=(8, 4.5))
fig.canvas.toolbar_visible = False
ax.set_position([0.2, 0.1, 0.7, 0.8])
plt.plot(x, identity, "-v", label="Identity (linear)")
plt.plot(x, sigmoid, "-|", label="Sigmoid")
plt.plot(x, tanh, "-o", label="Tanh")
plt.plot(x, relu, "-x", label="ReLU")
plt.xlim(-4, 4)
plt.ylim(-2, 2)
plt.xlabel("$a$")
plt.ylabel("$z$")
plt.title("Activation functions")
plt.legend()
plt.show()

The number of layers in a neural network commonly refers to the number of hidden layers. Following the setup above of composing linear transformations with nonlinear activations, the output of a two-layer neural network (i.e. a network with two hidden layers) can be written as:

\[ y(x, \mathbf{w}) = h^{(out)} \left( \sum_{k=1}^{K} w_{k}^{(3)} h^{(2)} \bigg( \sum_{j=1}^{J} w_{kj}^{(2)} h^{(1)} \Big( \sum_{i=1}^{I} w_{ji}^{(1)} x_i + w_{j0}^{(1)} \Big) + w_{k0}^{(2)} \bigg) + w_{0}^{(3)} \right) \]

Since the activation functions can be nonlinear and quantities depending on the weights pass through them, the model is, in general, no longer linear w.r.t. the weights, and no closed-form Maximum Likelihood solution can be found. Compare this with the linear basis function models from before. Instead of seeking an analytical solution that no longer exists, some form of gradient-based optimization, as discussed in the previous page in the form of SGD, is required to calibrate the weights.

When your dataset contains multiple inputs or outputs, this model can easily be extended by including multiple neurons in the input or output layer, and the other procedures stay the same. Generally, the activation function of the outputs \(h^{(out)}\) is linear and the activations in hidden layers are of a nonlinear type.
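To illustrate, the nested expression above can be evaluated directly in code. The following sketch uses randomly initialized (untrained) weights, one input, two hidden tanh layers of \(J = K = 5\) neurons, and a linear output activation:

rng = np.random.default_rng(0)

# Randomly initialized weights and biases (illustration only, not trained)
W1, b1 = rng.normal(size=(5, 1)), rng.normal(size=5)  # input -> hidden layer 1
W2, b2 = rng.normal(size=(5, 5)), rng.normal(size=5)  # hidden layer 1 -> hidden layer 2
W3, b3 = rng.normal(size=(1, 5)), rng.normal(size=1)  # hidden layer 2 -> output


def forward(x):
    z1 = np.tanh(W1 @ np.atleast_1d(x) + b1)  # first hidden layer
    z2 = np.tanh(W2 @ z1 + b2)                # second hidden layer
    return (W3 @ z2 + b3)[0]                  # linear output activation


print(forward(1.5))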

Hide code cell source
fig, ax = plt.subplots(1, 2, figsize=(8, 3))
fig.canvas.toolbar_visible = False
draw_neural_net(ax[0], 0.1, 0.9, 0.1, 0.9, [1, 5, 1])
draw_neural_net(ax[1], 0.1, 0.9, 0.1, 0.9, [2, 5, 3])
ax[0].set_title("1 input, 5 hidden nodes, 1 output")
ax[1].set_title("2 inputs, 5 hidden nodes, 3 outputs")
[axs.axis("off") for axs in ax]
plt.show()

Model flexibility#

The flexibility of a feedforward neural network can be adapted by varying the number of neurons per hidden layer, or by adding more hidden layers. Both options increase the number of parameters \(\mathbf{w}\). When a neural network has too few parameters, we risk underfitting the data, whereas having too many parameters can quickly lead to overfitting. Since they control model complexity, the number of layers and the number of neurons per layer are hyperparameters, which need to be calibrated. Once again, remember that hyperparameters are calibrated with validation data. Simply minimizing the training error w.r.t. these hyperparameters will always lead to huge and severely overfit models, especially when we do not have a lot of data available.
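As a rough sketch of what such a validation-based calibration could look like in code (reusing f_data from above; the train/validation split, candidate layer sizes, activation and iteration budget are arbitrary illustrative choices), one could train one network per candidate size and compare validation errors:

from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Generate data and split it into a training and a validation set
x, t = f_data(N=100, seed=0)
x_train, x_val, t_train, t_val = train_test_split(x, t, test_size=0.6, random_state=0)

# Normalize the inputs based on the training set only
scaler = StandardScaler().fit(x_train[:, None])
X_train, X_val = scaler.transform(x_train[:, None]), scaler.transform(x_val[:, None])

# Train one network per candidate size and report the validation loss
for neurons in [1, 3, 5, 10, 20]:
    nn = MLPRegressor(
        hidden_layer_sizes=(neurons, neurons),
        activation="tanh",
        max_iter=5000,
        random_state=0,
    )
    nn.fit(X_train, t_train)
    val_mse = mean_squared_error(t_val, nn.predict(X_val))
    print(f"{neurons:2d} neurons per layer: validation MSE = {val_mse:.3f}")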

In the following interactive plot you can study the influence of the number of neurons per layer on the model prediction. The number of hidden layers is fixed at two. You have to click the re-run button to retrain the model after varying the parameter. Be aware that the required computations can take a few moments to run.

Hide code cell source
# Define the prediction locations
# (note that these are different from the locations where we observed our data)
x_pred = np.linspace(-1, 2 * np.pi + 1, 200)

xscaler = StandardScaler()
xscaler.fit(f_data()[0][:, None])


# Function that creates a NN
def NN_create(**kwargs):
    return MLPRegressor(
        solver="sgd",
        hidden_layer_sizes=(kwargs["neurons"], kwargs["neurons"]),
        activation=kwargs["activation"],
        batch_size=kwargs["batch_size"],
    )


# Function that trains a given NN for a given number of epochs
def NN_train(x, t, network, epochs_per_block):
    # Convert the training data to a column vector and normalize it
    X = x.reshape(-1, 1)
    X = xscaler.transform(X)

    # Run a number of epochs
    for i in range(epochs_per_block):
        network.partial_fit(X, t)

    return network, network.loss_curve_


# Function that returns predictions from a given NN model
def NN_pred(x_pred, network):
    # Convert the prediction data to a column vector and normalize it
    X_pred = x_pred.reshape(-1, 1)
    X_pred = xscaler.transform(X_pred)

    # Make a prediction at the locations given by x_pred
    return network.predict(X_pred)


# Pass everything to the neuralnetplotter
plot1 = neuralnetplotter(
    f_data,
    f_truth,
    NN_create,
    NN_train,
    NN_pred,
    x_pred,
    N=100,
    val_pct=60,
    epochs=20000,
)
plot1.fig.canvas.toolbar_visible = False
plot1.add_sliders("neurons", valmax=20, valinit=3)
plot1.add_buttons("truth", "seed", "reset", "rerun")
plot1.show()

As you might have noticed, using the default setting of three neurons per layer was not enough for the network to learn the underlying trend in the data. Which number of neurons per layer gave you a visibly good fit? Did you ever spot overfitting?

Pay close attention to what we show on the right-hand plot during training:

| Line color | Quantity | Expression |
|------------|----------|------------|
| Blue | Training set loss (\(40\%\) of \(N\)) | \(E_D=\frac{1}{N_\mathrm{train}}\sum_{n=1}^{N_\mathrm{train}} \left(t_n - y(x_n, \mathbf{w}) \right)^2\) |
| Purple | Validation set loss (\(60\%\) of \(N\)) | \(E_D=\frac{1}{N_\mathrm{val}}\sum_{n=1}^{N_\mathrm{val}} \left(t_n - y(x_n, \mathbf{w}) \right)^2\) |
| Black | Expected loss (numerical integration) | \(\mathbb{E}[L]=\int\int\left(t-y(x,\mathbf{w})\right)^2 p(x,t)\,dx\,dt\) |

The black line is a very precise version of our error, but one that requires a lot of data to obtain. Since analytical integration is not possible due to the complex nature of \(t\) and \(y(x,\mathbf{w})\), we are forced to compute it through numerical integration, in this case by using \(1500\) equally-spaced predictions in the range \([0,2\pi]\). Although interesting to include here for educational purposes, it is obvious that we will not have access to this measure in practice.

What we have instead is the validation loss (purple line) computed as a relatively crude Monte Carlo approximation of the expected loss, since we are only using a very small number of data points to compute it. In practice, we need to rely on this approximation to make model selection decisions!
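To make this concrete, here is a small sketch (an illustration with arbitrary settings, not the numerical integration used by the plotter): the loss of a trained network evaluated on a small validation set is a crude Monte Carlo estimate of the expected loss, while the same estimate computed on a very large freshly generated dataset gets much closer to the black line:

from sklearn.metrics import mean_squared_error

# Train a network on a small noisy dataset (arbitrary settings, for illustration)
x, t = f_data(N=40, seed=0)
nn = MLPRegressor(hidden_layer_sizes=(10, 10), activation="tanh",
                  max_iter=5000, random_state=0)
nn.fit(xscaler.transform(x[:, None]), t)

# A small validation set gives a crude Monte Carlo estimate of the expected loss
x_val, t_val = f_data(N=20, seed=1)
crude = mean_squared_error(t_val, nn.predict(xscaler.transform(x_val[:, None])))

# A very large independent sample estimates the same quantity far more precisely
x_big, t_big = f_data(N=100_000, seed=2)
precise = mean_squared_error(t_big, nn.predict(xscaler.transform(x_big[:, None])))

print(f"Validation estimate (20 points):       {crude:.3f}")
print(f"Large-sample estimate (100000 points): {precise:.3f}")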

Early stopping#

Choosing a high number of neurons increases the number of parameters and therefore slows down training. In addition, it can make the model too flexible and prone to overfitting. It is therefore good practice to always monitor the predictive capability of a NN on a validation set. First, run the model below, then select the model you think best fits the data by pulling the corresponding slider. At which epoch do you find the best model?

Hide code cell source
plot2 = neuralnetplotter(
    f_data,
    f_truth,
    NN_create,
    NN_train,
    NN_pred,
    x_pred,
    neurons=20,
    epochs=12000,
    N=40,
    val_pct=60,
    batch_size=2,
)
plot2.fig.canvas.toolbar_visible = False
plot2.seed = 4
plot2.add_sliders("cur_model")
plot2.add_buttons("truth", "seed", "reset", "rerun")
plot2.show()

The example above is a good illustration of overfitting. Note how the training loss almost immediately becomes an unreliable estimate of the true expected loss (black line). In contrast, the validation loss (purple line) does not exactly agree with the black line but consistently follows the same trends. Crucially, the validation loss correctly points to the model with the lowest true loss, at around \(3000\) training epochs. Move the model selection slider there and try to see why the true loss starts to increase from that point on.

It is possible to train a neural network for a long time, and then select the \(\mathbf{w}\) that corresponds to the lowest validation error. An alternative, known as early stopping, uses a sustained increase of the validation loss over a number of epochs as a signal to halt training.
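For reference, scikit-learn's MLPRegressor has a built-in version of this mechanism; the sketch below (with arbitrarily chosen settings) internally holds out part of the training data as a validation set and stops once its score stops improving:

# Early stopping with scikit-learn (arbitrary settings, for illustration)
x, t = f_data(N=40, seed=4)
X = xscaler.transform(x[:, None])

nn = MLPRegressor(
    hidden_layer_sizes=(20, 20),
    solver="sgd",
    batch_size=2,
    max_iter=12000,
    early_stopping=True,      # hold out part of the training data for validation
    validation_fraction=0.2,  # fraction of the training data used for validation
    n_iter_no_change=50,      # stop after 50 epochs without improvement
)
nn.fit(X, t)
print(f"Training stopped after {nn.n_iter_} epochs")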

Manual model selection#

In the previous page we used L2-regularization to control the model complexity. The application of this technique to neural networks is straightforward, and will therefore not be demonstrated here. Instead, we will focus on the impact of the number of trainable parameters and the number of samples on overfitting. The ability to display our models at different stages of the training phase will help us to find and inspect particularly good or bad models.

Hide code cell source
plot3 = neuralnetplotter(
    f_data, f_truth, NN_create, NN_train, NN_pred, x_pred, nnlinewidth=0.25
)
plot3.fig.canvas.toolbar_visible = False
plot3.add_sliders("freq", "neurons", "N", "cur_model", "val_pct")
plot3.add_buttons("truth", "seed", "reset", "rerun")
plot3.add_radiobuttons("activation")
plot3.show()

Train a number of different neural networks with varying hyperparameter settings, and try to understand the influence of all the parameters on the resulting model. Try to answer the following questions:

  • Is the model with the lowest validation error always the one that gives the best fit visually?

  • Try out a model with linear (identity) activation function. Can you make sense of what you observe?

  • We plot our activation functions next to their selector buttons. How does the shape of your trained model correlate with the shape of your activation function?

  • In practical situations it is often difficult to visualize model predictions for the whole domain. Can you detect when a model is underfit based only on the training and validation loss?

  • For a well-trained flexible model with a large training size (N) the errors usually converge to a specific value. What is this value and why does this happen? Can we ever get rid of it?

  • How well does the model predict outside of the training range?

Wrap-up#

In these lectures you have seen a non-parametric model, namely k-nearest neighbours, and have learned about the bias-variance trade-off. Linear regression was shown as a parametric model that is linear in its parameters and has a closed-form solution. Ridge regression was introduced to prevent overfitting. Stochastic Gradient Descent was shown as a way to train a model in an iterative fashion. Here we explored a model with a nonlinear dependence on its parameters. You now understand the underlying principles of a broad set of machine learning techniques, and know how to distinguish naive curve fitting from extracting (or learning) the relevant trends and patterns in data.