Gradient descent is an optimization method used in neural networks, where the weight parameters are updated iteratively by subtracting a small fraction of the gradient of the loss function, in order to minimize that loss.
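
In symbols, with learning rate $\alpha$ (the "small fraction" above) and loss function $L$, each update takes the form:

$$\mathbf{w} \leftarrow \mathbf{w} - \alpha \, \nabla L(\mathbf{w})$$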

Mathematics

Let’s prove that the gradient descent method leads to a smaller loss after each step. Without loss of generality, we assume that the weight parameter is a two-dimensional vector, $\mathbf{w} = (w_1, w_2)$.

Given a new vector $\mathbf{w}'$ that is close to $\mathbf{w}$, the Taylor series expansion of the loss function $L$ can be approximated by the first-order partial derivatives only:

$$L(\mathbf{w}') \approx L(\mathbf{w}) + (\mathbf{w}' - \mathbf{w}) \cdot \nabla L(\mathbf{w})$$

We would like to find $w_1'$ and $w_2'$ that minimize $L(\mathbf{w}')$, which is the same as minimizing the dot product $(\mathbf{w}' - \mathbf{w}) \cdot \nabla L(\mathbf{w})$ above, with the constraint that $\mathbf{w}'$ stays within a small Euclidean distance of $\mathbf{w}$ in order to satisfy the Taylor approximation.

To achieve this we select the vector $\mathbf{w}' - \mathbf{w}$ such that:

$$\mathbf{w}' - \mathbf{w} = -\alpha \, \nabla L(\mathbf{w}), \qquad \alpha > 0$$

Given an $\alpha$, we choose $\mathbf{w}'$ such that the above constraint is satisfied. The negative sign ensures that the dot product is minimized, since $(\mathbf{w}' - \mathbf{w}) \cdot \nabla L(\mathbf{w}) = -\alpha \, \lVert \nabla L(\mathbf{w}) \rVert^2 \le 0$. We have therefore proved that the loss $L$ will descend at each iteration, provided that the loss function is differentiable and a sufficiently small $\alpha$ is used.
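
To make the descent concrete, here is a minimal standalone sketch (separate from the neural network example below) that runs a few updates on the simple quadratic loss $L(\mathbf{w}) = w_1^2 + w_2^2$, whose gradient is $2\mathbf{w}$; the starting point and the learning rate of 0.1 are arbitrary choices for illustration.

import numpy as np

# illustrative loss L(w) = w1^2 + w2^2 and its gradient 2w
def loss(w):
    return np.sum(w ** 2)

def grad(w):
    return 2 * w

w = np.array([3.0, -2.0])   # arbitrary starting point
alpha = 0.1                 # small learning rate

for step in range(5):
    w = w - alpha * grad(w)     # gradient descent update
    print(step, loss(w))        # the loss shrinks at every step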

Example

Let’s say we have a training set of 4 examples that maps three binary features to a binary response:

x1  x2  x3 | y
 0   0   1 | 0
 1   1   1 | 1
 1   0   1 | 1
 0   1   1 | 0

We first observe that the first feature has a 100% correlation with the response and can reasonably be used for future predictions. Now we construct a neural network to see if it can capture this relationship. First, create a neural network class and randomly initialize a weight between $-1$ and $1$ for each of the three features.

import numpy as np

class NeuralNetwork():
    def __init__(self):
        # seed the generator so the random weights are reproducible
        np.random.seed(1)
        # one weight per feature, drawn uniformly from (-1, 1)
        self.weights = 2 * np.random.random((3, 1)) - 1

Define the sigmoid activation function $\sigma(x) = \frac{1}{1 + e^{-x}}$:

    def sigmoid(self, x):
        # squashes any real-valued input into the range (0, 1)
        return 1 / (1 + np.exp(-x))
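
A property worth noting here, because it reappears in the gradient code below, is that the derivative of the sigmoid can be written in terms of its own output:

$$\sigma'(x) = \sigma(x)\,\bigl(1 - \sigma(x)\bigr)$$

which is why the factor $\hat{y}(1 - \hat{y})$ shows up in the gradient calculation.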

Define the loss function as the mean squared error over the $n = 4$ training examples, where $\hat{y} = \sigma(xw)$ denotes the network output:

$$L = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2$$
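
The class below never evaluates the loss explicitly during training, but if one wanted it as a helper, a minimal sketch could look like this (a hypothetical addition, not part of the original class):

    def loss(self, y, y_hat):
        # mean squared error between the targets y and the predictions y_hat
        return np.mean((y_hat - y) ** 2)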

Calculate the gradient of the loss w.r.t. the weights $w$:

    def gradient(self, x, y, y_hat):
        # chain rule: dL/dw = x^T [ 2(y_hat - y) * y_hat * (1 - y_hat) ]
        return np.dot(x.T, (2 * (y_hat - y) * (y_hat * (1 - y_hat))))
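
This expression follows from the chain rule applied to $L = \frac{1}{n}\sum_i (\hat{y}_i - y_i)^2$ with $\hat{y} = \sigma(xw)$; note that the code drops the constant factor $\frac{1}{n}$, which only rescales the learning rate:

$$\frac{\partial L}{\partial w} \propto x^{T}\bigl[\, 2\,(\hat{y} - y) \odot \hat{y} \odot (1 - \hat{y}) \,\bigr]$$

where $\odot$ denotes element-wise multiplication, matching the NumPy expression above.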

Forward and backward propagation.

    def forward_propagation(self, x):
        # single-layer forward pass: weighted sum of the inputs, then sigmoid
        x = x.astype(float)
        return self.sigmoid(np.dot(x, self.weights))

    def backward_propagation(self, x, y, alpha=1, iterations=10000):
        # repeatedly step the weights in the direction of the negative gradient
        for i in range(iterations):
            y_hat = self.forward_propagation(x)
            self.weights -= alpha * self.gradient(x, y, y_hat)
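
To confirm empirically that the loss shrinks at every step, as argued in the Mathematics section, one could print the mean squared error every few thousand iterations. The variant below is only an illustrative sketch, not part of the original class:

    def backward_propagation_verbose(self, x, y, alpha=1, iterations=10000):
        # same update rule as above, but reports the mean squared error while training
        for i in range(iterations):
            y_hat = self.forward_propagation(x)
            if i % 2000 == 0:
                print('iteration', i, 'mse', np.mean((y_hat - y) ** 2))
            self.weights -= alpha * self.gradient(x, y, y_hat)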

Testing our initial hypothesis.

if __name__ == "__main__":
nn = NeuralNetwork()
print('\nrandom synoptic weights')
print(nn.weights)

x = np.array([[0, 0, 1],
[1, 1, 1],
[1, 0, 1],
[0, 1, 1]])

y = np.array([[0, 1, 1, 0]]).T
nn.backward_propogation(x, y)

print('\nweights after training')
print(nn.weights)

outputs = nn.forward_propogation(x)
print('\noutput after training')
print(outputs)

random synaptic weights
[[-0.16595599]
[ 0.44064899]
[-0.99977125]]

weights after training
[[10.38061249]
[-0.20642264]
[-4.98461681]]

output after training
[[0.0067959 ]
[0.99445652]
[0.99548577]
[0.00553541]]

We can see that the neural network learns to put a substantial weight on the first feature and makes very accurate predictions in-sample after 10,000 iterations.
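
As a further (hypothetical) check, not part of the original script, one could append the following lines inside the __main__ block to feed the trained network an input it has never seen; since the first feature is on, the prediction should come out close to 1.

    new_input = np.array([[1, 0, 0]])          # a pattern the network never saw
    print(nn.forward_propagation(new_input))   # expected to be close to 1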