
How you can improve the “learning” and “training” of neural networks by tuning hyperparameters

In my previous post, we discussed how neural networks predict and learn from data. There are two processes responsible for this: the forward pass and the backward pass, also known as backpropagation. You can learn more about it here:
This post will dive into how we can optimise this “learning” and “training” process to increase the performance of our model. We will cover computational improvements and hyperparameter tuning, and how to implement them in PyTorch!
But, before all that good stuff, let’s quickly jog our memory about neural networks!
Neural networks are large mathematical expressions that try to find the “right” function that can map a set of inputs to their corresponding outputs. An example of a neural network is depicted below:
Each hidden-layer neuron carries out the following computation (sketched in code after the list):
- Inputs: These are the features of our dataset.
- Weights: Coefficients that scale the inputs. The goal of the algorithm is to find the optimal coefficients through gradient descent.
- Linear Weighted Sum: Sum up the products of the inputs and weights and add a bias/offset term, b.
- Hidden Layer: Multiple neurons sit in each hidden layer to learn different patterns in the dataset. The superscript refers to the layer and the subscript to the index of the neuron in that layer.
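As a quick refresher, here is a minimal sketch of that single-neuron computation in PyTorch. The input, weight, and bias values are made up purely for illustration, and ReLU is assumed as the activation function:

```python
import torch

# Toy example: one hidden-layer neuron with three inputs.
# The specific numbers below are illustrative assumptions only.
x = torch.tensor([0.5, -1.2, 3.0])   # inputs (features)
w = torch.tensor([0.8, 0.1, -0.4])   # weights (what gradient descent tunes)
b = torch.tensor(0.2)                # bias / offset term

z = torch.dot(w, x) + b              # linear weighted sum
a = torch.relu(z)                    # activation (ReLU assumed here)
print(z.item(), a.item())

# A whole layer of such neurons is just the same computation done as a
# matrix multiplication, which is what torch.nn.Linear provides:
layer = torch.nn.Linear(in_features=3, out_features=4)
a_layer = torch.relu(layer(x))       # one output value per neuron in the layer
```

The same pattern stacks layer on layer, which is all a fully connected neural network is doing under the hood.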