# PyTorch- A Blessing!

Hello everyone! This is my fifth post on my journey of completing the *Deep Learning Nanodegree* in a month! So far, in these 6 days, I’ve completed *two* out of *six* modules of the Nanodegree and trust me, it’s quite a lot to learn!

## Day- 6

So, today’s lesson is called “**Deep Learning with PyTorch**” and, just as the name suggests, in this lesson we’ll be implementing everything we’ve learnt before with the help of *PyTorch*.

## PyTorch

First, let’s talk about what *PyTorch* is. I’m sure there are many definitions on the web, but from a beginner developer’s view, it’s gold. What I mean is: all the mathematics I’ve studied so far, for gradient descent, feedforward, backpropagation, activation functions, and so on, is good to know and will definitely be compulsory if you’re looking to become renowned in the ML industry. But if you’re a beginner who wants to get to the good stuff (the models, the accuracy, and so on), or you’re an *ML guru* who doesn’t want to write every equation and every line by hand (most *professionals* use libraries too), then PyTorch is what you’re looking for. It’s basically an open-source library that has almost all the major functions you need to train a model built into it. Let’s take some examples:

**Network Architecture**

Alright, let’s move on. The next thing PyTorch helps with is the model architecture. What I mean by this is that the library makes it super easy to write the basic architecture of a neural network, which in turn makes it easier to debug and lets the practitioner pay more attention to the data (more on this later). Let’s see how to build a network with two hidden layers, one output layer, and sigmoid activations.

```python
import torch
from torch import nn

# Defining the network explicitly as a class
class Classifier(nn.Module):

    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(120, 73)  # first hidden layer
        self.fc2 = nn.Linear(73, 32)   # second hidden layer
        self.fc3 = nn.Linear(32, 1)    # output layer
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # make sure the input tensor is flattened
        x = x.view(x.shape[0], -1)
        x = self.sigmoid(self.fc1(x))
        x = self.sigmoid(self.fc2(x))
        x = self.sigmoid(self.fc3(x))
        return x
```

```python
# The same network, in a single call with nn.Sequential
model = nn.Sequential(nn.Linear(120, 73),
                      nn.Sigmoid(),
                      nn.Linear(73, 32),
                      nn.Sigmoid(),
                      nn.Linear(32, 1),
                      nn.Sigmoid())
model
```
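If you’d like to sanity-check the network, here’s a minimal sketch of a forward pass. The 120-input and 73-unit sizes come from the example above; the 32-unit second hidden layer and the batch of random fake data are my own assumptions for illustration:

```python
import torch
from torch import nn

# Same shape of architecture as above; the 32-unit layer is an assumed
# size for illustration
model = nn.Sequential(nn.Linear(120, 73),
                      nn.Sigmoid(),
                      nn.Linear(73, 32),
                      nn.Sigmoid(),
                      nn.Linear(32, 1),
                      nn.Sigmoid())

x = torch.randn(4, 120)    # a fake batch of 4 samples
out = model(x)
print(out.shape)           # torch.Size([4, 1])
```

The final sigmoid squashes the single output into the 0–1 range, so this reads naturally as a binary classifier.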

**Back Propagation**

Done by hand, a standard backpropagation pass means working the chain rule backwards through every layer. What does it look like in PyTorch? Here. Yes, this one line does all the work.

`model_loss.backward()`
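To see that one line in context, here’s a minimal sketch: a throwaway network, an MSE loss, and fake regression data, all assumptions for illustration. After `backward()`, every parameter in the network carries its gradient in `.grad`:

```python
import torch
from torch import nn

# A throwaway network and fake data, just to show backward() in action
model = nn.Sequential(nn.Linear(10, 5), nn.Sigmoid(), nn.Linear(5, 1))
criterion = nn.MSELoss()

x = torch.randn(8, 10)
target = torch.randn(8, 1)

model_loss = criterion(model(x), target)
model_loss.backward()     # one line computes every gradient

# every parameter now has a .grad tensor filled in
print(model[0].weight.grad.shape)   # torch.Size([5, 10])
```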

**Updating weights**

```python
# Without PyTorch
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records
```

```python
# With PyTorch
optimizer.step()
```
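Putting the update together with backprop, one full training step looks roughly like this. The tiny network, the `SGD` optimizer, the learning rate, and the fake data are all assumptions for illustration:

```python
import torch
from torch import nn, optim

# A throwaway network, optimizer, and fake data for one training step
model = nn.Sequential(nn.Linear(10, 5), nn.Sigmoid(), nn.Linear(5, 1))
optimizer = optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

x, target = torch.randn(8, 10), torch.randn(8, 1)

optimizer.zero_grad()                  # clear gradients from the last step
loss = criterion(model(x), target)
loss.backward()                        # backpropagate
optimizer.step()                       # the hand-written update, in one call
```

Note the `zero_grad()` call: PyTorch accumulates gradients by default, so you clear them at the start of every step.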

I hope you’re getting the point. What I’m trying to say is that using libraries makes it super easy to work with these algorithms, and not just easy to work with: the fact that you don’t really need to remember all the equations is a big relief!

Are you feeling it? Using the library makes it so much easier to write everything. I’m sure there are a ton of other benefits I’m not mentioning, but this is the list I’ve come across so far. Now, let’s talk about checking how your model is doing.

## Accuracy

Accuracy is basically the *measure* of the *correct classifications* made by the model divided by the total number of classifications. It tells you how well the model is actually doing, and the general process of calculating it is:

- We perform the **forward pass** as normal.
- Then, from the probabilities the forward pass returns (stored in a tensor `ps` here), we find the **class with the highest predicted probability**. We do this using `topk` as follows. Note that the `1` passed in as the first argument means we only want the single top result.

`probabilities, classes = ps.topk(1, dim=1)`

- Now, after getting the top classes, we need to **compare them with the true labels**, and we do this as follows. We compare the labels with the classes returned from the `topk` method and store the result in `equals`. In order to compare `classes` and `labels`, we need them to be in the same shape, hence we call the `view` method on labels.

`equals = classes == labels.view(*classes.shape)`

- Now, to get the accuracy, we have to **take the mean** of the `equals` tensor. Since its values are always either 0 or 1, the **mean is exactly the fraction of correct predictions**. We take it as follows. Note that in order to take the mean, we need the tensor to be of float type rather than the integer type the comparison returns.

`accuracy += torch.mean(equals.type(torch.FloatTensor))`
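Putting the three steps together, here’s a minimal end-to-end sketch on fake data. `ps` stands in for the probabilities a trained model’s forward pass would return; the 4-sample, 3-class batch and its labels are made up for illustration:

```python
import torch

# ps stands in for the probabilities a trained model would return for a
# batch of 4 samples over 3 classes; the labels are made up to match
ps = torch.softmax(torch.randn(4, 3), dim=1)
labels = torch.tensor([0, 2, 1, 2])

probabilities, classes = ps.topk(1, dim=1)        # top class per sample
equals = classes == labels.view(*classes.shape)   # compare with labels
accuracy = torch.mean(equals.type(torch.FloatTensor))

print(f'Accuracy: {accuracy.item() * 100:.1f}%')
```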

## Saving/Loading Models

The next thing PyTorch helps with is *saving* and *loading* models. We don’t want to train a model from scratch every time we need to use it or tweak it a bit; it wouldn’t be efficient, so we save working models for future use. To save a model, use the following syntax, where `model.state_dict()` is a dictionary holding the model’s current parameters (its weights and biases).

```python
# To save a model
torch.save(model.state_dict(), 'filename.pth')
```

To load a model, we follow this syntax. But beware: if the saved model and your current network have different architectures, PyTorch will throw an error. What this means is that the network you load into must have the same architecture as the model that was saved.

```python
# To load a model
state_dict = torch.load('filename.pth')
model.load_state_dict(state_dict)
```
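Here’s a small round-trip sketch showing both halves together. The tiny network and the `checkpoint.pth` filename are made up for illustration:

```python
import torch
from torch import nn

# Save one model's parameters, rebuild the SAME architecture, load the
# parameters back, and check the two models now agree
model = nn.Sequential(nn.Linear(10, 5), nn.Sigmoid(), nn.Linear(5, 1))
torch.save(model.state_dict(), 'checkpoint.pth')

clone = nn.Sequential(nn.Linear(10, 5), nn.Sigmoid(), nn.Linear(5, 1))
clone.load_state_dict(torch.load('checkpoint.pth'))

# identical parameters give identical outputs
x = torch.randn(2, 10)
print(torch.equal(model(x), clone(x)))   # True
```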

And with this, I’m happy to say that this completes the module, and I’m looking forward to continuing the hard work and successfully finishing the Nanodegree in the specified time. The next module is on **Convolutional Neural Networks**, and I’m super excited to learn more about that. I know this post isn’t as long as the last ones, but I’m just not feeling it today! Will see you in the next one!