The Most Complete PyTorch Guide for Data Scientists (1)

Time: 2021-01-26

All the PyTorch features you'll need for deep learning, from an experimental/research point of view.

PyTorch has become one of the de facto standards for creating neural networks, and I like its interface. However, it can be difficult for beginners to pick up.

I remember choosing PyTorch a few years ago after extensive experimentation. To be honest, it took me quite a while to pick it up, but I'm glad I moved from Keras to PyTorch. With its high customizability and Pythonic syntax, PyTorch is a pleasure to work with, and I recommend it to anyone who wants to do serious work with deep learning.

Therefore, in this PyTorch guide, I will try to ease the pain of PyTorch for beginners and introduce some of the most important classes and modules needed to create any neural network.

But that's not to say this guide is only for beginners, because I'll also talk about the high customizability PyTorch provides, covering custom layers, datasets, dataloaders, and loss functions.

Tensors

Tensors are the basic building blocks of PyTorch. In short, they are NumPy arrays that can also live on the GPU. In this section, I'll list some of the most common operations you can perform with tensors. This is by no means an exhaustive list of what tensors can do, but it helps to understand them before moving on to the more exciting parts.

1. Creating tensors

We can create PyTorch tensors in many ways, including converting from NumPy arrays. Here are just a few examples, but you can do much more with tensors than with NumPy arrays.

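For illustration, here is a sketch of some common ways to create tensors (the variable names are my own; the original article's exact examples may differ):

```python
import numpy as np
import torch

# Directly from Python lists
t1 = torch.tensor([[1, 2], [3, 4]])

# From a NumPy array
arr = np.array([1.0, 2.0, 3.0])
t2 = torch.from_numpy(arr)

# Common factory functions
zeros = torch.zeros(2, 3)
rand = torch.randn(2, 3)        # standard normal
ar = torch.arange(0, 10, 2)     # 0, 2, 4, 6, 8

# Move a tensor to the GPU if one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
t3 = rand.to(device)
```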

2. Tensor operations

Again, you can do a lot with these tensors.

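As a sketch of some everyday tensor operations (my own illustrative examples, not the original screenshots):

```python
import torch

a = torch.tensor([[1., 2.], [3., 4.]])
b = torch.tensor([[5., 6.], [7., 8.]])

# Elementwise arithmetic
s = a + b
p = a * b

# Matrix multiplication
m = a @ b                 # same as torch.matmul(a, b)

# Reductions
total = a.sum()
col_means = a.mean(dim=0)

# Reshaping and indexing
flat = a.view(-1)         # 1-D view of the same data
first_row = a[0]

# Conversion back to NumPy (CPU tensors only)
arr = a.numpy()
```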

Note: What is a PyTorch Variable? In previous versions of PyTorch, tensors and Variables were distinct and provided different functionality, but the Variable API has since been deprecated, and all Variable methods now work directly on tensors. So if you don't know about Variables, that's fine, because they are no longer required; and if you do, you can forget them.

nn.Module

But then again, PyTorch would not be so widely used if it did not also provide many ready-made layers that appear frequently in various neural network architectures. Some examples: nn.Linear, nn.Conv2d, nn.MaxPool2d, nn.ReLU, nn.BatchNorm2d, nn.Dropout, nn.Embedding, nn.GRU/nn.LSTM, nn.Softmax, nn.LogSoftmax, nn.MultiheadAttention, nn.TransformerEncoder, nn.TransformerDecoder.

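A minimal sketch of such a network, consistent with the description below (an input of size 784 passed through two linear layers; the hidden size of 256 and the attribute names are my assumptions):

```python
import torch
from torch import nn

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = nn.Linear(784, 256)
        self.lin2 = nn.Linear(256, 10)

    def forward(self, x):
        # Pass the input through the two linear layers sequentially
        x = self.lin1(x)
        x = self.lin2(x)
        return x

net = MyNet()
out = net(torch.randn(2, 784))   # a batch of 2 random inputs
```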

Here, we define a very simple network that takes an input of size 784 and passes it through two linear layers sequentially. Note, however, that we can define any kind of computation in the forward pass, which makes PyTorch highly customizable for research purposes. For example, in full experimental mode, we might use the following network, to which we attach arbitrary layers. Here, after adding the input back to the output of the second linear layer (a skip connection), we pass the result through the first linear layer again.

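One possible reconstruction of such an "experimental" network, matching the description above (the layer sizes 30 and 10 are my assumptions):

```python
import torch
from torch import nn

class MyCrazyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = nn.Linear(784, 30)
        self.lin2 = nn.Linear(30, 784)
        self.lin3 = nn.Linear(30, 10)

    def forward(self, x):
        h = self.lin1(x)          # 784 -> 30
        h = x + self.lin2(h)      # back to 784, plus a skip connection from the input
        h = self.lin1(h)          # reuse the first linear layer
        return self.lin3(h)       # 30 -> 10

net = MyCrazyNet()
out = net(torch.randn(2, 784))
```

The point is that the forward pass is ordinary Python code: you can reuse layers, add tensors, and branch however you like.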

We can also check whether the network's forward pass works. I usually create some random input first and then pass it through the network I've built.

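A quick sanity check might look like this (using a small two-layer network defined inline; the sizes are illustrative):

```python
import torch
from torch import nn

# A minimal two-layer network (names and sizes are illustrative)
model = nn.Sequential(nn.Linear(784, 256), nn.Linear(256, 10))

x = torch.randn(5, 784)      # a batch of 5 random inputs
out = model(x)
print(out.shape)             # expect torch.Size([5, 10])
```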

A word about layers

PyTorch is so flexible that you can actually create any new experimental layer yourself by subclassing nn.Module. For example, instead of using the predefined linear layer nn.Linear from above, we could have created our own custom linear layer.

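A sketch of such a custom linear layer (the class and attribute names are my assumptions):

```python
import torch
from torch import nn

class CustomLinear(nn.Module):
    def __init__(self, in_size, out_size):
        super().__init__()
        # Wrapping the tensors in nn.Parameter registers them as model parameters
        self.weights = nn.Parameter(torch.randn(in_size, out_size))
        self.bias = nn.Parameter(torch.zeros(out_size))

    def forward(self, x):
        # y = xW + b
        return x.mm(self.weights) + self.bias

layer = CustomLinear(784, 10)
out = layer(torch.randn(2, 784))
```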

You'll see that we wrap the weight tensor in nn.Parameter. This is done so that the tensor is registered as a model parameter.

Parameters are Tensor subclasses that have a very special property when used with Modules: when they are assigned as module attributes, they are automatically added to the module's parameter list and will appear in the parameters() iterator.

As you'll see later, the model.parameters() iterator will be the input to the optimizer. But more on that later. For now, we can use this custom layer in any PyTorch network, just like any other layer.

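For example, the custom layer can be mixed freely with built-in layers (a sketch; the custom layer is redefined here so the snippet stands alone, and the sizes are illustrative):

```python
import torch
from torch import nn

class CustomLinear(nn.Module):
    """A hand-rolled linear layer; names and initialization are illustrative."""
    def __init__(self, in_size, out_size):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(in_size, out_size))
        self.bias = nn.Parameter(torch.zeros(out_size))

    def forward(self, x):
        return x.mm(self.weights) + self.bias

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = CustomLinear(784, 256)   # our custom layer, used like any other
        self.lin2 = nn.Linear(256, 10)       # mixed with a built-in layer

    def forward(self, x):
        return self.lin2(self.lin1(x))

net = MyNet()
out = net(torch.randn(4, 784))
# Parameters of the custom layer show up in the parameters() iterator too
n_params = len(list(net.parameters()))
```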


The above covers the basic operations of PyTorch. The next article will cover convolution operations.