# Getting started with PyTorch: a detailed explanation of tensors

Time: 2021-10-17

## Introduction

The deep learning framework PyTorch is developing at an amazing pace, and I have felt this first-hand: after browsing open-source deep learning projects on GitHub, I found that far more people are using PyTorch now. So I have been digging into PyTorch recently and writing articles to summarize what I learn. After reading this article carefully, you will:

• Understand how to create tensors
• Understand how to accelerate tensors on the GPU
• Understand common tensor properties
• Understand common tensor methods

### Tensor creation

We all know that NumPy is a widely used extension library that supports a large number of operations on multi-dimensional arrays and matrices. However, NumPy is powerless when it comes to computation graphs, deep learning, and gradients, because its computations cannot be accelerated on the GPU the way a tensor's can. So today, let's talk about the most basic concept in PyTorch: the tensor.

A tensor is an n-dimensional array, conceptually the same as a NumPy array. The difference is that a tensor can track the computation graph and compute gradients.
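To make the gradient-tracking difference concrete, here is a minimal autograd sketch (the variable names and example values are mine, not from the original):

```python
import torch

# A tensor created with requires_grad=True records operations for autograd
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()  # y = x0^2 + x1^2
y.backward()        # compute dy/dx via the recorded computation graph
print(x.grad)       # tensor([4., 6.]), since dy/dx_i = 2 * x_i
```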

1. Create from NumPy

```python
import torch
import numpy as np

numpy_array = np.array([1, 2, 3])
torch_tensor1 = torch.from_numpy(numpy_array)
torch_tensor2 = torch.Tensor(numpy_array)
torch_tensor3 = torch.tensor(numpy_array)
```

It is worth noting that `torch.Tensor()` is an alias for the default tensor type `torch.FloatTensor()`, that is, `torch.Tensor()` returns the float data type. In fact, we can also change the default data type:

```python
torch.set_default_tensor_type(torch.DoubleTensor)
```

In contrast, `torch.tensor()` produces a `torch.LongTensor`, `torch.FloatTensor`, or `torch.DoubleTensor` depending on the data type of its input.
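A quick sketch of this dtype inference (the exact results assume PyTorch's default dtype of float32 has not been changed):

```python
import torch
import numpy as np

# torch.tensor() infers the dtype from its input
t_long = torch.tensor([1, 2, 3])               # Python ints   -> torch.int64   (LongTensor)
t_float = torch.tensor([1.0, 2.0])             # Python floats -> torch.float32 (FloatTensor)
t_double = torch.tensor(np.array([1.0, 2.0]))  # NumPy float64 -> torch.float64 (DoubleTensor)
print(t_long.dtype, t_float.dtype, t_double.dtype)
```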

Of course, we can also convert a tensor back to a NumPy array via `numpy()`:

```python
numpy_array = torch_tensor1.numpy()        # if the tensor is on the CPU
numpy_array = torch_tensor1.cpu().numpy()  # if the tensor is on the GPU
print(type(numpy_array))  # output: <class 'numpy.ndarray'>
```

Note that if the tensor is on the GPU, you need to call `.cpu()` first to move it to the CPU.

2. Create from Python built-in types

```python
lst = [1, 2, 3]
torch_tensor1 = torch.tensor(lst)
tp = (1, 2, 3)
torch_tensor2 = torch.tensor(tp)
```

3. Other methods

```python
# Create a tensor filled with a single value
torch_tensor1 = torch.full([2, 3], 2)
# Create a tensor of all ones
torch_tensor2 = torch.ones([2, 3])
# Create a tensor of all zeros
torch_tensor3 = torch.zeros([2, 3])
# Create an identity matrix
torch_tensor4 = torch.eye(3)
# Randomly create an integer tensor in the interval [1, 10)
torch_tensor5 = torch.randint(1, 10, [2, 2])
# and so on
```
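A few more factory functions that come up often (this list is my own addition and is not exhaustive):

```python
import torch

t1 = torch.arange(0, 10, 2)   # evenly spaced integers: tensor([0, 2, 4, 6, 8])
t2 = torch.linspace(0, 1, 5)  # 5 evenly spaced floats from 0 to 1
t3 = torch.rand(2, 3)         # uniform random values in [0, 1)
t4 = torch.randn(2, 3)        # samples from a standard normal distribution
```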

When creating a tensor, you can also specify the data type and the device it is stored on:

```python
torch_tensor = torch.zeros([2, 3], dtype=torch.float64, device=torch.device('cuda:0'))
torch_tensor.dtype    # torch.float64
torch_tensor.device   # cuda:0
torch_tensor.is_cuda  # True
```
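As a side note, the dtype and device can also be changed after creation with `.to()`. A minimal sketch (my own example; the CUDA branch only runs when a GPU is actually present):

```python
import torch

t = torch.zeros([2, 3])            # default dtype: torch.float32
t64 = t.to(torch.float64)          # change dtype; returns a new tensor
print(t64.dtype)                   # torch.float64
if torch.cuda.is_available():      # move to the GPU only when one is present
    t_gpu = t.to('cuda:0')
    print(t_gpu.is_cuda)           # True
```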

### Tensor acceleration

We can speed up tensor computation on the GPU in the following two ways.

The first way is to define a CUDA data type.

```python
dtype = torch.cuda.FloatTensor
gpu_tensor = torch.randn(1, 2).type(dtype)  # convert the tensor to a CUDA data type
```

The second way is to put the tensor directly on the GPU (recommended).

```python
gpu_tensor = torch.randn(1, 2).cuda(0)  # put the tensor directly on the first GPU
gpu_tensor = torch.randn(1, 2).cuda(1)  # put the tensor directly on the second GPU
```

It's also easy to put a tensor back on the CPU.

```python
cpu_tensor = gpu_tensor.cpu()
```
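A common device-agnostic pattern is to choose the device once and move tensors with `.to()`. A sketch (my own example) that falls back to the CPU when no GPU is available:

```python
import torch

# Pick the GPU when available, otherwise fall back to the CPU
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
t = torch.randn(1, 2).to(device)
back_on_cpu = t.cpu()  # .cpu() is safe even if the tensor is already on the CPU
print(back_on_cpu.device)
```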

### Common tensor properties

1. View the tensor's data type

```python
tensor1 = torch.ones([2, 3])
tensor1.dtype  # torch.float32
```

2. View the tensor's dimensions

```python
tensor1.shape  # size
tensor1.ndim   # number of dimensions
```

3. Check whether the tensor is stored on the GPU

```python
tensor1.is_cuda  # False
```

4. View the device the tensor is stored on

```python
tensor1.device  # cpu
tensor1 = tensor1.cuda(0)  # .cuda() returns a new tensor; it does not move tensor1 in place
tensor1.device  # cuda:0
```

5. View the tensor's gradient

```python
tensor1.grad
```
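A minimal sketch of when `grad` is actually populated (the example tensor and loss are my own):

```python
import torch

# .grad is filled in only after a backward pass on a tensor that requires gradients
tensor1 = torch.ones([2, 3], requires_grad=True)
print(tensor1.grad)   # None: no backward pass has run yet
loss = tensor1.sum()
loss.backward()
print(tensor1.grad)   # all ones: d(sum)/d(element) = 1 for every element
```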

### Common tensor methods

1. `torch.squeeze()`: removes dimensions of size 1 and returns the resulting tensor.

```python
tensor1 = torch.ones([2, 1, 3])
tensor1.size()  # torch.Size([2, 1, 3])
tensor2 = torch.squeeze(tensor1)
print(tensor2.size())  # torch.Size([2, 3])
```

As the example shows, the tensor's shape changed from [2, 1, 3] to [2, 3]: the dimension of size 1 was removed.
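`squeeze()` also accepts a `dim` argument to target a single dimension, and `unsqueeze()` is its inverse; a short sketch (my own addition):

```python
import torch

tensor1 = torch.ones([2, 1, 3])
print(torch.squeeze(tensor1, dim=1).size())    # torch.Size([2, 3]): dim 1 has size 1, removed
print(torch.squeeze(tensor1, dim=0).size())    # torch.Size([2, 1, 3]): dim 0 is not 1, unchanged
print(torch.ones([2, 3]).unsqueeze(1).size())  # torch.Size([2, 1, 3]): insert a size-1 dim
```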

2. `Tensor.permute()`: permutes the tensor's dimensions and returns a new view.

```python
tensor1 = torch.ones([2, 1, 3])
print(tensor1.size())  # torch.Size([2, 1, 3])
tensor2 = tensor1.permute(2, 1, 0)  # dims 0, 1, 2 -> 2, 1, 0
print(tensor2.size())  # torch.Size([3, 1, 2])
```

As the example shows, the permutation swaps the first and third dimensions: the shape changes from (2, 1, 3) to (3, 1, 2).
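Because `permute()` returns a view, it shares memory with the original tensor; a quick sketch to illustrate (my own example values):

```python
import torch

tensor1 = torch.zeros([2, 1, 3])
tensor2 = tensor1.permute(2, 1, 0)  # view with shape (3, 1, 2), no copy
tensor1[0, 0, 0] = 7
print(tensor2[0, 0, 0])             # tensor(7.): the change is visible through the view
print(tensor2.is_contiguous())      # False: call .contiguous() before e.g. .view()
```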

3. `Tensor.expand()`: expands dimensions of size 1. The expanded tensor does not allocate new memory; it creates and returns a new view of the original.

```python
>>> tensor1 = torch.tensor([[3], [2]])
>>> tensor2 = tensor1.expand(2, 2)
>>> tensor1.size()
torch.Size([2, 1])
>>> tensor2
tensor([[3, 3],
        [2, 2]])
>>> tensor2.size()
torch.Size([2, 2])
```

As the example shows, the original shape is (2, 1). Because the function expands dimensions of size 1, the tensor can be expanded to (2, 2), (2, 3), and so on; note, however, that dimensions whose size is not 1 must remain unchanged.
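Two details worth noting (my own additions): passing `-1` to `expand()` means "keep this dimension unchanged", and the expanded view shares memory with the original tensor:

```python
import torch

tensor1 = torch.tensor([[3], [2]])
tensor2 = tensor1.expand(-1, 3)  # -1 keeps dim 0 at size 2; shape becomes (2, 3)
print(tensor2.size())            # torch.Size([2, 3])
tensor1[0, 0] = 9
print(tensor2[0])                # tensor([9, 9, 9]): no data was copied
```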

4. `Tensor.repeat()`: repeats the tensor along the given dimensions. Unlike `expand()`, this function copies the original data.

```python
>>> tensor1 = torch.tensor([[3], [2]])
>>> tensor1.size()
torch.Size([2, 1])
>>> tensor2 = tensor1.repeat(4, 2)
>>> tensor2.size()
torch.Size([8, 2])
>>> tensor2
tensor([[3, 3],
        [2, 2],
        [3, 3],
        [2, 2],
        [3, 3],
        [2, 2],
        [3, 3],
        [2, 2]])
```

In this example tensor1 has shape (2, 1), and `tensor1.repeat(4, 2)` repeats it 4 times along dimension 0 and 2 times along dimension 1, so the shape after `repeat` is (8, 2). Now look at the following example.

```python
>>> tensor1 = torch.tensor([[2, 1]])
>>> tensor1.size()
torch.Size([1, 2])
>>> tensor2 = tensor1.repeat(2, 2, 1)
>>> tensor2.size()
torch.Size([2, 2, 2])
>>> tensor2
tensor([[[2, 1],
         [2, 1]],

        [[2, 1],
         [2, 1]]])
```

Here tensor1 has shape (1, 2), but `tensor1.repeat(2, 2, 1)` supplies more dimensions than tensor1 has. In that case tensor1 is treated as if its shape were (1, 1, 2); its dimensions 0, 1, and 2 are then repeated 2, 2, and 1 times respectively, so the shape after `repeat` is (2, 2, 2).
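To underline the difference from `expand()` mentioned above, a short sketch (example values are mine) showing that `repeat()` copies while `expand()` returns a view:

```python
import torch

tensor1 = torch.tensor([[2, 1]])
repeated = tensor1.repeat(2, 1)   # copies the data; shape (2, 2)
expanded = tensor1.expand(2, 2)   # view of the data; shape (2, 2), no copy
tensor1[0, 0] = 9
print(repeated[0, 0])             # tensor(2): the copy is unaffected
print(expanded[0, 0])             # tensor(9): the view reflects the change
```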

Official account: CVpython. We focus on sharing Python and computer vision content, insist on originality, and update from time to time. I hope this article helps you.