Tag:pytorch

  • The pytorch custom parameter does not update

    Time:2021-8-18

    Parameters defined in nn.Module: you do not need CUDA; derivatives can be computed and back-propagated on the CPU. class BiFPN(nn.Module): def __init__(self, fpn_sizes): self.w1 = nn.Parameter(torch.rand(1)) print("no—————————————————", self.w1.data, self.w1.grad) The following example shows that an intermediate variable may have no gradient while the final variable does: both cy1 and cd have gradients. import torch xP = torch.Tensor([[3233.8557, 3239.0657, […]
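
    A minimal runnable sketch of the behaviour described above (the toy forward pass is an assumption; the original BiFPN body is truncated): wrapping a tensor in nn.Parameter registers it with the module, so it receives a gradient after backward(), with no CUDA required.

        import torch
        import torch.nn as nn

        class BiFPN(nn.Module):
            def __init__(self, fpn_sizes=None):
                super().__init__()
                # nn.Parameter registers the tensor so autograd tracks it
                self.w1 = nn.Parameter(torch.rand(1))

            def forward(self, x):
                return self.w1 * x

        model = BiFPN()
        model(torch.ones(3)).sum().backward()
        print(model.w1.grad)  # now populated, so an optimizer can update w1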

  • Detailed explanation of custom data processing in pytorch

    Time:2021-8-16

    PyTorch saves data through the Dataset abstraction in torch.utils.data; custom data processing requires inheriting the data.Dataset class and implementing two basic methods: __getitem__, which returns one piece of data or one sample, so that obj[index] == obj.__getitem__(index); and __len__, which returns the number of samples, so that len(obj) = […]
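
    A minimal sketch of such a custom dataset (the in-memory sample storage is an assumption):

        import torch
        from torch.utils import data

        class MyDataset(data.Dataset):
            def __init__(self, samples, labels):
                self.samples = samples
                self.labels = labels

            def __getitem__(self, index):
                # obj[index] delegates here and returns one sample
                return self.samples[index], self.labels[index]

            def __len__(self):
                # len(obj) delegates here
                return len(self.samples)

        ds = MyDataset(torch.randn(10, 3), torch.zeros(10))
        print(len(ds), ds[0])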

  • Detailed explanation of word vector usage based on pytorch pre training

    Time:2021-8-13

    How to use word2vec-trained word vectors in PyTorch: torch.nn.Embedding() is the PyTorch module that maps words to word vectors. Generally, if we use it directly as follows: self.embedding = torch.nn.Embedding(num_embeddings=vocab_size, embedding_dim=embeding_dim) then num_embeddings=vocab_size gives the size of the vocabulary and embedding_dim=embeding_dim gives the dimension of the word vectors. In this […]
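
    For pre-trained vectors specifically, one sketch is nn.Embedding.from_pretrained, which copies an existing weight matrix (the random matrix below stands in for a real word2vec export):

        import torch
        import torch.nn as nn

        vocab_size, embeding_dim = 1000, 300
        pretrained = torch.randn(vocab_size, embeding_dim)  # placeholder for word2vec weights

        # freeze=True keeps the copied vectors fixed during training
        embedding = nn.Embedding.from_pretrained(pretrained, freeze=True)

        ids = torch.tensor([1, 5, 42])
        print(embedding(ids).shape)  # torch.Size([3, 300])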

  • Pytorch implements increasing and decreasing channels on the input of the pre training model

    Time:2021-8-8

    How to freely modify the number of input-layer channels of an ImageNet pre-trained model so that it fits your own task. # Add a channel w = layers[0].weight layers[0] = nn.Conv2d(4, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) layers[0].weight = torch.nn.Parameter(torch.cat((w, w[:, :1, :, :]), dim=1)) # Mode 2 […]
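
    A sketch of the same trick applied to a concrete torchvision model (ResNet-50 is an assumption; the excerpt's layers[0] indexing depends on how the model's layers are listed):

        import torch
        import torch.nn as nn
        from torchvision import models

        model = models.resnet50(pretrained=True)
        w = model.conv1.weight  # pretrained stem weights, shape (64, 3, 7, 7)

        # swap in a 4-channel stem and reuse the old filters, duplicating
        # the first input channel's slice for the new fourth channel
        model.conv1 = nn.Conv2d(4, 64, kernel_size=(7, 7), stride=(2, 2),
                                padding=(3, 3), bias=False)
        model.conv1.weight = nn.Parameter(torch.cat((w, w[:, :1, :, :]), dim=1))

        print(model(torch.randn(1, 4, 224, 224)).shape)  # torch.Size([1, 1000])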

  • Detailed explanation of the difference between tensor.detach() and tensor.data in pytorch

    Time:2021-8-3

    In PyTorch 0.4, .data is still retained, but .detach() is recommended. The difference: .data returns a tensor that shares the same data as x but is not recorded in x's computation history and has requires_grad=False, which is sometimes unsafe because changes made through x.data cannot be traced and differentiated by autograd. .detach() returns […]
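
    A short sketch of why .data is the unsafe variant (the sigmoid example is an assumption):

        import torch

        x = torch.tensor([1.0, 2.0], requires_grad=True)
        y = x.sigmoid()
        y.detach().zero_()       # shares storage, but bumps the version counter
        # y.sum().backward()     # would raise: autograd detects the in-place edit

        x2 = torch.tensor([1.0, 2.0], requires_grad=True)
        y2 = x2.sigmoid()
        y2.data.zero_()          # same edit, invisible to autograd
        y2.sum().backward()      # runs silently and yields wrong grads for x2
        print(x2.grad)           # zeros instead of the true sigmoid gradient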

  • Custom backpropagation in pytorch, derivation instance

    Time:2021-8-2

    Customizing the backward() function in PyTorch. In image processing we sometimes apply algorithms of our own, most of which are based on the numpy or SciPy packages. So how do we add the gradient of such a custom algorithm to PyTorch's computation graph, and use loss.backward() […]
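
    One common pattern (a sketch; NumpySquare is a hypothetical example) is to wrap the numpy routine in a torch.autograd.Function and supply the gradient by hand:

        import numpy as np
        import torch

        class NumpySquare(torch.autograd.Function):
            @staticmethod
            def forward(ctx, x):
                ctx.save_for_backward(x)
                out = np.square(x.detach().numpy())  # the "external" numpy step
                return torch.from_numpy(out)

            @staticmethod
            def backward(ctx, grad_output):
                (x,) = ctx.saved_tensors
                return grad_output * 2 * x  # hand-written gradient of x**2

        x = torch.tensor([1.0, 2.0], requires_grad=True)
        loss = NumpySquare.apply(x).sum()
        loss.backward()
        print(x.grad)  # tensor([2., 4.])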

  • Pytorch dynamic network and weight sharing example

    Time:2021-7-31

    PyTorch dynamic networks + weight sharing. PyTorch is famous for its dynamic graphs. Below is an example implementing a dynamic network with weight-sharing: # -*- coding: utf-8 -*- import random import torch class DynamicNet(torch.nn.Module): def __init__(self, D_in, H, D_out): """ Constructs the several linear layers used during forward propagation […]
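
    A sketch along the lines of the classic PyTorch tutorial version of this network (the layer sizes are assumptions):

        import random
        import torch

        class DynamicNet(torch.nn.Module):
            def __init__(self, D_in, H, D_out):
                super().__init__()
                self.input_linear = torch.nn.Linear(D_in, H)
                self.middle_linear = torch.nn.Linear(H, H)
                self.output_linear = torch.nn.Linear(H, D_out)

            def forward(self, x):
                h = self.input_linear(x).clamp(min=0)
                # dynamic graph: depth is re-decided on every forward pass,
                # and the same middle_linear weights are shared across steps
                for _ in range(random.randint(0, 3)):
                    h = self.middle_linear(h).clamp(min=0)
                return self.output_linear(h)

        net = DynamicNet(64, 100, 10)
        print(net(torch.randn(8, 64)).shape)  # torch.Size([8, 10])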

  • Pytorch learning: examples of dynamic and static graphs

    Time:2021-7-30

    Dynamic graphs and static graphs. Neural network frameworks currently divide into static-graph frameworks and dynamic-graph frameworks. The biggest difference between PyTorch and frameworks such as TensorFlow and Caffe is the form of their computational graphs. TensorFlow uses a static graph, which means we first define a computation graph and then […]
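
    A minimal illustration of the define-by-run behaviour described above (the expressions are arbitrary): in PyTorch the graph is built while the code runs, so ordinary Python control flow can depend on the data itself.

        import torch

        x = torch.randn(3, requires_grad=True)

        # the branch taken becomes part of this run's graph
        if x.sum() > 0:
            y = (x * 2).sum()
        else:
            y = (x ** 2).sum()

        y.backward()
        print(x.grad)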

  • Implementation of adding BN to pytorch

    Time:2021-7-29

    Adding a BN layer in PyTorch. Batch normalization: model training is not easy, especially for some very complex models that do not converge well. Adding some preprocessing to the data and using batch normalization gives very good convergence, which is also an important reason why convolutional networks can be trained […]
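
    A minimal sketch of where the BN layer sits in a conv block (the sizes are assumptions):

        import torch
        import torch.nn as nn

        block = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False),  # BN makes the conv bias redundant
            nn.BatchNorm2d(16),  # normalizes each of the 16 channels over the batch
            nn.ReLU(inplace=True),
        )

        print(block(torch.randn(8, 3, 32, 32)).shape)  # torch.Size([8, 16, 32, 32])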

  • Implementation case of Inception_V3 in pytorch

    Time:2021-7-28

    As follows: from __future__ import print_function from __future__ import division import torch import torch.nn as nn import torch.optim as optim import numpy as np import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt import time import os import copy import argparse print("PyTorch Version: ", torch.__version__) print("Torchvision Version: ", torchvision.__version__) # Top level data directory. […]
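
    The post is truncated, but a hedged sketch of loading Inception_V3 for fine-tuning (num_classes is an assumption) looks like this; note the 299x299 input size and the auxiliary classifier head:

        import torch
        import torch.nn as nn
        from torchvision import models

        model = models.inception_v3(pretrained=True)
        num_classes = 10  # assumed fine-tuning task

        # replace both the auxiliary and the main classifier heads
        model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)
        model.fc = nn.Linear(model.fc.in_features, num_classes)

        model.eval()
        with torch.no_grad():
            out = model(torch.randn(1, 3, 299, 299))
        print(out.shape)  # torch.Size([1, 10])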

  • Detailed explanation of the use of imagefolder in pytorch

    Time:2021-7-27

    ImageFolder in PyTorch. Torchvision implements common datasets in advance, including the previously used CIFAR-10 as well as ImageNet, COCO, MNIST and LSUN, which can be called through e.g. torchvision.datasets.CIFAR10. Here is a frequently used dataset class, ImageFolder. ImageFolder assumes that all files are saved in folders, with images of the same category stored in […]
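
    A minimal sketch (the directory layout is an assumption: one sub-folder per class, e.g. data/train/cat/ and data/train/dog/):

        from torchvision import datasets, transforms

        dataset = datasets.ImageFolder(
            root="./data/train",
            transform=transforms.Compose([
                transforms.Resize((224, 224)),
                transforms.ToTensor(),
            ]),
        )

        print(dataset.classes)       # class names, taken from the folder names
        print(dataset.class_to_idx)  # class name -> label index
        img, label = dataset[0]      # a transformed image tensor and its label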

  • Pytorch: custom network layer instance

    Time:2021-7-8

    Custom autograd functions. For shallow networks we can write the forward and backward passes by hand, but as the network grows, especially in deep learning, the structure becomes complex and writing forward and backward propagation manually becomes more and more difficult. Fortunately, there is an automatic differentiation package in PyTorch […]
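
    A sketch of such a hand-written layer via torch.autograd.Function (the ReLU example is an assumption; it is the classic illustration of writing both passes yourself):

        import torch

        class MyReLU(torch.autograd.Function):
            @staticmethod
            def forward(ctx, x):
                ctx.save_for_backward(x)
                return x.clamp(min=0)

            @staticmethod
            def backward(ctx, grad_output):
                (x,) = ctx.saved_tensors
                grad_input = grad_output.clone()
                grad_input[x < 0] = 0  # zero gradient where the input was negative
                return grad_input

        x = torch.randn(4, requires_grad=True)
        MyReLU.apply(x).sum().backward()
        print(x.grad)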