Using Python's nn.Module to build a simple fully connected layer instance


Python 3.7 is installed in a virtual environment, so you can experiment freely without fear of affecting other Python environments.

1. First define a class Linear that inherits from nn.Module

import torch as t
from torch import nn
from torch.autograd import Variable as V

class Linear(nn.Module):
  '''Variables compute gradients automatically, so we do not need to implement backward()'''
  def __init__(self, in_features, out_features):
    super(Linear, self).__init__()  # call the parent constructor first
    self.w = nn.Parameter(t.randn(in_features, out_features))  # note that nn.Parameter is a special Variable
    self.b = nn.Parameter(t.randn(out_features))  # bias b
  def forward(self, x):  # the parameter x is a Variable object
    x = x.mm(self.w)  # matrix multiplication: x @ w
    return x + self.b.expand_as(x)  # let the shape of b match the shape of the output x

2. Check it out

layer = Linear(4, 3)
input = V(t.randn(2, 4))  # wrap a tensor in a Variable as input
out = layer(input)

#The results are as follows:

tensor([[-2.1934,  2.5590,  4.0233],
        [ 1.1098, -3.8182,  0.1848]], grad_fn=<AddBackward0>)
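Since w and b are registered as nn.Parameters, autograd handles the backward pass for us. A minimal sketch (redefining the class from above so it runs standalone) verifying that gradients appear without any hand-written backward():

```python
import torch as t
from torch import nn

class Linear(nn.Module):
    def __init__(self, in_features, out_features):
        super(Linear, self).__init__()
        self.w = nn.Parameter(t.randn(in_features, out_features))
        self.b = nn.Parameter(t.randn(out_features))
    def forward(self, x):
        x = x.mm(self.w)
        return x + self.b.expand_as(x)

layer = Linear(4, 3)
out = layer(t.randn(2, 4))
out.sum().backward()         # autograd computes gradients for every parameter
print(layer.w.grad.shape)    # torch.Size([4, 3])
print(layer.b.grad.shape)    # torch.Size([3])
```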

Next, we use Linear to construct a multi-layer network:

class Perceptron(nn.Module):
  def __init__(self, in_features, hidden_features, out_features):
    super(Perceptron, self).__init__()
    self.layer1 = Linear(in_features, hidden_features)
    self.layer2 = Linear(hidden_features, out_features)
  def forward(self, x):
    x = self.layer1(x)
    x = t.sigmoid(x)  # use sigmoid() as the activation function
    return self.layer2(x)

3. Test it

perceptron = Perceptron(5, 3, 1)
for name, param in perceptron.named_parameters():
  print(name, param.size())

The output is as expected:

layer1.w torch.Size([5, 3])
layer1.b torch.Size([3])
layer2.w torch.Size([3, 1])
layer2.b torch.Size([1])
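Because every parameter is registered through nn.Parameter, perceptron.parameters() can be handed straight to an optimizer. The following is a minimal training-loop sketch, not from the original article; the data and the MSE objective are made up purely for illustration:

```python
import torch as t
from torch import nn

class Linear(nn.Module):
    def __init__(self, in_features, out_features):
        super(Linear, self).__init__()
        self.w = nn.Parameter(t.randn(in_features, out_features))
        self.b = nn.Parameter(t.randn(out_features))
    def forward(self, x):
        x = x.mm(self.w)
        return x + self.b.expand_as(x)

class Perceptron(nn.Module):
    def __init__(self, in_features, hidden_features, out_features):
        super(Perceptron, self).__init__()
        self.layer1 = Linear(in_features, hidden_features)
        self.layer2 = Linear(hidden_features, out_features)
    def forward(self, x):
        x = t.sigmoid(self.layer1(x))
        return self.layer2(x)

perceptron = Perceptron(5, 3, 1)
optimizer = t.optim.SGD(perceptron.parameters(), lr=0.01)  # all 4 parameters are found automatically
x, y = t.randn(8, 5), t.randn(8, 1)                        # toy data, for illustration only
for _ in range(10):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(perceptron(x), y)
    loss.backward()
    optimizer.step()
print(len(list(perceptron.parameters())))                  # 4
```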

This article on using Python's nn.Module to build a simple fully connected layer instance is all the content the editor has to share. I hope it gives you a useful reference, and I hope you will support developeppaer.
