# Approximating the sin function with PyTorch

Date: 2022-01-12


### 1. Introduction

This article demonstrates two ways to approximate the sin function. The approximation is learned: we use PyTorch to fit the coefficients of a cubic polynomial to sin, first with hand-written gradient descent and then with autograd.
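As a point of reference (not part of the original code), the Taylor expansion of sin around 0 gives the "ideal" cubic coefficients a fit might approach: sin(x) ≈ x − x³/6, i.e. a = 0, b = 1, c = 0, d = −1/6 ≈ −0.1667.

```python
import math

# Taylor expansion of sin around 0: sin(x) ≈ x - x**3 / 6.
# The "ideal" cubic coefficients are a=0, b=1, c=0, d=-1/6 ≈ -0.1667.
def taylor_sin3(x):
    return x - x ** 3 / 6

# Near zero the cubic is a very good approximation of sin
for v in (0.1, 0.5, 1.0):
    print(v, taylor_sin3(v), math.sin(v))
```

Note that the coefficients learned below differ from the Taylor coefficients, because the fit minimizes squared error over the whole interval [-π, π] rather than matching sin exactly at 0.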

### 2. The first method

``````python
# This example uses torch tensors to approximate the sin function.
# A cubic polynomial is fitted to sin, with gradient descent written by hand.

import torch
import math

dtype = torch.float           # data type of the tensors
device = torch.device("cpu")  # device to compute on
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
# Works like numpy's linspace

y = torch.sin(x)
# Tensor -> tensor

# Randomly initialize the weights from a standard Gaussian distribution;
# the parameters are then improved step by step through learning.
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y with the cubic polynomial
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss (sum of squared errors)
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights using gradient descent on every iteration
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

# Final result
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
``````

Output:

``````
99 676.0404663085938
199 478.38140869140625
299 339.39117431640625
399 241.61537170410156
499 172.80801391601562
599 124.37007904052734
699 90.26084899902344
799 66.23435974121094
899 49.30537033081055
999 37.37403106689453
1099 28.96288299560547
1199 23.031932830810547
1299 18.848905563354492
1399 15.898048400878906
1499 13.81600570678711
1599 12.34669017791748
1699 11.309612274169922
1799 10.57749080657959
1899 10.060576438903809
1999 9.695555686950684
Result: y = -0.03098311647772789 + 0.852223813533783 x + 0.005345103796571493 x^2 + -0.09268788248300552 x^3
``````
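As a quick sanity check, the learned cubic can be evaluated directly and compared with `math.sin`. The coefficients below are taken (rounded) from the run printed above; your own run will produce slightly different values because of the random initialization.

```python
import math

# Coefficients from the run above (rounded); each run differs slightly
a, b, c, d = -0.0310, 0.8522, 0.0053, -0.0927

def poly(x):
    return a + b * x + c * x ** 2 + d * x ** 3

# Compare the fitted polynomial with the true sin on a few points
for v in (-1.0, 0.0, 1.0):
    print(f"x={v}: poly={poly(v):.4f}, sin={math.sin(v):.4f}")
```

The remaining error matches the final loss of roughly 9.7 summed over 2000 points, i.e. an RMS error of about 0.07 per point.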

### 3. The second method

``````python
import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a scalar Tensor; loss.item() gets the value held in it.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # update in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
``````

Output:

``````
99 1702.320556640625
199 1140.3609619140625
299 765.3402709960938
399 514.934326171875
499 347.6383972167969
599 235.80038452148438
699 160.98876953125
799 110.91152954101562
899 77.36819458007812
999 54.883243560791016
1099 39.79965591430664
1199 29.673206329345703
1299 22.869291305541992
1399 18.293842315673828
1499 15.214327812194824
1599 13.1397705078125
1699 11.740955352783203
1799 10.796865463256836
1899 10.159022331237793
1999 9.727652549743652
Result: y = 0.019909318536520004 + 0.8338049650192261 x + -0.0034346890170127153 x^2 + -0.09006795287132263 x^3
``````

### 4. Summary

Both methods fit only a third-degree polynomial, so the approximation is reasonable only when x is small (here, within [-π, π]). In addition, because the coefficients are initialized randomly, the results differ from run to run.
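If reproducible results are wanted, one option (not used in the scripts above) is to seed PyTorch's random number generator with `torch.manual_seed` before creating the weights, so every run starts from the same initial values:

```python
import torch

torch.manual_seed(0)   # fix the RNG so the initial weights are deterministic
a1 = torch.randn(())

torch.manual_seed(0)   # re-seeding reproduces exactly the same draw
a2 = torch.randn(())

print(a1.item(), a2.item())  # the two values are identical
```

With a fixed seed, both training scripts print the same loss curve and the same final coefficients on every run (on the same hardware and PyTorch version).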