Simulating the sin Function with PyTorch
Contents
- 1. Introduction
- 2. Method One
- 3. Method Two
- 4. Summary
1. Introduction
This article shows two ways to approximate the sin function. Both take a machine-learning approach: using Python's torch module, we fit the coefficients of a polynomial so that it approximates sin.
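In both methods the model is the same cubic polynomial, y_pred = a + b·x + c·x^2 + d·x^3. Its four coefficients are found by minimizing the squared-error loss sum((y_pred - y)^2) over 2000 samples of sin on [-π, π], and each coefficient w is updated by plain gradient descent, w ← w - learning_rate · ∂loss/∂w. The only difference between the two methods is how the gradients are obtained: the first computes them by hand, the second lets PyTorch's autograd compute them.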
2. Method One
# This example uses torch to approximate the sin function.
# A cubic polynomial is fitted to sin, mimicking a simple machine-learning training loop.
import torch
import math
dtype = torch.float
# the data type of the tensors
device = torch.device("cpu")
# the device to run on
# device = torch.device("cuda:0") # Uncomment this to run on GPU
# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
# similar to numpy's linspace
y = torch.sin(x)
# the target values (a tensor)
# Randomly initialize weights
# The initial weights are drawn from a standard normal distribution.
# Each parameter starts out random and is improved by the training loop below.
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)
learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    # y_pred is also a tensor.
    # A cubic polynomial serves as the model.
    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)
    # loss is the sum of squared errors
    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()
    # These are the analytical gradients of the loss with respect to each coefficient.
    # Update weights using gradient descent
    # Update the parameters on every iteration.
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d
# Print the final fitted polynomial
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
Output:
99 676.0404663085938
199 478.38140869140625
299 339.39117431640625
399 241.61537170410156
499 172.80801391601562
599 124.37007904052734
699 90.26084899902344
799 66.23435974121094
899 49.30537033081055
999 37.37403106689453
1099 28.96288299560547
1199 23.031932830810547
1299 18.848905563354492
1399 15.898048400878906
1499 13.81600570678711
1599 12.34669017791748
1699 11.309612274169922
1799 10.57749080657959
1899 10.060576438903809
1999 9.695555686950684
Result: y = -0.03098311647772789 + 0.852223813533783 x + 0.005345103796571493 x^2 + -0.09268788248300552 x^3
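To see how close the fitted cubic is to sin, the polynomial can be evaluated on the training grid and drawn next to the target curve. This is a minimal sketch, not part of the original script; it assumes matplotlib is installed and is run after the training loop above, while a, b, c, d still hold the fitted values.
import matplotlib.pyplot as plt
# Evaluate the fitted cubic on the same grid that was used for training.
y_fit = a + b * x + c * x ** 2 + d * x ** 3
# .cpu() is a no-op here, but keeps the code working if the GPU device is used instead.
plt.plot(x.cpu().numpy(), y.cpu().numpy(), label="sin(x)")
plt.plot(x.cpu().numpy(), y_fit.cpu().numpy(), label="fitted cubic")
plt.legend()
plt.show()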
3. Method Two
import torch
import math
dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU
# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)
# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)
learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    # Compute and print loss using operations on Tensors.
    # Now loss is a Tensor of shape (1,)
    # loss.item() gets the scalar value held in the loss.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())
    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()
    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad
        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
Output:
99 1702.320556640625
199 1140.3609619140625
299 765.3402709960938
399 514.934326171875
499 347.6383972167969
599 235.80038452148438
699 160.98876953125
799 110.91152954101562
899 77.36819458007812
999 54.883243560791016
1099 39.79965591430664
1199 29.673206329345703
1299 22.869291305541992
1399 18.293842315673828
1499 15.214327812194824
1599 13.1397705078125
1699 11.740955352783203
1799 10.796865463256836
1899 10.159022331237793
1999 9.727652549743652
Result: y = 0.019909318536520004 + 0.8338049650192261 x + -0.0034346890170127153 x^2 + -0.09006795287132263 x^3
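A useful way to connect the two methods is to check that the gradients autograd produces match the manual formulas from method one. The following is a minimal sketch, assumed to be run once after the training loop above (so a, b, c, d still have requires_grad=True and their .grad fields were reset to None at the end of the last iteration).
y_pred = a + b * x + c * x ** 2 + d * x ** 3
loss = (y_pred - y).pow(2).sum()
loss.backward()                              # fills a.grad ... d.grad via autograd
grad_y_pred = 2.0 * (y_pred.detach() - y)    # the manual gradient w.r.t. y_pred from method one
print(torch.allclose(a.grad, grad_y_pred.sum()))              # should print True
print(torch.allclose(b.grad, (grad_y_pred * x).sum()))        # should print True
print(torch.allclose(c.grad, (grad_y_pred * x ** 2).sum()))   # should print True
print(torch.allclose(d.grad, (grad_y_pred * x ** 3).sum()))   # should print True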
4. Summary
Both methods only fit a cubic polynomial, so the approximation is reasonable only when |x| is fairly small (the fit covers [-π, π]). Also, because the initial coefficients are random, the result will differ slightly from run to run.
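For reference, the Taylor expansion of sin around 0 is x - x^3/6 + ..., i.e. coefficients of roughly 0, 1, 0 and -0.167. The fitted values above (b ≈ 0.85, d ≈ -0.09) differ because gradient descent minimizes the squared error over the whole interval [-π, π] rather than matching sin near 0. A small sketch of how the error grows outside the training interval, assuming the fitted a, b, c, d from either method are still in scope:
x_test = torch.linspace(-2 * math.pi, 2 * math.pi, 4000, device=device, dtype=dtype)
with torch.no_grad():  # needed if the coefficients from method two (requires_grad=True) are used
    y_poly = a + b * x_test + c * x_test ** 2 + d * x_test ** 3
    err = (y_poly - torch.sin(x_test)).abs()
print(err[x_test.abs() <= math.pi].max().item())  # error inside the training interval (small)
print(err.max().item())                           # error on [-2π, 2π], typically far larger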