
Approximating the sin Function with PyTorch

Contents
  • 1. Introduction
  • 2. First Method
  • 3. Second Method
  • 4. Summary

1. Introduction

This article shows two ways to approximate the sin function. The approximation is done with machine learning: we use Python's torch module to learn the coefficients of a polynomial that fits sin.
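
Concretely, both methods below fit a cubic polynomial to sin by gradient descent on a squared-error loss. As a restatement of what the code computes (not an addition to it):

\hat{y}(x) = a + bx + cx^2 + dx^3, \qquad L(a,b,c,d) = \sum_{i}\bigl(\hat{y}(x_i) - \sin x_i\bigr)^2

where the x_i are 2000 evenly spaced points in [-\pi, \pi].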

2. First Method

# This example uses torch to approximate the sin function.
# A cubic polynomial is fitted to sin, which amounts to a simple machine-learning procedure.


import torch
import math

dtype = torch.float
# data type

device = torch.device("cpu")
# device type
# device = torch.device("cuda:0") # Uncomment this to run on GPU

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
# analogous to numpy's linspace

y = torch.sin(x)
# the target tensor

# Randomly initialize weights
# The weights are drawn from a standard Gaussian distribution.
# Each parameter is initialized randomly and then improved by learning.
a = torch.randn((), device=device, dtype=dtype)
# a

b = torch.randn((), device=device, dtype=dtype)
# b

c = torch.randn((), device=device, dtype=dtype)
# c

d = torch.randn((), device=device, dtype=dtype)
# d


learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    # y_pred is also a tensor.
    # The cubic model used to approximate sin.

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)
    # compute the loss

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()
    # The gradients of the loss with respect to each coefficient.

    # Update weights using gradient descent
    # Update the parameters on every iteration.
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d
    # one step of gradient descent

# Final result
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
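
The manual backprop step above follows directly from differentiating the loss with respect to each coefficient:

\frac{\partial L}{\partial a} = \sum_i 2(\hat{y}_i - y_i), \quad \frac{\partial L}{\partial b} = \sum_i 2(\hat{y}_i - y_i)\,x_i, \quad \frac{\partial L}{\partial c} = \sum_i 2(\hat{y}_i - y_i)\,x_i^2, \quad \frac{\partial L}{\partial d} = \sum_i 2(\hat{y}_i - y_i)\,x_i^3

which is exactly what grad_a, grad_b, grad_c and grad_d compute; each coefficient is then moved a small step (scaled by learning_rate) against its gradient.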


Output:

99 676.0404663085938

199 478.38140869140625

299 339.39117431640625

399 241.61537170410156

499 172.80801391601562

599 124.37007904052734

699 90.26084899902344

799 66.23435974121094

899 49.30537033081055

999 37.37403106689453

1099 28.96288299560547

1199 23.031932830810547

1299 18.848905563354492

1399 15.898048400878906

1499 13.81600570678711

1599 12.34669017791748

1699 11.309612274169922

1799 10.57749080657959

1899 10.060576438903809

1999 9.695555686950684

Result: y = -0.03098311647772789 + 0.852223813533783 x + 0.005345103796571493 x^2 + -0.09268788248300552 x^3      
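
Not part of the original script, but a quick way to see how good the fit is: a minimal sanity check, assuming it is run right after the loop above so that a, b, c and d still hold the learned scalar tensors, comparing the fitted polynomial with math.sin at a few points.

import math

# Assumes a, b, c, d are the trained scalar tensors from the script above.
for v in [0.0, 0.5, 1.0, math.pi / 2]:
    fitted = a.item() + b.item() * v + c.item() * v ** 2 + d.item() * v ** 3
    print(f"x = {v:.4f}   sin(x) = {math.sin(v):+.4f}   fit = {fitted:+.4f}")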

3. Second Method

import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a Tensor of shape (1,)
    # loss.item() gets the scalar value held in the loss.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')

Output:

99 1702.320556640625

199 1140.3609619140625

299 765.3402709960938

399 514.934326171875

499 347.6383972167969

599 235.80038452148438

699 160.98876953125

799 110.91152954101562

899 77.36819458007812

999 54.883243560791016

1099 39.79965591430664

1199 29.673206329345703

1299 22.869291305541992

1399 18.293842315673828

1499 15.214327812194824

1599 13.1397705078125

1699 11.740955352783203

1799 10.796865463256836

1899 10.159022331237793

1999 9.727652549743652

Result: y = 0.019909318536520004 + 0.8338049650192261 x + -0.0034346890170127153 x^2 + -0.09006795287132263 x^3
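
The second method still updates the weights by hand inside torch.no_grad(). This is not in the original article, but the same loop can delegate the update and the gradient zeroing to a built-in optimizer; a minimal sketch using torch.optim.SGD with the same model and data:

import torch
import math

dtype = torch.float
device = torch.device("cpu")

x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

# The optimizer takes over the manual update and gradient zeroing.
optimizer = torch.optim.SGD([a, b, c, d], lr=1e-6)

for t in range(2000):
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    optimizer.zero_grad()  # clear old gradients
    loss.backward()        # autograd fills a.grad ... d.grad
    optimizer.step()       # in-place SGD update: p -= lr * p.grad

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')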

4. Summary

Both methods only fit the polynomial up to the third power, so the approximation is reasonable only when |x| is not too large. In addition, because the coefficients are initialized randomly, the result may differ slightly from run to run.
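
For comparison (not from the original article), the third-order Taylor expansion of sin around 0 is

\sin x \approx x - \frac{x^3}{6} \approx x - 0.1667\,x^3

The learned coefficients (b ≈ 0.85, d ≈ −0.09) differ from it because gradient descent minimizes the squared error over the whole interval [-\pi, \pi] rather than only near 0, and the near-zero values of a and c reflect the fact that sin is an odd function.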

This concludes this article on using PyTorch to approximate the sin function.
