- Blogger homepage: @璞玉牧之
- Column: 《PyTorch深度学习》
- About the author: class-of-2021 Big Data undergraduate, researching deep learning, writing continuously
1.Linear Regression with PyTorch
1.1 Prepare dataset
In PyTorch, computational graphs are built in a mini-batch fashion, so x and y are 3×1 tensors.
Model: y_hat = w * x + b (x and y_hat are both real scalars)
Mini-batch: we want to compute all three samples [(x1, y1), (x2, y2), (x3, y3)] at once.
NumPy-style broadcasting: when two arrays of different shapes are added, the smaller one is expanded to match the shape of the larger one.
y_hat1 = w * x1 + b, y_hat2 = w * x2 + b, y_hat3 = w * x3 + b. In vector form: y_hat = w * x + b, where x = [x1, x2, x3]^T and y_hat = [y_hat1, y_hat2, y_hat3]^T.
Note: because x and y_hat are 3×1 matrices, w and b are broadcast automatically into 3×1 matrices.
Computing the loss: loss_i = (y_hat_i - y_i)^2 = (w * x_i + b - y_i)^2.
In vector form, the mini-batch loss sums the element-wise squared differences: loss = Σ_i (y_hat_i - y_i)^2.
Code:
import torch
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])
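As a quick sanity check of the broadcasting described above (the values of w and b here are chosen only for illustration), a 1×1 w and b combine with a 3×1 x in a single expression:
import torch

x = torch.Tensor([[1.0], [2.0], [3.0]])  # 3×1
w = torch.Tensor([[1.5]])                # 1×1, broadcast to 3×1
b = torch.Tensor([[0.5]])                # 1×1, broadcast to 3×1
y_hat = w * x + b
print(y_hat)  # tensor([[2.0], [3.5], [5.0]])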
1.2 Design model using Class
The model's job is to compute y_hat.
Code:
class LinearModel(torch.nn.Module):  # define the model as a class inheriting from Module
    def __init__(self):  # constructor
        super(LinearModel, self).__init__()  # call the parent constructor
        self.linear = torch.nn.Linear(1, 1)  # calling the class with (1, 1) constructs an object that holds two Tensors: weight and bias
    def forward(self, x):  # forward pass
        y_pred = self.linear(x)  # the object is callable, so linear(x) applies the layer
        return y_pred
model = LinearModel()  # instantiate; then call model(x)
Reference docs: https://pytorch.org/docs/stable/generated/torch.nn.Linear.html#torch.nn.Linear
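Per those docs, Linear(in_features, out_features) stores a weight of shape (out_features, in_features) and a bias of shape (out_features,), and computes y = x W^T + b. A quick check of the shapes for our 1-in/1-out layer:
import torch

linear = torch.nn.Linear(1, 1)  # in_features=1, out_features=1
print(linear.weight.shape)      # torch.Size([1, 1])
print(linear.bias.shape)        # torch.Size([1])
x = torch.Tensor([[1.0], [2.0], [3.0]])
print(linear(x).shape)          # torch.Size([3, 1])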
- __call__()
When you don't know how many arguments will be passed in:
def func(a, b, c, x, y):
    pass
func(1, 2, 3, x=4, y=5)
Replace the positional parameters with *args and printing them yields a tuple; replace the keyword parameters with **kwargs and printing them yields a dict.
def func(*args, **kwargs):
    print(args)    # (1, 2, 3)
    print(kwargs)  # {'x': 4, 'y': 5}
func(1, 2, 3, x=4, y=5)
Example
class Foobar:
    def __init__(self):
        pass
    def __call__(self, *args, **kwargs):
        print('Hello' + str(args[0]))  # Hello1
foobar = Foobar()
foobar(1, 2, 3)
1.3 Construct loss and optimizer
Use PyTorch's built-in APIs.
Code:
criterion = torch.nn.MSELoss(reduction='sum')  # MSELoss also inherits from nn.Module; criterion takes y_hat and y (reduction='sum' replaces the deprecated size_average=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # instantiate the SGD class from torch.optim; model.parameters() walks all members of model and collects every trainable weight into the parameter set. lr is the learning rate
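To see exactly which tensors parameters() hands to the optimizer, you can list them by name (expected output shown as comments):
for name, param in model.named_parameters():
    print(name, param.shape)
# linear.weight torch.Size([1, 1])
# linear.bias torch.Size([1])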
1.4 Training cycle
Three steps: forward (compute the loss), backward (compute the gradients), and update (adjust the weights with gradient descent).
Code:
for epoch in range(100):
    y_pred = model(x_data)  # step 1: compute y_hat
    loss = criterion(y_pred, y_data)  # step 2: compute the loss
    print(epoch, loss)  # loss is a scalar; printing it calls __str__() automatically and does not build a computational graph
    optimizer.zero_grad()  # step 3: zero all gradients
    loss.backward()  # step 4: backward pass
    optimizer.step()  # step 5: update the weights
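For intuition, plain SGD's step() performs w ← w - lr * ∂loss/∂w for every parameter. A hand-rolled sketch of what zero_grad() and step() amount to here (assuming no momentum or weight decay):
lr = 0.01
with torch.no_grad():  # parameter updates must not be recorded in the graph
    for param in model.parameters():
        if param.grad is not None:
            param -= lr * param.grad  # gradient-descent update
            param.grad.zero_()        # reset gradients for the next iteration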
1.5 Linear Regression - Test Model
# Output weight and bias
print('w = ', model.linear.weight.item())  # weight is a matrix, so call item() to print it as a scalar
print('b = ', model.linear.bias.item())
# Test Model
x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)
Training results: over the 100 epochs the loss falls steadily, and w and b converge toward 2.0 and 0.0, so y_pred for x = 4 approaches 8.0 (figure omitted).
2.Try Different Optimizers in Linear Regression
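Each optimizer below can be tried by changing the single optimizer = ... line; the complete code in Section 3 lists all eight as one-line alternatives.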
2.1 Adagrad
2.2 Adam
2.3 Adamax
2.4 ASGD
2.5 LBFGS
Running LBFGS with the same training loop raises:
TypeError: step() missing 1 required positional argument: 'closure'
LBFGS re-evaluates the model multiple times per parameter update, so its step() requires a closure that recomputes the loss. Corrected version:
import torch
import matplotlib.pyplot as plt
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)
    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred
model = LinearModel()
criterion = torch.nn.MSELoss(reduction='sum')
# 5.LBFGS
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.01)
epoch_list = []
loss_list = []
# Training cycle (forward, backward, update)
for epoch in range(1000):
    def closure():
        optimizer.zero_grad()
        y_pred = model(x_data)
        loss = criterion(y_pred, y_data)
        print(epoch, loss.item())
        epoch_list.append(epoch)
        loss_list.append(loss.item())
        loss.backward()
        return loss
    optimizer.step(closure)
print('w=', model.linear.weight.item())
print('b=', model.linear.bias.item())
x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred =', y_test.data)
plt.plot(epoch_list, loss_list)  # x and y values
plt.xlabel('Epoch')  # x-axis label
plt.ylabel('Loss')  # y-axis label
plt.title('LBFGS')  # plot title
plt.show()
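Note that LBFGS may call the closure several times per optimizer.step() (up to its max_iter setting, 20 by default), so epoch_list and loss_list can gain multiple entries per epoch and the plotted curve is denser than one point per epoch.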
2.6 RMSprop
2.7 Rprop
2.8 SGD
3.Summary - Complete Code
import torch
import matplotlib.pyplot as plt
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])
class LinearModel(torch.nn.Module):  # define the model as a class inheriting from Module
    def __init__(self):  # constructor
        super(LinearModel, self).__init__()  # call the parent constructor
        self.linear = torch.nn.Linear(1, 1)  # calling the class with (1, 1) constructs an object holding two Tensors: weight and bias
    def forward(self, x):  # forward pass
        y_pred = self.linear(x)  # the object is callable, so linear(x) applies the layer
        return y_pred
model = LinearModel()  # instantiate; then call model(x)
criterion = torch.nn.MSELoss(reduction='sum')  # MSELoss also inherits from nn.Module; criterion takes y_hat and y
# 1. Adagrad
# optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
# 2. Adam
# optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# 3. Adamax
# optimizer = torch.optim.Adamax(model.parameters(), lr=0.01)
# 4. ASGD
# optimizer = torch.optim.ASGD(model.parameters(), lr=0.01)
# 5. LBFGS
# optimizer = torch.optim.LBFGS(model.parameters(), lr=1)
# 6. RMSprop
# optimizer = torch.optim.RMSprop(model.parameters(), lr=0.01)
# 7. Rprop
# optimizer = torch.optim.Rprop(model.parameters(), lr=0.01)
# 8. SGD
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # instantiate SGD from torch.optim; model.parameters() collects every trainable weight in model; lr is the learning rate
epoch_list = []
loss_list = []
for epoch in range(100):
    y_pred = model(x_data)  # step 1: compute y_hat
    loss = criterion(y_pred, y_data)  # step 2: compute the loss
    print(epoch, loss)  # loss is a scalar; printing calls __str__() and builds no computational graph
    epoch_list.append(epoch)
    loss_list.append(loss.item())
    optimizer.zero_grad()  # step 3: zero all gradients
    loss.backward()  # step 4: backward pass
    optimizer.step()  # step 5: update the weights
# Output weight and bias
print('w = ', model.linear.weight.item())  # weight is a matrix, so call item() to print it as a scalar
print('b = ', model.linear.bias.item())
# Test Model
x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)
# Plot
plt.plot(epoch_list, loss_list)  # x and y values
plt.xlabel('Epoch')  # x-axis label
plt.ylabel('Loss')  # y-axis label
plt.title('SGD')  # plot title
plt.show()  # display
4.Read more examples from the official tutorial
https://pytorch.org/tutorials/beginner/pytorch_with_examples.html
Reference: 《PyTorch深度学习实践》 (PyTorch Deep Learning Practice)
I'm 璞玉牧之, and I'll keep putting out quality posts; I hope we can learn and grow together! Original writing isn't easy, so if this post helped you, a like, a bookmark, or a comment would be much appreciated. See you next time!