Implementing a Convolutional Neural Network (CNN) in PyTorch

Date: 2022-01-14 04:17:19

1. Convolutional Neural Networks

The convolutional neural network (Convolutional Neural Network, CNN) was originally designed for problems such as image recognition, but CNNs are now applied well beyond images and video, for example to time-series signals such as audio and text data. The original motivation for proposing the CNN as a deep learning architecture was to reduce the need for preprocessing image data and to avoid complex hand-crafted feature engineering. In a CNN, the first convolutional layer takes raw pixel values directly as input, and each convolutional layer (a set of filters) extracts the most useful features from the data. This approach first captures the most basic features of an image and then combines and abstracts them into higher-level features, which is why, in theory, a CNN has a degree of invariance to scaling, translation, and rotation of the image.

The key ideas of a CNN are local connections, weight sharing, and down-sampling in the pooling layers. Local connections and weight sharing reduce the number of parameters, which greatly lowers training complexity and mitigates overfitting. Weight sharing also gives the network tolerance to translation, while down-sampling in the pooling layers further reduces the number of output parameters and gives the model tolerance to mild deformations, improving its generalization. A convolution operation can be understood as using a small set of shared parameters to extract the same kind of feature at many positions in the image. A concrete comparison of parameter counts is sketched below.
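To make the effect of weight sharing concrete, the following minimal sketch compares the parameter count of a small convolutional layer with that of a fully connected layer producing the same number of outputs from a 28x28 image (the layer sizes here are chosen purely for illustration):

import torch.nn as nn

# A 5x5 convolution sliding over a 1x28x28 image: the same 16 filters are reused
# at every position, so the layer has only 16*1*5*5 + 16 = 416 parameters.
conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5)

# A fully connected layer producing the same 16x24x24 outputs needs a separate
# weight for every input-output pair: 784*9216 + 9216 = 7,234,560 parameters.
fc = nn.Linear(28 * 28, 16 * 24 * 24)

print(sum(p.numel() for p in conv.parameters())) # 416
print(sum(p.numel() for p in fc.parameters()))   # 7234560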

2. Code Implementation

import torch
import torch.nn as nn
import torch.utils.data as Data
import torchvision
import matplotlib.pyplot as plt
 
torch.manual_seed(1)
 
EPOCH = 1
BATCH_SIZE = 50
LR = 0.001
DOWNLOAD_MNIST = True
 
# Fetch the training dataset
training_data = torchvision.datasets.MNIST(
       root='./mnist/', # storage path for the dataset
       train=True, # True selects the training set, False the test set
       transform=torchvision.transforms.ToTensor(), # scale raw pixel values to the (0, 1) range
       download=DOWNLOAD_MNIST,
       )
 
# Print the sizes of the MNIST training images and labels
print(training_data.data.size())
print(training_data.targets.size())
# torch.Size([60000, 28, 28])
# torch.Size([60000])
 
plt.imshow(training_data.data[0].numpy(), cmap='gray')
plt.title('%i' % training_data.targets[0])
plt.show()
 
# A dataset obtained via torchvision.datasets can be passed directly to a DataLoader
train_loader = Data.DataLoader(dataset=training_data, batch_size=BATCH_SIZE,
                shuffle=True)
 
# Fetch the test dataset
test_data = torchvision.datasets.MNIST(root='./mnist/', train=False)
# Take the first 2000 test samples
test_x = torch.unsqueeze(test_data.data, dim=1).type(torch.FloatTensor)[:2000]/255.
# (2000, 28, 28) to (2000, 1, 28, 28), values in range (0, 1)
test_y = test_data.targets[:2000]
 
class CNN(nn.Module):
  def __init__(self):
    super(CNN, self).__init__()
    self.conv1 = nn.Sequential( # input (1,28,28)
           nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5,
                stride=1, padding=2), # (16,28,28)
    # to keep the output size of Conv2d equal to the input size, use padding=(kernel_size-1)/2
           nn.ReLU(),
           nn.MaxPool2d(kernel_size=2) # (16,14,14)
           )
    self.conv2 = nn.Sequential( # (16,14,14)
           nn.Conv2d(16, 32, 5, 1, 2), # (32,14,14)
           nn.ReLU(),
           nn.MaxPool2d(2) # (32,7,7)
           )
    self.out = nn.Linear(32*7*7, 10)
 
  def forward(self, x):
    x = self.conv1(x)
    x = self.conv2(x)
    x = x.view(x.size(0), -1) # flatten (batch,32,7,7) into (batch,32*7*7)
    output = self.out(x)
    return output
 
cnn = CNN()
print(cnn)
'''
CNN (
 (conv1): Sequential (
  (0): Conv2d(1, 16, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (1): ReLU ()
  (2): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
 )
 (conv2): Sequential (
  (0): Conv2d(16, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (1): ReLU ()
  (2): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
 )
 (out): Linear (1568 -> 10)
)
'''
optimizer = torch.optim.Adam(cnn.parameters(), lr=LR)
loss_function = nn.CrossEntropyLoss()
 
for epoch in range(EPOCH):
  for step, (b_x, b_y) in enumerate(train_loader):
    output = cnn(b_x)
    loss = loss_function(output, b_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if step % 100 == 0:
      with torch.no_grad():
        test_output = cnn(test_x)
      pred_y = torch.max(test_output, 1)[1].squeeze()
      accuracy = (pred_y == test_y).sum().item() / test_y.size(0)
      print('Epoch:', epoch, '|Step:', step,
         '|train loss:%.4f' % loss.item(), '|test accuracy:%.4f' % accuracy)
 
test_output = cnn(test_x[:10])
pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
print(pred_y, 'prediction number')
print(test_y[:10].numpy(), 'real number')
'''
Epoch: 0 |Step: 0 |train loss:2.3145 |test accuracy:0.1040
Epoch: 0 |Step: 100 |train loss:0.5857 |test accuracy:0.8865
Epoch: 0 |Step: 200 |train loss:0.0600 |test accuracy:0.9380
Epoch: 0 |Step: 300 |train loss:0.0996 |test accuracy:0.9345
Epoch: 0 |Step: 400 |train loss:0.0381 |test accuracy:0.9645
Epoch: 0 |Step: 500 |train loss:0.0266 |test accuracy:0.9620
Epoch: 0 |Step: 600 |train loss:0.0973 |test accuracy:0.9685
Epoch: 0 |Step: 700 |train loss:0.0421 |test accuracy:0.9725
Epoch: 0 |Step: 800 |train loss:0.0654 |test accuracy:0.9710
Epoch: 0 |Step: 900 |train loss:0.1333 |test accuracy:0.9740
Epoch: 0 |Step: 1000 |train loss:0.0289 |test accuracy:0.9720
Epoch: 0 |Step: 1100 |train loss:0.0429 |test accuracy:0.9770
[7 2 1 0 4 1 4 9 5 9] prediction number
[7 2 1 0 4 1 4 9 5 9] real number
'''

3. Analysis

With torchvision.datasets you can quickly obtain data as a dataset object that can be passed directly to a DataLoader. The train argument controls whether the training set or the test set is fetched, and a transform can convert the data into the format needed for training at load time, as shown in the sketch below.
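For example, the test split could be fetched in exactly the same way with the transform applied at load time, instead of normalizing the raw tensors by hand as in the code above (a minimal sketch; test_loader is only an illustrative name):

import torchvision
import torch.utils.data as Data

test_data = torchvision.datasets.MNIST(
       root='./mnist/',
       train=False, # train=False selects the test split
       transform=torchvision.transforms.ToTensor(), # converts the images to (0,1) float tensors
       )
test_loader = Data.DataLoader(dataset=test_data, batch_size=50, shuffle=False)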

The convolutional network itself is built by defining a CNN class. The convolutional blocks conv1 and conv2 and the output layer out are defined as class attributes, and the connections between the layers are specified in forward. When defining the layers, pay attention to the sizes flowing between them; a quick way to verify them is sketched below.
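One way to double-check those sizes, in particular the 32*7*7 input dimension of the final Linear layer, is to push a dummy batch through each block and print the intermediate shapes (a quick sanity-check sketch assuming the CNN class defined above):

import torch

cnn = CNN()
dummy = torch.zeros(1, 1, 28, 28)        # one fake single-channel 28x28 image
h1 = cnn.conv1(dummy)
h2 = cnn.conv2(h1)
print(h1.shape)                          # torch.Size([1, 16, 14, 14])
print(h2.shape)                          # torch.Size([1, 32, 7, 7])
print(h2.view(h2.size(0), -1).shape)     # torch.Size([1, 1568]), i.e. 32*7*7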

The printed structure of the CNN is as follows:

CNN (
 (conv1): Sequential (
  (0): Conv2d(1, 16, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (1): ReLU ()
  (2): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
 )
 (conv2): Sequential (
  (0): Conv2d(16, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (1): ReLU ()
  (2): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
 )
 (out): Linear (1568 -> 10)
)

The experiment shows that with only EPOCH=1 of training, the test-set accuracy already reaches 97.7%.

That is all for this article. We hope it helps with your studies, and we hope you will continue to support 服务器之家.

Original article: https://blog.csdn.net/marsjhao/article/details/72179517