Understanding the dropout strategy in deep learning

Date: 2022-03-03 00:22:19

Now that I finally have some spare time, I want to write up how dropout is added in deep learning to reduce overfitting (which shows up as poor performance at test time).

First, a quick look at how dropout works:

There are already plenty of theoretical explanations online, so I won't repeat them here. In short: during training, each hidden unit is randomly dropped (set to zero) with some probability, so no unit can depend on a fixed set of other units being present; at test time all units are kept and the activations are scaled to compensate.

Here I want to focus on how to actually implement this technique.
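To make the idea concrete, here is a toy MATLAB sketch of one forward pass through a hidden layer with dropout. This is my own illustration, not toolbox code; the sizes and variable names are made up:

rand('state', 0)            % reproducible random mask
x = rand(1, 784);           % one input sample, e.g. a flattened 28x28 image
W = 0.1 * randn(784, 100);  % input-to-hidden weights
p = 0.5;                    % dropout fraction: probability of dropping a unit

h = 1 ./ (1 + exp(-x * W)); % sigmoid hidden activations, 1x100
mask = rand(size(h)) > p;   % each unit survives with probability 1-p
h = h .* mask;              % dropped units contribute nothing downstream

Because a fresh random mask is drawn for every training sample (or mini-batch), no hidden unit can rely on a fixed set of partner units, which is what discourages co-adaptation and reduces overfitting.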

This write-up mainly follows another blog post: http://www.cnblogs.com/dupuleng/articles/4340293.html

It explains how to implement dropout with a deep learning toolbox for MATLAB.

First, get the toolbox. DeepLearnToolbox is a very useful MATLAB deep learning toolbox; it can be downloaded from: https://github.com/rasmusbergpalm/DeepLearnToolbox

To use it, the toolbox must first be added to MATLAB's search path:

1. Copy the package into MATLAB's toolbox directory; on my machine the path is D:\program Files\matlab\toolbox\

2. In the MATLAB command window, enter:

cd D:\program Files\matlab\toolbox\deepLearnToolbox-master\
addpath(genpath('D:\program Files\matlab\toolbox\deepLearnToolbox-master\'))
savepath % save the path so it doesn't have to be re-added every session

3. To verify the path was added successfully, enter at the command line:

which saesetup

If it succeeded, MATLAB prints the location of saesetup.m: D:\program Files\matlab\toolbox\deepLearnToolbox-master\SAE\saesetup.m

4. Use DeepLearnToolbox for a simple demo: train an autoencoder-initialized network with and without dropout and compare the results.

load mnist_uint8;
train_x = double(train_x) / 255; % scale uint8 pixels to [0,1]
test_x  = double(test_x)  / 255;
train_y = double(train_y);
test_y  = double(test_y);

%% Experiment 1: without dropout
% (hyperparameter values follow the toolbox's example script tests/test_example_SAE.m)
rand('state', 0)
sae = saesetup([784 100]); % autoencoder: 784 inputs, 100 hidden units
sae.ae{1}.activation_function = 'sigm';
sae.ae{1}.learningRate = 1;
opts.numepochs = 1;
opts.batchsize = 100;
sae = saetrain(sae, train_x, opts);
visualize(sae.ae{1}.W{1}(:, 2:end)'); % skip the bias column when visualizing

nn = nnsetup([784 100 10]); % input-hidden-output network; nnsetup initializes
                            % the weights, learning rate, momentum, activation
                            % function, weight penalty, dropout, etc.
nn.W{1} = sae.ae{1}.W{1};   % initialize the first layer with the pretrained weights
opts.numepochs = 1;         % number of full sweeps through the data
opts.batchsize = 100;       % take a mean gradient step over this many samples
[nn, ~] = nntrain(nn, train_x, train_y, opts);
[er, ~] = nntest(nn, test_x, test_y);
str = sprintf('testing error rate is: %f', er);
fprintf(str);

%% Experiment 2: with dropout
rand('state', 0)
sae = saesetup([784 100]);
sae.ae{1}.activation_function = 'sigm';
sae.ae{1}.learningRate = 1;
opts.numepochs = 1;
opts.batchsize = 100;
sae = saetrain(sae, train_x, opts);
figure;
visualize(sae.ae{1}.W{1}(:, 2:end)');

nn = nnsetup([784 100 10]);
nn.dropoutFraction = 0.5; % drop half of the hidden units during training
nn.W{1} = sae.ae{1}.W{1};
opts.numepochs = 1;       % number of full sweeps through the data
opts.batchsize = 100;     % take a mean gradient step over this many samples
[nn, L] = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
str = sprintf('testing error rate is: %f', er);
fprintf(str);
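A detail worth spelling out: setting nn.dropoutFraction = 0.5 changes behavior at both training and test time. During training the toolbox multiplies each hidden layer's activations by a random binary mask; at test time it keeps every unit but scales the activations by (1 - dropoutFraction), so the next layer sees the same expected input as during training. A simplified sketch of that logic (paraphrased from memory, not the toolbox's exact source):

function a = dropout_forward(a, dropoutFraction, testing)
% Apply dropout to a layer's activations a (one row per sample).
if dropoutFraction > 0
    if testing
        % test time: keep all units, rescale to match the expected
        % activation magnitude seen during training
        a = a .* (1 - dropoutFraction);
    else
        % training time: zero each unit with probability dropoutFraction
        mask = rand(size(a)) > dropoutFraction;
        a = a .* mask;
    end
end
end

This is why dropout only needs the single dropoutFraction parameter in the demo above: the masking and the test-time rescaling are both derived from it inside the feed-forward pass.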