Machine Learning: A Small Hand-written Digits Recognition Project

Date: 2022-11-04 21:53:49

All the code for this project is on my GitHub; anyone interested is welcome to discuss and study it with me.

Repository path: Machine-Learning/machine-learning-ex3/


1. Introduction

  • Hand-written digits recognition, as the name suggests, means feeding an image of a hand-written digit into a trained machine, which quickly recognizes the digit in the image and prints it as output.

  • How it works:

    • Here is the pipeline behind hand-written digit recognition, as I understand it.

    • First, machine learning requires sufficient data. For this experiment, that means enough hand-written digit images to use as a training set for the model;

    • Second, we must be able to extract features from the digit images, which is a digital image processing problem. For this experiment I use a training set whose features are already extracted: each image has a resolution of 20x20, and its pixel values are stored as one row of the data file, so each training example has 400 features. My training set has 5000 rows, i.e., 5000 hand-written digit images.

    • Next comes the core of this experiment: training the model. I implemented hand-written digit recognition with two models: a regularized logistic regression model and a neural network model. Only for logistic regression did I implement the complete pipeline, including training; for the neural network I directly use pre-trained parameters. I will train the network itself in the next post.

    • The model can be trained with gradient descent or another optimization routine. When the cost function gets close to 0, we take the resulting parameters as the trained model.

    • After training, we compute the prediction accuracy: feed the image features from the training set into the hypothesis to get predictions, compare them with the true labels of those images, and compute the fraction predicted correctly.

    • Predicting a digit is a multi-outcome logistic regression problem, handled with the one-vs-all method: we define several binary logistic regression classifiers, each with only two outcomes, 1 and 0. Every digit 0-9 gets its own classifier, trained to output 1 for its digit. So each input image yields 10 classifier scores, and the digit whose classifier produces the highest score is the prediction, since only the matching classifier should score highly. (A small sketch of this follows the list.)

    • Once the accuracy reaches a sufficiently high value, the model can be used to recognize hand-written digits.

    • That is the overall pipeline of hand-written digit recognition.
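
To make the one-vs-all prediction step concrete, here is a minimal Octave/Matlab sketch; it assumes the data matrix X (5000x400), the label vector y, and a trained parameter matrix all_theta (10x401, one classifier per row) as in the exercise code later in this post:

g = @(z) 1 ./ (1 + exp(-z));              % sigmoid (logistic) function
X1 = [ones(size(X, 1), 1) X];             % prepend the bias column of ones
scores = g(X1 * all_theta');              % 5000x10 matrix: one score per classifier
[~, pred] = max(scores, [], 2);           % highest-scoring classifier = predicted label
accuracy = mean(double(pred == y)) * 100; % percent of examples predicted correctly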

  • For a two-outcome logistic regression problem, it suffices to train a single logistic regression classifier;

  • For a multi-outcome logistic regression problem, we need the one-vs-all method, i.e., training multiple logistic regression classifiers at once.

  • The neural network model, in turn, is very effective for non-linear hypotheses.

  • For that model, the following points are worth noting (see the sketch after this list):

  • Why a neural network can build complex non-linear hypotheses (the more layers, the more complex the functions it can represent):

    Adding all these intermediate layers in neural networks allows us to more elegantly produce interesting and more complex non-linear hypotheses.

  • The layers of a neural network: an input layer, one or more hidden layers, and an output layer;

  • The mapping between consecutive layers is again based on the sigmoid function, called the sigmoid activation function;

  • The parameter matrix theta between consecutive layers, in particular its m x n dimensions;

  • The "bias unit" of a neural network is conventionally set to 1;

  • How a neural network is presented: the general notation and the vectorized notation.
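
To tie these points together, here is a minimal forward-propagation sketch; it assumes the layer sizes of the network used later in this post (400 inputs, 25 hidden units, 10 outputs) and pre-trained weight matrices Theta1 (25x401) and Theta2 (10x26):

g = @(z) 1 ./ (1 + exp(-z));  % sigmoid activation function
% x is a single 400x1 input image; each weight matrix maps layer j
% (plus its bias unit) to layer j+1, hence its size is s_{j+1} x (s_j + 1).
a1 = [1; x];                  % input layer plus bias unit (401x1)
a2 = g(Theta1 * a1);          % hidden layer activations (25x1)
a2 = [1; a2];                 % add the bias unit again (26x1)
a3 = g(Theta2 * a2);          % output layer (10x1), one score per class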

2. Hand-written Digit Recognition with Logistic Regression

Main script:

%% Initialization
clear ; close all; clc

%% Setup the parameters you will use for this part of the exercise
input_layer_size = 400; % 20x20 Input Images of Digits
num_labels = 10; % 10 labels, from 1 to 10
% (note that we have mapped "0" to label 10)

%% =========== Part 1: Loading and Visualizing Data ============
% Load Training Data
fprintf('Loading and Visualizing Data ...\n')

% The data file stores a matrix X (5000x400) and a vector y. Each row of X
% holds one 20x20 image, so X contains the data of 5000 images.
load('ex3data1.mat'); % training data stored in arrays X, y
m = size(X, 1);

% Randomly select 100 data points to display
% randperm(m) returns a row vector with the numbers 1..m in random order.
% We take its first 100 entries, i.e., 100 rows of X: 100 images of 20x20.
rand_indices = randperm(m);
sel = X(rand_indices(1:100), :);

displayData(sel);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ============ Part 2a: Vectorize Logistic Regression ============
% Test case for lrCostFunction
fprintf('\nTesting lrCostFunction() with regularization');

theta_t = [-2; -1; 1; 2];
X_t = [ones(5,1) reshape(1:15,5,3)/10];
y_t = ([1;0;1;0;1] >= 0.5);
lambda_t = 3;
[J grad] = lrCostFunction(theta_t, X_t, y_t, lambda_t);

fprintf('\nCost: %f\n', J);
fprintf('Expected cost: 2.534819\n');
fprintf('Gradients:\n');
fprintf(' %f \n', grad);
fprintf('Expected gradients:\n');
fprintf(' 0.146561\n -0.548558\n 0.724722\n 1.398003\n');

fprintf('Program paused. Press enter to continue.\n');
pause;
%% ============ Part 2b: One-vs-All Training ============
fprintf('\nTraining One-vs-All Logistic Regression...\n')

lambda = 0.1;
[all_theta] = oneVsAll(X, y, num_labels, lambda);

fprintf('Program paused. Press enter to continue.\n');
pause;


%% ================ Part 3: Predict for One-Vs-All ================

pred = predictOneVsAll(all_theta, X);

fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);

Part 1: Render the feature vectors in the training set back into images (100 images selected at random).

displayData.m

function [h, display_array] = displayData(X, example_width)
%DISPLAYDATA Display 2D data in a nice grid
%   [h, display_array] = DISPLAYDATA(X, example_width) displays 2D data
%   stored in X in a nice grid. It returns the figure handle h and the
%   displayed array if requested.

% Set example_width automatically if not passed in
if ~exist('example_width', 'var') || isempty(example_width)
    example_width = round(sqrt(size(X, 2)));
end

% Gray Image
colormap(gray);

% Compute rows, cols
[m, n] = size(X);
example_height = (n / example_width);

% Compute number of items to display
display_rows = floor(sqrt(m));
display_cols = ceil(m / display_rows);

% Between images padding
pad = 1;

% Setup blank display
display_array = - ones(pad + display_rows * (example_height + pad), ...
                       pad + display_cols * (example_width + pad));

% Copy each example into a patch on the display array
curr_ex = 1;
for j = 1:display_rows
    for i = 1:display_cols
        if curr_ex > m
            break;
        end
        % Copy the patch

        % Get the max value of the patch
        max_val = max(abs(X(curr_ex, :)));
        display_array(pad + (j - 1) * (example_height + pad) + (1:example_height), ...
                      pad + (i - 1) * (example_width + pad) + (1:example_width)) = ...
            reshape(X(curr_ex, :), example_height, example_width) / max_val;
        curr_ex = curr_ex + 1;
    end
    if curr_ex > m
        break;
    end
end

% Display Image
h = imagesc(display_array, [-1 1]);

% Do not show axis
axis image off

drawnow;

end


Part 2a: Verify that the logistic regression cost function and gradient are computed correctly.
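
For reference, the regularized cost and gradient that lrCostFunction implements are (with $h_\theta$ the sigmoid hypothesis; note that $\theta_0$, i.e., theta(1) in Octave, is not regularized):

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left[-y^{(i)}\log h_\theta(x^{(i)}) - (1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

$$\frac{\partial J}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} + \frac{\lambda}{m}\theta_j \quad (j \geq 1;\ \text{the } j = 0 \text{ term has no } \tfrac{\lambda}{m}\theta_0)$$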

lrCostFunction.m

function [J, grad] = lrCostFunction(theta, X, y, lambda)
%LRCOSTFUNCTION Compute cost and gradient for logistic regression with
%regularization
%   J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using
%   theta as the parameter for regularized logistic regression and the
%   gradient of the cost w.r.t. to the parameters.

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));

% ====================== YOUR CODE HERE ======================

% Calculate the hypothesis h; we know it is the sigmoid function
h = sigmoid(X * theta);
% Calculate the cost function; theta(1) is excluded from regularization
J = 1/m * (-y' * log(h) - (1 - y)' * log(1 - h)) + lambda/(2*m) * (sum(theta.^2) - theta(1)^2);
% Calculate the gradient; regularization must start at theta(2),
% so zero out the first entry of the regularization term
temp = theta;
temp(1) = 0;
grad = 1/m * X' * (h - y) + lambda/m * temp;

end

Part 2b: One-vs-All Training

Model training: train 10 logistic regression classifiers at once, implemented with a for loop.

oneVsAll.m

function [all_theta] = oneVsAll(X, y, num_labels, lambda)
%ONEVSALL trains multiple logistic regression classifiers and returns all
%the classifiers in a matrix all_theta, where the i-th row of all_theta
%corresponds to the classifier for label i
% [all_theta] = ONEVSALL(X, y, num_labels, lambda) trains num_labels
% logistic regression classifiers and returns each of these classifiers
% in a matrix all_theta, where the i-th row of all_theta corresponds
% to the classifier for label i

% Some useful variables
m = size(X, 1);
n = size(X, 2);

% You need to return the following variables correctly
all_theta = zeros(num_labels, n + 1);

% Add ones to the X data matrix
X = [ones(m, 1) X];

% ====================== YOUR CODE HERE ======================

for c = 1:num_labels
    % Set Initial theta
    initial_theta = zeros(n + 1, 1);
    % Set options for the optimizer
    options = optimset('GradObj', 'on', 'MaxIter', 50);
    % Run fmincg to obtain the optimal theta
    % This function will return theta and the cost
    % fmincg is another optimization routine (supplied with the exercise);
    % it is more efficient here than plain gradient descent or fminunc
    [theta] = ...
        fmincg(@(t) (lrCostFunction(t, X, (y == c), lambda)), ...
               initial_theta, options);
    % theta is a column vector, so transpose it into a row of all_theta
    all_theta(c, :) = theta';
end

% =========================================================================


end
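
A note on the optimizer: fmincg is not a built-in function; it ships with the course materials. As a hypothetical alternative on a stock installation, the same call could be made with the built-in fminunc (available in Octave, or in MATLAB with the Optimization Toolbox; typically slower on these 401-parameter problems, which is why fmincg is used):

% Hypothetical drop-in replacement using the built-in fminunc
options = optimset('GradObj', 'on', 'MaxIter', 50);
[theta] = fminunc(@(t) lrCostFunction(t, X, (y == c), lambda), ...
                  initial_theta, options);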

Part 3: Predict for One-Vs-All

Make predictions with the parameters obtained from training.

predictOneVsAll.m

function p = predictOneVsAll(all_theta, X)
%PREDICT Predict the label for a trained one-vs-all classifier. The labels
%are in the range 1..K, where K = size(all_theta, 1).
% p = PREDICTONEVSALL(all_theta, X) will return a vector of predictions
% for each example in the matrix X. Note that X contains the examples in
% rows. all_theta is a matrix where the i-th row is a trained logistic
% regression theta vector for the i-th class. You should set p to a vector
% of values from 1..K (e.g., p = [1; 3; 1; 2] predicts classes 1, 3, 1, 2
% for 4 examples)

m = size(X, 1);
num_labels = size(all_theta, 1);

% You need to return the following variables correctly
p = zeros(size(X, 1), 1);

% Add ones to the X data matrix
X = [ones(m, 1) X];

% ====================== YOUR CODE HERE ======================
% max(A) returns the maximum of each column
% max(A, [], 1) also returns the maximum of each column
% max(A, [], 2) returns the maximum of each row
% Feed the training set directly into the hypothesis; the index of the
% highest-scoring classifier in each row is the predicted label
[~, p] = max(sigmoid(X * all_theta'), [], 2);

% =========================================================================


end

Output:

The output shows that the regularized logistic regression runs correctly, and we can also watch the per-classifier details of the one-vs-all training. After 50 iterations, the cost of every logistic regression classifier is close to 0; it is not yet negligible, but the prediction accuracy is already quite high. With more optimization iterations (MaxIter is set to 50 here), the cost could be driven lower still and the accuracy improved.

The training-set accuracy after training is 95.02%.

3. Hand-written Digit Recognition with a Neural Network

Main script:

%% Initialization
clear ; close all; clc

%% Setup the parameters you will use for this exercise
input_layer_size = 400; % 20x20 Input Images of Digits
hidden_layer_size = 25; % 25 hidden units
num_labels = 10; % 10 labels, from 1 to 10
% (note that we have mapped "0" to label 10)

%% =========== Part 1: Loading and Visualizing Data =============
% We start the exercise by first loading and visualizing the dataset.
% You will be working with a dataset that contains handwritten digits.
%

% Load Training Data
fprintf('Loading and Visualizing Data ...\n')

load('ex3data1.mat');
m = size(X, 1);

% Randomly select 100 data points to display
sel = randperm(size(X, 1));
sel = sel(1:100);

displayData(X(sel, :));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 2: Loading Parameters ================
% In this part of the exercise, we load some pre-initialized
% neural network parameters.

fprintf('\nLoading Saved Neural Network Parameters ...\n')

% Load the weights into variables Theta1 and Theta2
load('ex3weights.mat');

%% ================= Part 3: Implement Predict =================
% After training the neural network, we would like to use it to predict
% the labels. You will now implement the "predict" function to use the
% neural network to predict the labels of the training set. This lets
% you compute the training set accuracy.

pred = predict(Theta1, Theta2, X);

fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);

fprintf('Program paused. Press enter to continue.\n');
pause;

% To give you an idea of the network's output, you can also run
% through the examples one at a time to see what it is predicting.

% Randomly permute examples
rp = randperm(m);

for i = 1:m
    % Display
    fprintf('\nDisplaying Example Image\n');
    displayData(X(rp(i), :));

    pred = predict(Theta1, Theta2, X(rp(i), :));
    fprintf('\nNeural Network Prediction: %d (digit %d)\n', pred, mod(pred, 10));

    % Pause with quit option
    s = input('Paused - press enter to continue, q to exit:', 's');
    if s == 'q'
        break
    end
end

Part 1: Load and visualize the data, identical to Part 1 of Section 2.


Part 2: Load the weights, i.e., the neural network parameters theta.

Training the neural network is not covered here; instead, pre-trained network parameters are used directly. The goal is an introduction to neural networks; the training itself will be done in the next post. (A small sanity-check sketch for the loaded weights follows.)
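
After loading the weights, their dimensions should match the layer sizes declared at the top of the script; a minimal sketch, assuming the standard ex3weights.mat layout from the exercise:

load('ex3weights.mat'); % provides Theta1 and Theta2
size(Theta1)            % expected: 25 x 401, i.e., hidden_layer_size x (input_layer_size + 1)
size(Theta2)            % expected: 10 x 26,  i.e., num_labels x (hidden_layer_size + 1)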

Part 3: Predict hand-written digits with the neural network model.

predict.m

function p = predict(Theta1, Theta2, X)
%PREDICT Predict the label of an input given a trained neural network
% p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
% trained weights of a neural network (Theta1, Theta2)

% Useful values
m = size(X, 1);
num_labels = size(Theta2, 1);

% You need to return the following variables correctly
p = zeros(size(X, 1), 1);

% ====================== YOUR CODE HERE ======================

% Add the bias unit to the input layer
X = [ones(m, 1) X];
% Second layer (hidden layer) activations
a2 = sigmoid(X * Theta1');
% Third layer (output layer): add the bias unit, then propagate
a3 = sigmoid([ones(size(a2, 1), 1) a2] * Theta2');
% The index of the largest output in each row is the predicted label
[~, p] = max(a3, [], 2);

% =========================================================================


end

Below are some sample outputs (the training images are 20x20, so they look a bit blurry when enlarged):

Sample 1: the digit 8
[image: a hand-written 8, displayed and then predicted by the network]
The machine correctly recognizes the digit 8.

Sample 2: the digit 3
[image: a hand-written 3, displayed and then predicted by the network]
The machine correctly recognizes the digit 3.

Sample 3: the digit 5
[image: a hand-written 5, displayed and then predicted by the network]
The machine correctly recognizes the digit 5.

Output:

The output shows that the pre-trained neural network model reaches a prediction accuracy of 97.52% on the training set, which is quite good.

4. Conclusion

Let me now summarize the key points of this project:

First, understand the principle behind hand-written digit recognition;

Second, understand the full pipeline of implementing it;

Then, understand the one-vs-all method for multi-outcome logistic regression problems;

Also, understand the key points of neural networks;

Finally, learn vectorization!


All of the above is my personal take; criticism and guidance are welcome, and I am happy to discuss further!