Machine Learning - X. Advice for Applying Machine Learning: Diagnosing and Improving Learning Algorithms (Week 6)

Date: 2022-12-24 18:33:37

http://blog.csdn.net/pipisorry/article/details/44119187

Machine Learning - study notes on Andrew NG's course

Advice for Applying Machine Learning
{Addressing the problem of poor prediction performance on the training set and test set when applying machine learning algorithms}

What to do when a machine learning algorithm performs poorly


But adding more training data is not always effective! So before choosing what to do, learn how to evaluate the learning algorithm and run diagnostics first; this ends up saving time rather than wasting it.




Evaluating a Hypothesis

How to tell whether a model is overfitting


Plotting the hypothesis directly can show whether it overfits, but this no longer works once there are many features.

Splitting the data to evaluate a hypothesis

First, split the dataset into a training set and a test set (e.g. 70% / 30%).

1. If the data are already randomly ordered, just take the first 70% as the training set and the last 30% as the test set.
2. If the data are not randomly ordered, it is better to randomly shuffle the examples first and then split (see the sketch below).
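A minimal sketch of such a split in Python/NumPy (the function name, the 70/30 ratio, and the fixed seed are illustrative assumptions, not code from the course):

```python
import numpy as np

def train_test_split(X, y, test_ratio=0.3, seed=0):
    """Randomly shuffle the examples, then split into train and test sets."""
    m = X.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.permutation(m)               # random shuffle of the example indices
    split = int(m * (1 - test_ratio))      # first 70% -> train, last 30% -> test
    return X[idx[:split]], y[idx[:split]], X[idx[split:]], y[idx[split:]]
```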


Training and testing procedure for a learning algorithm

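For linear regression (as in the lecture), learn the parameters θ by minimizing the training cost on the training set, and then compute the test set error:

$$J_{test}(\theta) = \frac{1}{2m_{test}} \sum_{i=1}^{m_{test}} \left( h_\theta\!\left(x_{test}^{(i)}\right) - y_{test}^{(i)} \right)^2$$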

Another possible evaluation metric on the test set is the misclassification error (0/1 error), which is simpler to interpret:
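This is the standard 0/1 error from the lecture: count a prediction as an error whenever the thresholded hypothesis disagrees with the label, then average over the test set:

$$\text{err}(h_\theta(x), y) = \begin{cases} 1 & \text{if } h_\theta(x) \ge 0.5 \text{ and } y = 0, \text{ or } h_\theta(x) < 0.5 \text{ and } y = 1 \\ 0 & \text{otherwise} \end{cases}$$

$$\text{Test error} = \frac{1}{m_{test}} \sum_{i=1}^{m_{test}} \text{err}\!\left(h_\theta\!\left(x_{test}^{(i)}\right),\, y_{test}^{(i)}\right)$$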

Model selection and the train / validation / test split

Model selection: choosing the features or choosing the regularization parameter.

Different models: different parameters, e.g. the degree of the polynomial.

Suppose there is an additional parameter d, the degree of the polynomial, to be determined, with θ fit on the training set for each candidate. Different values of d then produce many different hypotheses.

Selecting a model by its test set error

Pick the model with the smallest test set error (e.g. d = 5).

But choosing a parameter on the test set causes a problem: selecting the model (choosing d) on the test set and then evaluating the model on that same test set is not fair, because d was fit to the test set. In other words, you cannot both select the degree parameter and evaluate the hypothesis on the same test set. Because the polynomial degree d was fit to the test set, the hypothesis is likely to do better on this test set than on new examples it has not seen before, so the resulting error is an optimistic estimate of the generalization error.

Using a cross validation set to select the model

Split the data into a training set, a cross validation set (also called the validation set), and a test set (typically 60% / 20% / 20%).


Select the hypothesis with the smallest cross validation error.

Then use the test set to measure or estimate the generalization error of the selected model.

To summarize: train the parameters of each candidate model on the training set, pick the best model using the cross validation set, and evaluate the chosen model on the test set! (A sketch of this procedure is given below.)
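A minimal sketch of model selection over the polynomial degree d (the helpers poly_features, train_linear_reg, and cost are illustrative assumptions, not code from the course; training here uses plain unregularized least squares):

```python
import numpy as np

def poly_features(x, d):
    """Map a 1-D input to [1, x, x^2, ..., x^d]."""
    return np.column_stack([x ** p for p in range(d + 1)])

def train_linear_reg(X, y):
    """Fit theta by unregularized least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def cost(theta, X, y):
    """Squared-error cost 1/(2m) * sum((X @ theta - y)^2)."""
    return np.sum((X @ theta - y) ** 2) / (2 * len(y))

def select_degree(x_train, y_train, x_cv, y_cv, max_degree=10):
    """Train one model per degree d and pick the one with the lowest CV error."""
    best = None
    for d in range(1, max_degree + 1):
        theta = train_linear_reg(poly_features(x_train, d), y_train)
        cv_err = cost(theta, poly_features(x_cv, d), y_cv)
        if best is None or cv_err < best[1]:
            best = (d, cv_err, theta)
    return best  # report the chosen model's error on the test set, not its CV error
```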




Improving a learning algorithm: the regularization parameter λ, bias, and variance

If the learned algorithm keeps performing poorly, it is usually because the model has a high bias or high variance problem; in other words, it is underfitting or overfitting.

One way to improve a learning algorithm is to add a regularization term and use the regularization parameter to control bias and variance.

Distinguishing overfitting (high variance) from underfitting (high bias)


When the model complexity (the polynomial degree d) is too low, the training error is large, the model underfits, and the bias is high. As the complexity increases, the training error naturally keeps dropping (and within a certain range the cross validation error drops too), but as the complexity keeps growing the model may overfit; at that point the cross validation error rises again and the variance is high.

Regularization and Bias / Variance

Effect of the regularization parameter on bias and variance

A very large λ penalizes all parameters heavily, so the hypothesis is nearly flat and underfits (high bias); a very small λ effectively disables regularization, so the hypothesis can overfit (high variance). An intermediate λ gives the lowest cross validation error.

How to choose the regularization parameter λ automatically

[ Machine learning model selection: hyperparameter tuning ]
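A minimal sketch of automatic λ selection (the doubling grid of λ values follows the lecture; train_ridge and cost are illustrative helpers, and the CV error is computed without the regularization term):

```python
import numpy as np

def cost(theta, X, y):
    """Unregularized squared-error cost, used for the CV and test errors."""
    return np.sum((X @ theta - y) ** 2) / (2 * len(y))

def train_ridge(X, y, lam):
    """Regularized normal equation; column 0 (the bias term) is not regularized."""
    L = np.eye(X.shape[1])
    L[0, 0] = 0.0
    # lstsq handles the lam = 0 case even if X'X is singular
    return np.linalg.lstsq(X.T @ X + lam * L, X.T @ y, rcond=None)[0]

def select_lambda(X_train, y_train, X_cv, y_cv):
    """Try a doubling grid of lambda values; return the one with the lowest CV error."""
    lambdas = [0, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28, 2.56, 5.12, 10.24]
    cv_errs = []
    for lam in lambdas:
        theta = train_ridge(X_train, y_train, lam)
        cv_errs.append(cost(theta, X_cv, y_cv))   # note: no lambda term here
    best = int(np.argmin(cv_errs))
    return lambdas[best], cv_errs[best]
```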




Diagnosing and improving a learning algorithm

Learning Curves

Learning curves: diagnose whether a learning algorithm suffers from high bias (underfitting), high variance (overfitting), or both.

Note that the error discussed below is the error for a regression problem; for a classification problem, the training error may also shrink as the amount of data grows! (A sketch of how such curves are computed follows.)
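A minimal sketch of computing learning curves (train and cost are illustrative helpers; both curves use the unregularized squared-error cost, as in the lecture):

```python
import numpy as np

def cost(theta, X, y):
    """Unregularized squared-error cost."""
    return np.sum((X @ theta - y) ** 2) / (2 * len(y))

def train(X, y):
    """Fit theta by least squares; substitute your own training routine."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def learning_curves(X_train, y_train, X_cv, y_cv):
    """For i = 1..m, train on the first i examples; record the training error
    on those i examples and the CV error on the *full* cross validation set."""
    train_err, cv_err = [], []
    for i in range(1, len(y_train) + 1):
        theta = train(X_train[:i], y_train[:i])
        train_err.append(cost(theta, X_train[:i], y_train[:i]))
        cv_err.append(cost(theta, X_cv, y_cv))
    return train_err, cv_err   # plot both against i to diagnose bias vs. variance
```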

The high bias (underfitting) case

With high bias, adding more data (relative to the amount that already causes underfitting) does not reduce the error; high bias shows up in both Jcv and Jtrain. The training error eventually approaches the cross validation error, and both stay high, because once m is large there are too few parameters for so much data.


The high variance (overfitting) case

The hallmark of high variance is a large gap between the training error and the cross validation error; in this case, adding more data shrinks the gap, so getting more data is likely to help.


Improving the learning algorithm


By plotting learning curves you can tell what is wrong with the model, high bias (underfitting) or high variance (overfitting), and then apply the corresponding fix.

Fixes for high bias (underfitting): add features, add polynomial features, decrease λ.

Fixes for high variance (overfitting): get more data, use fewer features, increase λ.

What should you do about overfitting?

These are the high variance (overfitting) fixes: get more data, use fewer features, increase λ.

Overfitting usually happens because the data are few and the model is complex, so you need to:

1. Get more data,

or reduce the model complexity (items 2-7):

2. Reduce the number of features (feature/column sampling). Note the disadvantage: throwing away some of the features also throws away some of the information you have about the problem.

3. Add a regularization term (L1 regularization effectively reduces the number of features, while L2 shrinks the parameters to reduce sensitivity to fluctuations in the data; this shrinkage reduces overfitting). If a regularization term is already present, increase the parameter λ.

4. Introduce a prior distribution over the parameters, which should be equivalent to adding a regularization term.

5. Add a boosting term to guard against overfitting.

6. For neural networks, use early stopping to avoid over-training: terminate promptly once the validation metric plateaus (or judge from the learning curve). The effect is equivalent to weight decay (the weights likewise stop before reaching the minimum of the training-set error). Alternatively, use dropout to prevent overfitting; of course, regularization can also be used in neural networks. (A minimal early-stopping sketch is given after this list.)

7. Gradient noise: add a Gaussian-distributed noise term to the gradients, which gives better robustness to poor initialization.
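A minimal early-stopping sketch for item 6 (framework-agnostic; train_one_epoch, validation_loss, and model.state() are illustrative assumptions standing in for whatever training loop and snapshot mechanism you use):

```python
def fit_with_early_stopping(model, train_one_epoch, validation_loss,
                            max_epochs=200, patience=5):
    """Stop training once the validation loss has not improved for `patience`
    consecutive epochs, and keep the best model state seen so far."""
    best_loss, best_state, stale_epochs = float("inf"), None, 0
    for _ in range(max_epochs):
        train_one_epoch(model)            # one pass over the training set
        loss = validation_loss(model)     # error on the cross validation set
        if loss < best_loss:
            best_loss, best_state = loss, model.state()  # assumed snapshot method
            stale_epochs = 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:  # validation metric has plateaued
                break
    return best_state, best_loss
```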

Example: choosing the number of units per layer and the number of hidden layers in a neural network

Note: fixes for high bias, e.g. keep increasing the number of features / the number of hidden units in the neural network until you have a low-bias classifier.

Practical advice for choosing the architecture or the connectivity pattern of a neural network:
Regarding the other decisions, such as the number of hidden layers: using a single hidden layer is a reasonable default. But if you want to choose the number of hidden layers, one thing you can try is to make a training / cross-validation / test split, train neural networks with one, two, or three hidden layers, and see which of those networks performs best on the cross validation set.
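A minimal sketch of that procedure using scikit-learn's MLPClassifier (using scikit-learn here is my own illustrative choice; the course itself uses Octave, and 25 units per layer is an arbitrary assumption):

```python
from sklearn.neural_network import MLPClassifier

def select_architecture(X_train, y_train, X_cv, y_cv):
    """Train networks with 1, 2, or 3 hidden layers; keep the one that
    performs best on the cross validation set."""
    candidates = [(25,), (25, 25), (25, 25, 25)]      # hidden layer sizes to try
    best_score, best_model = -1.0, None
    for layers in candidates:
        clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=500, random_state=0)
        clf.fit(X_train, y_train)
        score = clf.score(X_cv, y_cv)                 # accuracy on the CV set
        if score > best_score:
            best_score, best_model = score, clf
    return best_model, best_score   # finally, report accuracy on the test set
```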




Review

Key points from the review questions:

{Poor performance on both the training and test sets suggests a high bias problem; you should increase the complexity of the hypothesis, thereby improving the fit to both the training and test data.}



{The learning algorithm finds parameters to minimize training set error, so the performance should be better on the training set than the test set.}



{A model with high variance will still have high test error, so it will generalize poorly.}

from:http://blog.csdn.net/pipisorry/article/details/44245347

ref:Advice for applying Machine Learning

Andrew Ng-Advice for applying Machine Learning.pdf