Batch Normalization (BatchNorm) is a widely adopted technique that enables
faster and more stable training of deep neural networks (DNNs). Despite its
pervasiveness, the exact reasons for BatchNorm’s effectiveness are still poorly
understood. The popular belief is that this effectiveness stems from controlling
the change of the layers’ input distributions during training to reduce the so-called
“internal covariate shift”. In this work, we demonstrate that such distributional
stability of layer inputs has little to do with the success of BatchNorm. Instead,
we uncover a more fundamental impact of BatchNorm on the training process: it
makes the optimization landscape significantly smoother. This smoothness induces
a more predictive and stable behavior of the gradients, allowing for faster training.
These findings bring us closer to a true understanding of our DNN training toolkit.
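To make the operation under discussion concrete, below is a minimal NumPy sketch of the standard BatchNorm transform (per-feature normalization over the mini-batch, followed by a learned scale and shift, as introduced by Ioffe and Szegedy). The function name, tensor shapes, and toy inputs are illustrative assumptions, not code from the paper.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Apply BatchNorm to a mini-batch x of shape (N, D):
    normalize each of the D features to zero mean and unit variance
    across the N examples, then apply the learned scale (gamma)
    and shift (beta)."""
    mu = x.mean(axis=0)                     # per-feature batch mean, shape (D,)
    var = x.var(axis=0)                     # per-feature batch variance, shape (D,)
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalized activations
    return gamma * x_hat + beta             # scaled and shifted output

# Toy usage: 4 examples, 3 features with arbitrary mean and scale
x = np.random.randn(4, 3) * 5.0 + 2.0
y = batchnorm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(y.mean(axis=0), y.var(axis=0))        # roughly 0 mean, 1 variance per feature
```

With gamma fixed to ones and beta to zeros, the output distribution of each feature is standardized over the batch; the paper's argument is that the resulting smoothing of the loss landscape, rather than this distributional stabilization itself, is what speeds up training.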