CNN (Convolutional Neural Network)

【File Attributes】
File name: CNN 卷积神经网络
File size: 1.07MB
File format: ZIP
Updated: 2021-05-12 04:25:16
The first CNN appeared in the work of Fukushima in 1980 and was called the Neocognitron. The basic architectural ideas behind the CNN (local receptive fields, shared weights, and spatial or temporal subsampling) allow such networks to achieve some degree of shift and deformation invariance while at the same time reducing the number of training parameters (a short MATLAB sketch of these ideas follows this overview).

Since 1989, Yann LeCun and co-workers have introduced a series of CNNs under the general name LeNet which, contrary to the Neocognitron, use supervised training. The major advantage of this approach is that the whole network is optimized for the given task, making it usable for real-world applications. LeNet has been successfully applied to character recognition, generic object recognition, face detection and pose estimation, obstacle avoidance in an autonomous robot, etc.

The myCNN class allows you to create, train, and test generic convolutional networks (e.g., LeNet) as well as more general networks with the following features:

- any directed acyclic graph can be used for connecting the layers of the network;
- the network can have any number of arbitrarily sized input and output layers;
- the neuron's receptive field (RF) can have an arbitrary stride (the step of the local RF tiling), which means that RFs can overlap in the S-layer, and the stride can differ from 1 in the C-layer;
- any layer or feature map of the network can be switched from trainable to non-trainable mode (and vice versa), even during training;
- a new layer type: the softmax-like M-layer.

The archive contains the myCNN class source (with comments) and a simple example of LeNet5 creation and training. All updates and new releases can be found here: http://sites.google.com/site/chumerin/projects/mycnn
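To make the three architectural ideas named above concrete, here is a minimal, self-contained sketch in plain MATLAB. It is deliberately independent of the myCNN class (whose actual API is defined in the bundled example scripts such as example_myLeNet5_SDLM_training.m); the input, kernel, and sizes are arbitrary illustrations, not values from the archive.

% One convolutional (C-) layer followed by one subsampling (S-) layer.
% A single small kernel is applied at every image position: local
% receptive fields with shared weights.
img = rand(28, 28);            % toy input, e.g. an MNIST-sized image
K   = rand(5, 5) - 0.5;        % one 5x5 kernel, shared across all positions
b   = 0.1;                     % shared bias

% C-layer: the same weights K at every location give a 24x24 feature map
% with only 5*5 + 1 = 26 trainable parameters.
fmap = tanh(conv2(img, K, 'valid') + b);

% S-layer: 2x2 averaging with stride 2 (non-overlapping subsampling),
% which halves the resolution and yields some tolerance to small shifts.
pool = @(x) (x(1:2:end-1, 1:2:end-1) + x(2:2:end, 1:2:end-1) + ...
             x(1:2:end-1, 2:2:end)   + x(2:2:end, 2:2:end)) / 4;
sub  = pool(fmap);             % 12x12 subsampled map

fprintf('C-layer output: %dx%d, S-layer output: %dx%d\n', ...
        size(fmap, 1), size(fmap, 2), size(sub, 1), size(sub, 2));

Stacking such C/S pairs, with later kernels spanning several earlier feature maps, is exactly the LeNet pattern that the myCNN class generalizes (arbitrary DAG connectivity, arbitrary strides in both layer types).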
【File Preview】
example_nonscalar_output_CNN.m
example_LeNet5_SDLM_training.m
mnist2matlab.m
contents.m
myLeNet5-example.mat
example_myLeNet5_SDLM_training.m
demo_myCNN.m
get_MNIST_data.m
newMyLeNet5.m
@single
----qdsquash.m(517B)
----qsquash.m(620B)
----qdsquash_from_squash.m(252B)
----qtanh.m(467B)
----qsquash_and_dsquash.m(1KB)
newLeNet5.m
ChangeLog
@myCNN
----load_lenet_from_lush_data.m(12KB)
----propagate_F_layer.m(1KB)
----train_LM.m(10KB)
----forget_deltas.m(1019B)
----backpropagate_F_layer.m(2KB)
----forget_ddeltas.m(955B)
----set_momentum.m(542B)
----backbackpropagate.m(1KB)
----compute_learning_rates.m(1KB)
----adapt_net.m(2KB)
----adapt_LM.m(2KB)
----subsref.m(1KB)
----backpropagate_C_layer.m(4KB)
----forget_second_derivatives.m(1KB)
----propagate_M_layer.m(1KB)
----propagate.m(5KB)
----private()
--------squash_and_dsquash.m(243B)
--------get_projects_dir_on_host.m(909B)
--------unfold2.m(2KB)
--------unfold.m(2KB)
--------get_data_dir_on_host.m(908B)
--------dsquash.m(143B)
--------squash.m(80B)
--------read_idx_data.m(2KB)
--------draw_plots.m(1KB)
--------prepare_lenet_test_set.m(669B)
--------read_lush_array.m(3KB)
--------host.m(522B)
--------soft_max.m(2KB)
--------subsample2.m(465B)
--------oversample2.m(747B)
--------log_it.m(473B)
--------create_lenet_structure_from_lush_data.m(5KB)
----tag2ind.m(704B)
----backbackpropagate_F_layer.m(3KB)
----adapt_deltas.m(1KB)
----get_diag_Hessian.m(1KB)
----backbackpropagate_C_layer.m(3KB)
----sim.m(698B)
----forget_derivatives.m(1KB)
----adapt.m(2KB)
----get_gradient.m(1KB)
----propagate_S_layer.m(1KB)
----average_ddeltas.m(1KB)
----get_trainable_parameters.m(1KB)
----backpropagate_S_layer.m(3KB)
----get_performance.m(1KB)
----backbackpropagate_M_layer.m(3KB)
----accumulate_ddeltas.m(1KB)
----backpropagate.m(3KB)
----display.m(1KB)
----myCNN.m(4KB)
----backbackpropagate_S_layer.m(3KB)
----set_global_learning_rate.m(566B)
----add_layer.m(13KB)
----init_net.m(3KB)
----update_stat.m(5KB)
----propagate_C_layer.m(2KB)
----propagate_one_sample.m(2KB)
----set_FM.m(2KB)
----backpropagate_M_layer.m(3KB)
