Training and Testing Images with the CPU Build of Caffe on Ubuntu 16.04


I. Data preparation

The images used in this walkthrough live under data/re/ in the caffe root, split into train/ and test/ subfolders, with each image's file name starting with its class label (this is what the list-generation script in the next section assumes).

II. Converting to LMDB format

1. First, create a folder named myfile under examples to hold the configuration and script files. Then write a script, create_filelist.sh, to generate the train.txt and test.txt list files.

(caffe_src) root@ranxf-TEST:/workdisk/caffe/examples# mkdir myfile
(caffe_src) root@ranxf-TEST:/workdisk/caffe/examples/myfile# vim create_filelist.sh
#!/usr/bin/env sh
DATA=data/re/
MY=examples/myfile

echo "Create train.txt..."
rm -rf $MY/train.txt
for i in 3 4 5 6 7        # class labels; adjust to the labels used in your own data
do
find $DATA/train -name $i*.jpg | cut -d '/' -f4- | sed "s/$/ $i/">>$MY/train.txt
done
echo "done"

echo "Create test.txt..."
rm -rf $MY/test.txt
for i in 3 4 5 6 7        # same labels for the test set
do
find $DATA/test -name $i*.jpg | cut -d '/' -f4- | sed "s/$/ $i/">>$MY/test.txt
done
echo "All done"

Then run the script (note: from the caffe root directory):

(caffe_src) root@ranxf-TEST:/workdisk/caffe# sh examples/myfile/create_filelist.sh
Create train.txt...
done
Create test.txt...
All done

If this succeeds, two text files, train.txt and test.txt, are generated under examples/myfile/, containing the image lists.
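
Each line in these list files is an image path (relative to the image root directory) followed by a space and the class label appended by sed. With the script above, train.txt should look roughly like this (the file names here are only illustrative):

train/301.jpg 3
train/418.jpg 4
train/527.jpg 5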

2. Next, write another script that calls the convert_imageset tool to convert the data into LMDB format.
# sudo vi examples/myfile/create_lmdb.sh

#!/usr/bin/env sh
MY=examples/myfile

echo "Create train lmdb.."
rm -rf $MY/img_train_lmdb
build/tools/convert_imageset \
--shuffle \
--resize_height=256 \
--resize_width=256 \
/workdisk/caffe/data/re/ \
$MY/train.txt \
$MY/img_train_lmdb
echo "done"

echo "Create test lmdb.."
rm -rf $MY/img_test_lmdb
build/tools/convert_imageset \
--shuffle \
--resize_width=256 \
--resize_height=256 \
/workdisk/caffe/data/re/ \
$MY/test.txt \
$MY/img_test_lmdb
echo "All Done.."

(caffe_src) root@ranxf-TEST:/workdisk/caffe# ./examples/myfile/create_lmdb.sh
Create train lmdb..
I0910 ::20.354158 convert_imageset.cpp:] Shuffling data
I0910 ::20.354992 convert_imageset.cpp:] A total of images.
I0910 ::20.355206 db_lmdb.cpp:] Opened lmdb examples/myfile/img_train_lmdb
I0910 ::21.807344 convert_imageset.cpp:] Processed files.
done
Create test lmdb..
I0910 ::21.852502 convert_imageset.cpp:] Shuffling data
I0910 ::21.852725 convert_imageset.cpp:] A total of images.
I0910 ::21.852886 db_lmdb.cpp:] Opened lmdb examples/myfile/img_test_lmdb
I0910 ::22.201551 convert_imageset.cpp:] Processed files.
All Done..

Because the images vary in size, they are all resized to 256×256. After a successful run, two folders, img_train_lmdb and img_test_lmdb, are created under examples/myfile; they hold the LMDB databases converted from the images.

(caffe_src) root@ranxf-TEST:/workdisk/caffe/examples/myfile# ls
create_filelist.sh create_lmdb.sh img_test_lmdb img_train_lmdb test.txt train.txt

III. Computing and saving the mean

Subtracting the mean image before training improves both training speed and accuracy, so this step is almost always performed.

Caffe ships with a tool for computing the mean, compute_image_mean.cpp (compiled to build/tools/compute_image_mean), which we can use directly:

(caffe_src) root@ranxf-TEST:/workdisk/caffe# build/tools/compute_image_mean examples/myfile/img_train_lmdb examples/myfile/mean.binaryproto
I0910 15:56:26.287912 7824 db_lmdb.cpp:35] Opened lmdb examples/myfile/img_train_lmdb
I0910 15:56:26.288938 7824 compute_image_mean.cpp:70] Starting iteration
I0910 15:56:26.352404 7824 compute_image_mean.cpp:101] Processed 400 files.
I0910 15:56:26.352833 7824 compute_image_mean.cpp:108] Write to examples/myfile/mean.binaryproto
I0910 15:56:26.354002 7824 compute_image_mean.cpp:114] Number of channels: 3
I0910 15:56:26.354115 7824 compute_image_mean.cpp:119] mean_value channel [0]: 100.254
I0910 15:56:26.365298 7824 compute_image_mean.cpp:119] mean_value channel [1]: 114.454
I0910 15:56:26.365384 7824 compute_image_mean.cpp:119] mean_value channel [2]: 121.707
(caffe_src) root@ranxf-TEST:/workdisk/caffe#
compute_image_mean takes two arguments: the first is the location of the LMDB training data, and the second is the name and path of the mean file to write.
After a successful run, a mean file named mean.binaryproto appears under examples/myfile/:
(caffe_src) root@ranxf-TEST:/workdisk/caffe/examples/myfile# ls
create_filelist.sh create_lmdb.sh img_test_lmdb img_train_lmdb mean.binaryproto test.txt train.txt
(caffe_src) root@ranxf-TEST:/workdisk/caffe/examples/myfile#

IV. Creating the model and writing the configuration files

For the model we simply use the CaffeNet model that ships with Caffe, located in the models/bvlc_reference_caffenet/ folder. Copy the two configuration files we need into the myfile folder:

(caffe_src) root@ranxf-TEST:/workdisk/caffe# cp models/bvlc_reference_caffenet/solver.prototxt examples/myfile/
(caffe_src) root@ranxf-TEST:/workdisk/caffe#
(caffe_src) root@ranxf-TEST:/workdisk/caffe# cp models/bvlc_reference_caffenet/train_val.prototxt examples/myfile/

Modify solver.prototxt as follows:

net: "examples/myfile/train_val.prototxt"
test_iter: 2
test_interval: 50
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 100
display: 20
max_iter: 500
momentum: 0.9
weight_decay: 0.0005
solver_mode: CPU
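
Note: the training log in section V below shows the snapshot being written to examples/myfile/solver_iter_500.caffemodel and .solverstate, which suggests the solver actually used also contained snapshot settings along the following lines (an inference; these two lines are not shown above):

snapshot: 500
snapshot_prefix: "examples/myfile/solver"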

For comparison, the original configuration file reads:

net: "models/bvlc_reference_caffenet/train_val.prototxt"
test_iter: 1000
test_interval: 1000
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 100000
display: 20
max_iter: 450000
momentum: 0.9
weight_decay: 0.0005
snapshot: 10000
snapshot_prefix: "models/bvlc_reference_caffenet/caffenet_train"
solver_mode: GPU

There are 100 test images and the TEST batch_size is 50, so setting test_iter to 2 (2 × 50 = 100) covers the whole test set in one testing pass. During training the learning rate starts at 0.01 and is gradually reduced: with the step policy it is multiplied by gamma = 0.1 every stepsize = 100 iterations.

Modify train_val.prototxt: only the data layers of the two phases need to change; everything else can be left alone. Concretely, update mean_file and source in both data layers (pointing the TRAIN layer at img_train_lmdb and the TEST layer at img_test_lmdb); nothing else changes.

name: "CaffeNet"
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
mirror: true
crop_size: 227
mean_file: "examples/myfile/mean.binaryproto"
}
# mean pixel / channel-wise mean instead of mean image
# transform_param {
# crop_size: 227
# mean_value: 104
# mean_value: 117
# mean_value: 123
# mirror: true
# }
data_param {
source: "examples/myfile/img_train_lmdb"
batch_size: 256
backend: LMDB
}
}
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
mirror: false
crop_size: 227
mean_file: "examples/myfile/mean.binaryproto"
}
# mean pixel / channel-wise mean instead of mean image
# transform_param {
# crop_size: 227
# mean_value: 104
# mean_value: 117
# mean_value: 123
# mirror: false
# }
data_param {
source: "examples/myfile/img_train_lmdb"
batch_size: 50
backend: LMDB
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
……………………

V. Training and testing

If everything above went smoothly, the data is ready and the configuration files are in place, so this step is straightforward.
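
Training is launched with the caffe binary, pointing it at the solver. The exact command is not reproduced in the original post, but the standard invocation from the caffe root directory is:

# start training with the solver edited above (CPU mode is already set in the solver)
build/tools/caffe train --solver=examples/myfile/solver.prototxt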

The running time and the final accuracy depend on the machine and on the parameter settings. On my machine, 500 iterations on the CPU took 10 hours and 20 minutes and reached 69% accuracy (a slow machine, admittedly).

I0911 02:42:50.312186  9113 solver.cpp:464] Snapshotting to binary proto file examples/myfile/solver_iter_500.caffemodel
I0911 02:42:52.477775 9113 sgd_solver.cpp:284] Snapshotting solver state to binary proto file examples/myfile/solver_iter_500.solverstate
I0911 02:42:53.719158 9116 data_layer.cpp:73] Restarting data prefetching from start.
I0911 02:43:23.561343 9113 solver.cpp:327] Iteration 500, loss = 0.689866
I0911 02:43:23.648788 9113 solver.cpp:347] Iteration 500, Testing net (#0)
I0911 02:43:23.693032 9119 data_layer.cpp:73] Restarting data prefetching from start.
I0911 02:43:35.412401 9113 solver.cpp:414] Test net output #0: accuracy = 0.69
I0911 02:43:35.412444 9113 solver.cpp:414] Test net output #1: loss = 0.66485 (* 1 = 0.66485 loss)
I0911 02:43:35.412451 9113 solver.cpp:332] Optimization Done.
I0911 02:43:35.425511 9113 caffe.cpp:250] Optimization Done.
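
With the final snapshot saved, the trained weights can also be evaluated on their own using the caffe tool's test command. A minimal sketch, reusing the file names from the log above:

# evaluate the saved weights on the TEST phase of train_val.prototxt
# (2 iterations x batch_size 50 = the full 100-image test set)
build/tools/caffe test \
  --model=examples/myfile/train_val.prototxt \
  --weights=examples/myfile/solver_iter_500.caffemodel \
  --iterations=2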

References:
Caffe学习:从头到尾跑一遍模型的训练和测试
Caffe学习系列(12):训练和测试自己的图片