Deploying YOLOv3 Training Results (*.weights) from a Custom Dataset

Date: 2022-12-26 21:01:57

 





 

The original *.weights files are produced by training with darknet (re-implementations in TensorFlow and other frameworks also exist, but they are not covered here). AlexeyAB's fork of darknet (https://github.com/AlexeyAB/darknet) adds many improvements and is the version I recommend. Its GitHub page documents the configuration in great detail; one particularly useful feature is that the loss and mAP curves can be displayed live during training.
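
The live loss/mAP chart is enabled with the -map flag at training time. A minimal sketch of a training command, where data/obj.data, cfg/yolov3-obj.cfg and the darknet53.conv.74 pretrained weights are stand-ins for your own files:

    # train on a custom dataset and plot loss + mAP while training
    ./darknet detector train data/obj.data cfg/yolov3-obj.cfg darknet53.conv.74 -map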


 





 


A (somewhat simplified) Chinese translation of that guide:

https://zhuanlan.zhihu.com/p/102628373


For training YOLO on your own dataset, see 《YOLOv3自有数据集训练(AlexeyAB等)》 (YOLOv3 training on a custom dataset with AlexeyAB's darknet, among others).


Finally, we want to deploy the trained result. In my view there are two main approaches:


The first is to deploy directly with DarkHelp. Since darknet itself is a C library, DarkHelp ("DarkHelp is not Darknet! DarkHelp is a C++ API layer used to call Darknet's original C API.") offers a native C/C++ deployment path. At the moment this library runs on Linux, and my initial attempts show that some parts need to be modified during deployment; it still needs further investigation.


The second is to deploy on OpenVINO. Since OpenVINO only accepts its own IR models, the .weights file must first be converted to a TensorFlow .pb file and then to IR. OpenVINO's own documentation is the most complete reference for this (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html); a brief translation follows.


 


I. Model conversion consists of two main steps.


1. Convert *.weights to *.pb


To dump a TensorFlow model out of the https://github.com/mystic123/tensorflow-yolo-v3 GitHub repository (commit ed60b90), follow the instructions below:

  1. Clone the repository:

    git clone https://github.com/mystic123/tensorflow-yolo-v3.git
    cd tensorflow-yolo-v3

  2. (Optional) Check out the commit that the conversion was tested on:

    git checkout ed60b90

  3. Download the coco.names file from the DarkNet website OR use labels that fit your task.
  4. Download the yolov3.weights (for the YOLOv3 model) or yolov3-tiny.weights (for the YOLOv3-tiny model) file OR use your own pretrained weights with the same structure.
  5. Run the converter:
  • for YOLOv3:

    python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3.weights

  • for YOLOv3-tiny:

    python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3-tiny.weights --tiny

If you have YOLOv3 weights trained for an input image with a size different from 416 (320, 608, or your own), provide the --size key with the size of your image when running the converter. For example, run the following command for an image of size 608:

    python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3_608.weights --size 608
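
The same converter works for weights trained on your own dataset; only the label file (and, if you changed it, the input size) differs. A minimal sketch, assuming a hypothetical obj.names file listing your classes and a tiny model trained at the default 416x416 input size:

    # obj.names and yolov3-tiny-obj_final.weights are placeholders for your own files
    python3 convert_weights_pb.py --class_names obj.names --data_format NHWC --weights_file yolov3-tiny-obj_final.weights --tiny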



2. Convert *.pb to IR (xml + bin)


To solve the problems explained in the YOLOv3 architecture overview section, use the yolo_v3.json or yolo_v3_tiny.json (depending on the model) configuration file with custom operations, located in the <OPENVINO_INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf directory.

It consists of several attributes:

[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 80,
      "anchors": [10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326],
      "coords": 4,
      "num": 9,
      "masks": [[6, 7, 8], [3, 4, 5], [0, 1, 2]],
      "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]
    }
  }
]

where:

  • id and match_kind are parameters that you cannot change.
  • custom_attributes is a parameter that stores all the YOLOv3-specific attributes:
  • classes, coords, num, and masks are attributes that you should copy from the configuration file that was used for model training. If you used the officially shared DarkNet weights, you can take them from the yolov3.cfg or yolov3-tiny.cfg configuration file at https://github.com/pjreddie/darknet/tree/master/cfg. Replace the default values in custom_attributes with the parameters that follow the [yolo] titles in the configuration file.
  • anchors is an optional parameter that is not used during inference of the model, but it is used in a demo to parse the Region layer output.
  • entry_points is a node name list used to cut off the model and append the Region layer with the custom attributes specified above.

To generate the IR of the YOLOv3 TensorFlow model, run:

python3 mo_tf.py \
--input_model /path/to/yolo_v3.pb \
--tensorflow_use_custom_operations_config $MO_ROOT/extensions/front/tf/yolo_v3.json \
--batch 1


To generate the IR of the YOLOv3-tiny TensorFlow model, run:

python3 mo_tf.py \
--input_model /path/to/yolo_v3_tiny.pb \
--tensorflow_use_custom_operations_config $MO_ROOT/extensions/front/tf/yolo_v3_tiny.json \
--batch 1


where:

  • --batch defines the shape of the model input. In the example, --batch is equal to 1, but you can also specify other integers larger than 1.
  • --tensorflow_use_custom_operations_config adds missing Region layers to the model. In the IR, the Region layer has the name RegionYolo.
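
For weights trained on your own dataset, the command is the same; point --input_model at your converted .pb and --tensorflow_use_custom_operations_config at a copy of the json edited for your classes (see the note and example below). A sketch with hypothetical file names, writing the IR to a chosen output directory:

# yolov3-tiny-obj.pb and yolo_v3_tiny_obj.json are placeholders for your own files
python3 mo_tf.py \
--input_model /path/to/yolov3-tiny-obj.pb \
--tensorflow_use_custom_operations_config /path/to/yolo_v3_tiny_obj.json \
--batch 1 \
--output_dir /path/to/ir_output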


Quite a few problems can come up in this step. Things to note:


1. Error: 'Graph' object has no attribute 'node'


Fix: pip install networkx==2.3


2. The *.json file serves as the training-parameter file and needs quite a few modifications (see the example below).
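
As a sketch of the kind of edits involved, here is what the file might look like for a hypothetical full YOLOv3 model trained on 2 classes. classes, anchors, num, and masks must match the [yolo] sections of the .cfg you actually trained with; apart from classes, the values below are simply the defaults from the file shown above and are placeholders:

[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 2,
      "anchors": [10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326],
      "coords": 4,
      "num": 9,
      "masks": [[6, 7, 8], [3, 4, 5], [0, 1, 2]],
      "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]
    }
  }
]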


I made all of the modifications above directly on Ubuntu.


 


II. Model deployment mainly follows the official documentation

https://docs.openvinotoolkit.org/latest/_demos_object_detection_demo_yolov3_async_README.html
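
A sketch of a typical run of that demo on CPU, assuming the IR produced above and a video file as input; the model, video, and label file names are placeholders, and the exact options may vary by OpenVINO version (check the demo's -h output):

# run the official YOLOv3 async demo against the converted IR
python3 object_detection_demo_yolov3_async.py \
-m /path/to/frozen_darknet_yolov3_model.xml \
-i /path/to/test_video.mp4 \
-d CPU \
--labels obj.names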


 


For OpenVINO, only the final IR model is needed. I found that an IR trained on my own data, which has only a few classes, runs quite fast; with the original 80-class model, however, it runs slowly or even fails to run at all. Also, at the moment only the tiny model works for me; the full YOLO model does not run and throws some new errors, all of which stem from using the more complex model. Only by understanding the underlying principles more deeply can we ultimately use these tools flexibly.