
Configuring YOLOv5 on a Jetson Nano and Reaching 25 FPS

Published: 2022-05-08 01:06:35 | Category: Technical Documentation


I. Version Notes

JetPack 4.6 (August 2021) and YOLOv5 v6.0, using the yolov5n.pt weights, with tensorrtx for accelerated inference. Real-time detection from a camera reaches 25 FPS.

II. Configuring CUDA

  sudo gedit ~/.bashrc

Add the following at the end of the opened file:

  export CUDA_HOME=/usr/local/cuda-10.2
  export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
  export PATH=/usr/local/cuda-10.2/bin:$PATH

Save and exit, then run in a terminal:

  source ~/.bashrc
  nvcc -V  # if the configuration succeeded, this prints the CUDA version

III. Enlarging the Jetson Nano's Swap (zram)

1. Open a terminal and enter:

  sudo gedit /etc/systemd/nvzramconfig.sh

2. Edit nvzramconfig.sh:

Change

  mem=$((("${totalmem}" / 2 / "${NRDEVICES}") * 1024))

to

  mem=$((("${totalmem}" * 2 / "${NRDEVICES}") * 1024))

3. Reboot the Jetson Nano.

4. In a terminal, enter:

  free -h

You should now see about 7.7G of swap: the stock script sizes total zram at half the physical RAM (roughly 2 GB on a 4 GB board), and changing /2 to *2 raises that to roughly twice the RAM.

IV. Installing PyTorch 1.8

1. Download torch-1.8.0-cp36-cp36m-linux_aarch64.whl

Download address: nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl

Note: it is best to download the wheel on a PC and copy it into a folder on the Jetson Nano. The hosting server is overseas, so the download may be slow or the page may not load at all; a VPN can help.

2. Install the required dependencies and PyTorch

Open a terminal and enter:

  sudo apt-get update
  sudo apt-get upgrade
  sudo apt-get dist-upgrade
  sudo apt-get install python3-pip libopenblas-base libopenmpi-dev

The pip commands below default to servers outside China, so downloads can be very slow; switching to a domestic mirror is recommended. The Aliyun mirror is used here; if one mirror cannot serve a particular package, try switching to another. Steps:

Open a terminal and enter:

  mkdir ~/.pip
  sudo gedit ~/.pip/pip.conf

Enter the following into the empty file, then save and exit:

(Aliyun mirror)

  [global]
  index-url=http://mirrors.aliyun.com/pypi/simple/
  [install]
  trusted-host=mirrors.aliyun.com

In a terminal:

  pip3 install --upgrade pip  # skip if pip is already current
  pip3 install Cython
  pip3 install numpy
  pip3 install torch-1.8.0-cp36-cp36m-linux_aarch64.whl  # run from the directory containing the wheel
  sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
  git clone --branch v0.9.0 https://github.com/pytorch/vision torchvision  # clones torchvision into a new folder
  cd torchvision  # or open a terminal inside that folder
  export BUILD_VERSION=0.9.0
  python3 setup.py install --user  # this takes a while
  # verify that the torch and torchvision modules installed correctly
  python3
  import torch
  print(torch.__version__)  # note the double underscores on both sides of "version"
  # prints the version number if the install succeeded
  import torchvision
  print(torchvision.__version__)
  # prints the version number if the install succeeded
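
Beyond the version prints above, it is worth confirming that the wheel actually sees the Nano's GPU. A minimal check, assuming the installs above succeeded (run inside python3):

  import torch

  print(torch.cuda.is_available())  # should print True on a correctly configured Nano
  if torch.cuda.is_available():
      print(torch.cuda.get_device_name(0))  # the Nano's integrated GPU
      x = torch.rand(2, 3).cuda()  # small tensor round-trip through the GPU
      print((x + x).cpu())

If is_available() returns False, the CUDA paths added in section II are probably not in effect in the current shell.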

V. Setting Up the YOLOv5 Environment

In a terminal:

  git clone https://github.com/ultralytics/yolov5.git  # without a VPN this often fails; downloading on a PC and copying to the Jetson Nano is easier
  python3 -m pip install --upgrade pip
  cd yolov5  # a manual download arrives as yolov5-master.zip, so run `unzip yolov5-master.zip` first and enter that folder
  pip3 install -r requirements.txt  # matplotlib failed for me, fix below; if other packages fail, redo the mirror step above and switch to another domestic mirror
  python3 -m pip list  # lists the installed Python packages
  # the following commands test yolov5
  python3 detect.py --source data/images/bus.jpg --weights yolov5n.pt --img 640  # image test
  python3 detect.py --source video.mp4 --weights yolov5n.pt --img 640  # video test; supply your own video
  python3 detect.py --source 0 --weights yolov5n.pt --img 640  # camera test
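
The same tests can also be driven from Python through the repository's torch.hub interface instead of detect.py. A minimal sketch, assuming the terminal is opened in the directory that contains the cloned yolov5 folder and the weights (paths are illustrative):

  import torch

  # source='local' loads the cloned repo directly, avoiding any GitHub download
  model = torch.hub.load('./yolov5', 'custom', path='yolov5n.pt', source='local')
  results = model('yolov5/data/images/bus.jpg')  # run one image through the detector
  results.print()  # per-class counts and inference time
  print(results.xyxy[0])  # one row per box: x1, y1, x2, y2, confidence, class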

Problem 1: matplotlib will not install

Fix: download the matplotlib wheel and install it directly (shared via netdisk below).

Problem 2: running yolov5's detect.py fails with "Illegal instruction (core dumped)"

Fix:

  sudo gedit ~/.bashrc
  # add at the end:
  export OPENBLAS_CORETYPE=ARMV8
  # then save, close, and run:
  source ~/.bashrc


VI. Accelerating Inference with tensorrtx

1. Download tensorrtx

Download address: https://github.com/wang-xinyu/tensorrtx.git

or

  git clone https://github.com/wang-xinyu/tensorrtx.git

2. Build

Copy yolov5/gen_wts.py from the downloaded tensorrtx project into the yolov5 directory set up earlier (note: not the yolov5 folder inside tensorrtx!), then open a terminal there and enter:

  python3 gen_wts.py -w yolov5n.pt -o yolov5n.wts  # generates the wts file; put yolov5n.pt here first (the format is sketched just after this list)
  cd ~/tensorrtx/yolov5/  # a manual download may be named tensorrtx-master
  mkdir build
  cd build
  # copy the generated wts file into build (from yolov5-master if you downloaded manually)
  cmake ..
  make -j4
  sudo ./yolov5 -s yolov5n.wts yolov5n.engine n  # serializes the engine file
  sudo ./yolov5 -d yolov5n.engine ../samples/  # test on the sample images; zidane.jpg had missed detections, so go up one level, set CONF_THRESH=0.25 in yolov5.cpp, re-run make -j4 in build, then run this command again
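
For reference, gen_wts.py essentially dumps every tensor in the checkpoint's state_dict as hex-encoded floats, which is the plain-text format the C++ side parses when building the engine. A simplified sketch of the idea (illustrative only; use the real script from tensorrtx, which also handles device placement and model details):

  import struct
  import torch

  # must run inside the yolov5 repo so the pickled model classes can be resolved
  model = torch.load('yolov5n.pt', map_location='cpu')['model'].float()
  with open('yolov5n.wts', 'w') as f:
      f.write('{}\n'.format(len(model.state_dict().keys())))
      for name, tensor in model.state_dict().items():
          values = tensor.reshape(-1).cpu().numpy()
          f.write('{} {}'.format(name, len(values)))
          for v in values:
              f.write(' ' + struct.pack('>f', float(v)).hex())  # big-endian float32 as hex
          f.write('\n')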

3. Running a USB camera

This step draws on https://blog.csdn.net/weixin_54603153/article/details/120079220

(1) Back up yolov5.cpp under tensorrtx/yolov5 first; the original file is needed again whenever you re-run accelerated inference for a different model.

(2) Then replace the contents of yolov5.cpp with the code below.

Only two lines differ from the stock file: the CONF_THRESH define and the cv::VideoCapture source in main().

  #include <iostream>
  #include <chrono>
  #include "cuda_utils.h"
  #include "logging.h"
  #include "common.hpp"
  #include "utils.h"
  #include "calibrator.h"

  #define USE_FP32  // set USE_INT8 or USE_FP16 or USE_FP32
  #define DEVICE 0  // GPU id
  #define NMS_THRESH 0.4
  #define CONF_THRESH 0.25  // confidence threshold; the default 0.5 missed detections, 0.25 gave better results
  #define BATCH_SIZE 1

  // stuff we know about the network and the input/output blobs
  static const int INPUT_H = Yolo::INPUT_H;
  static const int INPUT_W = Yolo::INPUT_W;
  static const int CLASS_NUM = Yolo::CLASS_NUM;
  static const int OUTPUT_SIZE = Yolo::MAX_OUTPUT_BBOX_COUNT * sizeof(Yolo::Detection) / sizeof(float) + 1;  // we assume the yololayer outputs no more than MAX_OUTPUT_BBOX_COUNT boxes that conf >= 0.1
  const char* INPUT_BLOB_NAME = "data";
  const char* OUTPUT_BLOB_NAME = "prob";
  static Logger gLogger;

  const char* my_classes[] = { "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light",
      "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
      "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
      "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
      "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
      "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",
      "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone",
      "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",
      "hair drier", "toothbrush" };

  static int get_width(int x, float gw, int divisor = 8) {
      // return math.ceil(x / divisor) * divisor
      if (int(x * gw) % divisor == 0) {
          return int(x * gw);
      }
      return (int(x * gw / divisor) + 1) * divisor;
  }

  static int get_depth(int x, float gd) {
      if (x == 1) {
          return 1;
      } else {
          return round(x * gd) > 1 ? round(x * gd) : 1;
      }
  }

  ICudaEngine* build_engine(unsigned int maxBatchSize, IBuilder* builder, IBuilderConfig* config, DataType dt, float& gd, float& gw, std::string& wts_name) {
      INetworkDefinition* network = builder->createNetworkV2(0U);
      // Create input tensor of shape {3, INPUT_H, INPUT_W} with name INPUT_BLOB_NAME
      ITensor* data = network->addInput(INPUT_BLOB_NAME, dt, Dims3{ 3, INPUT_H, INPUT_W });
      assert(data);
      std::map<std::string, Weights> weightMap = loadWeights(wts_name);
      /* ------ yolov5 backbone ------ */
      auto focus0 = focus(network, weightMap, *data, 3, get_width(64, gw), 3, "model.0");
      auto conv1 = convBlock(network, weightMap, *focus0->getOutput(0), get_width(128, gw), 3, 2, 1, "model.1");
      auto bottleneck_CSP2 = C3(network, weightMap, *conv1->getOutput(0), get_width(128, gw), get_width(128, gw), get_depth(3, gd), true, 1, 0.5, "model.2");
      auto conv3 = convBlock(network, weightMap, *bottleneck_CSP2->getOutput(0), get_width(256, gw), 3, 2, 1, "model.3");
      auto bottleneck_csp4 = C3(network, weightMap, *conv3->getOutput(0), get_width(256, gw), get_width(256, gw), get_depth(9, gd), true, 1, 0.5, "model.4");
      auto conv5 = convBlock(network, weightMap, *bottleneck_csp4->getOutput(0), get_width(512, gw), 3, 2, 1, "model.5");
      auto bottleneck_csp6 = C3(network, weightMap, *conv5->getOutput(0), get_width(512, gw), get_width(512, gw), get_depth(9, gd), true, 1, 0.5, "model.6");
      auto conv7 = convBlock(network, weightMap, *bottleneck_csp6->getOutput(0), get_width(1024, gw), 3, 2, 1, "model.7");
      auto spp8 = SPP(network, weightMap, *conv7->getOutput(0), get_width(1024, gw), get_width(1024, gw), 5, 9, 13, "model.8");
      /* ------ yolov5 head ------ */
      auto bottleneck_csp9 = C3(network, weightMap, *spp8->getOutput(0), get_width(1024, gw), get_width(1024, gw), get_depth(3, gd), false, 1, 0.5, "model.9");
      auto conv10 = convBlock(network, weightMap, *bottleneck_csp9->getOutput(0), get_width(512, gw), 1, 1, 1, "model.10");
      auto upsample11 = network->addResize(*conv10->getOutput(0));
      assert(upsample11);
      upsample11->setResizeMode(ResizeMode::kNEAREST);
      upsample11->setOutputDimensions(bottleneck_csp6->getOutput(0)->getDimensions());
      ITensor* inputTensors12[] = { upsample11->getOutput(0), bottleneck_csp6->getOutput(0) };
      auto cat12 = network->addConcatenation(inputTensors12, 2);
      auto bottleneck_csp13 = C3(network, weightMap, *cat12->getOutput(0), get_width(1024, gw), get_width(512, gw), get_depth(3, gd), false, 1, 0.5, "model.13");
      auto conv14 = convBlock(network, weightMap, *bottleneck_csp13->getOutput(0), get_width(256, gw), 1, 1, 1, "model.14");
      auto upsample15 = network->addResize(*conv14->getOutput(0));
      assert(upsample15);
      upsample15->setResizeMode(ResizeMode::kNEAREST);
      upsample15->setOutputDimensions(bottleneck_csp4->getOutput(0)->getDimensions());
      ITensor* inputTensors16[] = { upsample15->getOutput(0), bottleneck_csp4->getOutput(0) };
      auto cat16 = network->addConcatenation(inputTensors16, 2);
      auto bottleneck_csp17 = C3(network, weightMap, *cat16->getOutput(0), get_width(512, gw), get_width(256, gw), get_depth(3, gd), false, 1, 0.5, "model.17");
      // yolo layer 0
      IConvolutionLayer* det0 = network->addConvolutionNd(*bottleneck_csp17->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.24.m.0.weight"], weightMap["model.24.m.0.bias"]);
      auto conv18 = convBlock(network, weightMap, *bottleneck_csp17->getOutput(0), get_width(256, gw), 3, 2, 1, "model.18");
      ITensor* inputTensors19[] = { conv18->getOutput(0), conv14->getOutput(0) };
      auto cat19 = network->addConcatenation(inputTensors19, 2);
      auto bottleneck_csp20 = C3(network, weightMap, *cat19->getOutput(0), get_width(512, gw), get_width(512, gw), get_depth(3, gd), false, 1, 0.5, "model.20");
      // yolo layer 1
      IConvolutionLayer* det1 = network->addConvolutionNd(*bottleneck_csp20->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.24.m.1.weight"], weightMap["model.24.m.1.bias"]);
      auto conv21 = convBlock(network, weightMap, *bottleneck_csp20->getOutput(0), get_width(512, gw), 3, 2, 1, "model.21");
      ITensor* inputTensors22[] = { conv21->getOutput(0), conv10->getOutput(0) };
      auto cat22 = network->addConcatenation(inputTensors22, 2);
      auto bottleneck_csp23 = C3(network, weightMap, *cat22->getOutput(0), get_width(1024, gw), get_width(1024, gw), get_depth(3, gd), false, 1, 0.5, "model.23");
      IConvolutionLayer* det2 = network->addConvolutionNd(*bottleneck_csp23->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.24.m.2.weight"], weightMap["model.24.m.2.bias"]);
      auto yolo = addYoLoLayer(network, weightMap, "model.24", std::vector<IConvolutionLayer*>{ det0, det1, det2 });
      yolo->getOutput(0)->setName(OUTPUT_BLOB_NAME);
      network->markOutput(*yolo->getOutput(0));
      // Build engine
      builder->setMaxBatchSize(maxBatchSize);
      config->setMaxWorkspaceSize(16 * (1 << 20));  // 16MB
  #if defined(USE_FP16)
      config->setFlag(BuilderFlag::kFP16);
  #elif defined(USE_INT8)
      std::cout << "Your platform support int8: " << (builder->platformHasFastInt8() ? "true" : "false") << std::endl;
      assert(builder->platformHasFastInt8());
      config->setFlag(BuilderFlag::kINT8);
      Int8EntropyCalibrator2* calibrator = new Int8EntropyCalibrator2(1, INPUT_W, INPUT_H, "./coco_calib/", "int8calib.table", INPUT_BLOB_NAME);
      config->setInt8Calibrator(calibrator);
  #endif
      std::cout << "Building engine, please wait for a while..." << std::endl;
      ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
      std::cout << "Build engine successfully!" << std::endl;
      // Don't need the network any more
      network->destroy();
      // Release host memory
      for (auto& mem : weightMap) {
          free((void*)(mem.second.values));
      }
      return engine;
  }

  ICudaEngine* build_engine_p6(unsigned int maxBatchSize, IBuilder* builder, IBuilderConfig* config, DataType dt, float& gd, float& gw, std::string& wts_name) {
      INetworkDefinition* network = builder->createNetworkV2(0U);
      // Create input tensor of shape {3, INPUT_H, INPUT_W} with name INPUT_BLOB_NAME
      ITensor* data = network->addInput(INPUT_BLOB_NAME, dt, Dims3{ 3, INPUT_H, INPUT_W });
      assert(data);
      std::map<std::string, Weights> weightMap = loadWeights(wts_name);
      /* ------ yolov5 backbone ------ */
      auto focus0 = focus(network, weightMap, *data, 3, get_width(64, gw), 3, "model.0");
      auto conv1 = convBlock(network, weightMap, *focus0->getOutput(0), get_width(128, gw), 3, 2, 1, "model.1");
      auto c3_2 = C3(network, weightMap, *conv1->getOutput(0), get_width(128, gw), get_width(128, gw), get_depth(3, gd), true, 1, 0.5, "model.2");
      auto conv3 = convBlock(network, weightMap, *c3_2->getOutput(0), get_width(256, gw), 3, 2, 1, "model.3");
      auto c3_4 = C3(network, weightMap, *conv3->getOutput(0), get_width(256, gw), get_width(256, gw), get_depth(9, gd), true, 1, 0.5, "model.4");
      auto conv5 = convBlock(network, weightMap, *c3_4->getOutput(0), get_width(512, gw), 3, 2, 1, "model.5");
      auto c3_6 = C3(network, weightMap, *conv5->getOutput(0), get_width(512, gw), get_width(512, gw), get_depth(9, gd), true, 1, 0.5, "model.6");
      auto conv7 = convBlock(network, weightMap, *c3_6->getOutput(0), get_width(768, gw), 3, 2, 1, "model.7");
      auto c3_8 = C3(network, weightMap, *conv7->getOutput(0), get_width(768, gw), get_width(768, gw), get_depth(3, gd), true, 1, 0.5, "model.8");
      auto conv9 = convBlock(network, weightMap, *c3_8->getOutput(0), get_width(1024, gw), 3, 2, 1, "model.9");
      auto spp10 = SPP(network, weightMap, *conv9->getOutput(0), get_width(1024, gw), get_width(1024, gw), 3, 5, 7, "model.10");
      auto c3_11 = C3(network, weightMap, *spp10->getOutput(0), get_width(1024, gw), get_width(1024, gw), get_depth(3, gd), false, 1, 0.5, "model.11");
      /* ------ yolov5 head ------ */
      auto conv12 = convBlock(network, weightMap, *c3_11->getOutput(0), get_width(768, gw), 1, 1, 1, "model.12");
      auto upsample13 = network->addResize(*conv12->getOutput(0));
      assert(upsample13);
      upsample13->setResizeMode(ResizeMode::kNEAREST);
      upsample13->setOutputDimensions(c3_8->getOutput(0)->getDimensions());
      ITensor* inputTensors14[] = { upsample13->getOutput(0), c3_8->getOutput(0) };
      auto cat14 = network->addConcatenation(inputTensors14, 2);
      auto c3_15 = C3(network, weightMap, *cat14->getOutput(0), get_width(1536, gw), get_width(768, gw), get_depth(3, gd), false, 1, 0.5, "model.15");
      auto conv16 = convBlock(network, weightMap, *c3_15->getOutput(0), get_width(512, gw), 1, 1, 1, "model.16");
      auto upsample17 = network->addResize(*conv16->getOutput(0));
      assert(upsample17);
      upsample17->setResizeMode(ResizeMode::kNEAREST);
      upsample17->setOutputDimensions(c3_6->getOutput(0)->getDimensions());
      ITensor* inputTensors18[] = { upsample17->getOutput(0), c3_6->getOutput(0) };
      auto cat18 = network->addConcatenation(inputTensors18, 2);
      auto c3_19 = C3(network, weightMap, *cat18->getOutput(0), get_width(1024, gw), get_width(512, gw), get_depth(3, gd), false, 1, 0.5, "model.19");
      auto conv20 = convBlock(network, weightMap, *c3_19->getOutput(0), get_width(256, gw), 1, 1, 1, "model.20");
      auto upsample21 = network->addResize(*conv20->getOutput(0));
      assert(upsample21);
      upsample21->setResizeMode(ResizeMode::kNEAREST);
      upsample21->setOutputDimensions(c3_4->getOutput(0)->getDimensions());
      ITensor* inputTensors21[] = { upsample21->getOutput(0), c3_4->getOutput(0) };
      auto cat22 = network->addConcatenation(inputTensors21, 2);
      auto c3_23 = C3(network, weightMap, *cat22->getOutput(0), get_width(512, gw), get_width(256, gw), get_depth(3, gd), false, 1, 0.5, "model.23");
      auto conv24 = convBlock(network, weightMap, *c3_23->getOutput(0), get_width(256, gw), 3, 2, 1, "model.24");
      ITensor* inputTensors25[] = { conv24->getOutput(0), conv20->getOutput(0) };
      auto cat25 = network->addConcatenation(inputTensors25, 2);
      auto c3_26 = C3(network, weightMap, *cat25->getOutput(0), get_width(1024, gw), get_width(512, gw), get_depth(3, gd), false, 1, 0.5, "model.26");
      auto conv27 = convBlock(network, weightMap, *c3_26->getOutput(0), get_width(512, gw), 3, 2, 1, "model.27");
      ITensor* inputTensors28[] = { conv27->getOutput(0), conv16->getOutput(0) };
      auto cat28 = network->addConcatenation(inputTensors28, 2);
      auto c3_29 = C3(network, weightMap, *cat28->getOutput(0), get_width(1536, gw), get_width(768, gw), get_depth(3, gd), false, 1, 0.5, "model.29");
      auto conv30 = convBlock(network, weightMap, *c3_29->getOutput(0), get_width(768, gw), 3, 2, 1, "model.30");
      ITensor* inputTensors31[] = { conv30->getOutput(0), conv12->getOutput(0) };
      auto cat31 = network->addConcatenation(inputTensors31, 2);
      auto c3_32 = C3(network, weightMap, *cat31->getOutput(0), get_width(2048, gw), get_width(1024, gw), get_depth(3, gd), false, 1, 0.5, "model.32");
      /* ------ detect ------ */
      IConvolutionLayer* det0 = network->addConvolutionNd(*c3_23->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.33.m.0.weight"], weightMap["model.33.m.0.bias"]);
      IConvolutionLayer* det1 = network->addConvolutionNd(*c3_26->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.33.m.1.weight"], weightMap["model.33.m.1.bias"]);
      IConvolutionLayer* det2 = network->addConvolutionNd(*c3_29->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.33.m.2.weight"], weightMap["model.33.m.2.bias"]);
      IConvolutionLayer* det3 = network->addConvolutionNd(*c3_32->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.33.m.3.weight"], weightMap["model.33.m.3.bias"]);
      auto yolo = addYoLoLayer(network, weightMap, "model.33", std::vector<IConvolutionLayer*>{ det0, det1, det2, det3 });
      yolo->getOutput(0)->setName(OUTPUT_BLOB_NAME);
      network->markOutput(*yolo->getOutput(0));
      // Build engine
      builder->setMaxBatchSize(maxBatchSize);
      config->setMaxWorkspaceSize(16 * (1 << 20));  // 16MB
  #if defined(USE_FP16)
      config->setFlag(BuilderFlag::kFP16);
  #elif defined(USE_INT8)
      std::cout << "Your platform support int8: " << (builder->platformHasFastInt8() ? "true" : "false") << std::endl;
      assert(builder->platformHasFastInt8());
      config->setFlag(BuilderFlag::kINT8);
      Int8EntropyCalibrator2* calibrator = new Int8EntropyCalibrator2(1, INPUT_W, INPUT_H, "./coco_calib/", "int8calib.table", INPUT_BLOB_NAME);
      config->setInt8Calibrator(calibrator);
  #endif
      std::cout << "Building engine, please wait for a while..." << std::endl;
      ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
      std::cout << "Build engine successfully!" << std::endl;
      // Don't need the network any more
      network->destroy();
      // Release host memory
      for (auto& mem : weightMap) {
          free((void*)(mem.second.values));
      }
      return engine;
  }

  void APIToModel(unsigned int maxBatchSize, IHostMemory** modelStream, float& gd, float& gw, std::string& wts_name) {
      // Create builder
      IBuilder* builder = createInferBuilder(gLogger);
      IBuilderConfig* config = builder->createBuilderConfig();
      // Create model to populate the network, then set the outputs and create an engine
      ICudaEngine* engine = build_engine(maxBatchSize, builder, config, DataType::kFLOAT, gd, gw, wts_name);
      assert(engine != nullptr);
      // Serialize the engine
      (*modelStream) = engine->serialize();
      // Close everything down
      engine->destroy();
      builder->destroy();
      config->destroy();
  }

  void doInference(IExecutionContext& context, cudaStream_t& stream, void** buffers, float* input, float* output, int batchSize) {
      // DMA input batch data to device, infer on the batch asynchronously, and DMA output back to host
      CUDA_CHECK(cudaMemcpyAsync(buffers[0], input, batchSize * 3 * INPUT_H * INPUT_W * sizeof(float), cudaMemcpyHostToDevice, stream));
      context.enqueue(batchSize, buffers, stream, nullptr);
      CUDA_CHECK(cudaMemcpyAsync(output, buffers[1], batchSize * OUTPUT_SIZE * sizeof(float), cudaMemcpyDeviceToHost, stream));
      cudaStreamSynchronize(stream);
  }

  bool parse_args(int argc, char** argv, std::string& engine) {
      if (argc < 3) return false;
      if (std::string(argv[1]) == "-v" && argc == 3) {
          engine = std::string(argv[2]);
      } else {
          return false;
      }
      return true;
  }

  int main(int argc, char** argv) {
      cudaSetDevice(DEVICE);
      //std::string wts_name = "";
      std::string engine_name = "";
      //float gd = 0.0f, gw = 0.0f;
      //std::string img_dir;
      if (!parse_args(argc, argv, engine_name)) {
          std::cerr << "arguments not right!" << std::endl;
          std::cerr << "./yolov5 -v [.engine] // run inference with camera" << std::endl;
          return -1;
      }
      std::ifstream file(engine_name, std::ios::binary);
      if (!file.good()) {
          std::cerr << " read " << engine_name << " error! " << std::endl;
          return -1;
      }
      char* trtModelStream{ nullptr };
      size_t size = 0;
      file.seekg(0, file.end);
      size = file.tellg();
      file.seekg(0, file.beg);
      trtModelStream = new char[size];
      assert(trtModelStream);
      file.read(trtModelStream, size);
      file.close();
      // prepare input data ---------------------------
      static float data[BATCH_SIZE * 3 * INPUT_H * INPUT_W];
      //for (int i = 0; i < 3 * INPUT_H * INPUT_W; i++)
      //    data[i] = 1.0;
      static float prob[BATCH_SIZE * OUTPUT_SIZE];
      IRuntime* runtime = createInferRuntime(gLogger);
      assert(runtime != nullptr);
      ICudaEngine* engine = runtime->deserializeCudaEngine(trtModelStream, size);
      assert(engine != nullptr);
      IExecutionContext* context = engine->createExecutionContext();
      assert(context != nullptr);
      delete[] trtModelStream;
      assert(engine->getNbBindings() == 2);
      void* buffers[2];
      // In order to bind the buffers, we need to know the names of the input and output tensors.
      // Note that indices are guaranteed to be less than IEngine::getNbBindings()
      const int inputIndex = engine->getBindingIndex(INPUT_BLOB_NAME);
      const int outputIndex = engine->getBindingIndex(OUTPUT_BLOB_NAME);
      assert(inputIndex == 0);
      assert(outputIndex == 1);
      // Create GPU buffers on device
      CUDA_CHECK(cudaMalloc(&buffers[inputIndex], BATCH_SIZE * 3 * INPUT_H * INPUT_W * sizeof(float)));
      CUDA_CHECK(cudaMalloc(&buffers[outputIndex], BATCH_SIZE * OUTPUT_SIZE * sizeof(float)));
      // Create stream
      cudaStream_t stream;
      CUDA_CHECK(cudaStreamCreate(&stream));
      // change the path below to the video or image you want to detect (use the full path);
      // to use a camera instead, pass the device index 0 (no quotes)
      cv::VideoCapture capture("/home/cao-yolox/yolov5/tensorrtx-master/yolov5/samples/1.mp4");
      //cv::VideoCapture capture("../overpass.mp4");
      //int fourcc = cv::VideoWriter::fourcc('M','J','P','G');
      //capture.set(cv::CAP_PROP_FOURCC, fourcc);
      if (!capture.isOpened()) {
          std::cout << "Error opening video stream or file" << std::endl;
          return -1;
      }
      int key;
      int fcount = 0;
      while (1) {
          cv::Mat frame;
          capture >> frame;
          if (frame.empty()) {
              std::cout << "Fail to read image from camera!" << std::endl;
              break;
          }
          fcount++;
          //if (fcount < BATCH_SIZE && f + 1 != (int)file_names.size()) continue;
          for (int b = 0; b < fcount; b++) {
              //cv::Mat img = cv::imread(img_dir + "/" + file_names[f - fcount + 1 + b]);
              cv::Mat img = frame;
              if (img.empty()) continue;
              cv::Mat pr_img = preprocess_img(img, INPUT_W, INPUT_H);  // letterbox BGR to RGB
              int i = 0;
              for (int row = 0; row < INPUT_H; ++row) {
                  uchar* uc_pixel = pr_img.data + row * pr_img.step;
                  for (int col = 0; col < INPUT_W; ++col) {
                      data[b * 3 * INPUT_H * INPUT_W + i] = (float)uc_pixel[2] / 255.0;
                      data[b * 3 * INPUT_H * INPUT_W + i + INPUT_H * INPUT_W] = (float)uc_pixel[1] / 255.0;
                      data[b * 3 * INPUT_H * INPUT_W + i + 2 * INPUT_H * INPUT_W] = (float)uc_pixel[0] / 255.0;
                      uc_pixel += 3;
                      ++i;
                  }
              }
          }
          // Run inference
          auto start = std::chrono::system_clock::now();
          doInference(*context, stream, buffers, data, prob, BATCH_SIZE);
          auto end = std::chrono::system_clock::now();
          //std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count() << "ms" << std::endl;
          int fps = 1000.0 / std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
          std::vector<std::vector<Yolo::Detection>> batch_res(fcount);
          for (int b = 0; b < fcount; b++) {
              auto& res = batch_res[b];
              nms(res, &prob[b * OUTPUT_SIZE], CONF_THRESH, NMS_THRESH);
          }
          for (int b = 0; b < fcount; b++) {
              auto& res = batch_res[b];
              //std::cout << res.size() << std::endl;
              //cv::Mat img = cv::imread(img_dir + "/" + file_names[f - fcount + 1 + b]);
              for (size_t j = 0; j < res.size(); j++) {
                  cv::Rect r = get_rect(frame, res[j].bbox);
                  cv::rectangle(frame, r, cv::Scalar(0x27, 0xC1, 0x36), 2);
                  std::string label = my_classes[(int)res[j].class_id];
                  cv::putText(frame, label, cv::Point(r.x, r.y - 1), cv::FONT_HERSHEY_PLAIN, 1.2, cv::Scalar(0xFF, 0xFF, 0xFF), 2);
                  std::string jetson_fps = "Jetson Nano FPS: " + std::to_string(fps);
                  cv::putText(frame, jetson_fps, cv::Point(11, 80), cv::FONT_HERSHEY_PLAIN, 3, cv::Scalar(0, 0, 255), 2, cv::LINE_AA);
              }
              //cv::imwrite("_" + file_names[f - fcount + 1 + b], img);
          }
          cv::imshow("yolov5", frame);
          key = cv::waitKey(1);
          if (key == 'q') {
              break;
          }
          fcount = 0;
      }
      capture.release();
      // Release stream and buffers
      cudaStreamDestroy(stream);
      CUDA_CHECK(cudaFree(buffers[inputIndex]));
      CUDA_CHECK(cudaFree(buffers[outputIndex]));
      // Destroy the engine
      context->destroy();
      engine->destroy();
      runtime->destroy();
      return 0;
  }
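
The per-pixel loop in main() implements the usual YOLOv5 preprocessing: letterbox resize, BGR-to-RGB, HWC-to-CHW, and scaling to [0, 1]. For readers more at home in Python, a rough NumPy/OpenCV equivalent of that step (illustrative only; the binary actually uses tensorrtx's preprocess_img):

  import cv2
  import numpy as np

  def preprocess(frame, input_w=640, input_h=640):
      h, w = frame.shape[:2]
      r = min(input_w / w, input_h / h)  # letterbox scale factor
      nw, nh = int(round(w * r)), int(round(h * r))
      resized = cv2.resize(frame, (nw, nh))
      canvas = np.full((input_h, input_w, 3), 128, dtype=np.uint8)  # gray padding
      top, left = (input_h - nh) // 2, (input_w - nw) // 2
      canvas[top:top + nh, left:left + nw] = resized
      # BGR -> RGB, HWC -> CHW, uint8 -> float32 in [0, 1]
      chw = canvas[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) / 255.0
      return np.ascontiguousarray(chw)[None]  # add the batch dimension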

4. Rebuild

Go into build and run make again; note that any change to yolov5.cpp requires a rebuild. Then run:

  sudo ./yolov5 -v yolov5n.engine  # plug in the camera beforehand
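
If the window stays black or the program exits immediately, check that the camera itself works before suspecting the engine. A quick sanity check with plain OpenCV (device index 0 is an assumption; adjust for your camera):

  import cv2

  cap = cv2.VideoCapture(0)  # same index the modified yolov5.cpp opens for a camera
  ok, frame = cap.read()
  print('camera ok:', ok, frame.shape if ok else '')
  cap.release()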

Problem: "Failed to load module 'canberra-gtk-module'"

Fix:

  sudo apt-get install libcanberra-gtk-module

5. Results

The test below was run on a public pedestrian-detection video, available here:

Link: https://pan.baidu.com/s/1HivF1OifVA8pHnGKtkXPfg

Extraction code: jr7o


Reposted from: https://blog.csdn.net/carrymingteng/article/details/120978053
