Blog:
Testing Machine Learning on the NXP iMX8 with the eIQ Framework

Friday, March 12, 2021

Introduction

As embedded processor performance keeps improving, with some parts now even integrating NPU hardware acceleration units for artificial intelligence and machine learning, machine learning applications are gradually finding their way onto embedded edge devices. NXP has therefore released the eIQ for i.MX software toolkit to support the common machine learning inference engines, such as TensorFlow and Caffe, on its i.MX series of embedded processors. Among the supported engines, Arm NN, TensorFlow Lite, and ONNX can use GPU/NPU hardware acceleration, while OpenCV and PyTorch currently run on the CPU only.

The NXP eIQ stack provides hardware acceleration for the different frontend runtimes through the Neural Network Runtime (NNRT) module; the architecture is shown in the figure below. For many machine learning workloads, the hardware acceleration engine improves inference performance considerably.

(Figure: eIQ supported inference engines and the NNRT-based software stack; not reproduced here.)

The platform used for the demonstrations in this article is the Toradex Apalis iMX8, a computer-on-module based on the NXP iMX8QM SoC with Cortex-A72 + Cortex-A53 and Cortex-M4 cores.

Preparation

An Apalis iMX8QM 4GB WB IT module is used on an Ixora carrier board. Connect the debug serial port UART1 (X22 on the carrier board) to the development host for debugging, and connect an HDMI monitor to the carrier board.

Building, Deploying, and Configuring Apalis iMX8 Yocto Linux

The Apalis iMX8 Yocto Linux image is built with the Yocto/OpenEmbedded framework; see here for the basic configuration instructions, then build the Reference Multimedia image after applying the modifications below. If you use the Linux BSP V6 release, the corresponding Yocto Project branch is kirkstone; please contact Toradex for up-to-date material.

The i.MX8 Yocto layers do not include NXP Machine Learning or OpenCV 4.4.0 support by default, so the corresponding layers must first be added as follows. For a detailed NXP Yocto guide, see the i.MX Yocto Project User's Guide Rev. L5.4.70_2.3.0.

  • Download the related layers from the official NXP repository

$ repo init -u https://source.codeaurora.org/external/imx/imx-manifest -b imx-linux-zeus -m imx-5.4.70-2.3.0.xml
$ repo sync 
$ DISTRO=fsl-imx-wayland MACHINE=imx8qmmek source imx-setup-release.sh -b build

  • Copy the machine learning layer meta-ml into the Toradex Yocto environment

$ cp -r …/sources/meta-imx/meta-ml …/oe-core/layers/

  • Modify the meta-ml layer file …/layers/meta-ml/conf/layer.conf to support Yocto dunfell

--- a/layers/meta-ml/conf/layer.conf 2021-03-03 15:50:59.718815084 +0800
+++ b/layers/meta-ml/conf/layer.conf 2021-03-03 16:55:46.791158625 +0800
@@ -8,4 +8,4 @@
BBFILE_COLLECTIONS += "meta-ml"
BBFILE_PATTERN_meta-ml := "^${LAYERDIR}/"
BBFILE_PRIORITY_meta-ml = "8"
-LAYERSERIES_COMPAT_meta-ml = "warrior zeus"
+LAYERSERIES_COMPAT_meta-ml = "warrior zeus dunfell"

  • Copy the OpenCV 4.4.0 recipes into the Toradex Yocto environment

$ cp -r …/sources/meta-imx/meta-bsp/recipes-support/opencv/ …/oe-core/layers/meta-toradex-nxp/recipes-support/opencv/

  • Modify build/conf/bblayers.conf to add the extra layers above

--- a/build/conf/bblayers.conf    
+++ b/build/conf/bblayers.conf    
@@ -24,6 +24,9 @@
 ${TOPDIR}/../layers/meta-openembedded/meta-python \
 ${TOPDIR}/../layers/meta-freescale-distro \
 ${TOPDIR}/../layers/meta-toradex-demos \
+  ${TOPDIR}/../layers/meta-ml \
 ${TOPDIR}/../layers/meta-qt5 \
 \
 \


Modify local.conf to add machine learning support

  • Add Python and OpenCV support

+IMAGE_INSTALL_append = " python3 python3-pip opencv python3-opencv python3-pillow"

  • Add eIQ support

+IMAGE_INSTALL_append = " arm-compute-library nn-imx tensorflow-lite armnn onnxruntime"
+PACKAGECONFIG_append_pn-opencv_mx8 = " dnn jasper qt5 test"

  • Remove the conflicting OpenCL support

+PACKAGECONFIG_remove_pn-opencv_mx8 = "opencl"
+PACKAGECONFIG_remove_pn-arm-compute-library = "opencl"

  • Optional: add onnxruntime and Arm NN development support to the SDK

+TOOLCHAIN_TARGET_TASK_append += " onnxruntime-dev armnn-dev "

  • Accept the NXP EULA

ACCEPT_FSL_EULA = "1"

Build the Image and SDK

  • Build the Reference Multimedia image

$ bitbake tdx-reference-multimedia-image

  • Build the SDK

$ bitbake tdx-reference-multimedia-image -c populate_sdk


Deploying the Yocto Linux Image

Flash the image built above to the module with the Toradex Easy Installer as described here. The latest stable release at the time of writing is Yocto Linux V5.1, and the latest testing release is Yocto Linux V5.2.

TensorFlow Lite Test

The NXP iMX8 eIQ TensorFlow Lite supported features and software stack are as follows:

  • TensorFlow Lite v2.3.1
  • Multithreaded computation with acceleration using Arm Neon SIMD instructions on Cortex-A cores
  • Parallel computation using GPU/NPU hardware acceleration (on shader or convolution units)
  • C++ and Python APIs (Python 3 supported)
  • Per-tensor and Per-channel quantized models support

(Figure: eIQ TensorFlow Lite software stack; not reproduced here.)

Sample Application Tests

The TensorFlow Lite example applications preinstalled in the image are located at

/usr/bin/tensorflow-lite-2.3.1/examples

Test the "label_image" example application with a MobileNet model

$ cd /usr/bin/tensorflow-lite-2.3.1/examples/

  • Run on CPU

$ ./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt
Loaded model mobilenet_v1_1.0_224_quant.tflite
resolved reporter
invoked 
average time: 44.999 ms 
0.780392: 653 military uniform
0.105882: 907 Windsor tie
0.0156863: 458 bow tie
0.0117647: 466 bulletproof vest
0.00784314: 835 suit


  • Run with GPU acceleration

$ ./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt -a 1
Loaded model mobilenet_v1_1.0_224_quant.tflite
resolved reporter
INFO: Created TensorFlow Lite delegate for NNAPI.
Applied NNAPI delegate.
invoked 
average time: 13.103 ms 
0.784314: 653 military uniform
0.105882: 907 Windsor tie
0.0156863: 458 bow tie
0.0117647: 466 bulletproof vest
0.00784314: 668 mortarboard

  • The predefined TensorFlow Lite Python API example script provides no option to choose between CPU and GPU: it runs with GPU/NPU
    acceleration by default if libneuralnetworks.so or libneuralnetworks.so.1 is found in the /usr/lib directory, and otherwise
    falls back to the CPU; see the Python sketch after the output below

$ python3 label_image.py
INFO: Created TensorFlow Lite delegate for NNAPI.
Applied NNAPI delegate.
Warm-up time: 5052.5 ms

Inference time: 12.7 ms

0.674510: military uniform
0.129412: Windsor tie
0.039216: bow tie
0.027451: mortarboard
0.019608: bulletproof vest
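
For reference, the following minimal Python sketch reproduces the same flow explicitly. It is only a sketch, assuming Pillow and the tflite_runtime bindings are available on the image (the bundled label_image.py may differ); on the eIQ image the NNAPI delegate is applied automatically when libneuralnetworks.so is present, so no delegate code appears here.

# Minimal TensorFlow Lite Python inference sketch for the quantized
# MobileNet model used above (tflite_runtime availability is an assumption).
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize the input image to the 224x224 uint8 tensor the model expects.
img = Image.open("grace_hopper.bmp").resize((224, 224))
interpreter.set_tensor(inp["index"],
                       np.expand_dims(np.asarray(img, dtype=np.uint8), 0))
interpreter.invoke()

# The quantized model emits uint8 scores; scale them back to [0, 1].
scores = interpreter.get_tensor(out["index"])[0]
with open("labels.txt") as f:
    labels = [line.strip() for line in f]
for i in scores.argsort()[-5:][::-1]:
    print(f"{scores[i] / 255.0:.6f}: {labels[i]}")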

For more examples and benchmark tests, C++ API application development, and the current eIQ limitations for different TensorFlow Lite models, see Chapter 3 (TensorFlow Lite) of the NXP i.MX Machine Learning User's Guide Rev. L5.4.70_2.3.0. The simple tests above already show that the MobileNet model runs noticeably faster with NPU acceleration than on the CPU.

Arm Compute Library Test

ACL (Arm Compute Library) is a computer vision and machine learning library optimized for Arm CPUs and GPUs, built on SIMD acceleration through NEON and OpenCL. On the iMX8 platform only CPU NEON acceleration is currently supported, and since ACL also serves as a compute engine underneath Arm NN, it is generally recommended to use Arm NN directly. The NXP iMX8 eIQ ACL supported features are as follows:

  • Arm Compute Library 20.02.01
  • Multithreaded computation with acceleration using Arm Neon SIMD instructions on Cortex-A cores
  • C++ API only
  • Low-level control over computation

Sample Application Tests

The ACL example applications preinstalled in the image are located at

/usr/share/arm-compute-library/build/examples

Test the MobileNet v2 DNN model with random input

$ cd /usr/share/arm-compute-library/build/examples
$ ./graph_mobilenet_v2 
Threads : 1
Target : NEON
Data type : F32
Data layout : NHWC
Tuner enabled? : false
Cache enabled? : false
Tuner mode : Normal
Tuner file : 
Fast math enabled? : false

Test passed

For more example tests and parameter descriptions, see Chapter 4 (Arm Compute Library) of the NXP i.MX Machine Learning User's Guide Rev. L5.4.70_2.3.0.

Arm NN Test

Arm NN is an open-source inference engine for CPUs, GPUs, and NPUs. It bridges existing neural network frameworks (such as TensorFlow, TensorFlow Lite, Caffe, and ONNX) and the underlying processing hardware (CPU, GPU, or NPU) on embedded Linux platforms, so developers can keep using their preferred frameworks and tools while Arm NN seamlessly maps the results onto the underlying hardware. The NXP iMX8 eIQ Arm NN supported features are as follows:

  • Arm NN 20.02.01
  • Multithreaded computation with acceleration using Arm Neon SIMD instructions on Cortex-A cores provided by the ACL Neon backend
  • Parallel computation using GPU/NPU hardware acceleration (on shader or convolution units) provided by the VSI NPU backend
  • C++ and Python APIs (Python 3 supported)
  • Supports multiple input formats (TensorFlow, TensorFlow Lite, Caffe, ONNX)
  • Off-line tools for serialization, deserialization, and quantization (must be built from source)


Create the following directories under $HOME on the Apalis iMX8 for the tests that follow

$ mkdir ArmnnTests
$ cd ArmnnTests
$ mkdir data
$ mkdir models

Caffe Sample Application Test

The image includes the following Arm NN Caffe model examples; CaffeAlexNet-Armnn is used for the tests below

/usr/bin/CaffeAlexNet-Armnn
/usr/bin/CaffeCifar10AcrossChannels-Armnn
/usr/bin/CaffeInception_BN-Armnn
/usr/bin/CaffeMnist-Armnn
/usr/bin/CaffeResNet-Armnn
/usr/bin/CaffeVGG-Armnn
/usr/bin/CaffeYolo-Armnn

Deploy the model and input data files to the module

  • Download the bvlc_alexnet_1.caffemodel model file from here and deploy it to ~/ArmnnTests/models on the Apalis iMX8; deploy the shark.jpg input file to ~/ArmnnTests/data

$ cd ArmnnTests

  • Run with C++ backend, CPU without NEON

$ CaffeAlexNet-Armnn --data-dir=data --model-dir=models --compute=CpuRef 
Info: ArmNN v20200200
Info: = Prediction values for test #0
Info: Top(1) prediction is 2 with value: 0.706227
Info: Top(2) prediction is 0 with value: 1.26575e-05
Info: Total time for 1 test cases: 15.842 seconds
Info: Average time per test case: 15841.653 ms
Info: Overall accuracy: 1.000

  • Run with ACL NEON backend, CPU with NEON

$ CaffeAlexNet-Armnn --data-dir=data --model-dir=models --compute=CpuAcc
Info: ArmNN v20200200 
Info: = Prediction values for test #0
Info: Top(1) prediction is 2 with value: 0.706226
Info: Top(2) prediction is 0 with value: 1.26573e-05
Info: Total time for 1 test cases: 0.237 seconds
Info: Average time per test case: 236.571 ms
Info: Overall accuracy: 1.000

  • Run with GPU/NPU backend

$ CaffeAlexNet-Armnn --data-dir=data --model-dir=models --compute=VsiNpu
Info: ArmNN v20200200
size = 618348Warn-Start NN executionInfo: = Prediction values for test #0
Info: Top(1) prediction is 2 with value: 0.706227
Info: Top(2) prediction is 0 with value: 1.26573e-05
Info: Total time for 1 test cases: 0.304 seconds
Info: Average time per test case: 304.270 ms
Info: Overall accuracy: 1.000

TensorFlow Sample Application Test

The image includes the following Arm NN TensorFlow model examples; TfInceptionV3-Armnn is used for the tests below

/usr/bin/TfCifar10-Armnn
/usr/bin/TfInceptionV3-Armnn
/usr/bin/TfMnist-Armnn
/usr/bin/TfMobileNet-Armnn
/usr/bin/TfResNext-Armnn

Deploy the model and input data files to the module

  • Download the inception_v3_2016_08_28_frozen.pb model file from here and deploy it to ~/ArmnnTests/models on the Apalis iMX8; deploy the shark.jpg, Dog.jpg, and Cat.jpg input files to ~/ArmnnTests/data

$ cd ArmnnTests

  • Run with C++ backend, CPU without NEON

$ TfInceptionV3-Armnn --data-dir=data --model-dir=models --compute=CpuRef 
Info: ArmNN v20200200 
Info: = Prediction values for test #0
Info: Top(1) prediction is 208 with value: 0.454895
Info: Top(2) prediction is 160 with value: 0.00278846
Info: Top(3) prediction is 131 with value: 0.000483914
Info: Top(4) prediction is 56 with value: 0.000304587
Info: Top(5) prediction is 27 with value: 0.000220489
Info: = Prediction values for test #1
Info: Top(1) prediction is 283 with value: 0.481285
Info: Top(2) prediction is 282 with value: 0.268979
Info: Top(3) prediction is 151 with value: 0.000375892
Info: Top(4) prediction is 24 with value: 0.00036751
Info: Top(5) prediction is 13 with value: 0.000330214
Info: = Prediction values for test #2
Info: Top(1) prediction is 3 with value: 0.986568
Info: Top(2) prediction is 0 with value: 1.51615e-05
Info: Total time for 3 test cases: 1477.627 seconds
Info: Average time per test case: 492542.205 ms
Info: Overall accuracy: 1.000

  • Run with ACL NEON backend, CPU with NEON

$ TfInceptionV3-Armnn --data-dir=data --model-dir=models --compute=CpuAcc
Info: ArmNN v20200200
Info: = Prediction values for test #0
Info: Top(1) prediction is 208 with value: 0.454888
Info: Top(2) prediction is 160 with value: 0.00278851
Info: Top(3) prediction is 131 with value: 0.00048392
Info: Top(4) prediction is 56 with value: 0.000304589
Info: Top(5) prediction is 27 with value: 0.000220489
Info: = Prediction values for test #1
Info: Top(1) prediction is 283 with value: 0.481286
Info: Top(2) prediction is 282 with value: 0.268977
Info: Top(3) prediction is 151 with value: 0.000375891
Info: Top(4) prediction is 24 with value: 0.000367506
Info: Top(5) prediction is 13 with value: 0.000330212
Info: = Prediction values for test #2
Info: Top(1) prediction is 3 with value: 0.98657
Info: Top(2) prediction is 0 with value: 1.51611e-05
Info: Total time for 3 test cases: 4.541 seconds
Info: Average time per test case: 1513.509 ms
Info: Overall accuracy: 1.000

  • Run with GPU/NPU backend

$ TfInceptionV3-Armnn --data-dir=data --model-dir=models --compute=VsiNpu
Info: ArmNN v20200200
, size = 1072812Warn-Start NN executionInfo: = Prediction values for test #0
Info: Top(1) prediction is 208 with value: 0.454892
Info: Top(2) prediction is 160 with value: 0.00278848
Info: Top(3) prediction is 131 with value: 0.000483917
Info: Top(4) prediction is 56 with value: 0.000304589
Info: Top(5) prediction is 27 with value: 0.00022049
Warn-Start NN executionInfo: = Prediction values for test #1
Info: Top(1) prediction is 283 with value: 0.481285
Info: Top(2) prediction is 282 with value: 0.268977
Info: Top(3) prediction is 151 with value: 0.000375891
Info: Top(4) prediction is 24 with value: 0.000367508
Info: Top(5) prediction is 13 with value: 0.000330214
Warn-Start NN executionInfo: = Prediction values for test #2
Info: Top(1) prediction is 3 with value: 0.986568
Info: Top(2) prediction is 0 with value: 1.51615e-05
Info: Total time for 3 test cases: 5.617 seconds
Info: Average time per test case: 1872.355 ms
Info: Overall accuracy: 1.000


ONNX Sample Application Test

The image includes the following Arm NN ONNX model examples; OnnxMobileNet-Armnn is used for the tests below

/usr/bin/OnnxMnist-Armnn
/usr/bin/OnnxMobileNet-Armnn

Deploy the model and input data files to the module

  • Download the mobilenetv2-1.0.onnx model file from here and deploy it to ~/ArmnnTests/models on the Apalis iMX8; deploy the shark.jpg, Dog.jpg, and Cat.jpg input files to ~/ArmnnTests/data

$ cd ArmnnTests

  • Run with C++ backend, CPU without NEON

$ OnnxMobileNet-Armnn --data-dir=data --model-dir=models --compute=CpuRef 
Info: ArmNN v20200200
Info: = Prediction values for test #0
Info: Top(1) prediction is 208 with value: 17.1507
Info: Top(2) prediction is 207 with value: 15.3666
Info: Top(3) prediction is 159 with value: 11.0918
Info: Top(4) prediction is 151 with value: 5.26187
Info: Top(5) prediction is 112 with value: 4.09802
Info: = Prediction values for test #1
Info: Top(1) prediction is 281 with value: 13.6938
Info: Top(2) prediction is 43 with value: 6.8851
Info: Top(3) prediction is 39 with value: 6.33825
Info: Top(4) prediction is 24 with value: 5.8566
Info: Top(5) prediction is 8 with value: 3.78032
Info: = Prediction values for test #2
Info: Top(1) prediction is 2 with value: 22.6968
Info: Top(2) prediction is 0 with value: 5.99574
Info: Total time for 3 test cases: 163.569 seconds
Info: Average time per test case: 54523.023 ms
Info: Overall accuracy: 1.000

  • Run with ACL NEON backend, CPU with NEON

$ OnnxMobileNet-Armnn --data-dir=data --model-dir=models --compute=CpuAcc
Info: ArmNN v20200200 
Info: = Prediction values for test #0
Info: Top(1) prediction is 208 with value: 17.1507
Info: Top(2) prediction is 207 with value: 15.3666
Info: Top(3) prediction is 159 with value: 11.0918
Info: Top(4) prediction is 151 with value: 5.26187
Info: Top(5) prediction is 112 with value: 4.09802
Info: = Prediction values for test #1
Info: Top(1) prediction is 281 with value: 13.6938
Info: Top(2) prediction is 43 with value: 6.88511
Info: Top(3) prediction is 39 with value: 6.33825
Info: Top(4) prediction is 24 with value: 5.8566
Info: Top(5) prediction is 8 with value: 3.78032
Info: = Prediction values for test #2
Info: Top(1) prediction is 2 with value: 22.6968
Info: Top(2) prediction is 0 with value: 5.99574
Info: Total time for 3 test cases: 1.222 seconds
Info: Average time per test case: 407.494 ms
Info: Overall accuracy: 1.000

  • Run with GPU/NPU backend

$ OnnxMobileNet-Armnn --data-dir=data --model-dir=models --compute=VsiNpu
Info: ArmNN v20200200
, size = 602112Warn-Start NN executionInfo: = Prediction values for test #0
Info: Top(1) prediction is 208 with value: 8.0422
Info: Top(2) prediction is 207 with value: 7.98566
Info: Top(3) prediction is 159 with value: 6.76481
Info: Top(4) prediction is 151 with value: 4.16534
Info: Top(5) prediction is 60 with value: 2.40269
Warn-Start NN executionInfo: = Prediction values for test #1
Info: Top(1) prediction is 287 with value: 5.98563
Info: Top(2) prediction is 24 with value: 5.49244
Info: Top(3) prediction is 8 with value: 2.24259
Info: Top(4) prediction is 7 with value: 1.36127
Info: Top(5) prediction is 5 with value: -1.69145
Error: Prediction for test case 1 (287) is incorrect (should be 281)
Warn-Start NN executionInfo: = Prediction values for test #2
Info: Top(1) prediction is 2 with value: 11.099
Info: Top(2) prediction is 0 with value: 3.42508
Info: Total time for 3 test cases: 0.258 seconds
Info: Average time per test case: 86.134 ms
Error: One or more test cases failed


Besides the inference frontends above, TensorFlow Lite models are also supported. For more example tests, parameter descriptions, and the Arm NN C++ API/Python API development workflow, see Chapter 5 (Arm NN) of the NXP i.MX Machine Learning User's Guide Rev. L5.4.70_2.3.0; a Python sketch follows below.
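
As a starting point for the Python API route, here is a minimal PyArmNN sketch. It assumes the pyarmnn bindings are installed on the image; the TFLite model path is hypothetical, and the backend names correspond to the --compute options used above.

# Minimal PyArmNN sketch: load a TFLite model and run one inference.
# The pyarmnn availability and the model path are assumptions.
import numpy as np
import pyarmnn as ann

parser = ann.ITfLiteParser()
network = parser.CreateNetworkFromBinaryFile("models/mobilenet_v1_1.0_224_quant.tflite")

# Pick a backend: CpuRef, CpuAcc, or VsiNpu (as with --compute above).
runtime = ann.IRuntime(ann.CreationOptions())
opt_net, _ = ann.Optimize(network, [ann.BackendId("CpuAcc")],
                          runtime.GetDeviceSpec(), ann.OptimizerOptions())
net_id, _ = runtime.LoadNetwork(opt_net)

graph_id = 0
in_name = parser.GetSubgraphInputTensorNames(graph_id)[0]
in_info = parser.GetNetworkInputBindingInfo(graph_id, in_name)
out_name = parser.GetSubgraphOutputTensorNames(graph_id)[0]
out_info = parser.GetNetworkOutputBindingInfo(graph_id, out_name)

# Dummy uint8 NHWC input; replace with real image data.
data = np.zeros((1, 224, 224, 3), dtype=np.uint8)
input_tensors = ann.make_input_tensors([in_info], [data])
output_tensors = ann.make_output_tensors([out_info])
runtime.EnqueueWorkload(net_id, input_tensors, output_tensors)
print(ann.workload_tensors_to_ndarray(output_tensors)[0])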

ONNX Test

ONNX Runtime is also an open-source machine learning inference engine. The NXP iMX8 eIQ ONNX Runtime supported features and software stack are as follows:

  • ONNX Runtime 1.1.2
  • Multithreaded computation with acceleration using Arm Neon SIMD instructions on Cortex-A cores provided by the ACL and Arm NN execution providers
  • Parallel computation using GPU/NPU hardware acceleration (on shader or convolution units) provided by the VSI NPU execution provider
  • C++ and Python APIs (Python 3 supported)

(Figure: eIQ ONNX Runtime software stack; not reproduced here.)

Sample Application Tests

ONNX Runtime provides onnx_test_runner (preinstalled in /usr/bin of the BSP) for running test models from the ONNX model zoo. The following models have been tested with eIQ on the iMX8:

MobileNet v2, ResNet50 v2, ResNet50 v1, SSD Mobilenet v1, Yolo v3

MobileNet v2 Model Test


  • Download the model archive from here, then extract it on the Apalis iMX8 so that the mobilenetv2-7 folder ends up under the $HOME directory

$ cd /home/root/

  • Run with the Arm NN execution provider, CPU with NEON

$ onnx_test_runner -j 1 -c 1 -r 1 -e armnn ./mobilenetv2-7/ 
…[E:onnxruntime:Default, runner.cc:217 operator()] Test mobilenetv2-7 finished in 0.907 seconds, t
result: 
      Models: 1
      Total test cases: 3
              Succeeded: 3
              Not implemented: 0
              Failed: 0
      Stats by Operator type:
              Not implemented(0): 
              Failed:
Failed Test Cases:

  • Run with the ACL execution provider, CPU with NEON

$ onnx_test_runner -j 1 -c 1 -r 1 -e acl ./mobilenetv2-7/
…[E:onnxruntime:Default, runner.cc:217 operator()] Test mobilenetv2-7 finished in 0.606 seconds, t
result: 
      Models: 1
      Total test cases: 3
              Succeeded: 3
              Not implemented: 0
              Failed: 0
      Stats by Operator type:
              Not implemented(0): 
              Failed:
Failed Test Cases:

  • Run with GPU/NPU backend

$ onnx_test_runner -j 1 -c 1 -r 1 -e vsi_npu ./mobilenetv2-7/
…[E:onnxruntime:Default, runner.cc:217 operator()] Test mobilenetv2-7 finished in 0.446 seconds, t
result: 
      Models: 1
      Total test cases: 3
              Succeeded: 3
              Not implemented: 0
              Failed: 0
      Stats by Operator type:
              Not implemented(0): 
              Failed:
Failed Test Cases:
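
Besides onnx_test_runner, a model can also be driven directly from the ONNX Runtime Python API. Below is a minimal sketch with a random input; the .onnx file name inside the extracted archive is an assumption.

# Minimal ONNX Runtime Python sketch: run MobileNet v2 on a dummy input.
# The model file name inside the mobilenetv2-7 archive is an assumption.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("mobilenetv2-7/mobilenetv2-7.onnx")
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)  # expect a 1x3x224x224 float32 tensor

x = np.random.rand(1, 3, 224, 224).astype(np.float32)
scores = sess.run(None, {inp.name: x})[0]
print("top-1 class index:", int(np.argmax(scores)))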

For more example tests, parameter descriptions, and the C++ API, see Chapter 6 (ONNX Runtime) of the NXP i.MX Machine Learning User's Guide Rev. L5.4.70_2.3.0.

OpenCV Test

OpenCV is the well-known open-source computer vision library. It includes an ML module offering classic machine learning algorithms, and it supports both neural network inference (DNN module) and traditional machine learning algorithms (ML module). The NXP iMX8 eIQ OpenCV supported features are as follows:

  • OpenCV 4.4.0
  • C++ and Python APIs (Python 3 supported)
  • Only CPU computation is supported
  • Input image or live camera (webcam) is supported

Sample Application Tests

The OpenCV test models and data preinstalled in the BSP are located as follows

DNN example applications - /usr/share/OpenCV/samples/bin

Input data and model configuration files - /usr/share/opencv4/testdata/dnn

Image Classification DNN Example Test


  • Download the squeezenet_v1.1.caffemodel model file and the models.yml configuration file from here, and copy them to /usr/share/OpenCV/samples/bin
  • Copy the data files to the working directory

$ cp /usr/share/opencv4/testdata/dnn/dog416.png /usr/share/OpenCV/samples/bin/
$ cp /usr/share/opencv4/testdata/dnn/squeezenet_v1.1.prototxt /usr/share/OpenCV/samples/bin/
$ cp /usr/share/OpenCV/samples/data/dnn/classification_classes_ILSVRC2012.txt /usr/share/OpenCV/samples/bin/
$ cd /usr/share/OpenCV/samples/bin/

  • Run with default image

$ ./example_dnn_classification --input=dog416.png --zoo=models.yml squeezenet 


  • Run with live camera (/dev/video2) input

$ ./example_dnn_classification --device=2 --zoo=models.yml squeezenet

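The same classification flow can also be scripted with the OpenCV Python API. Below is a minimal cv2.dnn sketch; the 227x227 input size and the ImageNet mean values are assumptions following the usual SqueezeNet Caffe preprocessing.

# Minimal OpenCV DNN classification sketch with the SqueezeNet Caffe model.
# The 227x227 input size and the mean values are assumed preprocessing.
import cv2

net = cv2.dnn.readNetFromCaffe("squeezenet_v1.1.prototxt",
                               "squeezenet_v1.1.caffemodel")
img = cv2.imread("dog416.png")
blob = cv2.dnn.blobFromImage(img, 1.0, (227, 227), (104, 117, 123))
net.setInput(blob)
out = net.forward().flatten()

with open("classification_classes_ILSVRC2012.txt") as f:
    classes = [line.strip() for line in f]
idx = int(out.argmax())
print(classes[idx], float(out[idx]))

For live camera input, the image read can be swapped for cv2.VideoCapture(2) and a frame-by-frame loop.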

For more example tests and descriptions, see Chapter 8 (OpenCV machine learning demos) of the NXP i.MX Machine Learning User's Guide Rev. L5.4.70_2.3.0.

Summary

Based on the NXP eIQ machine learning toolkit, this article demonstrated sample applications for several machine learning inference engines on the iMX8 embedded platform, and briefly compared model inference performance between CPU NEON and the GPU/NPU. Actual application development also involves converting trained models into deployable inference models, which is beyond the scope of this article.

References

i.MX Machine Learning User's Guide Rev. L5.4.70_2.3.0
i.MX Yocto Project User's Guide Rev. L5.4.70_2.3.0
https://developer.toradex.cn/knowledge-base/board-support-package/openembedded-core

Author: Qin Hai, Technical Sales Engineer, Toradex (Shanghai)
