Although this generation of the Raspberry Pi has more memory, its limited clock speed makes it impractical to run inference on the bare board alone. In this article we deploy OpenVINO with an Intel Neural Compute Stick on the Raspberry Pi to speed up AI inference.
Intel's Neural Compute Stick 2 (NCS 2) is still the size of a USB flash drive, measuring 72.5 × 27 × 14 mm. It is built around the latest Intel Movidius Myriad X VPU (vision processing unit), which integrates 16 SHAVE compute cores and a dedicated deep neural network hardware accelerator. It can perform high-performance vision and AI inference at very low power consumption, and supports the TensorFlow and Caffe development frameworks.
According to figures published by Intel, the NCS 2 is a big step up from the previous Movidius compute stick: image classification performance is about 5 times higher, and object detection performance is about 4 times higher.
The main features of the NCS 2 are as follows:
- Deep learning inference at the edge
- Pre-trained models from the Open Model Zoo
- A library of pre-optimized kernels for faster time to market
- Heterogeneous execution across various computer vision accelerators (CPU, GPU, VPU and FPGA) through a common API
- Raspberry Pi hardware support
Install the OpenVINO Toolkit
The NCS 2 supports the Raspberry Pi, and Intel provides a dedicated guide for this combination, which makes deployment quite painless.
1. Download the installation package
I chose the 2020.4 release:
cd ~/Downloads/
sudo wget https://download.01.org/opencv/2020/openvinotoolkit/2020.4/l_openvino_toolkit_runtime_raspbian_p_2020.4.287.tgz
# Create the installation folder
sudo mkdir -p /opt/intel/openvino
# Extract the archive
sudo tar -xf l_openvino_toolkit_runtime_raspbian_p_2020.4.287.tgz --strip 1 -C /opt/intel/openvino
2. Install external software dependencies
CMake was already installed earlier, so this step can actually be skipped.
sudo apt install cmake
3. Set environment variables
source /opt/intel/openvino/bin/setupvars.sh
# Set the environment variables permanently
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
Now every time you open a new terminal, the following message appears:
[setupvars.sh] OpenVINO environment initialized
4. Add USB rules
Add the current Linux user to the users group (log out and log back in afterwards for the change to take effect):
sudo usermod -a -G users "$(whoami)"
Then install the USB (udev) rules with the script shipped with the toolkit:
sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh
5. Plug the NCS 2 into a USB port
Re-insert the NCS 2 and prepare to run the programs.
Build the object detection sample
1. Create a new compilation directory
mkdir openvino && cd openvino
mkdir build && cd build
2. Build the object detection sample
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp
make -j2 object_detection_sample_ssd
3. Download the weights file, the network topology file and a test image
Download the .bin weights file:
wget --no-check-certificate https://download.01.org/opencv/2020/openvinotoolkit/2020.1/open_model_zoo/models_bin/1/face-detection-adas-0001/FP16/face-detection-adas-0001.bin
Download the .xml network topology file:
wget --no-check-certificate https://download.01.org/opencv/2020/openvinotoolkit/2020.1/open_model_zoo/models_bin/1/face-detection-adas-0001/FP16/face-detection-adas-0001.xml
Find some images containing faces to use as test samples and save them to the ~/Downloads/image directory.
4. Run the program
Here -m specifies the model topology .xml file; the program automatically finds the .bin weights file with the same name;
-d MYRIAD selects the Neural Compute Stick as the inference device;
-i specifies the path of the images under test.
./armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i ~/Downloads/image
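The sample prints the raw detections it finds. For SSD-style models such as face-detection-adas-0001, the output blob has shape [1, 1, N, 7], each row holding [image_id, label, confidence, x_min, y_min, x_max, y_max] with coordinates normalized to [0, 1]. The following is only a minimal sketch of that post-processing, not code from the sample; the function name and the sample rows are illustrative:

```python
# Hypothetical post-processing sketch for SSD-style output
# (face-detection-adas-0001 style): each row of the output blob is
# [image_id, label, confidence, x_min, y_min, x_max, y_max],
# with box coordinates normalized to [0, 1].

def filter_detections(rows, width, height, threshold=0.5):
    """Keep detections above the confidence threshold, scaled to pixels."""
    boxes = []
    for image_id, label, conf, x0, y0, x1, y1 in rows:
        if conf < threshold:
            continue  # drop low-confidence noise rows
        boxes.append((int(label), conf,
                      int(x0 * width), int(y0 * height),
                      int(x1 * width), int(y1 * height)))
    return boxes

# Two fabricated detections: one confident face, one low-confidence row.
raw = [
    [0, 1, 0.92, 0.10, 0.20, 0.30, 0.60],
    [0, 1, 0.04, 0.00, 0.00, 0.01, 0.01],
]
print(filter_detections(raw, width=640, height=480))
# -> [(1, 0.92, 64, 96, 192, 288)]
```

The confidence threshold is the main knob here: the raw blob always contains a fixed number of candidate rows, most of which are padding with near-zero confidence.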
Build the performance test program
1. Build the test program
make -j2 benchmark_app
2. Run the benchmark
Here -i is the input image to test;
-m is the input model;
-niter is the number of inference iterations to run.
./armv7l/Release/benchmark_app -i car.png -m squeezenet1.1/FP16/squeezenet1.1.xml -pc -d MYRIAD -niter 1000
The Raspberry Pi plus compute stick reaches an inference speed of about 280 FPS, which is fast enough. Next, plug the compute stick into a PC and compare.
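As a sanity check on that figure, throughput is just the iteration count divided by the elapsed wall time, and mean latency is its reciprocal. A quick sketch (the elapsed time below is illustrative, chosen to match roughly 280 FPS):

```python
# Relation between benchmark iteration count, wall time and throughput:
# fps = niter / elapsed seconds; mean latency = 1000 / fps milliseconds.

def throughput_fps(niter, elapsed_s):
    return niter / elapsed_s

def mean_latency_ms(fps):
    return 1000.0 / fps

# 1000 iterations finishing in about 3.57 s corresponds to roughly
# 280 FPS, i.e. about 3.6 ms per single inference on the NCS 2.
fps = throughput_fps(1000, 3.571)
print(round(fps), round(mean_latency_ms(fps), 1))
# -> 280 3.6
```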
3. Contrast performance
Still about 280 FPS, with no difference in speed. Evidently the computing bottleneck is the NCS 2 itself, and it matters little whether the host is a PC or a Raspberry Pi. With a compute stick plugged in, using a PC seems rather wasteful.
Then compare with an OpenVINO-accelerated model running directly on the laptop's Intel CPU:
340 FPS, so the laptop's CPU is indeed stronger.
Development workflow on the Raspberry Pi
- Select a pre-trained model;
- Convert the model with the Model Optimizer;
- Finally, run inference with the model on the Raspberry Pi.
In the normal development flow we look for a suitable model in the Open Model Zoo, which covers the basic needs of most applications. If you need to run more cutting-edge models or a neural network of your own design, the various model conversion techniques become essential skills, and the difficulty rises accordingly.
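For the conversion step, turning a frozen TensorFlow graph into FP16 IR (the precision the NCS 2 expects) boils down to a single Model Optimizer invocation. The helper below only assembles that command line as a sketch; mo.py and its --input_model, --data_type and --output_dir options come from the Model Optimizer documentation, while the model file name is just an example, not a file used in this article:

```python
# Assemble an illustrative Model Optimizer command line for converting
# a frozen TensorFlow graph to FP16 IR suitable for the NCS 2.

def mo_command(input_model, output_dir=".", data_type="FP16"):
    return ["python3", "mo.py",
            "--input_model", input_model,
            "--data_type", data_type,
            "--output_dir", output_dir]

print(" ".join(mo_command("frozen_inference_graph.pb")))
# -> python3 mo.py --input_model frozen_inference_graph.pb --data_type FP16 --output_dir .
```

The result is a pair of files, a .xml topology and a .bin weights file, exactly like the face-detection model downloaded earlier.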
Reply "rpi08" to the official account to get the download links for the relevant documents and files.
Next, we will do some model conversion work to get YOLOv5 running on the Raspberry Pi with OpenVINO.