Powering Edge AI with the Powerful Jetson Nano
NVIDIA Jetson Nano Deep Learning Edge Device
[Photo: Nano the cat]
Hardware:
Jetson Nano Developer Kit, built around a 128-core Maxwell GPU and a quad-core ARM Cortex-A57 CPU running at 1.43 GHz, coupled with 4 GB of LPDDR4 memory. This is real power at the edge; I have a new favorite device.
You will need to add a USB WiFi adapter if you are not hardwired to Ethernet. This is cheap and easy: I added a tiny $15 WiFi adapter and was off to the races.
Operating System:
Ubuntu 18.04
Library Setup:
sudo apt-get update -y
sudo apt-get install git cmake -y
sudo apt-get install libatlas-base-dev gfortran -y
sudo apt-get install libhdf5-serial-dev hdf5-tools -y
sudo apt-get install python3-dev -y
sudo apt-get install libcv-dev libopencv-dev -y
sudo apt-get install fswebcam -y
sudo apt-get install libv4l-dev -y
sudo apt-get install python-opencv -y
pip3 install psutil
pip2 install psutil
pip3.6 install easydict -U
pip3.6 install scikit-learn -U
pip3.6 install opencv-python -U --user
pip3.6 install numpy -U
pip3.6 install mxnet -U
pip3.6 install mxnet-mkl -U
pip3.6 install gluoncv --upgrade
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev -y
sudo apt-get install python3-pip
sudo pip3 install -U pip
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu
sudo nvpmodel -q --verbose
pip3 install numpy
pip3 install keras
git clone https://github.com/dusty-nv/jetson-inference
cd jetson-inference
git submodule update --init
tegrastats
pip3 install -U jetson-stats
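Before moving on, it is worth confirming that the Python stack imports cleanly. The quick check below is my addition, not part of the original setup; the package names simply match the pip installs above.
# sanity check: confirm the libraries installed above import and report their versions
import numpy, cv2, psutil, mxnet, gluoncv
import tensorflow as tf
import keras

for name, mod in [('numpy', numpy), ('opencv', cv2), ('psutil', psutil),
                  ('mxnet', mxnet), ('gluoncv', gluoncv),
                  ('tensorflow', tf), ('keras', keras)]:
    print(name, getattr(mod, '__version__', 'unknown'))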
Source:
https://github.com/tspannhw/iot-device-install
https://github.com/tspannhw/minifi-jetson-nano
IoT Setup:
Download the MiNiFi 0.6.0 source from Cloudera and build it.
Download the MiNiFi Java Agent (binary) and unzip it.
Follow these instructions.
On a Server:
We want to hook up to Cloudera Edge Flow Manager (EFM) to make flow development, deployment, management, and monitoring of MiNiFi agents trivial. Download NiFi Registry; you will also need Apache NiFi.
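For the MiNiFi C++ agent, hooking up to EFM comes down to a few C2 properties in conf/minifi.properties. The snippet below is only a sketch with placeholder values (EFM_HOST and the agent class are mine); check the EFM/CEM documentation for your version's exact endpoints.
# conf/minifi.properties (MiNiFi C++) -- placeholder values, adjust for your EFM host
nifi.c2.enable=true
nifi.c2.agent.class=jetson-nano
nifi.c2.agent.heartbeat.period=1000
nifi.c2.rest.url=http://EFM_HOST:10080/efm/api/c2-protocol/heartbeat
nifi.c2.rest.url.ack=http://EFM_HOST:10080/efm/api/c2-protocol/acknowledge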
For a good walkthrough and hands-on demonstration, see this workshop.
See these cool Jetson Nano Projects: https://developer.nvidia.com/embedded/community/jetson-projects
Monitor Status:
https://github.com/rbonghi/jetson_stats
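Since psutil is already installed above, a small status script can report basic device health alongside tegrastats and jtop. This is a sketch of my own; the fields reported are illustrative.
# report basic device health with psutil (a sketch; field selection is illustrative)
import json
import psutil

def device_status():
    return {
        'cpu_percent': psutil.cpu_percent(interval=1),
        'memory_percent': psutil.virtual_memory().percent,
        'disk_percent': psutil.disk_usage('/').percent,
        'boot_time': psutil.boot_time(),
    }

if __name__ == '__main__':
    print(json.dumps(device_status(), indent=2))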
Example Flow:
It is easy to add MiNiFi Java or C++ agents to the Jetson Nano. I did a custom MiNiFi C++ 0.6.0 build for the Jetson and put together a quick flow that runs the jetson-inference imagenet-console C++ binary on an image captured with fswebcam from a compatible Logitech USB webcam. As a proof of concept, I store the images in /opt/demo/images and pass the image path on the command line to the C++ console.
#!/bin/bash
# capture a timestamped frame from the USB webcam and classify it with imagenet-console
DATE=$(date +"%Y-%m-%d_%H%M")
# -q quiet, -r resolution, --no-banner drops the timestamp overlay
fswebcam -q -r 1280x720 --no-banner /opt/demo/images/$DATE.jpg
# TensorRT-accelerated GoogLeNet classification; writes an annotated copy next to the original
/opt/demo/jetson-inference/build/aarch64/bin/imagenet-console /opt/demo/images/$DATE.jpg /opt/demo/images/out_$DATE.jpg
Example output from a run:
imagenet-console
args (3): 0 [/opt/demo/jetson-inference/build/aarch64/bin/imagenet-console] 1 [/opt/demo/images/2019-07-01_1405.jpg] 2 [/opt/demo/images/out_2019-07-01_1405.jpg]
imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2
[TRT] TensorRT version 5.0.6
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /opt/demo/jetson-inference/build/aarch64/bin/networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine
[TRT] loading network profile from engine cache... /opt/demo/jetson-inference/build/aarch64/bin/networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine
[TRT] device GPU, /opt/demo/jetson-inference/build/aarch64/bin/networks/bvlc_googlenet.caffemodel loaded
[TRT] device GPU, CUDA engine context initialized with 2 bindings
[TRT] binding -- index 0
-- name 'data'
-- type FP32
-- in/out INPUT
-- # dims 3
-- dim #0 3 (CHANNEL)
-- dim #1 224 (SPATIAL)
-- dim #2 224 (SPATIAL)
[TRT] binding -- index 1
-- name 'prob'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1000 (CHANNEL)
-- dim #1 1 (SPATIAL)
-- dim #2 1 (SPATIAL)
[TRT] binding to input 0 data binding index: 0
[TRT] binding to input 0 data dims (b=2 c=3 h=224 w=224) size=1204224
[cuda] cudaAllocMapped 1204224 bytes, CPU 0x100e30000 GPU 0x100e30000
[TRT] binding to output 0 prob binding index: 1
[TRT] binding to output 0 prob dims (b=2 c=1000 h=1 w=1) size=8000
[cuda] cudaAllocMapped 8000 bytes, CPU 0x100f60000 GPU 0x100f60000
device GPU, /opt/demo/jetson-inference/build/aarch64/bin/networks/bvlc_googlenet.caffemodel initialized.
[TRT] networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
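The same kind of classification can also be run in pure Python with the MXNet/GluonCV stack installed earlier, which is handy if you want to skip the C++ build. The sketch below is mine; the model choice (MobileNet1.0) and the image path are illustrative, not part of the original flow.
# classify a captured frame with a pretrained GluonCV ImageNet model (illustrative sketch)
import mxnet as mx
from gluoncv import model_zoo
from gluoncv.data.transforms.presets.imagenet import transform_eval

img_path = '/opt/demo/images/2019-07-01_1405.jpg'  # any frame written by the capture script

net = model_zoo.get_model('MobileNet1.0', pretrained=True)  # downloads weights on first run
img = transform_eval(mx.image.imread(img_path))             # resize, crop, normalize, add batch dim
prob = mx.nd.softmax(net(img))[0]

# print the top-5 classes and their probabilities
for idx in prob.topk(k=5):
    i = int(idx.asscalar())
    print('%s: %.3f' % (net.classes[i], prob[i].asscalar()))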
References:
- https://nvidianews.nvidia.com/news/nvidia-announces-jetson-nano-99-tiny-yet-mighty-nvidia-cuda-x-ai-computer-that-runs-all-ai-models
- https://www.seeedstudio.com/NVIDIA-Jetson-Nano-Development-Kit-p-2916.html
- https://developer.nvidia.com/embedded/buy/jetson-nano-devkit?nvid=nv-int-mn-78462
- https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit
- https://elinux.org/Jetson_Nano
- https://www.jetsonhacks.com/2019/04/25/jetson-nano-run-on-usb-drive/
- https://github.com/JetsonHacksNano
- https://www.jetsonhacks.com/2019/03/25/nvidia-jetson-nano-developer-kit/
- https://elinux.org/Jetson_Nano#Enclosures
- https://www.e-consystems.com/nvidia-cameras/jetson-nano-cameras/3mp-mipi-camera.asp
- https://devtalk.nvidia.com/default/topic/1050377/jetson-nano/deep-learning-inference-benchmarking-instructions/
- https://medium.com/@jerry_liang/deploy-gpu-enabled-kubernetes-pod-on-nvidia-jetson-nano-ce738e3bcda9
- https://www.jetsonhacks.com/2019/04/14/jetson-nano-use-more-memory/
- https://www.jetsonhacks.com/2019/04/02/jetson-nano-raspberry-pi-camera/
- https://github.com/JetsonHacksNano/CSI-Camera
- https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification
- https://www.pyimagesearch.com/2019/05/06/getting-started-with-the-nvidia-jetson-nano/
- https://github.com/rbonghi/jetson_stats
- pip install mxnet-jetson
- https://developer.nvidia.com/embedded/community/jetson-projects
- https://github.com/autorope/donkeycar
- https://medium.com/@feicheung2016/getting-started-with-jetson-nano-and-autonomous-donkey-car-d4f25bbd1c83
- https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-console.md
- https://www.seeedstudio.com/blog/2019/04/19/instruction-on-how-to-use-nvidia-jetson-nano-with-grove/
- https://www.dlology.com/blog/how-to-run-keras-model-on-jetson-nano/