Weeks 1 & 2: Introduction & Jetson Nano Setup

Hello AI World

Hello AI World can be run entirely onboard your Jetson, including inferencing with TensorRT and transfer learning with PyTorch. The inference portion of Hello AI World - which includes coding your own image classification and object detection applications in Python or C++, along with live camera demos - can be completed on your Jetson in roughly two hours or less, while transfer learning is best left running overnight.

System Setup (previously done)

Camera Setup

Assignment 1

Verify that your camera is running according to the new instructions.

Please send a message to the professor as soon as you have finished.

nano@jetson-nano:~$ nvgstcapture-1.0 
Encoder null, cannot set bitrate!
Encoder Profile = High
Supported resolutions in case of ARGUS Camera
  (2) : 640x480
  (3) : 1280x720
  (4) : 1920x1080
.....
nano@jetson-nano:~$ nvgstcapture-1.0 --orientation 2
.....

Take a picture and save to disk

  1. Connect CSI camera
  2. Execute in a shell the command nvgstcapture-1.0 --automate --capture-auto
  3. Open the captured image with eog nvcamtest_XX.jpg

Capture a video and save to disk

  1. Connect CSI camera
  2. Execute in a shell the command nvgstcapture-1.0 --mode=2 --automate --capture-auto
  3. The application will record 10 seconds of video
  4. Play the recorded file with totem nvcamtest_XX.mp4
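The two step lists above can be wrapped in a small script. The sketch below runs the same photo-capture command as step 2 and then locates the newest nvcamtest_*.jpg file (the capture itself requires the CSI camera, but the file-lookup helper is hardware-independent; the glob pattern is an assumption based on the nvcamtest_XX naming shown above):

```python
import glob
import os
import subprocess

def newest_capture(pattern="nvcamtest_*.jpg"):
    """Return the most recently modified file matching the pattern, or None."""
    files = glob.glob(pattern)
    return max(files, key=os.path.getmtime) if files else None

def capture_photo():
    """Run the photo-capture command from step 2, then locate the new file."""
    subprocess.run(["nvgstcapture-1.0", "--automate", "--capture-auto"],
                   check=True)
    return newest_capture()
```

Calling capture_photo() on the Jetson should return the path of the image just written to disk.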

Homework (Optional)

Using the options available in nvgstcapture-1.0, control and adjust for the lighting conditions.
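As a starting point for this homework, the helper below assembles a capture command with lighting-related options. The option names (--whitebalance, --saturation) are an assumption based on the output of nvgstcapture-1.0 --help on L4T R32; verify them on your own release before relying on them:

```python
def capture_command(whitebalance=None, saturation=None):
    """Build an nvgstcapture-1.0 command with lighting-related options.

    NOTE: --whitebalance and --saturation are assumed from
    `nvgstcapture-1.0 --help` on L4T R32; check your release.
    """
    cmd = ["nvgstcapture-1.0", "--automate", "--capture-auto"]
    if whitebalance is not None:
        cmd.append("--whitebalance=%d" % int(whitebalance))
    if saturation is not None:
        cmd.append("--saturation=%s" % saturation)
    return cmd
```

Pass the resulting list to subprocess.run() and compare the captured images under different settings.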

Please send a message to the professor as soon as you have finished.

Setup Container

Running Docker Container

The jetson-inference project provides a pre-built Docker container with TensorRT, PyTorch, and the compiled Hello AI World applications already installed. Clone the repository first, then launch the container with the docker/run.sh script.

Inference instructions

nano@jetson-nano:~$ git clone --recursive https://github.com/dusty-nv/jetson-inference
Cloning into 'jetson-inference'...
remote: Enumerating objects: 20861, done.
....

Launching the Container

nano@jetson-nano:~$ cd jetson-inference/
nano@jetson-nano:~/jetson-inference$ docker/run.sh --volume /tmp/argus_socket:/tmp/argus_socket
reading L4T version from /etc/nv_tegra_release
L4T BSP Version:  L4T R32.6.1
[sudo] password for nano: 
size of data/networks:  79397 bytes
.....

Running applications

root@jetson-nano:/jetson-inference# cd build/aarch64/bin
root@jetson-nano:/jetson-inference/build/aarch64/bin# ./video-viewer
# (press Ctrl+D to exit the container)
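As a sketch of what video-viewer does internally, the loop below captures frames from the camera and renders them to the display. The videoSource/videoOutput names and the csi:// and display:// URIs are taken from the jetson-inference Python API documentation; verify them inside the container:

```python
def split_stream_uri(uri):
    """Split a stream URI such as 'csi://0' into (protocol, resource)."""
    protocol, _, resource = uri.partition("://")
    return protocol, resource

def view_camera(input_uri="csi://0", output_uri="display://0"):
    """Render the camera stream to the display, like ./video-viewer."""
    # jetson_utils is available inside the jetson-inference container.
    from jetson_utils import videoSource, videoOutput
    source = videoSource(input_uri)
    sink = videoOutput(output_uri)
    while source.IsStreaming() and sink.IsStreaming():
        img = source.Capture()
        if img is not None:
            sink.Render(img)
```

Running view_camera() inside the container should open a window showing the live CSI camera feed.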

Assignment 3

Verify that your camera works inside the Docker container by running the video-viewer application.

Please send a message to the professor as soon as you have finished.

Inference of Image Classification

root@jetson-nano:/jetson-inference# cd build/aarch64/bin
root@jetson-nano:/jetson-inference/build/aarch64/bin# ./imagenet images/jellyfish.jpg images/test/jellyfish.jpg

The imagenet application classifies the input image with a network pre-trained on ImageNet (GoogleNet by default) and saves a copy of the image overlaid with the predicted class and its confidence.
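The same classification can be done from Python. This is a minimal sketch assuming the imageNet/loadImage bindings documented by the jetson-inference project and available inside the container:

```python
def top_prediction(predictions):
    """Return the (label, confidence) pair with the highest confidence.

    `predictions` is a list of (label, confidence) tuples.
    """
    return max(predictions, key=lambda p: p[1])

def classify_image(path, network="googlenet"):
    """Classify one image, mirroring what ./imagenet does."""
    # These bindings ship inside the jetson-inference container.
    from jetson_inference import imageNet
    from jetson_utils import loadImage
    net = imageNet(network)
    img = loadImage(path)
    class_id, confidence = net.Classify(img)
    return net.GetClassDesc(class_id), confidence
```

For images/jellyfish.jpg, classify_image() should return the predicted label and its confidence score.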

Assignment 4

Test the image classification example in the Docker container by running the imagenet application.

Please send a message to the professor as soon as you have finished.

Inference of Object Detection

root@jetson-nano:/jetson-inference# cd build/aarch64/bin
root@jetson-nano:/jetson-inference/build/aarch64/bin# ./detectnet images/peds_0.jpg images/test/peds_0.jpg

The detectnet application locates objects in the input image (using the SSD-Mobilenet-v2 network by default), draws a bounding box around each detection, and saves the annotated result.
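As with classification, detection can also be scripted in Python. A minimal sketch, assuming the detectNet binding documented by the jetson-inference project; the 0.5 threshold is its documented default:

```python
def filter_detections(detections, threshold=0.5):
    """Keep only (label, confidence) detections at or above the threshold."""
    return [d for d in detections if d[1] >= threshold]

def detect_objects(path, network="ssd-mobilenet-v2"):
    """Detect objects in one image, mirroring what ./detectnet does."""
    # These bindings ship inside the jetson-inference container.
    from jetson_inference import detectNet
    from jetson_utils import loadImage
    net = detectNet(network, threshold=0.5)
    img = loadImage(path)
    results = [(net.GetClassDesc(d.ClassID), d.Confidence)
               for d in net.Detect(img)]
    return filter_detections(results)
```

For images/peds_0.jpg, detect_objects() should return one (label, confidence) pair per detected pedestrian.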

Assignment 5

Test the pedestrian detection example in the Docker container by running the detectnet application.

Please send a message to the professor as soon as you have finished.

Using Other AI Models

Additional pre-trained models for image classification, object detection, and other tasks can be downloaded with the download-models.sh tool included in the repository.

nano@jetson-nano:~$ cd jetson-inference/tools
nano@jetson-nano:~/jetson-inference/tools$ ./download-models.sh

Homework (Optional)

You can test other models for image classification, object detection, etc., by using the download-models.sh script and launching the inference with the --network option.

Please send a message to the professor as soon as you have finished.