Hands On With Nvidia’s New Jetson Nano
Until recently, experimenting with AI-driven robots was largely limited to those with substantial training and resources. Nvidia has done as much as any other company to change that. Its latest effort is the new Jetson Nano developer kit. Built around a 128-core Maxwell GPU and a quad-core ARM Cortex-A57 CPU running at 1.43 GHz, coupled with 4GB of LPDDR4 memory, the Nano developer kit packs a rich set of I/O capabilities into the included carrier board.
The kit natively supports Ubuntu, which can easily be configured using Nvidia’s JetPack software. The developer kit moniker is a bit misleading, I think, because the kit is really just the module and carrier board (well, and an origami paper holder to prop it up), and doesn’t even include any accessories like a camera. But at $99 there is still plenty of value packed into it. Of course, you can also get all the software you need to do quite a bit with it for free. So I couldn’t resist plunking down my $108 (with tax) to snag one of the first available at Nvidia’s San Jose GTC. I’ve started to work with it, so I can share my experience with you.
What You Get With Nvidia’s Jetson Nano
In addition to the Jetson Nano module itself, there's a well-thought-out carrier board. For ports, it has four USB 3.0 ports, HDMI, DisplayPort, an M.2 slot hidden under the Nano daughter card that can be used for a Wi-Fi/BT card or possibly other cards, a 40-pin GPIO connector, and a CSI connector for the IMX219-based Raspberry Pi Camera v2 and similar cameras. There are a few sets of jumpers to allow some configuration flexibility, and of course the needed microSD slot for your system "drive." The integrated passive heat sink has mounting points for a fan if needed, and a header is provided for fan power and control. You can also power the board from a 4-amp, 5-volt DC power adapter if you need more power for peripherals.
JetPack, Nvidia’s free software stack for Jetson developers, supports the Nano as of release 4.2 and comes packed with lots of AI goodies including TensorRT, cuDNN, VisionWorks, and OpenCV. Nvidia benchmarks the Nano at 472 GFLOPS of compute performance while consuming as little as 5 watts. By default the module ships in 10-watt mode.
Getting Started With a Jetson Nano
Because the Nano is just a board, you'll need to provide your own USB mouse and keyboard, as well as a monitor that supports either HDMI or DisplayPort (the board has no native Wi-Fi or Bluetooth support, so wired peripherals are the simplest route). A 16GB or larger microSD card is also needed to hold the system image and serve as your system drive; Nvidia recommends a UHS-I or better-performing card because you'll be working from it. Finally, a good-quality (2-amp or better) 5-volt micro USB power supply is needed to get you started; you can use a dedicated 5-volt power supply to feed the system with up to 20 watts if needed for accessories.
Writing the JetPack system image to a microSD card is trivial on a Linux, Mac, or Windows computer. I suspect many existing tools will work, but Nvidia provides links to a free card formatter and a free image writer, which I used. It took about 10 minutes to write the image to my 16GB 633x microSD card, although the system image is nearly 13GB, so after using the system for a bit I moved to a 64GB SanDisk Extreme Pro card. After creating my system image, I plugged in the keyboard, mouse, and monitor, stashed the card in the sort-of-hard-to-see slot underneath the board, and plugged in an old but high-quality 2A Samsung phone charger and cable. The Nano booted right away into a full Ubuntu desktop.
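If you want extra confidence that the image was written correctly before booting, one simple check is to hash the image file and compare it against the same number of bytes read back from the card. This is a minimal sketch, not Nvidia's procedure; the paths are placeholders for your own image file and card device, and reading a raw device typically requires elevated privileges:

```python
import hashlib
import os

def sha256_of(path: str, length: int, chunk_size: int = 1 << 20) -> str:
    """Hash the first `length` bytes of a file or block device."""
    h = hashlib.sha256()
    remaining = length
    with open(path, "rb") as f:
        while remaining > 0:
            chunk = f.read(min(chunk_size, remaining))
            if not chunk:
                break
            h.update(chunk)
            remaining -= len(chunk)
    return h.hexdigest()

def image_matches_card(image_path: str, device_path: str) -> bool:
    """Compare the image against the same number of bytes on the card.

    The card is usually larger than the image, so only the first
    image-sized span of the device is hashed.
    """
    length = os.path.getsize(image_path)
    return sha256_of(image_path, length) == sha256_of(device_path, length)
```

On Linux you might call `image_matches_card("jetson-nano.img", "/dev/sdX")` (both names hypothetical) after the writer finishes.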
Plugging in an Ethernet cable had me on the network instantly, so I was able to check for package updates and browse Nvidia’s Getting Started and other Tutorial pages directly from the Nano. However, the Nano is unlikely to replace your current PC as a daily driver. As you might expect, it’s noticeably slow navigating web pages compared with a full-fledged laptop or desktop.
Hello AI World
In the now-traditional homage to Dennis Ritchie's "hello, world" C program, Nvidia provides a basic AI tutorial in the form of a "Hello AI World" program. It does simple inferencing using a pre-trained neural network (AlexNet and GoogLeNet are downloaded by default). By itself, it isn't any more sophisticated than what you could do in a similar amount of time (typically a couple of hours) by following any of the dozens of basic image-recognition tutorials on a PC. The machine is also a lot slower, so if you simply want to learn about AI software, you're probably better off doing it on your desktop or laptop. However, using the Nano gets you acquainted with several components of the JetPack toolset, and with the Nano itself, as preparation for more sophisticated hardware-related projects.
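Conceptually, the classification step of such a demo boils down to running the image through the network and picking the class with the highest score. Here's a toy sketch of just that final step; the logits and labels are made up for illustration, whereas the real tutorial gets them from a TensorRT-optimized network:

```python
import math

def classify(logits, labels):
    """Convert raw network outputs to a (label, confidence) pair.

    Softmax turns the logits into probabilities; the argmax picks
    the winning class.
    """
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical scores for three ImageNet-style classes
label, conf = classify([2.0, 5.5, 0.3], ["lemon", "orange", "banana"])
# label == "orange", with confidence above 0.9
```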
All the steps in the tutorial were easy to follow and worked correctly in my tests. However, when it came time to actually run the inferencing engine to identify the sample image of an orange, I thought the machine had hung. I was also browsing the tutorial information on the Nano, so I'm sure having a few Chromium tabs open didn't help my available memory situation, but it certainly would have been nice to have some interactive feedback during the time-consuming model-loading process.
Unfortunately, because the Nano kit doesn't include a camera, you can't move on to the fun part of the demo, identifying objects from the world around you, until you purchase one. I added a Raspberry Pi Camera v2 module, one of the cameras the Nano supports. Without a camera, you can still write your own recognition code and run it against the sample images, but that remains in the category of things more easily done on a full PC. Adding the camera let me run the various included recognition demos. In general, they worked correctly, although until I upgraded the power supply to a 4-amp model, the system would sometimes mysteriously shut off. If you do purchase the Raspberry Pi Camera module, note that the lens requires manual focusing, either with the included plastic tool or, carefully, with very small pliers.
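For anyone wiring the CSI camera into their own code, the usual route on the Nano is a GStreamer pipeline fed into OpenCV. The sketch below builds the commonly cited `nvarguscamerasrc` pipeline string; element names and defaults can vary between JetPack releases, so treat this as an assumption to verify against your installed version:

```python
def csi_pipeline(width=1280, height=720, fps=30, flip=0):
    """Build a GStreamer pipeline string for the Nano's CSI camera.

    nvarguscamerasrc captures from the Pi Camera v2; nvvidconv converts
    the frames so an OpenCV appsink can consume them as BGR.
    """
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink"
    )

# With OpenCV built against GStreamer (as it is in JetPack), usage
# would look roughly like:
#   cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()
```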
Next Step: JetBot
By itself, even with the addition of a camera or two, the Nano isn't super exciting. After all, you can run all the same code on any decent system with a GPU and a webcam. Where it gets interesting is when you build a Jetson into something. To bootstrap this process, Nvidia has created JetBot, an open-source robot kit. It's a wheeled bot with cameras that can be remotely driven or programmed. The bill of materials for the kit is fairly extensive: it requires ordering from several vendors and includes several parts that need to be 3D printed. Fortunately, there are links for ordering and for the model files needed for printing. However, several of the parts are out of stock at the listed vendors, so it may take a little while to get to a completed JetBot. I'm in the middle of the process, so stay tuned for my further adventures once I get a JetBot up and running.
In addition to various projects using a JetBot, I'm sure we'll start to see quite a number of other open-source efforts built around the Jetson Nano, especially once it starts to ship in volume in June. One that appeals to me is an open-source alternative to the proprietary security cameras on the market. It should be possible to do most of the monitoring and recognition that subscription-based cameras do, except on your own hardware and without the subscription.
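A DIY camera of that sort usually starts with something much simpler than a neural network: frame differencing to decide whether anything is moving at all, so the expensive recognition model only runs on interesting frames. Here's a minimal, illustrative sketch of that gating step; frames are flat lists of grayscale pixel values, and a real version would pull them from the camera and tune the thresholds:

```python
def motion_detected(prev_frame, curr_frame,
                    pixel_threshold=25, min_changed_fraction=0.01):
    """Return True if enough pixels changed between two grayscale frames.

    A pixel counts as "changed" if its brightness moved by more than
    pixel_threshold; motion is declared when the fraction of changed
    pixels exceeds min_changed_fraction.
    """
    changed = sum(
        1 for a, b in zip(prev_frame, curr_frame)
        if abs(a - b) > pixel_threshold
    )
    return changed / len(curr_frame) > min_changed_fraction

# Two tiny 4-pixel "frames": one pixel jumps from 10 to 200
still = [10, 10, 10, 10]
moving = [10, 200, 10, 10]
# motion_detected(still, moving) is True; (still, still) is False
```

The idea is that only frames flagged here would be handed to the classifier, keeping the Nano's GPU (and power budget) free most of the time.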
You can pre-order the Jetson Nano for $99 from various online retailers, including Adafruit. Since Nvidia was selling them at GTC, they obviously exist already, but it appears general retail availability may not come until June. There will also be a production-ready commercial version of the Nano with slightly better specs that businesses can purchase in bulk for $129.
- Dennis Ritchie, creator of C, bids “goodbye, world”
- Nvidia’s Jetson Xavier Stuffs Volta Performance Into Tiny Form Factor
- Everything We Know About the Raspberry Pi 4