Jetson Nano Developer Kit Benchmarks for a future article on Phoronix.com.
(HIB = higher is better; LIB = lower is better. Blank cells mean no result was recorded for that board.)

,,"Jetson TX1 Max-P","Jetson TX2 Max-Q","Jetson TX2 Max-P","Jetson AGX Xavier","Jetson Nano","Raspberry Pi 3 Model B+","ASUS TinkerBoard","ODROID-XU4"
Processor,,ARMv8 rev 1 @ 1.73GHz (4 Cores),ARMv8 rev 3 @ 1.27GHz (4 Cores / 6 Threads),ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads),ARMv8 rev 0 @ 2.27GHz (8 Cores),ARMv8 rev 1 @ 1.43GHz (4 Cores),ARMv7 rev 4 @ 1.40GHz (4 Cores),ARMv7 rev 1 @ 1.80GHz (4 Cores),ARMv7 rev 3 @ 1.50GHz (8 Cores)
Motherboard,,jetson_tx1,quill,quill,jetson-xavier,jetson-nano,BCM2835 Raspberry Pi 3 Model B Plus Rev 1.3,Rockchip (Device Tree),ODROID-XU4 Hardkernel Odroid XU4
Memory,,4096MB,8192MB,8192MB,16384MB,4096MB,926MB,2048MB,2048MB
Disk,,16GB 016G32,31GB 032G34,31GB 032G34,31GB HBG4a2,32GB GB1QT,32GB GB2MW,32GB GB1QT,16GB AJTD4R
Graphics,,NVIDIA Tegra X1,NVIDIA TEGRA,NVIDIA TEGRA,NVIDIA Tegra Xavier,NVIDIA TEGRA,BCM2708,,llvmpipe 2GB
Monitor,,VE228,VE228,VE228,VE228,VE228,,,VE228
Network,,,,,,Realtek RTL8111/8168/8411,,,
OS,,Ubuntu 16.04,Ubuntu 16.04,Ubuntu 16.04,Ubuntu 18.04,Ubuntu 18.04,Raspbian 9.6,Debian 9.0,Ubuntu 18.04
Kernel,,4.4.38-tegra (aarch64),4.4.38-tegra (aarch64),4.4.38-tegra (aarch64),4.9.108-tegra (aarch64),4.9.140-tegra (aarch64),4.19.23-v7+ (armv7l),4.4.16-00006-g4431f98-dirty (armv7l),4.14.37-135 (armv7l)
Desktop,,Unity 7.4.5,Unity 7.4.0,Unity 7.4.0,Unity 7.5.0,Unity 7.5.0,LXDE,LXDE,
Display Server,,X Server 1.18.4,X Server 1.18.4,X Server 1.18.4,X Server 1.19.6,X Server 1.19.6,X Server 1.19.2,X Server 1.18.4,X Server 1.19.6
Display Driver,,NVIDIA 28.1.0,NVIDIA 28.2.1,NVIDIA 28.2.1,NVIDIA 31.0.2,NVIDIA 1.0.0,,,
OpenGL,,4.5.0,4.5.0,4.5.0,4.6.0,,,,3.3 Mesa 18.0.0-rc5 (LLVM 6.0 128 bits)
Vulkan,,1.0.8,,,1.1.76,1.1.85,,,
Compiler,,GCC 5.4.0 20160609,GCC 5.4.0 20160609 + CUDA 9.0,GCC 5.4.0 20160609 + CUDA 9.0,GCC 7.3.0 + CUDA 10.0,GCC 7.3.0 + CUDA 10.0,GCC 6.3.0 20170516,GCC 6.3.0 20170516,GCC 7.3.0
File-System,,ext4,ext4,ext4,ext4,ext4,ext4,ext4,ext4
Screen Resolution,,1920x1080,1920x1080,1920x1080,1920x1080,1920x1080,656x416,1024x768,1920x1080

,,"Jetson TX1 Max-P","Jetson TX2 Max-Q","Jetson TX2 Max-P","Jetson AGX Xavier","Jetson Nano","Raspberry Pi 3 Model B+","ASUS TinkerBoard","ODROID-XU4"
"CUDA Mini-Nbody - Test: Original ((NBody^2)/s)",HIB,,6.77,8.24,47.13,4.07,,,
"GLmark2 - Resolution: 1920 x 1080 (Score)",HIB,,,,2876,646,,,
"NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images/sec)",HIB,,25.99,32.64,208.76,14.35,,,
"NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images/sec)",HIB,,14.24,17.56,303.78,,,,
"NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images/sec)",HIB,,21.04,26.56,172.50,11.59,,,
"NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images/sec)",HIB,,11.45,14.32,265.81,,,,
"NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images/sec)",HIB,,29.83,36.87,247.95,,,,
"NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images/sec)",HIB,,15.79,19.91,475.08,,,,
"NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images/sec)",HIB,,23.94,29.83,203.96,,,,
"NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images/sec)",HIB,,12.59,15.92,394.66,,,,
"NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images/sec)",HIB,,216,264,1200,118,,,
"NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images/sec)",HIB,,148,184,1143,84.10,,,
"NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images/sec)",HIB,,374,462,2038,201,,,
"NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images/sec)",HIB,,237,301,3143,128,,,
"NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images/sec)",HIB,,72.01,92.28,547.50,41.04,,,
"NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images/sec)",HIB,,39.15,49.97,902.78,20.96,,,
"NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images/sec)",HIB,,156,197,796,83.37,,,
"NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images/sec)",HIB,,88.88,113,1146,47.82,,,
"NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images/sec)",HIB,,27.34,35.11,224.19,15.76,,,
"NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images/sec)",HIB,,14.50,18.29,372.73,7.76,,,
"NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images/sec)",HIB,,86.08,111,636,46.51,,,
"NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images/sec)",HIB,,47.15,59.69,1215.08,25.08,,,
"NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images/sec)",HIB,,179,233,1006,98.93,,,
"NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images/sec)",HIB,,104,130,1693,55.66,,,
"NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images/sec)",HIB,,32.67,41.91,259.82,17.38,,,
"NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images/sec)",HIB,,17.36,22.07,493.22,,,,
"LeelaChessZero - Backend: BLAS (Nodes/s)",HIB,,,,47.62,15.37,,,
"LeelaChessZero - Backend: CUDA + cuDNN (Nodes/s)",HIB,,,,953,140,,,
"LeelaChessZero - Backend: CUDA + cuDNN FP16 (Nodes/s)",HIB,,,,2515.01,,,,
"TTSIOD 3D Renderer - Phong Rendering With Soft-Shadow Mapping (FPS)",HIB,45.09,28.85,49.26,133,40.94,17.66,21.22,41.96
"7-Zip Compression - Compress Speed Test (MIPS)",HIB,4508,3294,5593,19212,4049,2013,2836,4120
"C-Ray - Total Time - 4K, 16 Rays Per Pixel (sec)",LIB,753,869,585,355,921,2030,1718,827
"Rust Prime Benchmark - Prime Number Test To 200,000,000 (sec)",LIB,128.45,170.25,104.96,32.37,150.19,1097.69,1821.05,574.11
"Zstd Compression - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19 (sec)",LIB,145.80,253.80,144.97,80.06,129.87,342.23,496.62,
"FLAC Audio Encoding - WAV To FLAC (sec)",LIB,79.20,104.28,65.07,54.47,104.77,339.53,279.05,97.03
"OpenCV Benchmark - (sec)",LIB,,493,296,128,271.04,2.74,,520.70
"PyBench - Total For Average Test Times (Milliseconds)",LIB,6339,8735,5408,3007,7084,20913,11502,5009
"Tesseract OCR - Time To OCR 7 Images (sec)",LIB,,,,71.94,132.67,,,180.66 "TTSIOD 3D Renderer - Performance / Cost - Phong Rendering With Soft-Shadow Mapping (FPS/Dollar)",HIB,0.09,0.05,0.08,0.10,0.41,0.50,0.32,0.68 "7-Zip Compression - Performance / Cost - Compress Speed Test (MIPS/Dollar)",HIB,9.03,5.50,9.34,14.79,40.90,57.51,42.97,66.45 "C-Ray - Performance / Cost - Total Time - 4K, 16 Rays Per Pixel (sec x Dollar)",LIB,375747.00,520531.00,350415.00,461145.00,91179.00,71050.00,113388.00,51274.00 "Rust Prime Benchmark - Performance / Cost - Prime Number Test To 200,000,000 (sec x Dollar)",LIB,64096.55,101979.75,62871.04,42048.63,14868.81,38419.15,120189.30,35594.82 "Zstd Compression - Performance / Cost - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19 (sec x Dollar)",LIB,72754.20,152026.20,86837.03,103997.94,12857.13,11978.05,32776.92, "FLAC Audio Encoding - Performance / Cost - WAV To FLAC (sec x Dollar)",LIB,39520.80,62463.72,38976.93,70756.53,10372.23,11883.55,18417.30,6015.86 "PyBench - Performance / Cost - Total For Average Test Times (Milliseconds x Dollar)",LIB,3163161.00,5232265.00,3239392.00,3906093.00,701316.00,731955.00,759132.00,310558.00 "CUDA Mini-Nbody - Performance / Cost - Test: Original ((NBody^2)/s/Dollar)",HIB,,0.01,0.01,0.04,0.04,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.04,0.05,0.16,0.14,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.02,0.03,0.23,,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.04,0.04,0.13,0.12,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.02,0.02,0.20,,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.05,0.06,0.19,,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.03,0.03,0.37,,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.04,0.05,0.16,,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.02,0.03,0.30,,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.36,0.44,0.92,1.19,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.25,0.31,0.88,0.85,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.62,0.77,1.57,2.03,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.40,0.50,2.42,1.29,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled 
(Images/sec/Dollar)",HIB,,0.12,0.15,0.42,0.41,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.07,0.08,0.69,0.21,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.26,0.33,0.61,0.84,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.15,0.19,0.88,0.48,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.05,0.06,0.17,0.16,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.02,0.03,0.29,0.08,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.14,0.19,0.49,0.47,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.08,0.10,0.94,0.25,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.30,0.39,0.77,1.00,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.17,0.22,1.30,0.56,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.05,0.07,0.20,0.18,,, "NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images/sec/Dollar)",HIB,,0.03,0.04,0.38,,,, "OpenCV Benchmark - Performance / Cost - (sec x Dollar)",LIB,,295307.00,177304.00,166272.00,26832.96,95.90,,32283.40 "GLmark2 - Performance / Cost - Resolution: 1920 x 1080 (Score/Dollar)",HIB,,,,2.21,6.53,,, "LeelaChessZero - Performance / Cost - Backend: BLAS (Nodes/s/Dollar)",HIB,,,,0.04,0.16,,, "LeelaChessZero - Performance / Cost - Backend: CUDA + cuDNN (Nodes/s/Dollar)",HIB,,,,0.73,1.41,,, "LeelaChessZero - Performance / Cost - Backend: CUDA + cuDNN FP16 (Nodes/s/Dollar)",HIB,,,,1.94,,,, "Tesseract OCR - Performance / Cost - Time To OCR 7 Images (sec x Dollar)",LIB,,,,93450.06,13134.33,,,11200.92