Jetson TX2

ARMv8 rev 3 testing with a quill motherboard and NVIDIA TEGRA graphics on Ubuntu 16.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1812243-SP-JETSONTX207

Run Management

  Result Identifier: Jetson
  Date:              December 24, 2018
  Test Duration:     7 Hours, 7 Minutes


Jetson TX2 Benchmarks - System Information (OpenBenchmarking.org / Phoronix Test Suite)

  Processor:         ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads)
  Motherboard:       quill
  Memory:            8192MB
  Disk:              31GB 032G34
  Graphics:          NVIDIA TEGRA
  OS:                Ubuntu 16.04
  Kernel:            4.4.38-tegra (aarch64)
  Display Server:    X Server 1.18.4
  Display Driver:    NVIDIA 1.0.0
  Compiler:          GCC 5.4.0 20160609 + CUDA 9.0
  File-System:       ext4
  Screen Resolution: 640x960

System Logs - Scaling Governor: tegra_cpufreq schedutil

Result overview: all 48 NVIDIA TensorRT Inference results for this Jetson run are tabulated below.

NVIDIA TensorRT Inference

This test profile uses any existing system installation of NVIDIA TensorRT for carrying out inference benchmarks with various neural networks. Learn more via the OpenBenchmarking.org test page.
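Each figure below is reported as a mean images-per-second value with a standard error over N runs (the "SE +/- x, N = y" annotations). As a minimal sketch of how such a figure could be produced — where run_inference is a hypothetical callable standing in for one batched TensorRT forward pass, and the actual test profile's harness may differ:

    import math
    import statistics
    import time

    def measure_throughput(run_inference, batch_size, iterations=100, runs=3):
        """Time repeated batched inference and report mean images/sec with its
        standard error, in the style of the results below (e.g. SE +/- 0.45, N = 6)."""
        per_run = []
        for _ in range(runs):
            start = time.perf_counter()
            for _ in range(iterations):
                run_inference()  # one forward pass over a batch of `batch_size` images
            elapsed = time.perf_counter() - start
            per_run.append(batch_size * iterations / elapsed)
        mean = statistics.mean(per_run)
        se = statistics.stdev(per_run) / math.sqrt(runs) if runs > 1 else 0.0
        return mean, se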

NVIDIA TensorRT Inference - Images Per Second, More Is Better (DLA Cores: Disabled on all runs)

Neural Network   Precision   Batch Size   Images/sec   SE +/-   N
VGG16            FP16        4                 32.60     0.45    6
VGG16            FP16        8                 34.35     0.51    3
VGG16            INT8        4                 17.26     0.20   12
VGG16            INT8        8                 20.01     0.03    3
VGG19            FP16        4                 26.60     0.07    3
VGG19            FP16        8                 28.01     0.29    3
VGG19            INT8        4                 14.57     0.15   11
VGG19            INT8        8                 16.17     0.04    3
VGG16            FP16        16                36.32     0.27    3
VGG16            FP16        32                37.52     0.07    3
VGG16            INT8        16                20.66     0.01    3
VGG16            INT8        32                20.11     0.02    3
VGG19            FP16        16                28.94     0.19    3
VGG19            FP16        32                29.54     0.16    3
VGG19            INT8        16                16.51     0.01    3
VGG19            INT8        32                16.04     0.02    3
AlexNet          FP16        4                281.60     4.86    3
AlexNet          FP16        8                314.27     3.41   12
AlexNet          INT8        4                185.75     2.01   10
AlexNet          INT8        8                220.27     2.64   12
AlexNet          FP16        16               357.10     8.64   12
AlexNet          FP16        32               469.83     6.63    6
AlexNet          INT8        16               266.68     2.74    3
AlexNet          INT8        32               299.48     4.29    5
ResNet50         FP16        4                 94.30     1.41   12
ResNet50         FP16        8                 99.80     1.38    3
ResNet50         INT8        4                 50.52     0.83    3
ResNet50         INT8        8                 52.26     0.30    3
GoogleNet        FP16        4                196.39     0.58    3
GoogleNet        FP16        8                208.49     1.78    3
GoogleNet        INT8        4                110.39     1.59    5
GoogleNet        INT8        8                117.37     1.08    3
ResNet152        FP16        4                 35.79     0.52    5
ResNet152        FP16        8                 38.01     0.42    3
ResNet152        INT8        4                 18.16     0.05    3
ResNet152        INT8        8                 19.50     0.06    3
ResNet50         FP16        16               106.98     1.18    3
ResNet50         FP16        32               109.31     0.95    3
ResNet50         INT8        16                55.50     0.66    3
ResNet50         INT8        32                60.12     0.06    3
GoogleNet        FP16        16               230.32     0.59    3
GoogleNet        FP16        32               232.84     3.57    4
GoogleNet        INT8        16               123.93     1.27    3
GoogleNet        INT8        32               130.26     0.61    3
ResNet152        FP16        16                40.15     0.09    3
ResNet152        FP16        32                42.06     0.04    3
ResNet152        INT8        16                20.89     0.06    3
ResNet152        INT8        32                22.11     0.04    3
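As a worked example of reading the table, batch-size scaling varies widely by network: AlexNet FP16 rises from 281.60 images/sec at batch 4 to 469.83 at batch 32 (about 1.67x), while VGG16 FP16 only moves from 32.60 to 37.52 (about 1.15x). A small sketch of that arithmetic, using a hypothetical excerpt of the figures above:

    # (network, precision, batch size) -> images/sec, excerpted from the table above
    results = {
        ("AlexNet", "FP16", 4): 281.60,
        ("AlexNet", "FP16", 32): 469.83,
        ("VGG16", "FP16", 4): 32.60,
        ("VGG16", "FP16", 32): 37.52,
    }

    def batch_scaling(network, precision, low=4, high=32):
        # Throughput ratio when growing the batch from `low` to `high`
        return results[(network, precision, high)] / results[(network, precision, low)]

    print(f"AlexNet FP16 batch scaling: {batch_scaling('AlexNet', 'FP16'):.2f}x")  # ~1.67x
    print(f"VGG16 FP16 batch scaling: {batch_scaling('VGG16', 'FP16'):.2f}x")      # ~1.15x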
