n1n1

ARMv8 Neoverse-N1 testing on a GIGABYTE G242-P36-00 MP32-AR2-00 v01000100 motherboard (F31k BIOS, SCP 2.10.20220531) with ASPEED graphics on Ubuntu 23.10, run via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403174-NE-N1N13670960
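The result-file identifier in that command can also be handled in a script; a minimal sketch (only RESULT_ID comes from this page, and the date-stamp reading of the ID prefix is an assumption, not documented behavior):

```shell
# Result file identifier taken from this page.
RESULT_ID="2403174-NE-N1N13670960"

# The leading digits of OpenBenchmarking.org result IDs appear to be a
# YYMMDD date stamp plus a sequence digit: 240317 -> March 17, 2024
# (assumption based on the run dates shown in this file).
DATE_STAMP="${RESULT_ID%%-*}"
echo "Comparing against result file $RESULT_ID (date stamp: $DATE_STAMP)"

# Run the same test selection locally and merge your numbers into this
# comparison (requires the phoronix-test-suite client to be installed):
# phoronix-test-suite benchmark "$RESULT_ID"
```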
Test Runs

Run  Date      Test Duration
a    March 17  15 Minutes
aa   March 17  7 Hours, 43 Minutes
b    March 17  2 Hours, 32 Minutes
c    March 17  2 Hours, 15 Minutes
Average Test Duration: 3 Hours, 11 Minutes



n1n1 Benchmarks - System Details

Processor: ARMv8 Neoverse-N1 @ 3.00GHz (128 Cores)
Motherboard: GIGABYTE G242-P36-00 MP32-AR2-00 v01000100 (F31k BIOS, SCP 2.10.20220531)
Chipset: Ampere Computing LLC Altra PCI Root Complex A
Memory: 16 x 32 GB DDR4-3200MT/s Samsung M393A4K40DB3-CWE
Disk: 800GB Micron_7450_MTFDKBA800TFS
Graphics: ASPEED
Monitor: VGA HDMI
Network: 2 x Intel I350
OS: Ubuntu 23.10
Kernel: 6.5.0-15-generic (aarch64)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1024x768

System Notes:
- Transparent Huge Pages: madvise
- GCC configured with: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
- Scaling Governor: cppc_cpufreq performance (Boost: Disabled)
- Python 3.11.6
- Security mitigations: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of __user pointer sanitization; spectre_v2: Mitigation of CSV2 BHB; srbds: Not affected; tsx_async_abort: Not affected

OpenVINO

OpenVINO 2024.0 throughput (FPS, more is better), Device: CPU in all cases:
  Face Detection FP16: c 2.84, b 2.84, aa 2.84 (SE +/- 0.01, N = 3)
  Person Detection FP16: b 14.77, aa 14.77, c 14.73 (SE +/- 0.01, N = 3)
  Person Detection FP32: b 14.84, c 14.80, aa 14.73 (SE +/- 0.02, N = 3)
  Vehicle Detection FP16: b 223.85, aa 222.86, c 222.78 (SE +/- 0.10, N = 3)
  Face Detection FP16-INT8: c 2.75, b 2.75, aa 2.74 (SE +/- 0.00, N = 3)
  Face Detection Retail FP16: aa 676.59, c 670.19, b 664.78 (SE +/- 8.52, N = 3)
  Road Segmentation ADAS FP16: c 65.60, aa 65.60, b 65.49 (SE +/- 0.12, N = 3)
  Vehicle Detection FP16-INT8: aa 89.35, c 89.30, b 89.19 (SE +/- 0.03, N = 3)
  Weld Porosity Detection FP16: b 297.48, c 294.58, aa 293.47 (SE +/- 0.30, N = 3)
  Face Detection Retail FP16-INT8: aa 333.15, c 331.77, b 329.97 (SE +/- 0.61, N = 3)
  Road Segmentation ADAS FP16-INT8: aa 34.90, c 34.88, b 34.79 (SE +/- 0.02, N = 3)
  Machine Translation EN To DE FP16: c 40.21, b 40.15, aa 40.11 (SE +/- 0.05, N = 3)
  Weld Porosity Detection FP16-INT8: b 221.47, c 219.27, aa 217.95 (SE +/- 0.18, N = 3)
  Person Vehicle Bike Detection FP16: c 207.24, b 205.33, aa 204.69 (SE +/- 0.66, N = 3)
  Noise Suppression Poconet-Like FP16: aa 164.82, c 164.79, b 164.75 (SE +/- 0.03, N = 3)
  Handwritten English Recognition FP16: c 164.13, b 163.98, aa 163.95 (SE +/- 0.06, N = 3)
  Person Re-Identification Retail FP16: aa 142.60, b 142.58, c 142.54 (SE +/- 0.34, N = 3)
  Age Gender Recognition Retail 0013 FP16: c 1403.65, b 1402.97, aa 1402.51 (SE +/- 3.07, N = 3)
  Handwritten English Recognition FP16-INT8: aa 147.76, b 147.08, c 146.90 (SE +/- 0.83, N = 3)
  Age Gender Recognition Retail 0013 FP16-INT8: b 1473.23, aa 1462.94, c 1460.72 (SE +/- 1.48, N = 3)
Compiled with: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

SVT-AV1

SVT-AV1 2.0 (Frames Per Second, more is better):
  Preset 4 - Bosphorus 4K: a 2.652, c 2.650, b 2.650, aa 2.644 (SE +/- 0.004, N = 3)
  Preset 8 - Bosphorus 4K: b 25.01, c 24.95, a/aa 24.95 and 24.93 (a vs. aa assignment ambiguous; SE +/- 0.01, N = 3)
  Preset 12 - Bosphorus 4K: b 75.17, c 75.02, a/aa 74.68 and 74.47 (a vs. aa assignment ambiguous; SE +/- 0.28, N = 3)
  Preset 13 - Bosphorus 4K: b 74.96, a 74.90, aa 74.90, c 74.60 (SE +/- 0.19, N = 3)
  Preset 4 - Bosphorus 1080p: c 8.926, aa 8.925, b 8.921, a 8.914 (SE +/- 0.010, N = 3)
  Preset 8 - Bosphorus 1080p: aa 57.14, b 57.03, a 56.90, c 56.79 (SE +/- 0.06, N = 3)
  Preset 12 - Bosphorus 1080p: a 265.74, b 265.44, aa 264.98, c 264.28 (SE +/- 0.05, N = 3)
  Preset 13 - Bosphorus 1080p: b 365.10, a 364.40, c 363.61, aa 363.35 (SE +/- 0.57, N = 3)
Compiled with: (CXX) g++ options: -march=native

Neural Magic DeepSparse

Neural Magic DeepSparse 1.7 throughput (items/sec, more is better):
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream: b 33.71, c 33.68, aa 33.42 (SE +/- 0.02, N = 3)
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream: c 26.33, aa 26.09, b 25.95 (SE +/- 0.11, N = 3)
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream: aa 1149.47, b 1144.80, c 1144.77 (SE +/- 2.84, N = 3)
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream: b 132.99, aa 132.18, c 131.48 (SE +/- 0.27, N = 3)
  ResNet-50, Baseline - Asynchronous Multi-Stream: c 479.99, b 475.82, aa 474.90 (SE +/- 1.33, N = 3)
  ResNet-50, Baseline - Synchronous Single-Stream: b 134.00, aa 133.53, c 133.49 (SE +/- 0.11, N = 3)
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream: b 2688.96, aa 2678.24, c 2630.33 (SE +/- 6.53, N = 3)
  ResNet-50, Sparse INT8 - Synchronous Single-Stream: c 316.35, aa 315.75, b 312.46 (SE +/- 0.75, N = 3)
  Llama2 Chat 7b Quantized - Asynchronous Multi-Stream: c 2.2836, b 2.2754, aa 2.2602 (SE +/- 0.0074, N = 3)
  Llama2 Chat 7b Quantized - Synchronous Single-Stream: c 12.96, aa 12.93, b 12.90 (SE +/- 0.02, N = 3)
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream: b 478.64, c 478.37, aa 476.36 (SE +/- 1.18, N = 3)
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream: c 133.89, b 133.63, aa 133.62 (SE +/- 0.17, N = 3)
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream: aa 202.64, b 202.15, c 201.02 (SE +/- 0.34, N = 3)
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream: c 112.85, b 112.83, aa 112.53 (SE +/- 0.16, N = 3)
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream: b 346.67, aa 345.11, c 339.90 (SE +/- 0.25, N = 3)
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream: c 111.16, aa 109.95, b 109.48 (SE +/- 0.86, N = 3)
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream: b 46.72, c 46.68, aa 46.61 (SE +/- 0.11, N = 3)
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream: b 30.72, c 30.67, aa 30.60 (SE +/- 0.01, N = 3)
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream: b 439.60, aa 438.71, c 438.25 (SE +/- 0.42, N = 3)
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream: b 50.73, c 50.65, aa 50.60 (SE +/- 0.08, N = 3)
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream: c 33.67, b 33.58, aa 33.53 (SE +/- 0.04, N = 3)
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream: b 26.35, aa 26.25, c 26.20 (SE +/- 0.02, N = 3)

srsRAN Project

srsRAN Project 23.10.1-20240219 (Mbps, more is better):
  PDSCH Processor Benchmark, Throughput Total: a 14099.8, b 13999.6, aa 13936.1 (SE +/- 42.60, N = 3)
  PUSCH Processor Benchmark, Throughput Total: a 1602.1 (MIN: 947.2)
  PDSCH Processor Benchmark, Throughput Thread: a/aa 175.8 and 175.7 (a vs. aa assignment ambiguous; SE +/- 0.03, N = 3)
  PUSCH Processor Benchmark, Throughput Thread: a 46.7 (MIN: 28.9)
Compiled with: (CXX) g++ options: -O3 -fno-trapping-math -fno-math-errno -ldl

JPEG-XL libjxl

JPEG-XL libjxl 0.10.1 encoding (MP/s, more is better):
  Input: PNG, Quality 80: a 43.10, c 41.35, b 41.31, aa 40.28 (SE +/- 0.30, N = 3)
  Input: PNG, Quality 90: b 39.67, c 39.25, a/aa 39.25 and 37.90 (a vs. aa assignment ambiguous; SE +/- 0.55, N = 15)
  Input: JPEG, Quality 80: c 39.32, a/aa 39.27 and 38.92 (a vs. aa assignment ambiguous), b 37.77 (SE +/- 0.12, N = 3)
  Input: JPEG, Quality 90: b 37.79, a/aa 37.59 and 37.42 (a vs. aa assignment ambiguous), c 35.84 (SE +/- 0.45, N = 15)
  Input: PNG, Quality 100: a 29.60, c 29.54, b 29.49, aa 29.24 (SE +/- 0.04, N = 3)
  Input: JPEG, Quality 100: a 31.67, c 31.62, b 31.62, aa 31.12 (SE +/- 0.00, N = 3)
Compiled with: (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

JPEG-XL Decoding libjxl

JPEG-XL Decoding libjxl 0.10.1 (MP/s, more is better):
  CPU Threads: 1: b 27.42, c 27.40, a/aa 27.24 and 27.15 (a vs. aa assignment ambiguous; SE +/- 0.01, N = 3)
  CPU Threads: All: b 564.89, a 558.57, c 542.10, aa 523.02 (SE +/- 1.96, N = 3)

Stockfish

Stockfish 16.1 Chess Benchmark (Nodes Per Second, more is better):
  a/aa 59449725 and 59028775 (a vs. aa assignment ambiguous), c 53514996, b 51901853 (SE +/- 1497045.19, N = 12)
Compiled with: (CXX) g++ options: -lgcov -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -flto -flto-partition=one -flto=jobserver

oneDNN

oneDNN 3.4 (ms, fewer is better), Engine: CPU; per-run MIN values listed in the same order as the results:
  IP Shapes 1D: aa 4.84065, b 4.88015, c 4.88858 (SE +/- 0.01022, N = 3; MIN: 4.25 / 4.23 / 4.3)
  IP Shapes 3D: c 2.14878, b 2.15178, aa 2.15582 (SE +/- 0.00137, N = 3; MIN: 2.06 / 2.06 / 2.06)
  Convolution Batch Shapes Auto: b 4.28036, c 4.28461, aa 4.29470 (SE +/- 0.01638, N = 3; MIN: 4.17 / 4.14 / 4.16)
  Deconvolution Batch shapes_1d: b 20.43, c 20.89, aa 20.93 (SE +/- 0.20, N = 3; MIN: 19.32 / 19.81 / 19.34)
  Deconvolution Batch shapes_3d: b 2.78238, aa 2.79626, c 2.80386 (SE +/- 0.01912, N = 12; MIN: 2.72 / 2.68 / 2.7)
  Recurrent Neural Network Training: b 3737.15, aa 3738.39, c 3738.53 (SE +/- 2.30, N = 3; MIN: 3730.87 / 3728.79 / 3730.99)
  Recurrent Neural Network Inference: aa 1460.94, b 1461.00, c 1469.65 (SE +/- 3.72, N = 3; MIN: 1436.36 / 1442.49 / 1448.43)
Compiled with: (CXX) g++ options: -O3 -march=native -fopenmp -mcpu=generic -fPIC -pie -ldl -lpthread

Google Draco

Google Draco 1.5.6 (ms, fewer is better):
  Model: Lion: b 7320, c 7332, aa 7351 (SE +/- 1.86, N = 3)
  Model: Church Facade: b 9847, c 9848, aa 10100 (SE +/- 6.24, N = 3)
Compiled with: (CXX) g++ options: -O3

OpenVINO

OpenVINO 2024.0 latency (ms, fewer is better), Device: CPU in all cases:
  Face Detection FP16: c 10876.70 (MIN 3255.92 / MAX 18738.42), aa 10877.53 (MIN 4104.89 / MAX 18949.05), b 10891.93 (MIN 3821.31 / MAX 19031.99); SE +/- 17.40, N = 3
  Person Detection FP16: aa 2150.30 (MIN 491.1 / MAX 2996.72), b 2151.85 (MIN 500.93 / MAX 2975.2), c 2157.45 (MIN 644.54 / MAX 2962.51); SE +/- 1.19, N = 3
  Person Detection FP32: b 2140.20 (MIN 527.18 / MAX 2951.37), c 2146.07 (MIN 439.17 / MAX 2969.83), aa 2156.87 (MIN 504.09 / MAX 2990); SE +/- 2.39, N = 3
  Vehicle Detection FP16: b 142.79 (MIN 60 / MAX 245.21), aa 143.42 (MIN 62.82 / MAX 295.2), c 143.48 (MIN 44.55 / MAX 252.93); SE +/- 0.06, N = 3
  Face Detection FP16-INT8: c 11196.54 (MIN 7222.84 / MAX 20603.63), b 11206.13 (MIN 7011.32 / MAX 20429.17), aa 11232.43 (MIN 6926.76 / MAX 21113.44); SE +/- 9.32, N = 3
  Face Detection Retail FP16: aa 47.28 (MIN 10.17 / MAX 121.04), c 47.72 (MIN 9.97 / MAX 99.86), b 48.10 (MIN 9.92 / MAX 115.12); SE +/- 0.60, N = 3
  Road Segmentation ADAS FP16: aa 486.11 (MIN 118.22 / MAX 849.31), c 486.11 (MIN 171.7 / MAX 813.73), b 486.90 (MIN 119.18 / MAX 852.49); SE +/- 0.88, N = 3
  Vehicle Detection FP16-INT8: c 357.41 (MIN 204.13 / MAX 519.56), aa 357.86 (MIN 301.59 / MAX 522.85), b 358.50 (MIN 300.19 / MAX 528.83); SE +/- 0.11, N = 3
  Weld Porosity Detection FP16: b 107.50 (MIN 57.15 / MAX 1202.08), c 108.56 (MIN 17.21 / MAX 1188.34), aa 108.97 (MIN 17.48 / MAX 1207.62); SE +/- 0.11, N = 3
  Face Detection Retail FP16-INT8: aa 95.98 (MIN 71.43 / MAX 140.32), c 96.38 (MIN 69.36 / MAX 140.93), b 96.90 (MIN 70.14 / MAX 141.32); SE +/- 0.18, N = 3
  Road Segmentation ADAS FP16-INT8: c 913.21 (MIN 718.49 / MAX 1350.67), aa 913.41 (MIN 742.17 / MAX 1356.42), b 915.17 (MIN 711.5 / MAX 1350.07); SE +/- 0.49, N = 3
  Machine Translation EN To DE FP16: c 792.00 (MIN 568.74 / MAX 1657.2), b 793.31 (MIN 559.01 / MAX 1581.54), aa 794.06 (MIN 604.52 / MAX 1620.5); SE +/- 0.93, N = 3
  Weld Porosity Detection FP16-INT8: b 144.38 (MIN 96.65 / MAX 1566.66), c 145.82 (MIN 96.38 / MAX 1563.28), aa 146.71 (MIN 96.02 / MAX 1572.43); SE +/- 0.12, N = 3
  Person Vehicle Bike Detection FP16: c 154.30 (MIN 44.57 / MAX 239.56), b 155.74 (MIN 48.23 / MAX 240.13), aa 156.22 (MIN 44.3 / MAX 240.55); SE +/- 0.49, N = 3
  Noise Suppression Poconet-Like FP16: aa 193.84 (MIN 183.19 / MAX 407.14), c 193.87 (MIN 182.85 / MAX 406.51), b 193.93 (MIN 182.93 / MAX 402.18); SE +/- 0.04, N = 3
  Handwritten English Recognition FP16: c 194.65 (MIN 185.45 / MAX 358.03), b 194.84 (MIN 185.09 / MAX 355.83), aa 194.88 (MIN 185.7 / MAX 356.13); SE +/- 0.08, N = 3
  Person Re-Identification Retail FP16: aa 224.22 (MIN 29.21 / MAX 400.61), b 224.22 (MIN 36.4 / MAX 368.76), c 224.31 (MIN 31.77 / MAX 351.21); SE +/- 0.53, N = 3
  Age Gender Recognition Retail 0013 FP16: c 22.78 (MIN 1.63 / MAX 162.11), b 22.79 (MIN 1.59 / MAX 165.35), aa 22.80 (MIN 1.57 / MAX 164.42); SE +/- 0.05, N = 3
  Handwritten English Recognition FP16-INT8: aa 216.18 (MIN 206.9 / MAX 376.9), b 217.16 (MIN 208.82 / MAX 374.93), c 217.41 (MIN 210.44 / MAX 372.96); SE +/- 1.21, N = 3
  Age Gender Recognition Retail 0013 FP16-INT8: b 21.71 (MIN 2.05 / MAX 156.88), aa 21.86 (MIN 2 / MAX 157.1), c 21.89 (MIN 2.07 / MAX 156.71); SE +/- 0.02, N = 3
Compiled with: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.7Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamcbaa400800120016002000SE +/- 1.78, N = 31833.451834.831844.12

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.7Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Streamcaab918273645SE +/- 0.16, N = 337.9638.3238.52

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.7Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamaacb1224364860SE +/- 0.11, N = 355.0355.2455.26

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.7Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Streambaac246810SE +/- 0.0154, N = 37.50617.55207.5924

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.7Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Streamcbaa306090120150SE +/- 0.39, N = 3131.52132.74132.95

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.7Model: ResNet-50, Baseline - Scenario: Synchronous Single-Streambaac246810SE +/- 0.0064, N = 37.44847.47417.4767

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.7Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Streambaac612182430SE +/- 0.06, N = 323.4123.5023.91

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.7Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Streamcaab0.71631.43262.14892.86523.5815SE +/- 0.0074, N = 33.14493.15083.1835

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.7Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Streamcbaa5K10K15K20K25KSE +/- 55.68, N = 321169.3021231.5621332.89

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.7Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Streamcaab20406080100SE +/- 0.12, N = 377.1277.3077.49

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.7Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streamcbaa306090120150SE +/- 0.34, N = 3131.86131.95132.54

Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
ms/batch, Fewer Is Better - SE +/- 0.0095, N = 3
  c:  7.4540
  aa: 7.4691
  b:  7.4692

Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better - SE +/- 0.51, N = 3
  aa: 310.51
  b:  311.17
  c:  312.79

Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
ms/batch, Fewer Is Better - SE +/- 0.0129, N = 3
  c:  8.8461
  b:  8.8483
  aa: 8.8709

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better - SE +/- 0.08, N = 3
  b:  181.90
  aa: 182.88
  c:  185.72

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
ms/batch, Fewer Is Better - SE +/- 0.0713, N = 3
  c:  8.9820
  aa: 9.0819
  b:  9.1198

Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better - SE +/- 3.19, N = 3
  c:  1333.72
  b:  1335.35
  aa: 1337.59

Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
ms/batch, Fewer Is Better - SE +/- 0.01, N = 3
  b:  32.53
  c:  32.58
  aa: 32.66

Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better - SE +/- 0.07, N = 3
  b:  143.48
  c:  143.60
  aa: 143.83

Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
ms/batch, Fewer Is Better - SE +/- 0.03, N = 3
  b:  19.69
  c:  19.73
  aa: 19.75

Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better - SE +/- 1.00, N = 3
  b:  1835.26
  c:  1836.79
  aa: 1840.37

Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
ms/batch, Fewer Is Better - SE +/- 0.04, N = 3
  b:  37.94
  aa: 38.07
  c:  38.15
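The synchronous single-stream figures above are per-batch latencies. Assuming a batch size of 1 (typical for single-stream latency runs, though not stated in the result file), they can be inverted into an approximate items/sec throughput. A minimal sketch, using the fastest run's value from each of the tables above:

```python
# Hedged worked example: converting DeepSparse synchronous single-stream
# latency (ms/batch) into approximate throughput, ASSUMING batch size 1.
# Values are the fastest run from each single-stream table above.
latencies_ms = {
    "ResNet-50 ImageNet": 7.4540,
    "DistilBERT mnli":    8.9820,
    "YOLACT Pruned":      32.53,
}

for model, ms in latencies_ms.items():
    items_per_sec = 1000.0 / ms  # 1000 ms per second / latency per item
    print(f"{model}: ~{items_per_sec:.1f} items/sec")
```

This is only a back-of-the-envelope conversion; DeepSparse itself reports throughput directly in its multi-stream results.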

Timed Linux Kernel Compilation

Timed Linux Kernel Compilation 6.8 - Build: defconfig
Seconds, Fewer Is Better - SE +/- 0.90, N = 3
  a:  92.76
  aa: 94.27
  b:  94.43
  c:  94.50

Timed Linux Kernel Compilation 6.8 - Build: allmodconfig
Seconds, Fewer Is Better - SE +/- 0.68, N = 3
  aa: 348.02
  c:  349.92
  b:  350.29
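When reading tables like the defconfig result above, the spread between runs is often more informative than the absolute times. A small sketch computing each run's slowdown relative to the fastest, with the values copied from the defconfig table:

```python
# Relative spread of the defconfig build times (seconds, from the table above).
times = {"a": 92.76, "aa": 94.27, "b": 94.43, "c": 94.50}

fastest = min(times.values())
for run, t in times.items():
    pct = (t - fastest) / fastest * 100.0  # slowdown vs the fastest run
    print(f"{run}: {t:.2f}s  (+{pct:.2f}% vs fastest)")
```

Here the slowest run is under 2% behind the fastest, i.e. the four configurations are effectively tied on this workload.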

Parallel BZIP2 Compression

Parallel BZIP2 Compression 1.1.13 - FreeBSD-13.0-RELEASE-amd64-memstick.img Compression
Seconds, Fewer Is Better - SE +/- 0.001512, N = 3
  aa: 2.413553
  c:  2.438631
  b:  2.439338
1. (CXX) g++ options: -O2 -pthread -lbz2 -lpthread
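The workload here is bzip2 compression of a FreeBSD memstick image, with pbzip2 splitting the input into blocks that are compressed on multiple threads. A minimal single-threaded sketch of the same round-trip, using Python's stdlib bz2 module on a small synthetic payload rather than the actual image:

```python
# Single-threaded sketch of the bzip2 round-trip this benchmark times.
# pbzip2 itself parallelizes by compressing independent blocks on many
# threads; stdlib bz2 below is serial, and the payload is a stand-in
# for the FreeBSD memstick image.
import bz2

data = b"phoronix " * 10_000
compressed = bz2.compress(data, compresslevel=9)
restored = bz2.decompress(compressed)

assert restored == data
print(f"{len(data)} -> {len(compressed)} bytes")
```

Highly repetitive data like this compresses extremely well; a real disk image would see a far smaller ratio.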

Primesieve

Primesieve 12.1 - Length: 1e12
Seconds, Fewer Is Better - SE +/- 0.003, N = 3
  b:  2.872
  c:  2.893
  aa: 2.911
1. (CXX) g++ options: -O3

Primesieve 12.1 - Length: 1e13
Seconds, Fewer Is Better - SE +/- 0.07, N = 3
  c:  42.29
  aa: 42.31
  b:  42.44
1. (CXX) g++ options: -O3
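Primesieve times how long it takes to generate the primes below the given length (1e12 and 1e13 above) using a heavily optimized segmented sieve of Eratosthenes. A naive sieve at toy scale, purely to illustrate the operation being timed (it would be hopelessly slow at the benchmark's bounds):

```python
# Toy-scale illustration of the sieve of Eratosthenes that primesieve
# implements in optimized, segmented form.
def count_primes_below(n: int) -> int:
    if n < 3:
        return 0
    is_prime = bytearray([1]) * n
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross out every multiple of p starting at p*p.
            is_prime[p * p::p] = bytearray(len(is_prime[p * p::p]))
    return sum(is_prime)

print(count_primes_below(100))        # 25 primes below 100
print(count_primes_below(1_000_000))  # 78498 primes below 1e6
```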

WavPack Audio Encoding

WavPack Audio Encoding 5.7 - WAV To WavPack
Seconds, Fewer Is Better - SE +/- 0.00, N = 5
  aa: 25.20
  c:  25.20
  b:  25.21

120 Results Shown

OpenVINO:
  Face Detection FP16 - CPU
  Person Detection FP16 - CPU
  Person Detection FP32 - CPU
  Vehicle Detection FP16 - CPU
  Face Detection FP16-INT8 - CPU
  Face Detection Retail FP16 - CPU
  Road Segmentation ADAS FP16 - CPU
  Vehicle Detection FP16-INT8 - CPU
  Weld Porosity Detection FP16 - CPU
  Face Detection Retail FP16-INT8 - CPU
  Road Segmentation ADAS FP16-INT8 - CPU
  Machine Translation EN To DE FP16 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
  Person Vehicle Bike Detection FP16 - CPU
  Noise Suppression Poconet-Like FP16 - CPU
  Handwritten English Recognition FP16 - CPU
  Person Re-Identification Retail FP16 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
  Handwritten English Recognition FP16-INT8 - CPU
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
  Preset 4 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream
  ResNet-50, Baseline - Asynchronous Multi-Stream
  ResNet-50, Baseline - Synchronous Single-Stream
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream
  ResNet-50, Sparse INT8 - Synchronous Single-Stream
  Llama2 Chat 7b Quantized - Asynchronous Multi-Stream
  Llama2 Chat 7b Quantized - Synchronous Single-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream
srsRAN Project:
  PDSCH Processor Benchmark, Throughput Total
  PUSCH Processor Benchmark, Throughput Total
  PDSCH Processor Benchmark, Throughput Thread
  PUSCH Processor Benchmark, Throughput Thread
JPEG-XL libjxl:
  PNG - 80
  PNG - 90
  JPEG - 80
  JPEG - 90
  PNG - 100
  JPEG - 100
JPEG-XL Decoding libjxl:
  1
  All
Stockfish
oneDNN:
  IP Shapes 1D - CPU
  IP Shapes 3D - CPU
  Convolution Batch Shapes Auto - CPU
  Deconvolution Batch shapes_1d - CPU
  Deconvolution Batch shapes_3d - CPU
  Recurrent Neural Network Training - CPU
  Recurrent Neural Network Inference - CPU
Google Draco:
  Lion
  Church Facade
OpenVINO:
  Face Detection FP16 - CPU
  Person Detection FP16 - CPU
  Person Detection FP32 - CPU
  Vehicle Detection FP16 - CPU
  Face Detection FP16-INT8 - CPU
  Face Detection Retail FP16 - CPU
  Road Segmentation ADAS FP16 - CPU
  Vehicle Detection FP16-INT8 - CPU
  Weld Porosity Detection FP16 - CPU
  Face Detection Retail FP16-INT8 - CPU
  Road Segmentation ADAS FP16-INT8 - CPU
  Machine Translation EN To DE FP16 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
  Person Vehicle Bike Detection FP16 - CPU
  Noise Suppression Poconet-Like FP16 - CPU
  Handwritten English Recognition FP16 - CPU
  Person Re-Identification Retail FP16 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
  Handwritten English Recognition FP16-INT8 - CPU
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream
  ResNet-50, Baseline - Asynchronous Multi-Stream
  ResNet-50, Baseline - Synchronous Single-Stream
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream
  ResNet-50, Sparse INT8 - Synchronous Single-Stream
  Llama2 Chat 7b Quantized - Asynchronous Multi-Stream
  Llama2 Chat 7b Quantized - Synchronous Single-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream
Timed Linux Kernel Compilation:
  defconfig
  allmodconfig
Parallel BZIP2 Compression
Primesieve:
  1e12
  1e13
WavPack Audio Encoding