nlp-benchmarks

AWS EC2 Amazon Linux 2023 Benchmarking

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2402097-NE-2402012NE97


Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
c6i.2xlarge
February 01
  2 Hours, 25 Minutes
m7i-flex.2xlarge
February 08
  2 Hours, 59 Minutes
c7a.2xlarge
February 09
  1 Hour, 48 Minutes
r7a.xlarge
February 09
  2 Hours, 14 Minutes
m7i.2xlarge
February 09
  2 Hours, 11 Minutes
Invert Hiding All Results Option
  2 Hours, 19 Minutes



System Details

(Components not repeated in the original merged table are shared with the preceding configuration.)

c6i.2xlarge
  Processor: Intel Xeon Platinum 8375C (4 Cores / 8 Threads)
  Motherboard: Amazon EC2 c6i.2xlarge (1.0 BIOS)
  Chipset: Intel 440FX 82441FX PMC
  Memory: 1 x 16GB DDR4-3200MT/s
  Disk: 215GB Amazon Elastic Block Store
  Network: Amazon Elastic
  OS: Amazon Linux 2023
  Kernel: 6.1.61-85.141.amzn2023.x86_64 (x86_64)
  Compiler: GCC 11.4.1 20230605
  File-System: xfs
  System Layer: amazon

m7i-flex.2xlarge
  Processor: Intel Xeon Platinum 8488C (4 Cores / 8 Threads)
  Motherboard: Amazon EC2 m7i-flex.2xlarge (1.0 BIOS)
  Memory: 1 x 32GB 4800MT/s
  Kernel: 6.1.72-96.166.amzn2023.x86_64 (x86_64)

c7a.2xlarge
  Processor: AMD EPYC 9R14 (8 Cores)
  Motherboard: Amazon EC2 c7a.2xlarge (1.0 BIOS)
  Memory: 1 x 16GB 4800MT/s

r7a.xlarge
  Processor: AMD EPYC 9R14 (4 Cores)
  Motherboard: Amazon EC2 r7a.xlarge (1.0 BIOS)
  Memory: 1 x 32GB 4800MT/s

m7i.2xlarge
  Processor: Intel Xeon Platinum 8488C (4 Cores / 8 Threads)
  Motherboard: Amazon EC2 m7i.2xlarge (1.0 BIOS)

Kernel Details
  Transparent Huge Pages: madvise

Compiler Details
  --build=x86_64-amazon-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver

Processor Details (CPU Microcode)
  c6i.2xlarge: 0xd0003a5
  m7i-flex.2xlarge: 0x2b000571
  c7a.2xlarge: 0xa10113e
  r7a.xlarge: 0xa10113e
  m7i.2xlarge: 0x2b000571

Python Details
  Python 3.11.6

Security Details
  c6i.2xlarge: gather_data_sampling: Unknown (dependent on hypervisor status); mmio_stale_data: Mitigation of Clear buffers, SMT Host state unknown; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: SW sequence; itlb_multihit, l1tf, mds, meltdown, retbleed, spec_rstack_overflow, srbds, tsx_async_abort: Not affected
  m7i-flex.2xlarge / m7i.2xlarge: same as c6i.2xlarge except gather_data_sampling: Not affected and mmio_stale_data: Not affected
  c7a.2xlarge / r7a.xlarge: gather_data_sampling: Not affected; spec_rstack_overflow: Mitigation of safe RET, no microcode; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, IBRS_FW, STIBP: disabled, RSB filling, PBRSB-eIBRS: Not affected; itlb_multihit, l1tf, mds, meltdown, mmio_stale_data, retbleed, srbds, tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): normalized relative performance of the five instances across OpenVINO, oneDNN, PyTorch, Numpy Benchmark, and PyBench (chart scale 100% to 308%).

Results Summary
(columns: c6i.2xlarge | m7i-flex.2xlarge | c7a.2xlarge | r7a.xlarge | m7i.2xlarge)

PyTorch 2.1 (batches/sec; more is better)
  CPU - Batch Size 1 - ResNet-50:           26.78 | 29.89 | 50.16 | 33.39 | 31.36
  CPU - Batch Size 1 - ResNet-152:          10.57 | 12.10 | 20.00 | 13.19 | 12.48
  CPU - Batch Size 16 - ResNet-50:          15.96 | 18.84 | 31.89 | 19.35 | 19.62
  CPU - Batch Size 32 - ResNet-50:          15.81 | 18.34 | 31.29 | 19.32 | 19.43
  CPU - Batch Size 16 - ResNet-152:          6.36 |  7.36 | 12.95 |  7.68 |  7.65
  CPU - Batch Size 32 - ResNet-152:          6.38 |  7.55 | 13.03 |  7.68 |  7.66
  CPU - Batch Size 1 - Efficientnet_v2_l:    7.99 |  8.99 | 11.74 |  8.44 |  8.84
  CPU - Batch Size 16 - Efficientnet_v2_l:   4.06 |  5.38 |  8.75 |  5.66 |  5.26
  CPU - Batch Size 32 - Efficientnet_v2_l:   4.04 |  5.47 |  8.71 |  5.67 |  5.28

OpenVINO 2023.2.dev throughput (FPS; more is better)
  Face Detection FP16 - CPU:                 1.77 |  8.43 |  5.16 |  2.62 |  7.80
  Face Detection FP16-INT8 - CPU:            6.53 | 16.57 |  9.76 |  4.90 | 14.51
  Machine Translation EN To DE FP16 - CPU:  22.26 | 53.87 | 54.51 | 30.77 | 49.39

Numpy Benchmark (score; more is better)
  Score:                                   374.99 | 438.25 | 590.10 | 595.01 | 452.50

PyBench (milliseconds; fewer is better)
  Total For Average Test Times:              1000 |   736 |   887 |   887 |   815

oneDNN 3.3 (ms; fewer is better)
  Convolution Batch Shapes Auto - f32:                5.93669 | 5.84261 | 7.35101 | 8.22691 | 5.88019
  Deconvolution Batch shapes_1d - f32:               12.66180 | 8.10007 | 5.01437 | 9.85441 | 8.61718
  Deconvolution Batch shapes_3d - f32:                8.09803 | 7.94878 | 5.10330 | 10.16790 | 8.34462
  Convolution Batch Shapes Auto - u8s8f32:            6.24391 | 7.19462 | 7.74619 | 7.38732 | 6.61974
  Deconvolution Batch shapes_1d - u8s8f32:            2.03836 | 1.016327 | 1.09429 | 2.18029 | 1.03011
  Deconvolution Batch shapes_3d - u8s8f32:            1.77286 | 1.003269 | 1.26405 | 2.52982 | 1.06602
  Recurrent Neural Network Inference - f32:           2496.84 | 2382.46 | 1482.35 | 2856.84 | 2320.94
  Convolution Batch Shapes Auto - bf16bf16bf16:      33.19200 | 3.68993 | 3.28245 | 5.40784 | 3.37610
  Deconvolution Batch shapes_1d - bf16bf16bf16:      46.69860 | 1.76350 | 6.90200 | 13.73180 | 1.79360
  Deconvolution Batch shapes_3d - bf16bf16bf16:      34.71660 | 3.11075 | 3.12339 | 6.25907 | 3.20792
  Recurrent Neural Network Inference - u8s8f32:       2492.29 | 2389.64 | 1480.93 | 2857.77 | 2310.90
  Recurrent Neural Network Inference - bf16bf16bf16:  2501.50 | 2318.31 | 1478.51 | 2862.86 | 2303.44

OpenVINO 2023.2.dev latency (ms; fewer is better)
  Face Detection FP16 - CPU:               2252.04 | 474.46 | 774.23 | 764.30 | 511.54
  Face Detection FP16-INT8 - CPU:           610.59 | 241.21 | 409.78 | 408.02 | 275.65
  Machine Translation EN To DE FP16 - CPU:  179.53 |  74.20 |  73.34 |  64.98 |  80.94

PyTorch

This is a benchmark of PyTorch that makes use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Currently this test profile is geared toward CPU-based testing. Learn more via the OpenBenchmarking.org test page.
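
As a rough illustration of the kind of measurement involved, a hand-rolled sketch (not the pytorch-benchmark harness itself; it assumes torch and torchvision are installed) could time repeated forward passes and report batches per second:

    # Minimal sketch: time CPU inference of ResNet-50 at batch size 1
    # and report batches/sec, loosely mirroring what the harness measures.
    import time
    import torch
    from torchvision.models import resnet50

    model = resnet50().eval()
    batch = torch.randn(1, 3, 224, 224)  # batch size 1, standard ImageNet input

    with torch.inference_mode():
        for _ in range(5):               # warm-up passes
            model(batch)
        n = 50
        start = time.perf_counter()
        for _ in range(n):
            model(batch)
        elapsed = time.perf_counter() - start

    print(f"{n / elapsed:.2f} batches/sec")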

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec; more is better)
  c6i.2xlarge:       26.78  (SE +/- 0.13, N = 3; min 13.67 / max 27.8)
  c7a.2xlarge:       50.16  (SE +/- 0.32, N = 3; min 33.27 / max 51.43)
  m7i-flex.2xlarge:  29.89  (SE +/- 0.05, N = 3; min 7.96 / max 34.56)
  m7i.2xlarge:       31.36  (SE +/- 0.15, N = 3; min 26.54 / max 32.62)
  r7a.xlarge:        33.39  (SE +/- 0.04, N = 3; min 24.56 / max 33.78)

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec; more is better)
  c6i.2xlarge:       10.57  (SE +/- 0.01, N = 3; min 9.04 / max 10.77)
  c7a.2xlarge:       20.00  (SE +/- 0.05, N = 3; min 15.6 / max 20.36)
  m7i-flex.2xlarge:  12.10  (SE +/- 0.10, N = 3; min 2.89 / max 13.92)
  m7i.2xlarge:       12.48  (SE +/- 0.09, N = 3; min 8.7 / max 12.92)
  r7a.xlarge:        13.19  (SE +/- 0.02, N = 3; min 10.96 / max 13.33)

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec; more is better)
  c6i.2xlarge:       15.96  (SE +/- 0.12, N = 3; min 11.65 / max 17.13)
  c7a.2xlarge:       31.89  (SE +/- 0.24, N = 3; min 23.54 / max 32.58)
  m7i-flex.2xlarge:  18.84  (SE +/- 0.17, N = 3; min 4.37 / max 21.77)
  m7i.2xlarge:       19.62  (SE +/- 0.08, N = 3; min 15.92 / max 20.23)
  r7a.xlarge:        19.35  (SE +/- 0.18, N = 3; min 15.18 / max 19.82)

PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec; more is better)
  c6i.2xlarge:       15.81  (SE +/- 0.17, N = 4; min 9.11 / max 17.08)
  c7a.2xlarge:       31.29  (SE +/- 0.42, N = 15; min 20.73 / max 33.27)
  m7i-flex.2xlarge:  18.34  (SE +/- 0.29, N = 15; min 4.11 / max 22.15)
  m7i.2xlarge:       19.43  (SE +/- 0.21, N = 5; min 13.34 / max 20.17)
  r7a.xlarge:        19.32  (SE +/- 0.15, N = 10; min 13.74 / max 19.98)

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec; more is better)
  c6i.2xlarge:        6.36  (SE +/- 0.05, N = 3; min 5.43 / max 6.61)
  c7a.2xlarge:       12.95  (SE +/- 0.09, N = 3; min 4.2 / max 13.21)
  m7i-flex.2xlarge:   7.36  (SE +/- 0.08, N = 5; min 2.24 / max 8.66)
  m7i.2xlarge:        7.65  (SE +/- 0.02, N = 3; min 6.27 / max 7.82)
  r7a.xlarge:         7.68  (SE +/- 0.02, N = 3; min 6.45 / max 7.76)

PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec; more is better)
  c6i.2xlarge:        6.38  (SE +/- 0.01, N = 3; min 3.7 / max 6.57)
  c7a.2xlarge:       13.03  (SE +/- 0.05, N = 3; min 10.5 / max 13.2)
  m7i-flex.2xlarge:   7.55  (SE +/- 0.02, N = 3; min 2.93 / max 8.62)
  m7i.2xlarge:        7.66  (SE +/- 0.01, N = 3; min 4.36 / max 7.82)
  r7a.xlarge:         7.68  (SE +/- 0.01, N = 3; min 6.31 / max 7.76)

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec; more is better)
  c6i.2xlarge:        7.99  (SE +/- 0.02, N = 3; min 6.59 / max 8.31)
  c7a.2xlarge:       11.74  (SE +/- 0.02, N = 3; min 8.95 / max 11.9)
  m7i-flex.2xlarge:   8.99  (SE +/- 0.09, N = 12; min 3.08 / max 10.39)
  m7i.2xlarge:        8.84  (SE +/- 0.04, N = 3; min 7.72 / max 9.09)
  r7a.xlarge:         8.44  (SE +/- 0.01, N = 3; min 2.94 / max 8.53)

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec; more is better)
  c6i.2xlarge:        4.06  (SE +/- 0.03, N = 3; min 3.29 / max 4.32)
  c7a.2xlarge:        8.75  (SE +/- 0.01, N = 3; min 5.72 / max 8.84)
  m7i-flex.2xlarge:   5.38  (SE +/- 0.01, N = 3; min 2.12 / max 6.08)
  m7i.2xlarge:        5.26  (SE +/- 0.01, N = 3; min 3.36 / max 5.38)
  r7a.xlarge:         5.66  (SE +/- 0.01, N = 3; min 4.27 / max 5.72)

PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec; more is better)
  c6i.2xlarge:        4.04  (SE +/- 0.02, N = 3; min 3.5 / max 4.36)
  c7a.2xlarge:        8.71  (SE +/- 0.02, N = 3; min 5.53 / max 8.82)
  m7i-flex.2xlarge:   5.47  (SE +/- 0.01, N = 3; min 2.31 / max 6.19)
  m7i.2xlarge:        5.28  (SE +/- 0.02, N = 3; min 4.01 / max 5.41)
  r7a.xlarge:         5.67  (SE +/- 0.01, N = 3; min 4.37 / max 5.73)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
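
For context, a minimal synchronous throughput/latency loop with the OpenVINO Python API might look like the sketch below; the model path is a placeholder and a static input shape is assumed, whereas the test profile itself relies on OpenVINO's built-in benchmarking support rather than hand-rolled code:

    # Minimal sketch: measure synchronous inference throughput (FPS) and
    # average latency (ms) for a model on CPU with the OpenVINO Python API.
    import time
    import numpy as np
    import openvino as ov

    core = ov.Core()
    model = core.read_model("model.xml")          # placeholder model path
    compiled = core.compile_model(model, "CPU")
    request = compiled.create_infer_request()

    shape = compiled.input(0).shape               # assumes a static input shape
    data = np.random.rand(*shape).astype(np.float32)

    n = 100
    start = time.perf_counter()
    for _ in range(n):
        request.infer({0: data})
    elapsed = time.perf_counter() - start

    print(f"{n / elapsed:.2f} FPS, {1000 * elapsed / n:.2f} ms average latency")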

OpenVINO 2023.2.dev - Model: Face Detection FP16 - Device: CPU (FPS; more is better)
  c6i.2xlarge:        1.77  (SE +/- 0.01, N = 3)
  c7a.2xlarge:        5.16  (SE +/- 0.00, N = 3)
  m7i-flex.2xlarge:   8.43  (SE +/- 0.10, N = 3)
  m7i.2xlarge:        7.80  (SE +/- 0.04, N = 3)
  r7a.xlarge:         2.62  (SE +/- 0.00, N = 3)
  Compiled with (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS; more is better)
  c6i.2xlarge:        6.53  (SE +/- 0.04, N = 3)
  c7a.2xlarge:        9.76  (SE +/- 0.00, N = 3)
  m7i-flex.2xlarge:  16.57  (SE +/- 0.16, N = 3)
  m7i.2xlarge:       14.51  (SE +/- 0.02, N = 3)
  r7a.xlarge:         4.90  (SE +/- 0.00, N = 3)
  Compiled with (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS; more is better)
  c6i.2xlarge:       22.26  (SE +/- 0.06, N = 3)
  c7a.2xlarge:       54.51  (SE +/- 0.02, N = 3)
  m7i-flex.2xlarge:  53.87  (SE +/- 0.19, N = 3)
  m7i.2xlarge:       49.39  (SE +/- 0.02, N = 3)
  r7a.xlarge:        30.77  (SE +/- 0.18, N = 3)
  Compiled with (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Numpy Benchmark

This test measures general NumPy performance. Learn more via the OpenBenchmarking.org test page.
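
The score aggregates timings of common NumPy operations. Purely as an illustration of that idea (these are not the actual benchmark kernels), a few representative operations can be timed like so:

    # Illustrative only: time a few representative NumPy operations.
    import time
    import numpy as np

    a = np.random.rand(1000, 1000)

    for name, fn in [
        ("matmul", lambda: a @ a),
        ("svd",    lambda: np.linalg.svd(a, compute_uv=False)),
        ("fft2",   lambda: np.fft.fft2(a)),
    ]:
        start = time.perf_counter()
        fn()
        print(f"{name}: {time.perf_counter() - start:.3f} s")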

Numpy Benchmark (score; more is better)
  c6i.2xlarge:       374.99  (SE +/- 1.37, N = 3)
  c7a.2xlarge:       590.10  (SE +/- 1.02, N = 3)
  m7i-flex.2xlarge:  438.25  (SE +/- 3.53, N = 3)
  m7i.2xlarge:       452.50  (SE +/- 0.98, N = 3)
  r7a.xlarge:        595.01  (SE +/- 0.90, N = 3)

PyBench

This test profile reports the total of the average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
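
In spirit, the methodology is to time each micro-benchmark over a number of rounds, average per test, and sum those averages into one total. A toy analogue of that procedure (not PyBench itself) follows:

    # Toy analogue of PyBench's procedure: run a micro-benchmark for 20
    # rounds, then report the average round time in milliseconds.
    import time

    def builtin_function_calls(n=100_000):
        for _ in range(n):
            len("x"); abs(-1); min(1, 2)

    ROUNDS = 20  # this test profile runs PyBench for 20 rounds
    times = []
    for _ in range(ROUNDS):
        start = time.perf_counter()
        builtin_function_calls()
        times.append(time.perf_counter() - start)

    print(f"average: {1000 * sum(times) / ROUNDS:.2f} ms")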

PyBench 2018-02-16 - Total For Average Test Times (milliseconds; fewer is better)
  c6i.2xlarge:       1000  (SE +/- 0.33, N = 3)
  c7a.2xlarge:        887  (SE +/- 0.67, N = 3)
  m7i-flex.2xlarge:   736  (SE +/- 3.18, N = 3)
  m7i.2xlarge:        815  (SE +/- 2.33, N = 3)
  r7a.xlarge:         887  (SE +/- 1.33, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total performance time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
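
The harnesses above are driven through the bundled benchdnn tool. A hand-run invocation along the lines of the sketch below may approximate one of them; the binary path, the batch-file name, and the use of --mode=P (performance mode) are assumptions inferred from the harness names, not taken from the test profile:

    # Hypothetical benchdnn invocation for the "Convolution Batch Shapes
    # Auto" harness; the binary path and batch file are assumptions.
    import subprocess

    subprocess.run(
        ["./build/tests/benchdnn/benchdnn",
         "--conv",                          # convolution driver
         "--mode=P",                        # performance (timing) mode
         "--batch=inputs/conv/shapes_auto"],
        check=True,
    )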

oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms; fewer is better)
  c6i.2xlarge:       5.93669  (SE +/- 0.03862, N = 3; min 5.66)
  c7a.2xlarge:       7.35101  (SE +/- 0.01338, N = 3; min 7.22)
  m7i-flex.2xlarge:  5.84261  (SE +/- 0.02435, N = 3; min 5)
  m7i.2xlarge:       5.88019  (SE +/- 0.00970, N = 3; min 5.62)
  r7a.xlarge:        8.22691  (SE +/- 0.00015, N = 3; min 8.1)
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms; fewer is better)
  c6i.2xlarge:       12.66180  (SE +/- 0.07954, N = 3; min 10.8)
  c7a.2xlarge:        5.01437  (SE +/- 0.00229, N = 3; min 4.93)
  m7i-flex.2xlarge:   8.10007  (SE +/- 0.01302, N = 3; min 6.7)
  m7i.2xlarge:        8.61718  (SE +/- 0.08548, N = 12; min 7.87)
  r7a.xlarge:         9.85441  (SE +/- 0.00467, N = 3; min 9.75)
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms; fewer is better)
  c6i.2xlarge:        8.09803  (SE +/- 0.01712, N = 3; min 8.02)
  c7a.2xlarge:        5.10330  (SE +/- 0.00261, N = 3; min 5.05)
  m7i-flex.2xlarge:   7.94878  (SE +/- 0.08711, N = 3; min 6.79)
  m7i.2xlarge:        8.34462  (SE +/- 0.02786, N = 3; min 8.05)
  r7a.xlarge:        10.16790  (SE +/- 0.01406, N = 3; min 10.11)
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
  c6i.2xlarge:       6.24391  (SE +/- 0.08834, N = 3; min 5.78)
  c7a.2xlarge:       7.74619  (SE +/- 0.01540, N = 3; min 7.55)
  m7i-flex.2xlarge:  7.19462  (SE +/- 0.03815, N = 3; min 6.66)
  m7i.2xlarge:       6.61974  (SE +/- 0.02083, N = 3; min 6.24)
  r7a.xlarge:        7.38732  (SE +/- 0.01020, N = 3; min 7.27)
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
  c6i.2xlarge:       2.038360  (SE +/- 0.002826, N = 3; min 1.97)
  c7a.2xlarge:       1.094290  (SE +/- 0.002971, N = 3; min 1.08)
  m7i-flex.2xlarge:  1.016327  (SE +/- 0.013619, N = 15; min 0.79)
  m7i.2xlarge:       1.030110  (SE +/- 0.000901, N = 3; min 0.94)
  r7a.xlarge:        2.180290  (SE +/- 0.000469, N = 3; min 2.15)
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
  c6i.2xlarge:       1.772860  (SE +/- 0.013170, N = 3; min 1.73)
  c7a.2xlarge:       1.264050  (SE +/- 0.004187, N = 3; min 1.25)
  m7i-flex.2xlarge:  1.003269  (SE +/- 0.011545, N = 4; min 0.84)
  m7i.2xlarge:       1.066020  (SE +/- 0.003802, N = 3; min 1)
  r7a.xlarge:        2.529820  (SE +/- 0.006412, N = 3; min 2.5)
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms; fewer is better)
  c6i.2xlarge:       2496.84  (SE +/- 7.01, N = 3; min 2465.66)
  c7a.2xlarge:       1482.35  (SE +/- 1.52, N = 3; min 1476.16)
  m7i-flex.2xlarge:  2382.46  (SE +/- 26.45, N = 4; min 2219.69)
  m7i.2xlarge:       2320.94  (SE +/- 7.66, N = 3; min 2290.27)
  r7a.xlarge:        2856.84  (SE +/- 2.56, N = 3; min 2845.85)
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
  c6i.2xlarge:       33.19200  (SE +/- 0.00337, N = 3; min 33.13)
  c7a.2xlarge:        3.28245  (SE +/- 0.00275, N = 3)
  m7i-flex.2xlarge:   3.68993  (SE +/- 0.03984, N = 3)
  m7i.2xlarge:        3.37610  (SE +/- 0.00955, N = 3)
  r7a.xlarge:         5.40784  (SE +/- 0.02244, N = 3; min 5.32)
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
  c6i.2xlarge:       46.69860  (SE +/- 0.01032, N = 3; min 46.42)
  c7a.2xlarge:        6.90200  (SE +/- 0.00810, N = 3)
  m7i-flex.2xlarge:   1.76350  (SE +/- 0.00827, N = 3)
  m7i.2xlarge:        1.79360  (SE +/- 0.00405, N = 3)
  r7a.xlarge:        13.73180  (SE +/- 0.02961, N = 3; min 13.51)
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
  c6i.2xlarge:       34.71660  (SE +/- 0.00790, N = 3; min 34.6)
  c7a.2xlarge:        3.12339  (SE +/- 0.00414, N = 3)
  m7i-flex.2xlarge:   3.11075  (SE +/- 0.01261, N = 3)
  m7i.2xlarge:        3.20792  (SE +/- 0.00738, N = 3)
  r7a.xlarge:         6.25907  (SE +/- 0.01228, N = 3; min 6.18)
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
  c6i.2xlarge:       2492.29  (SE +/- 2.42, N = 3; min 2460.06)
  c7a.2xlarge:       1480.93  (SE +/- 2.09, N = 3; min 1474.24)
  m7i-flex.2xlarge:  2389.64  (SE +/- 24.80, N = 3; min 2260.99)
  m7i.2xlarge:       2310.90  (SE +/- 2.59, N = 3; min 2291.18)
  r7a.xlarge:        2857.77  (SE +/- 1.65, N = 3; min 2848.47)
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
  c6i.2xlarge:       2501.50  (SE +/- 2.60, N = 3; min 2476.86)
  c7a.2xlarge:       1478.51  (SE +/- 1.00, N = 3; min 1472.49)
  m7i-flex.2xlarge:  2318.31  (SE +/- 15.72, N = 3; min 2205.71)
  m7i.2xlarge:       2303.44  (SE +/- 1.63, N = 3; min 2276.46)
  r7a.xlarge:        2862.86  (SE +/- 3.75, N = 3; min 2849.87)
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

The following are the corresponding latency results for the OpenVINO throughput tests reported above, measured with the same built-in benchmarking support in milliseconds per inference (fewer is better).

OpenVINO 2023.2.dev - Model: Face Detection FP16 - Device: CPU (ms; fewer is better)
  c6i.2xlarge:       2252.04  (SE +/- 3.74, N = 3; min 2202.18 / max 2317.24)
  c7a.2xlarge:        774.23  (SE +/- 0.28, N = 3; min 767.78 / max 795.49)
  m7i-flex.2xlarge:   474.46  (SE +/- 5.97, N = 3; min 427.39 / max 558.54)
  m7i.2xlarge:        511.54  (SE +/- 2.36, N = 3; min 337.11 / max 534.54)
  r7a.xlarge:         764.30  (SE +/- 0.15, N = 3; min 761.82 / max 783.23)
  Compiled with (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms; fewer is better)
  c6i.2xlarge:       610.59  (SE +/- 3.28, N = 3; min 497.95 / max 643.84)
  c7a.2xlarge:       409.78  (SE +/- 0.07, N = 3; min 407.35 / max 424.19)
  m7i-flex.2xlarge:  241.21  (SE +/- 2.35, N = 3; min 91.81 / max 374.94)
  m7i.2xlarge:       275.65  (SE +/- 0.26, N = 3; min 267.85 / max 299.94)
  r7a.xlarge:        408.02  (SE +/- 0.14, N = 3; min 406.22 / max 426.04)
  Compiled with (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms; fewer is better)
  c6i.2xlarge:       179.53  (SE +/- 0.44, N = 3; min 100.26 / max 344)
  c7a.2xlarge:        73.34  (SE +/- 0.02, N = 3; min 65.17 / max 80.62)
  m7i-flex.2xlarge:   74.20  (SE +/- 0.25, N = 3; min 34.84 / max 96.18)
  m7i.2xlarge:        80.94  (SE +/- 0.04, N = 3; min 69.04 / max 153.39)
  r7a.xlarge:         64.98  (SE +/- 0.40, N = 3; min 61.57 / max 90.41)
  Compiled with (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie