nlp-benchmarks

AWS EC2 Amazon Linux 2023 Benchmarking

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2402094-NE-2402012NE30

The result file spans the following test suites/categories: CPU Massive (2 tests), Creator Workloads (2 tests), HPC - High Performance Computing (4 tests), Machine Learning (4 tests), Multi-Core (2 tests), Intel oneAPI (2 tests), Python (2 tests), Server CPU Tests (3 tests), and Single-Threaded (2 tests).


Test Runs

  Result Identifier   Test Date    Test Duration
  c6i.2xlarge         February 01  2 Hours, 25 Minutes
  m7i-flex.2xlarge    February 08  2 Hours, 59 Minutes
  c7a.2xlarge         February 09  1 Hour, 48 Minutes
  r7a.xlarge          February 09  2 Hours, 14 Minutes
  Average                          2 Hours, 22 Minutes



System Details

  c6i.2xlarge:
    Processor: Intel Xeon Platinum 8375C (4 Cores / 8 Threads)
    Motherboard: Amazon EC2 c6i.2xlarge (1.0 BIOS)
    Chipset: Intel 440FX 82441FX PMC
    Memory: 1 x 16GB DDR4-3200MT/s
    Disk: 215GB Amazon Elastic Block Store
    Network: Amazon Elastic
    OS: Amazon Linux 2023
    Kernel: 6.1.61-85.141.amzn2023.x86_64 (x86_64)
    Compiler: GCC 11.4.1 20230605
    File-System: xfs
    System Layer: amazon

  m7i-flex.2xlarge (only fields that differ from the preceding configuration are listed):
    Processor: Intel Xeon Platinum 8488C (4 Cores / 8 Threads)
    Motherboard: Amazon EC2 m7i-flex.2xlarge (1.0 BIOS)
    Memory: 1 x 32GB 4800MT/s
    Kernel: 6.1.72-96.166.amzn2023.x86_64 (x86_64)

  c7a.2xlarge (only fields that differ from the preceding configuration are listed):
    Processor: AMD EPYC 9R14 (8 Cores)
    Motherboard: Amazon EC2 c7a.2xlarge (1.0 BIOS)
    Memory: 1 x 16GB 4800MT/s

  r7a.xlarge (only fields that differ from the preceding configuration are listed):
    Processor: AMD EPYC 9R14 (4 Cores)
    Motherboard: Amazon EC2 r7a.xlarge (1.0 BIOS)
    Memory: 1 x 32GB 4800MT/s

  Kernel Details: Transparent Huge Pages: madvise
  Compiler Details: --build=x86_64-amazon-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
  Processor Details (CPU Microcode): c6i.2xlarge: 0xd0003a5; m7i-flex.2xlarge: 0x2b000571; c7a.2xlarge: 0xa10113e; r7a.xlarge: 0xa10113e
  Python Details: Python 3.11.6
  Security Details:
    c6i.2xlarge: gather_data_sampling: Unknown: Dependent on hypervisor status + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT Host state unknown + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
    m7i-flex.2xlarge: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
    c7a.2xlarge: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
    r7a.xlarge: identical to c7a.2xlarge

[Chart: Result Overview (Phoronix Test Suite) - relative performance per test suite (OpenVINO, oneDNN, PyTorch, Numpy Benchmark, PyBench) across c6i.2xlarge, m7i-flex.2xlarge, c7a.2xlarge, and r7a.xlarge; scale 100% to 308%.]
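The overview chart condenses each suite's results into one relative score per instance. As a rough illustration of how such relative scores can be derived, here is a minimal Python sketch assuming normalization against the slowest instance and geometric-mean aggregation; the exact OpenBenchmarking.org method may differ, and the aggregated values at the end are hypothetical.

    from math import prod

    def geometric_mean(values):
        # Geometric mean of positive per-test scores.
        return prod(values) ** (1.0 / len(values))

    def normalize(results, lower_is_better=False):
        # Scale each system's result so the slowest system scores 100%.
        if lower_is_better:
            results = {k: 1.0 / v for k, v in results.items()}  # invert: larger = better
        baseline = min(results.values())
        return {k: 100.0 * v / baseline for k, v in results.items()}

    # Example using the PyBench totals from this file (fewer is better):
    pybench = {"c6i.2xlarge": 1000, "m7i-flex.2xlarge": 736,
               "c7a.2xlarge": 887, "r7a.xlarge": 887}
    print(normalize(pybench, lower_is_better=True))

    # A suite score for one system is then the geometric mean of its
    # normalized per-test scores (values here are hypothetical):
    print(geometric_mean([135.9, 120.0, 150.0]))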

nlp-benchmarks - Result Summary
(ms = fewer is better; batches/sec, FPS, and score = more is better)

  Test                                                       Unit         c6i.2xlarge  m7i-flex.2xlarge  c7a.2xlarge  r7a.xlarge
  PyTorch: CPU - 16 - Efficientnet_v2_l                      batches/sec  4.06         5.38              8.75         5.66
  PyTorch: CPU - 32 - Efficientnet_v2_l                      batches/sec  4.04         5.47              8.71         5.67
  PyTorch: CPU - 32 - ResNet-50                              batches/sec  15.81        18.34             31.29        19.32
  PyTorch: CPU - 16 - ResNet-152                             batches/sec  6.36         7.36              12.95        7.68
  PyTorch: CPU - 32 - ResNet-152                             batches/sec  6.38         7.55              13.03        7.68
  PyTorch: CPU - 1 - Efficientnet_v2_l                       batches/sec  7.99         8.99              11.74        8.44
  Numpy Benchmark                                            score        374.99       438.25            590.10       595.01
  PyTorch: CPU - 16 - ResNet-50                              batches/sec  15.96        18.84             31.89        19.35
  oneDNN: Recurrent Neural Network Inference - f32           ms           2496.84      2382.46           1482.35      2856.84
  PyTorch: CPU - 1 - ResNet-152                              batches/sec  10.57        12.10             20.00        13.19
  oneDNN: Recurrent Neural Network Inference - u8s8f32       ms           2492.29      2389.64           1480.93      2857.77
  oneDNN: Recurrent Neural Network Inference - bf16bf16bf16  ms           2501.50      2318.31           1478.51      2862.86
  OpenVINO: Face Detection FP16                              ms           2252.04      474.46            774.23       764.30
  OpenVINO: Face Detection FP16                              FPS          1.77         8.43              5.16         2.62
  OpenVINO: Face Detection FP16-INT8                         ms           610.59       241.21            409.78       408.02
  OpenVINO: Face Detection FP16-INT8                         FPS          6.53         16.57             9.76         4.90
  OpenVINO: Machine Translation EN To DE FP16                ms           179.53       74.20             73.34        64.98
  OpenVINO: Machine Translation EN To DE FP16                FPS          22.26        53.87             54.51        30.77
  oneDNN: Deconvolution Batch shapes_1d - u8s8f32            ms           2.03836      1.016327          1.09429      2.18029
  PyTorch: CPU - 1 - ResNet-50                               batches/sec  26.78        29.89             50.16        33.39
  oneDNN: Deconvolution Batch shapes_1d - bf16bf16bf16       ms           46.69860     1.76350           6.90200      13.73180
  oneDNN: Deconvolution Batch shapes_1d - f32                ms           12.66180     8.10007           5.01437      9.85441
  PyBench: Total For Average Test Times                      ms           1000         736               887          887
  oneDNN: Convolution Batch Shapes Auto - f32                ms           5.93669      5.84261           7.35101      8.22691
  oneDNN: Convolution Batch Shapes Auto - u8s8f32            ms           6.24391      7.19462           7.74619      7.38732
  oneDNN: Convolution Batch Shapes Auto - bf16bf16bf16       ms           33.19200     3.68993           3.28245      5.40784
  oneDNN: Deconvolution Batch shapes_3d - u8s8f32            ms           1.77286      1.003269          1.26405      2.52982
  oneDNN: Deconvolution Batch shapes_3d - bf16bf16bf16       ms           34.71660     3.11075           3.12339      6.25907
  oneDNN: Deconvolution Batch shapes_3d - f32                ms           8.09803      7.94878           5.10330      10.16790

PyTorch

This is a benchmark of PyTorch built on the pytorch-benchmark project [https://github.com/LukasHedegaard/pytorch-benchmark]. Currently this test profile is geared toward CPU-based testing. Learn more via the OpenBenchmarking.org test page.
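The profile reports throughput in batches per second for a given model and batch size. For orientation, here is a minimal sketch of the same idea, timing CPU inference of torchvision's ResNet-50; this is an illustrative harness, not the actual pytorch-benchmark code, and the iteration counts are arbitrary.

    import time
    import torch
    from torchvision.models import resnet50

    # Illustrative harness only -- not the actual pytorch-benchmark code.
    model = resnet50().eval()                 # CPU inference; weights are irrelevant for timing
    batch = torch.randn(16, 3, 224, 224)      # batch size 16, ImageNet-shaped input

    with torch.no_grad():
        for _ in range(3):                    # warm-up iterations
            model(batch)
        n_iters = 20
        start = time.perf_counter()
        for _ in range(n_iters):
            model(batch)
        elapsed = time.perf_counter() - start

    print(f"{n_iters / elapsed:.2f} batches/sec")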

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec, more is better)
  c6i.2xlarge:       4.06  (SE +/- 0.03, N = 3; MIN: 3.29 / MAX: 4.32)
  m7i-flex.2xlarge:  5.38  (SE +/- 0.01, N = 3; MIN: 2.12 / MAX: 6.08)
  c7a.2xlarge:       8.75  (SE +/- 0.01, N = 3; MIN: 5.72 / MAX: 8.84)
  r7a.xlarge:        5.66  (SE +/- 0.01, N = 3; MIN: 4.27 / MAX: 5.72)
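Each result above is the mean over N runs, with SE the standard error of that mean. A quick sketch of the computation, using hypothetical per-run values:

    import statistics

    runs = [4.03, 4.06, 4.09]  # hypothetical per-run results for one system
    mean = statistics.mean(runs)
    se = statistics.stdev(runs) / len(runs) ** 0.5  # sample std dev / sqrt(N)
    print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(runs)})")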

PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec, more is better)
  c6i.2xlarge:       4.04  (SE +/- 0.02, N = 3; MIN: 3.5 / MAX: 4.36)
  m7i-flex.2xlarge:  5.47  (SE +/- 0.01, N = 3; MIN: 2.31 / MAX: 6.19)
  c7a.2xlarge:       8.71  (SE +/- 0.02, N = 3; MIN: 5.53 / MAX: 8.82)
  r7a.xlarge:        5.67  (SE +/- 0.01, N = 3; MIN: 4.37 / MAX: 5.73)

PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec, more is better)
  c6i.2xlarge:       15.81  (SE +/- 0.17, N = 4; MIN: 9.11 / MAX: 17.08)
  m7i-flex.2xlarge:  18.34  (SE +/- 0.29, N = 15; MIN: 4.11 / MAX: 22.15)
  c7a.2xlarge:       31.29  (SE +/- 0.42, N = 15; MIN: 20.73 / MAX: 33.27)
  r7a.xlarge:        19.32  (SE +/- 0.15, N = 10; MIN: 13.74 / MAX: 19.98)

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec, more is better)
  c6i.2xlarge:       6.36   (SE +/- 0.05, N = 3; MIN: 5.43 / MAX: 6.61)
  m7i-flex.2xlarge:  7.36   (SE +/- 0.08, N = 5; MIN: 2.24 / MAX: 8.66)
  c7a.2xlarge:       12.95  (SE +/- 0.09, N = 3; MIN: 4.2 / MAX: 13.21)
  r7a.xlarge:        7.68   (SE +/- 0.02, N = 3; MIN: 6.45 / MAX: 7.76)

PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec, more is better)
  c6i.2xlarge:       6.38   (SE +/- 0.01, N = 3; MIN: 3.7 / MAX: 6.57)
  m7i-flex.2xlarge:  7.55   (SE +/- 0.02, N = 3; MIN: 2.93 / MAX: 8.62)
  c7a.2xlarge:       13.03  (SE +/- 0.05, N = 3; MIN: 10.5 / MAX: 13.2)
  r7a.xlarge:        7.68   (SE +/- 0.01, N = 3; MIN: 6.31 / MAX: 7.76)

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec, more is better)
  c6i.2xlarge:       7.99   (SE +/- 0.02, N = 3; MIN: 6.59 / MAX: 8.31)
  m7i-flex.2xlarge:  8.99   (SE +/- 0.09, N = 12; MIN: 3.08 / MAX: 10.39)
  c7a.2xlarge:       11.74  (SE +/- 0.02, N = 3; MIN: 8.95 / MAX: 11.9)
  r7a.xlarge:        8.44   (SE +/- 0.01, N = 3; MIN: 2.94 / MAX: 8.53)

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.
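As a rough illustration of the kind of kernel such a benchmark times, here is a minimal sketch timing a BLAS-backed matrix multiply; the operation, sizes, and repetition count are illustrative choices, not the actual Numpy Benchmark suite.

    import time
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.random((2048, 2048))
    b = rng.random((2048, 2048))

    reps = 10
    start = time.perf_counter()
    for _ in range(reps):
        np.dot(a, b)              # dense matmul, dispatched to the BLAS backend
    elapsed = time.perf_counter() - start
    print(f"{elapsed / reps:.3f} s per 2048x2048 matmul")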

Numpy Benchmark (score, more is better)
  c6i.2xlarge:       374.99  (SE +/- 1.37, N = 3)
  m7i-flex.2xlarge:  438.25  (SE +/- 3.53, N = 3)
  c7a.2xlarge:       590.10  (SE +/- 1.02, N = 3)
  r7a.xlarge:        595.01  (SE +/- 0.90, N = 3)

PyTorch


PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec, more is better)
  c6i.2xlarge:       15.96  (SE +/- 0.12, N = 3; MIN: 11.65 / MAX: 17.13)
  m7i-flex.2xlarge:  18.84  (SE +/- 0.17, N = 3; MIN: 4.37 / MAX: 21.77)
  c7a.2xlarge:       31.89  (SE +/- 0.24, N = 3; MIN: 23.54 / MAX: 32.58)
  r7a.xlarge:        19.35  (SE +/- 0.18, N = 3; MIN: 15.18 / MAX: 19.82)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
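benchdnn is a C++ harness shipped with oneDNN, so there is no direct Python equivalent. As a rough illustration of the kind of primitive timed by the "Recurrent Neural Network Inference" harness, the sketch below times LSTM inference in PyTorch, whose CPU backend links oneDNN; this is illustrative only and is not benchdnn, and all sizes are arbitrary.

    import time
    import torch

    # Arbitrary illustrative sizes; benchdnn's actual RNN problem descriptors differ.
    rnn = torch.nn.LSTM(input_size=512, hidden_size=512, num_layers=2).eval()
    seq = torch.randn(50, 32, 512)            # (seq_len, batch, features)

    with torch.no_grad():
        rnn(seq)                              # warm-up
        reps = 10
        start = time.perf_counter()
        for _ in range(reps):
            rnn(seq)
        elapsed = time.perf_counter() - start

    print(f"{1000 * elapsed / reps:.2f} ms per inference pass")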

oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  c6i.2xlarge:       2496.84  (SE +/- 7.01, N = 3; MIN: 2465.66)
  m7i-flex.2xlarge:  2382.46  (SE +/- 26.45, N = 4; MIN: 2219.69)
  c7a.2xlarge:       1482.35  (SE +/- 1.52, N = 3; MIN: 1476.16)
  r7a.xlarge:        2856.84  (SE +/- 2.56, N = 3; MIN: 2845.85)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PyTorch


PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, more is better)
  c6i.2xlarge:       10.57  (SE +/- 0.01, N = 3; MIN: 9.04 / MAX: 10.77)
  m7i-flex.2xlarge:  12.10  (SE +/- 0.10, N = 3; MIN: 2.89 / MAX: 13.92)
  c7a.2xlarge:       20.00  (SE +/- 0.05, N = 3; MIN: 15.6 / MAX: 20.36)
  r7a.xlarge:        13.19  (SE +/- 0.02, N = 3; MIN: 10.96 / MAX: 13.33)

oneDNN


oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  c6i.2xlarge:       2492.29  (SE +/- 2.42, N = 3; MIN: 2460.06)
  m7i-flex.2xlarge:  2389.64  (SE +/- 24.80, N = 3; MIN: 2260.99)
  c7a.2xlarge:       1480.93  (SE +/- 2.09, N = 3; MIN: 1474.24)
  r7a.xlarge:        2857.77  (SE +/- 1.65, N = 3; MIN: 2848.47)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  c6i.2xlarge:       2501.50  (SE +/- 2.60, N = 3; MIN: 2476.86)
  m7i-flex.2xlarge:  2318.31  (SE +/- 15.72, N = 3; MIN: 2205.71)
  c7a.2xlarge:       1478.51  (SE +/- 1.00, N = 3; MIN: 1472.49)
  r7a.xlarge:        2862.86  (SE +/- 3.75, N = 3; MIN: 2849.87)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural-network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
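OpenVINO's built-in benchmarking support is its bundled benchmark_app tool. For orientation, here is a minimal latency/throughput sketch against the OpenVINO Python runtime API; the model file name and input shape are hypothetical, and the actual test uses benchmark_app rather than this loop.

    import time
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    # Hypothetical model file; the actual test uses models such as Face Detection FP16.
    model = core.read_model("face-detection-fp16.xml")
    compiled = core.compile_model(model, "CPU")
    request = compiled.create_infer_request()

    # Hypothetical input shape; a real harness would query compiled.input(0).
    inp = np.random.rand(1, 3, 320, 320).astype(np.float32)

    n = 100
    start = time.perf_counter()
    for _ in range(n):
        request.infer({0: inp})   # synchronous inference
    elapsed = time.perf_counter() - start

    print(f"avg latency: {1000 * elapsed / n:.2f} ms, throughput: {n / elapsed:.2f} FPS")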

OpenVINO 2023.2.dev - Model: Face Detection FP16 - Device: CPU (ms, fewer is better)
  c6i.2xlarge:       2252.04  (SE +/- 3.74, N = 3; MIN: 2202.18 / MAX: 2317.24)
  m7i-flex.2xlarge:  474.46   (SE +/- 5.97, N = 3; MIN: 427.39 / MAX: 558.54)
  c7a.2xlarge:       774.23   (SE +/- 0.28, N = 3; MIN: 767.78 / MAX: 795.49)
  r7a.xlarge:        764.30   (SE +/- 0.15, N = 3; MIN: 761.82 / MAX: 783.23)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, more is better)
  c6i.2xlarge:       1.77  (SE +/- 0.01, N = 3)
  m7i-flex.2xlarge:  8.43  (SE +/- 0.10, N = 3)
  c7a.2xlarge:       5.16  (SE +/- 0.00, N = 3)
  r7a.xlarge:        2.62  (SE +/- 0.00, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better)
  c6i.2xlarge:       610.59  (SE +/- 3.28, N = 3; MIN: 497.95 / MAX: 643.84)
  m7i-flex.2xlarge:  241.21  (SE +/- 2.35, N = 3; MIN: 91.81 / MAX: 374.94)
  c7a.2xlarge:       409.78  (SE +/- 0.07, N = 3; MIN: 407.35 / MAX: 424.19)
  r7a.xlarge:        408.02  (SE +/- 0.14, N = 3; MIN: 406.22 / MAX: 426.04)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better)
  c6i.2xlarge:       6.53   (SE +/- 0.04, N = 3)
  m7i-flex.2xlarge:  16.57  (SE +/- 0.16, N = 3)
  c7a.2xlarge:       9.76   (SE +/- 0.00, N = 3)
  r7a.xlarge:        4.90   (SE +/- 0.00, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better)
  c6i.2xlarge:       179.53  (SE +/- 0.44, N = 3; MIN: 100.26 / MAX: 344)
  m7i-flex.2xlarge:  74.20   (SE +/- 0.25, N = 3; MIN: 34.84 / MAX: 96.18)
  c7a.2xlarge:       73.34   (SE +/- 0.02, N = 3; MIN: 65.17 / MAX: 80.62)
  r7a.xlarge:        64.98   (SE +/- 0.40, N = 3; MIN: 61.57 / MAX: 90.41)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2023.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better)
  c6i.2xlarge:       22.26  (SE +/- 0.06, N = 3)
  m7i-flex.2xlarge:  53.87  (SE +/- 0.19, N = 3)
  c7a.2xlarge:       54.51  (SE +/- 0.02, N = 3)
  r7a.xlarge:        30.77  (SE +/- 0.18, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

oneDNN


oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  c6i.2xlarge:       2.038360  (SE +/- 0.002826, N = 3; MIN: 1.97)
  m7i-flex.2xlarge:  1.016327  (SE +/- 0.013619, N = 15; MIN: 0.79)
  c7a.2xlarge:       1.094290  (SE +/- 0.002971, N = 3; MIN: 1.08)
  r7a.xlarge:        2.180290  (SE +/- 0.000469, N = 3; MIN: 2.15)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PyTorch


PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, more is better)
  c6i.2xlarge:       26.78  (SE +/- 0.13, N = 3; MIN: 13.67 / MAX: 27.8)
  m7i-flex.2xlarge:  29.89  (SE +/- 0.05, N = 3; MIN: 7.96 / MAX: 34.56)
  c7a.2xlarge:       50.16  (SE +/- 0.32, N = 3; MIN: 33.27 / MAX: 51.43)
  r7a.xlarge:        33.39  (SE +/- 0.04, N = 3; MIN: 24.56 / MAX: 33.78)

oneDNN


oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  c6i.2xlarge:       46.69860  (SE +/- 0.01032, N = 3; MIN: 46.42)
  m7i-flex.2xlarge:  1.76350   (SE +/- 0.00827, N = 3)
  c7a.2xlarge:       6.90200   (SE +/- 0.00810, N = 3)
  r7a.xlarge:        13.73180  (SE +/- 0.02961, N = 3; MIN: 13.51)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  c6i.2xlarge:       12.66180  (SE +/- 0.07954, N = 3; MIN: 10.8)
  m7i-flex.2xlarge:  8.10007   (SE +/- 0.01302, N = 3; MIN: 6.7)
  c7a.2xlarge:       5.01437   (SE +/- 0.00229, N = 3; MIN: 4.93)
  r7a.xlarge:        9.85441   (SE +/- 0.00467, N = 3; MIN: 9.75)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PyBench

This test profile reports the total of the average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops; the total provides a rough estimate of Python's average performance on a given system. This test profile runs PyBench for 20 rounds. Learn more via the OpenBenchmarking.org test page.
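For orientation, here is a minimal sketch of the same measurement pattern: timing a micro-operation over repeated rounds and reporting the average per-round time. The benchmark body is illustrative, not PyBench's actual implementation.

    import time

    def nested_for_loops():
        # Micro-benchmark body in the spirit of PyBench's NestedForLoops test
        # (illustrative; not PyBench's actual implementation).
        total = 0
        for _ in range(100):
            for _ in range(100):
                total += 1
        return total

    ROUNDS = 20
    times = []
    for _ in range(ROUNDS):
        start = time.perf_counter()
        nested_for_loops()
        times.append(time.perf_counter() - start)

    # PyBench-style result: average time per round, in milliseconds.
    print(f"average: {1000 * sum(times) / ROUNDS:.3f} ms over {ROUNDS} rounds")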

PyBench 2018-02-16 - Total For Average Test Times (milliseconds, fewer is better)
  c6i.2xlarge:       1000  (SE +/- 0.33, N = 3)
  m7i-flex.2xlarge:  736   (SE +/- 3.18, N = 3)
  c7a.2xlarge:       887   (SE +/- 0.67, N = 3)
  r7a.xlarge:        887   (SE +/- 1.33, N = 3)

oneDNN


oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  c6i.2xlarge:       5.93669  (SE +/- 0.03862, N = 3; MIN: 5.66)
  m7i-flex.2xlarge:  5.84261  (SE +/- 0.02435, N = 3; MIN: 5)
  c7a.2xlarge:       7.35101  (SE +/- 0.01338, N = 3; MIN: 7.22)
  r7a.xlarge:        8.22691  (SE +/- 0.00015, N = 3; MIN: 8.1)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  c6i.2xlarge:       6.24391  (SE +/- 0.08834, N = 3; MIN: 5.78)
  m7i-flex.2xlarge:  7.19462  (SE +/- 0.03815, N = 3; MIN: 6.66)
  c7a.2xlarge:       7.74619  (SE +/- 0.01540, N = 3; MIN: 7.55)
  r7a.xlarge:        7.38732  (SE +/- 0.01020, N = 3; MIN: 7.27)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  c6i.2xlarge:       33.19200  (SE +/- 0.00337, N = 3; MIN: 33.13)
  m7i-flex.2xlarge:  3.68993   (SE +/- 0.03984, N = 3)
  c7a.2xlarge:       3.28245   (SE +/- 0.00275, N = 3)
  r7a.xlarge:        5.40784   (SE +/- 0.02244, N = 3; MIN: 5.32)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  c6i.2xlarge:       1.772860  (SE +/- 0.013170, N = 3; MIN: 1.73)
  m7i-flex.2xlarge:  1.003269  (SE +/- 0.011545, N = 4; MIN: 0.84)
  c7a.2xlarge:       1.264050  (SE +/- 0.004187, N = 3; MIN: 1.25)
  r7a.xlarge:        2.529820  (SE +/- 0.006412, N = 3; MIN: 2.5)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  c6i.2xlarge:       34.71660  (SE +/- 0.00790, N = 3; MIN: 34.6)
  m7i-flex.2xlarge:  3.11075   (SE +/- 0.01261, N = 3)
  c7a.2xlarge:       3.12339   (SE +/- 0.00414, N = 3)
  r7a.xlarge:        6.25907   (SE +/- 0.01228, N = 3; MIN: 6.18)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  c6i.2xlarge:       8.09803   (SE +/- 0.01712, N = 3; MIN: 8.02)
  m7i-flex.2xlarge:  7.94878   (SE +/- 0.08711, N = 3; MIN: 6.79)
  c7a.2xlarge:       5.10330   (SE +/- 0.00261, N = 3; MIN: 5.05)
  r7a.xlarge:        10.16790  (SE +/- 0.01406, N = 3; MIN: 10.11)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl