nlp-benchmarks

AWS EC2 Amazon Linux 2023 Benchmarking

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2402097-NE-2402012NE97
Test runs:

  Result Identifier    Date         Run Test Duration
  c6i.2xlarge          February 01  2 Hours, 25 Minutes
  m7i-flex.2xlarge     February 08  2 Hours, 59 Minutes
  c7a.2xlarge          February 09  1 Hour, 48 Minutes
  r7a.xlarge           February 09  2 Hours, 14 Minutes
  m7i.2xlarge          February 09  2 Hours, 11 Minutes
  Average                           2 Hours, 19 Minutes



Systems under test (common to all runs unless noted: Intel 440FX 82441FX PMC chipset, 215GB Amazon Elastic Block Store disk, Amazon Elastic network, Amazon Linux 2023, GCC 11.4.1 20230605 compiler, xfs file-system, amazon system layer):

  c6i.2xlarge:       Intel Xeon Platinum 8375C (4 Cores / 8 Threads), Amazon EC2 c6i.2xlarge (1.0 BIOS), 1 x 16GB DDR4-3200MT/s, kernel 6.1.61-85.141.amzn2023.x86_64 (x86_64)
  m7i-flex.2xlarge:  Intel Xeon Platinum 8488C (4 Cores / 8 Threads), Amazon EC2 m7i-flex.2xlarge (1.0 BIOS), 1 x 32GB 4800MT/s, kernel 6.1.72-96.166.amzn2023.x86_64 (x86_64)
  c7a.2xlarge:       AMD EPYC 9R14 (8 Cores), Amazon EC2 c7a.2xlarge (1.0 BIOS), 1 x 16GB 4800MT/s, kernel 6.1.72-96.166.amzn2023.x86_64 (x86_64)
  r7a.xlarge:        AMD EPYC 9R14 (4 Cores), Amazon EC2 r7a.xlarge (1.0 BIOS), 1 x 32GB 4800MT/s, kernel 6.1.72-96.166.amzn2023.x86_64 (x86_64)
  m7i.2xlarge:       Intel Xeon Platinum 8488C (4 Cores / 8 Threads), Amazon EC2 m7i.2xlarge (1.0 BIOS), 1 x 32GB 4800MT/s, kernel 6.1.72-96.166.amzn2023.x86_64 (x86_64)

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-amazon-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver

Processor Details (CPU Microcode): c6i.2xlarge: 0xd0003a5; m7i-flex.2xlarge: 0x2b000571; c7a.2xlarge: 0xa10113e; r7a.xlarge: 0xa10113e; m7i.2xlarge: 0x2b000571

Python Details: Python 3.11.6

Security Details:
  c6i.2xlarge: gather_data_sampling: Unknown: Dependent on hypervisor status + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT Host state unknown + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
  m7i-flex.2xlarge and m7i.2xlarge (identical): gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
  c7a.2xlarge and r7a.xlarge (identical): gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

[Result overview chart (Phoronix Test Suite): relative performance of the five instances, normalized to 100%, across OpenVINO, oneDNN, PyTorch, Numpy Benchmark, and PyBench; scale shown 100% to 308%]

Summary of results (values listed as c6i.2xlarge / m7i-flex.2xlarge / c7a.2xlarge / r7a.xlarge / m7i.2xlarge):

oneDNN (ms, fewer is better)
  Convolution Batch Shapes Auto - f32 - CPU:               5.93669 / 5.84261 / 7.35101 / 8.22691 / 5.88019
  Deconvolution Batch shapes_1d - f32 - CPU:               12.6618 / 8.10007 / 5.01437 / 9.85441 / 8.61718
  Deconvolution Batch shapes_3d - f32 - CPU:               8.09803 / 7.94878 / 5.10330 / 10.1679 / 8.34462
  Convolution Batch Shapes Auto - u8s8f32 - CPU:           6.24391 / 7.19462 / 7.74619 / 7.38732 / 6.61974
  Deconvolution Batch shapes_1d - u8s8f32 - CPU:           2.03836 / 1.016327 / 1.09429 / 2.18029 / 1.03011
  Deconvolution Batch shapes_3d - u8s8f32 - CPU:           1.77286 / 1.003269 / 1.26405 / 2.52982 / 1.06602
  Recurrent Neural Network Inference - f32 - CPU:          2496.84 / 2382.46 / 1482.35 / 2856.84 / 2320.94
  Convolution Batch Shapes Auto - bf16bf16bf16 - CPU:      33.1920 / 3.68993 / 3.28245 / 5.40784 / 3.37610
  Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU:      46.6986 / 1.76350 / 6.90200 / 13.7318 / 1.79360
  Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU:      34.7166 / 3.11075 / 3.12339 / 6.25907 / 3.20792
  Recurrent Neural Network Inference - u8s8f32 - CPU:      2492.29 / 2389.64 / 1480.93 / 2857.77 / 2310.90
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU: 2501.50 / 2318.31 / 1478.51 / 2862.86 / 2303.44
Numpy Benchmark (score, more is better):                   374.99 / 438.25 / 590.10 / 595.01 / 452.50
PyTorch (batches/sec, more is better)
  CPU - 1 - ResNet-50:                                     26.78 / 29.89 / 50.16 / 33.39 / 31.36
  CPU - 1 - ResNet-152:                                    10.57 / 12.10 / 20.00 / 13.19 / 12.48
  CPU - 16 - ResNet-50:                                    15.96 / 18.84 / 31.89 / 19.35 / 19.62
  CPU - 32 - ResNet-50:                                    15.81 / 18.34 / 31.29 / 19.32 / 19.43
  CPU - 16 - ResNet-152:                                   6.36 / 7.36 / 12.95 / 7.68 / 7.65
  CPU - 32 - ResNet-152:                                   6.38 / 7.55 / 13.03 / 7.68 / 7.66
  CPU - 1 - Efficientnet_v2_l:                             7.99 / 8.99 / 11.74 / 8.44 / 8.84
  CPU - 16 - Efficientnet_v2_l:                            4.06 / 5.38 / 8.75 / 5.66 / 5.26
  CPU - 32 - Efficientnet_v2_l:                            4.04 / 5.47 / 8.71 / 5.67 / 5.28
OpenVINO
  Face Detection FP16 - CPU (FPS, more is better):         1.77 / 8.43 / 5.16 / 2.62 / 7.80
  Face Detection FP16 - CPU (ms, fewer is better):         2252.04 / 474.46 / 774.23 / 764.30 / 511.54
  Face Detection FP16-INT8 - CPU (FPS):                    6.53 / 16.57 / 9.76 / 4.90 / 14.51
  Face Detection FP16-INT8 - CPU (ms):                     610.59 / 241.21 / 409.78 / 408.02 / 275.65
  Machine Translation EN To DE FP16 - CPU (FPS):           22.26 / 53.87 / 54.51 / 30.77 / 49.39
  Machine Translation EN To DE FP16 - CPU (ms):            179.53 / 74.20 / 73.34 / 64.98 / 80.94
PyBench, Total For Average Test Times (ms, fewer is better): 1000 / 736 / 887 / 887 / 815

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page. All oneDNN results below were built with: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
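For context on what a oneDNN-backed timing looks like from Python, here is a minimal sketch; it is illustrative only and is not the benchdnn harness this profile drives. It goes through PyTorch, whose CPU backend dispatches convolutions to oneDNN (exposed as torch.backends.mkldnn), and the layer shape, batch size, and iteration counts are arbitrary choices for the example.

```python
# Illustrative sketch only -- not the benchdnn harness used by this test profile.
# Times a CPU convolution through PyTorch, whose CPU backend dispatches to oneDNN.
import time
import torch

assert torch.backends.mkldnn.is_available(), "oneDNN (MKL-DNN) backend not available"

conv = torch.nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1)
x = torch.randn(16, 64, 56, 56)  # arbitrary NCHW input batch

with torch.no_grad():
    for _ in range(5):             # warm-up iterations
        conv(x)
    iters = 20
    start = time.perf_counter()
    for _ in range(iters):         # timed iterations
        conv(x)
    elapsed_ms = (time.perf_counter() - start) / iters * 1000

print(f"mean convolution time: {elapsed_ms:.2f} ms")
```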

oneDNN 3.3, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better):
  c6i.2xlarge        5.93669   (SE +/- 0.03862, N = 3; MIN: 5.66; runs min/avg/max: 5.89 / 5.94 / 6.01)
  m7i-flex.2xlarge   5.84261   (SE +/- 0.02435, N = 3; MIN: 5; runs min/avg/max: 5.81 / 5.84 / 5.89)
  c7a.2xlarge        7.35101   (SE +/- 0.01338, N = 3; MIN: 7.22; runs min/avg/max: 7.34 / 7.35 / 7.38)
  r7a.xlarge         8.22691   (SE +/- 0.00015, N = 3; MIN: 8.1; runs min/avg/max: 8.23 / 8.23 / 8.23)
  m7i.2xlarge        5.88019   (SE +/- 0.00970, N = 3; MIN: 5.62; runs min/avg/max: 5.86 / 5.88 / 5.89)

oneDNN 3.3, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  c6i.2xlarge        12.66180  (SE +/- 0.07954, N = 3; MIN: 10.8; runs min/avg/max: 12.54 / 12.66 / 12.81)
  m7i-flex.2xlarge   8.10007   (SE +/- 0.01302, N = 3; MIN: 6.7; runs min/avg/max: 8.07 / 8.1 / 8.12)
  c7a.2xlarge        5.01437   (SE +/- 0.00229, N = 3; MIN: 4.93; runs min/avg/max: 5.01 / 5.01 / 5.02)
  r7a.xlarge         9.85441   (SE +/- 0.00467, N = 3; MIN: 9.75; runs min/avg/max: 9.85 / 9.85 / 9.86)
  m7i.2xlarge        8.61718   (SE +/- 0.08548, N = 12; MIN: 7.87; runs min/avg/max: 8.44 / 8.62 / 9.54)

oneDNN 3.3, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  c6i.2xlarge        8.09803   (SE +/- 0.01712, N = 3; MIN: 8.02; runs min/avg/max: 8.07 / 8.1 / 8.13)
  m7i-flex.2xlarge   7.94878   (SE +/- 0.08711, N = 3; MIN: 6.79; runs min/avg/max: 7.79 / 7.95 / 8.09)
  c7a.2xlarge        5.10330   (SE +/- 0.00261, N = 3; MIN: 5.05; runs min/avg/max: 5.1 / 5.1 / 5.11)
  r7a.xlarge         10.16790  (SE +/- 0.01406, N = 3; MIN: 10.11; runs min/avg/max: 10.15 / 10.17 / 10.2)
  m7i.2xlarge        8.34462   (SE +/- 0.02786, N = 3; MIN: 8.05; runs min/avg/max: 8.31 / 8.34 / 8.4)

oneDNN 3.3, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  c6i.2xlarge        6.24391   (SE +/- 0.08834, N = 3; MIN: 5.78; runs min/avg/max: 6.15 / 6.24 / 6.42)
  m7i-flex.2xlarge   7.19462   (SE +/- 0.03815, N = 3; MIN: 6.66; runs min/avg/max: 7.12 / 7.19 / 7.25)
  c7a.2xlarge        7.74619   (SE +/- 0.01540, N = 3; MIN: 7.55; runs min/avg/max: 7.72 / 7.75 / 7.78)
  r7a.xlarge         7.38732   (SE +/- 0.01020, N = 3; MIN: 7.27; runs min/avg/max: 7.37 / 7.39 / 7.41)
  m7i.2xlarge        6.61974   (SE +/- 0.02083, N = 3; MIN: 6.24; runs min/avg/max: 6.58 / 6.62 / 6.65)

oneDNN 3.3, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  c6i.2xlarge        2.038360  (SE +/- 0.002826, N = 3; MIN: 1.97; runs min/avg/max: 2.04 / 2.04 / 2.04)
  m7i-flex.2xlarge   1.016327  (SE +/- 0.013619, N = 15; MIN: 0.79; runs min/avg/max: 0.97 / 1.02 / 1.16)
  c7a.2xlarge        1.094290  (SE +/- 0.002971, N = 3; MIN: 1.08; runs min/avg/max: 1.09 / 1.09 / 1.1)
  r7a.xlarge         2.180290  (SE +/- 0.000469, N = 3; MIN: 2.15; runs min/avg/max: 2.18 / 2.18 / 2.18)
  m7i.2xlarge        1.030110  (SE +/- 0.000901, N = 3; MIN: 0.94; runs min/avg/max: 1.03 / 1.03 / 1.03)

oneDNN 3.3, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  c6i.2xlarge        1.772860  (SE +/- 0.013170, N = 3; MIN: 1.73; runs min/avg/max: 1.75 / 1.77 / 1.79)
  m7i-flex.2xlarge   1.003269  (SE +/- 0.011545, N = 4; MIN: 0.84; runs min/avg/max: 0.98 / 1 / 1.03)
  c7a.2xlarge        1.264050  (SE +/- 0.004187, N = 3; MIN: 1.25; runs min/avg/max: 1.26 / 1.26 / 1.27)
  r7a.xlarge         2.529820  (SE +/- 0.006412, N = 3; MIN: 2.5; runs min/avg/max: 2.52 / 2.53 / 2.54)
  m7i.2xlarge        1.066020  (SE +/- 0.003802, N = 3; MIN: 1; runs min/avg/max: 1.06 / 1.07 / 1.07)

oneDNN 3.3, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better):
  c6i.2xlarge        2496.84   (SE +/- 7.01, N = 3; MIN: 2465.66; runs min/avg/max: 2486.35 / 2496.84 / 2510.14)
  m7i-flex.2xlarge   2382.46   (SE +/- 26.45, N = 4; MIN: 2219.69; runs min/avg/max: 2317.56 / 2382.46 / 2447.12)
  c7a.2xlarge        1482.35   (SE +/- 1.52, N = 3; MIN: 1476.16; runs min/avg/max: 1480.48 / 1482.35 / 1485.36)
  r7a.xlarge         2856.84   (SE +/- 2.56, N = 3; MIN: 2845.85; runs min/avg/max: 2852.25 / 2856.84 / 2861.11)
  m7i.2xlarge        2320.94   (SE +/- 7.66, N = 3; MIN: 2290.27; runs min/avg/max: 2310.19 / 2320.94 / 2335.77)

oneDNN 3.3, Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  c6i.2xlarge        33.19200  (SE +/- 0.00337, N = 3; MIN: 33.13; runs min/avg/max: 33.19 / 33.19 / 33.2)
  m7i-flex.2xlarge   3.68993   (SE +/- 0.03984, N = 3; runs min/avg/max: 3.61 / 3.69 / 3.75)
  c7a.2xlarge        3.28245   (SE +/- 0.00275, N = 3; runs min/avg/max: 3.28 / 3.28 / 3.29)
  r7a.xlarge         5.40784   (SE +/- 0.02244, N = 3; MIN: 5.32; runs min/avg/max: 5.38 / 5.41 / 5.45)
  m7i.2xlarge        3.37610   (SE +/- 0.00955, N = 3; runs min/avg/max: 3.36 / 3.38 / 3.39)

oneDNN 3.3, Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  c6i.2xlarge        46.69860  (SE +/- 0.01032, N = 3; MIN: 46.42; runs min/avg/max: 46.68 / 46.7 / 46.71)
  m7i-flex.2xlarge   1.76350   (SE +/- 0.00827, N = 3; runs min/avg/max: 1.75 / 1.76 / 1.78)
  c7a.2xlarge        6.90200   (SE +/- 0.00810, N = 3; runs min/avg/max: 6.89 / 6.9 / 6.92)
  r7a.xlarge         13.73180  (SE +/- 0.02961, N = 3; MIN: 13.51; runs min/avg/max: 13.7 / 13.73 / 13.79)
  m7i.2xlarge        1.79360   (SE +/- 0.00405, N = 3; runs min/avg/max: 1.79 / 1.79 / 1.8)

oneDNN 3.3, Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  c6i.2xlarge        34.71660  (SE +/- 0.00790, N = 3; MIN: 34.6; runs min/avg/max: 34.71 / 34.72 / 34.73)
  m7i-flex.2xlarge   3.11075   (SE +/- 0.01261, N = 3; runs min/avg/max: 3.09 / 3.11 / 3.13)
  c7a.2xlarge        3.12339   (SE +/- 0.00414, N = 3; runs min/avg/max: 3.12 / 3.12 / 3.13)
  r7a.xlarge         6.25907   (SE +/- 0.01228, N = 3; MIN: 6.18; runs min/avg/max: 6.24 / 6.26 / 6.28)
  m7i.2xlarge        3.20792   (SE +/- 0.00738, N = 3; runs min/avg/max: 3.19 / 3.21 / 3.22)

oneDNN 3.3, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  c6i.2xlarge        2492.29   (SE +/- 2.42, N = 3; MIN: 2460.06; runs min/avg/max: 2489.1 / 2492.29 / 2497.05)
  m7i-flex.2xlarge   2389.64   (SE +/- 24.80, N = 3; MIN: 2260.99; runs min/avg/max: 2353.7 / 2389.64 / 2437.21)
  c7a.2xlarge        1480.93   (SE +/- 2.09, N = 3; MIN: 1474.24; runs min/avg/max: 1477.31 / 1480.93 / 1484.54)
  r7a.xlarge         2857.77   (SE +/- 1.65, N = 3; MIN: 2848.47; runs min/avg/max: 2854.58 / 2857.77 / 2860.06)
  m7i.2xlarge        2310.90   (SE +/- 2.59, N = 3; MIN: 2291.18; runs min/avg/max: 2307 / 2310.9 / 2315.81)

oneDNN 3.3, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  c6i.2xlarge        2501.50   (SE +/- 2.60, N = 3; MIN: 2476.86; runs min/avg/max: 2497.61 / 2501.5 / 2506.43)
  m7i-flex.2xlarge   2318.31   (SE +/- 15.72, N = 3; MIN: 2205.71; runs min/avg/max: 2287.05 / 2318.31 / 2336.83)
  c7a.2xlarge        1478.51   (SE +/- 1.00, N = 3; MIN: 1472.49; runs min/avg/max: 1476.64 / 1478.51 / 1480.07)
  r7a.xlarge         2862.86   (SE +/- 3.75, N = 3; MIN: 2849.87; runs min/avg/max: 2856.66 / 2862.86 / 2869.62)
  m7i.2xlarge        2303.44   (SE +/- 1.63, N = 3; MIN: 2276.46; runs min/avg/max: 2301.7 / 2303.44 / 2306.7)

Numpy Benchmark

This is a test to measure general NumPy performance. Learn more via the OpenBenchmarking.org test page.
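As a rough illustration of the kind of work behind such a score, the sketch below times a few representative NumPy kernels; it is not the actual Numpy Benchmark test profile, and the operation mix and array sizes are arbitrary choices for the example.

```python
# Illustrative sketch, not the Numpy Benchmark test profile itself.
import time
import numpy as np

def time_op(fn, repeats=10):
    """Return the mean wall-clock time of fn() in milliseconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats * 1000

rng = np.random.default_rng(0)
a = rng.random((1000, 1000))
b = rng.random((1000, 1000))

ops = {
    "matmul": lambda: a @ b,
    "svd":    lambda: np.linalg.svd(a, compute_uv=False),
    "fft":    lambda: np.fft.fft2(a),
    "sort":   lambda: np.sort(a, axis=None),
}
for name, fn in ops.items():
    print(f"{name:>8}: {time_op(fn):8.2f} ms")
```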

Numpy Benchmark (score, more is better):
  c6i.2xlarge        374.99    (SE +/- 1.37, N = 3; runs min/avg/max: 372.52 / 374.99 / 377.24)
  m7i-flex.2xlarge   438.25    (SE +/- 3.53, N = 3; runs min/avg/max: 432.57 / 438.25 / 444.72)
  c7a.2xlarge        590.10    (SE +/- 1.02, N = 3; runs min/avg/max: 588.54 / 590.1 / 592.02)
  r7a.xlarge         595.01    (SE +/- 0.90, N = 3; runs min/avg/max: 593.67 / 595.01 / 596.71)
  m7i.2xlarge        452.50    (SE +/- 0.98, N = 3; runs min/avg/max: 450.55 / 452.5 / 453.7)

PyTorch

This is a benchmark of PyTorch that makes use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Currently this test profile is geared toward CPU-based testing. Learn more via the OpenBenchmarking.org test page.
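The batches/sec metric can be approximated by hand with a simple timing loop. The sketch below is not the pytorch-benchmark harness itself; it assumes a recent torchvision for the model definition, uses random weights since only speed is of interest, and picks arbitrary warm-up and iteration counts.

```python
# Illustrative throughput sketch, not the pytorch-benchmark harness.
import time
import torch
import torchvision.models as models

model = models.resnet50(weights=None).eval()  # random weights; speed-only test
batch = torch.randn(16, 3, 224, 224)          # batch size 16, ImageNet-sized input

with torch.inference_mode():
    for _ in range(3):                        # warm-up passes
        model(batch)
    iters = 10
    start = time.perf_counter()
    for _ in range(iters):                    # timed passes
        model(batch)
    elapsed = time.perf_counter() - start

print(f"{iters / elapsed:.2f} batches/sec at batch size {batch.shape[0]}")
```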

PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, more is better):
  c6i.2xlarge        26.78   (SE +/- 0.13, N = 3; MIN: 13.67 / MAX: 27.8; runs min/avg/max: 26.52 / 26.78 / 26.94)
  m7i-flex.2xlarge   29.89   (SE +/- 0.05, N = 3; MIN: 7.96 / MAX: 34.56; runs min/avg/max: 29.8 / 29.89 / 29.97)
  c7a.2xlarge        50.16   (SE +/- 0.32, N = 3; MIN: 33.27 / MAX: 51.43; runs min/avg/max: 49.7 / 50.16 / 50.79)
  r7a.xlarge         33.39   (SE +/- 0.04, N = 3; MIN: 24.56 / MAX: 33.78; runs min/avg/max: 33.31 / 33.39 / 33.44)
  m7i.2xlarge        31.36   (SE +/- 0.15, N = 3; MIN: 26.54 / MAX: 32.62; runs min/avg/max: 31.16 / 31.36 / 31.66)

PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, more is better):
  c6i.2xlarge        10.57   (SE +/- 0.01, N = 3; MIN: 9.04 / MAX: 10.77; runs min/avg/max: 10.56 / 10.57 / 10.58)
  m7i-flex.2xlarge   12.10   (SE +/- 0.10, N = 3; MIN: 2.89 / MAX: 13.92; runs min/avg/max: 11.93 / 12.1 / 12.28)
  c7a.2xlarge        20.00   (SE +/- 0.05, N = 3; MIN: 15.6 / MAX: 20.36; runs min/avg/max: 19.95 / 20 / 20.09)
  r7a.xlarge         13.19   (SE +/- 0.02, N = 3; MIN: 10.96 / MAX: 13.33; runs min/avg/max: 13.16 / 13.19 / 13.23)
  m7i.2xlarge        12.48   (SE +/- 0.09, N = 3; MIN: 8.7 / MAX: 12.92; runs min/avg/max: 12.3 / 12.48 / 12.6)

PyTorch 2.1, Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec, more is better):
  c6i.2xlarge        15.96   (SE +/- 0.12, N = 3; MIN: 11.65 / MAX: 17.13; runs min/avg/max: 15.76 / 15.96 / 16.16)
  m7i-flex.2xlarge   18.84   (SE +/- 0.17, N = 3; MIN: 4.37 / MAX: 21.77; runs min/avg/max: 18.51 / 18.84 / 19.09)
  c7a.2xlarge        31.89   (SE +/- 0.24, N = 3; MIN: 23.54 / MAX: 32.58; runs min/avg/max: 31.47 / 31.89 / 32.31)
  r7a.xlarge         19.35   (SE +/- 0.18, N = 3; MIN: 15.18 / MAX: 19.82; runs min/avg/max: 19 / 19.35 / 19.63)
  m7i.2xlarge        19.62   (SE +/- 0.08, N = 3; MIN: 15.92 / MAX: 20.23; runs min/avg/max: 19.5 / 19.62 / 19.77)

PyTorch 2.1, Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec, more is better):
  c6i.2xlarge        15.81   (SE +/- 0.17, N = 4; MIN: 9.11 / MAX: 17.08; runs min/avg/max: 15.33 / 15.81 / 16.15)
  m7i-flex.2xlarge   18.34   (SE +/- 0.29, N = 15; MIN: 4.11 / MAX: 22.15; runs min/avg/max: 15.59 / 18.34 / 19.52)
  c7a.2xlarge        31.29   (SE +/- 0.42, N = 15; MIN: 20.73 / MAX: 33.27; runs min/avg/max: 28.22 / 31.29 / 32.77)
  r7a.xlarge         19.32   (SE +/- 0.15, N = 10; MIN: 13.74 / MAX: 19.98; runs min/avg/max: 18.18 / 19.32 / 19.82)
  m7i.2xlarge        19.43   (SE +/- 0.21, N = 5; MIN: 13.34 / MAX: 20.17; runs min/avg/max: 18.6 / 19.43 / 19.73)

PyTorch 2.1, Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec, more is better):
  c6i.2xlarge        6.36    (SE +/- 0.05, N = 3; MIN: 5.43 / MAX: 6.61; runs min/avg/max: 6.26 / 6.36 / 6.43)
  m7i-flex.2xlarge   7.36    (SE +/- 0.08, N = 5; MIN: 2.24 / MAX: 8.66; runs min/avg/max: 7.06 / 7.36 / 7.48)
  c7a.2xlarge        12.95   (SE +/- 0.09, N = 3; MIN: 4.2 / MAX: 13.21; runs min/avg/max: 12.81 / 12.95 / 13.1)
  r7a.xlarge         7.68    (SE +/- 0.02, N = 3; MIN: 6.45 / MAX: 7.76; runs min/avg/max: 7.64 / 7.68 / 7.7)
  m7i.2xlarge        7.65    (SE +/- 0.02, N = 3; MIN: 6.27 / MAX: 7.82; runs min/avg/max: 7.62 / 7.65 / 7.69)

PyTorch 2.1, Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec, more is better):
  c6i.2xlarge        6.38    (SE +/- 0.01, N = 3; MIN: 3.7 / MAX: 6.57; runs min/avg/max: 6.37 / 6.38 / 6.4)
  m7i-flex.2xlarge   7.55    (SE +/- 0.02, N = 3; MIN: 2.93 / MAX: 8.62; runs min/avg/max: 7.52 / 7.55 / 7.59)
  c7a.2xlarge        13.03   (SE +/- 0.05, N = 3; MIN: 10.5 / MAX: 13.2; runs min/avg/max: 12.93 / 13.03 / 13.11)
  r7a.xlarge         7.68    (SE +/- 0.01, N = 3; MIN: 6.31 / MAX: 7.76; runs min/avg/max: 7.66 / 7.68 / 7.7)
  m7i.2xlarge        7.66    (SE +/- 0.01, N = 3; MIN: 4.36 / MAX: 7.82; runs min/avg/max: 7.64 / 7.66 / 7.68)

PyTorch 2.1, Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec, more is better):
  c6i.2xlarge        7.99    (SE +/- 0.02, N = 3; MIN: 6.59 / MAX: 8.31; runs min/avg/max: 7.95 / 7.99 / 8.03)
  m7i-flex.2xlarge   8.99    (SE +/- 0.09, N = 12; MIN: 3.08 / MAX: 10.39; runs min/avg/max: 8.46 / 8.99 / 9.37)
  c7a.2xlarge        11.74   (SE +/- 0.02, N = 3; MIN: 8.95 / MAX: 11.9; runs min/avg/max: 11.7 / 11.74 / 11.77)
  r7a.xlarge         8.44    (SE +/- 0.01, N = 3; MIN: 2.94 / MAX: 8.53; runs min/avg/max: 8.42 / 8.44 / 8.46)
  m7i.2xlarge        8.84    (SE +/- 0.04, N = 3; MIN: 7.72 / MAX: 9.09; runs min/avg/max: 8.78 / 8.84 / 8.93)

PyTorch 2.1, Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec, more is better):
  c6i.2xlarge        4.06    (SE +/- 0.03, N = 3; MIN: 3.29 / MAX: 4.32; runs min/avg/max: 4 / 4.06 / 4.1)
  m7i-flex.2xlarge   5.38    (SE +/- 0.01, N = 3; MIN: 2.12 / MAX: 6.08; runs min/avg/max: 5.36 / 5.38 / 5.39)
  c7a.2xlarge        8.75    (SE +/- 0.01, N = 3; MIN: 5.72 / MAX: 8.84; runs min/avg/max: 8.73 / 8.75 / 8.77)
  r7a.xlarge         5.66    (SE +/- 0.01, N = 3; MIN: 4.27 / MAX: 5.72; runs min/avg/max: 5.64 / 5.66 / 5.68)
  m7i.2xlarge        5.26    (SE +/- 0.01, N = 3; MIN: 3.36 / MAX: 5.38; runs min/avg/max: 5.24 / 5.26 / 5.28)

PyTorch 2.1, Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec, more is better):
  c6i.2xlarge        4.04    (SE +/- 0.02, N = 3; MIN: 3.5 / MAX: 4.36; runs min/avg/max: 4.02 / 4.04 / 4.07)
  m7i-flex.2xlarge   5.47    (SE +/- 0.01, N = 3; MIN: 2.31 / MAX: 6.19; runs min/avg/max: 5.45 / 5.47 / 5.49)
  c7a.2xlarge        8.71    (SE +/- 0.02, N = 3; MIN: 5.53 / MAX: 8.82; runs min/avg/max: 8.67 / 8.71 / 8.74)
  r7a.xlarge         5.67    (SE +/- 0.01, N = 3; MIN: 4.37 / MAX: 5.73; runs min/avg/max: 5.65 / 5.67 / 5.68)
  m7i.2xlarge        5.28    (SE +/- 0.02, N = 3; MIN: 4.01 / MAX: 5.41; runs min/avg/max: 5.24 / 5.28 / 5.3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page. All OpenVINO results below were built with: (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
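As a sketch of what the FPS and ms (latency) figures measure, the snippet below runs synchronous inference through the OpenVINO Python API. The test profile itself drives OpenVINO's bundled benchmarking tooling rather than code like this; model.xml is a hypothetical path to an IR model with a static input shape, and the iteration counts are arbitrary.

```python
# Illustrative synchronous-inference timing sketch, not OpenVINO's own benchmark tool.
import time
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")          # hypothetical IR model path
compiled = core.compile_model(model, "CPU")
request = compiled.create_infer_request()

inp = compiled.input(0)                       # assumes a static input shape
data = np.random.rand(*inp.shape).astype(np.float32)

for _ in range(5):                            # warm-up inferences
    request.infer({inp: data})
iters = 50
start = time.perf_counter()
for _ in range(iters):                        # timed inferences
    request.infer({inp: data})
elapsed = time.perf_counter() - start

print(f"{iters / elapsed:.2f} FPS, {elapsed / iters * 1000:.2f} ms mean latency")
```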

OpenVINO 2023.2.dev, Model: Face Detection FP16 - Device: CPU (FPS, more is better):
  c6i.2xlarge        1.77    (SE +/- 0.01, N = 3; runs min/avg/max: 1.76 / 1.77 / 1.78)
  m7i-flex.2xlarge   8.43    (SE +/- 0.10, N = 3; runs min/avg/max: 8.23 / 8.43 / 8.58)
  c7a.2xlarge        5.16    (SE +/- 0.00, N = 3; runs min/avg/max: 5.15 / 5.16 / 5.16)
  r7a.xlarge         2.62    (SE +/- 0.00, N = 3; runs min/avg/max: 2.62 / 2.62 / 2.62)
  m7i.2xlarge        7.80    (SE +/- 0.04, N = 3; runs min/avg/max: 7.75 / 7.8 / 7.87)

OpenVINO 2023.2.dev, Model: Face Detection FP16 - Device: CPU (ms, fewer is better):
  c6i.2xlarge        2252.04  (SE +/- 3.74, N = 3; MIN: 2202.18 / MAX: 2317.24; runs min/avg/max: 2244.57 / 2252.04 / 2256.2)
  m7i-flex.2xlarge   474.46   (SE +/- 5.97, N = 3; MIN: 427.39 / MAX: 558.54; runs min/avg/max: 465.87 / 474.46 / 485.94)
  c7a.2xlarge        774.23   (SE +/- 0.28, N = 3; MIN: 767.78 / MAX: 795.49; runs min/avg/max: 773.88 / 774.23 / 774.79)
  r7a.xlarge         764.30   (SE +/- 0.15, N = 3; MIN: 761.82 / MAX: 783.23; runs min/avg/max: 764.08 / 764.3 / 764.59)
  m7i.2xlarge        511.54   (SE +/- 2.36, N = 3; MIN: 337.11 / MAX: 534.54; runs min/avg/max: 507.13 / 511.54 / 515.22)

OpenVINO 2023.2.dev, Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better):
  c6i.2xlarge        6.53    (SE +/- 0.04, N = 3; runs min/avg/max: 6.47 / 6.53 / 6.6)
  m7i-flex.2xlarge   16.57   (SE +/- 0.16, N = 3; runs min/avg/max: 16.25 / 16.57 / 16.78)
  c7a.2xlarge        9.76    (SE +/- 0.00, N = 3; runs min/avg/max: 9.76 / 9.76 / 9.76)
  r7a.xlarge         4.90    (SE +/- 0.00, N = 3; runs min/avg/max: 4.9 / 4.9 / 4.9)
  m7i.2xlarge        14.51   (SE +/- 0.02, N = 3; runs min/avg/max: 14.49 / 14.51 / 14.54)

OpenVINO 2023.2.dev, Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better):
  c6i.2xlarge        610.59   (SE +/- 3.28, N = 3; MIN: 497.95 / MAX: 643.84; runs min/avg/max: 604.8 / 610.59 / 616.17)
  m7i-flex.2xlarge   241.21   (SE +/- 2.35, N = 3; MIN: 91.81 / MAX: 374.94; runs min/avg/max: 238.26 / 241.21 / 245.85)
  c7a.2xlarge        409.78   (SE +/- 0.07, N = 3; MIN: 407.35 / MAX: 424.19; runs min/avg/max: 409.68 / 409.78 / 409.91)
  r7a.xlarge         408.02   (SE +/- 0.14, N = 3; MIN: 406.22 / MAX: 426.04; runs min/avg/max: 407.8 / 408.02 / 408.28)
  m7i.2xlarge        275.65   (SE +/- 0.26, N = 3; MIN: 267.85 / MAX: 299.94; runs min/avg/max: 275.15 / 275.65 / 276.04)

OpenVINO 2023.2.dev, Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better):
  c6i.2xlarge        22.26   (SE +/- 0.06, N = 3; runs min/avg/max: 22.16 / 22.26 / 22.37)
  m7i-flex.2xlarge   53.87   (SE +/- 0.19, N = 3; runs min/avg/max: 53.5 / 53.87 / 54.11)
  c7a.2xlarge        54.51   (SE +/- 0.02, N = 3; runs min/avg/max: 54.49 / 54.51 / 54.54)
  r7a.xlarge         30.77   (SE +/- 0.18, N = 3; runs min/avg/max: 30.4 / 30.77 / 30.99)
  m7i.2xlarge        49.39   (SE +/- 0.02, N = 3; runs min/avg/max: 49.36 / 49.39 / 49.43)

OpenVINO 2023.2.dev, Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better):
  c6i.2xlarge        179.53   (SE +/- 0.44, N = 3; MIN: 100.26 / MAX: 344; runs min/avg/max: 178.75 / 179.53 / 180.27)
  m7i-flex.2xlarge   74.20    (SE +/- 0.25, N = 3; MIN: 34.84 / MAX: 96.18; runs min/avg/max: 73.88 / 74.2 / 74.69)
  c7a.2xlarge        73.34    (SE +/- 0.02, N = 3; MIN: 65.17 / MAX: 80.62; runs min/avg/max: 73.29 / 73.34 / 73.36)
  r7a.xlarge         64.98    (SE +/- 0.40, N = 3; MIN: 61.57 / MAX: 90.41; runs min/avg/max: 64.51 / 64.98 / 65.77)
  m7i.2xlarge        80.94    (SE +/- 0.04, N = 3; MIN: 69.04 / MAX: 153.39; runs min/avg/max: 80.85 / 80.94 / 81)

PyBench

This test profile reports the total of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench for 20 rounds each time. Learn more via the OpenBenchmarking.org test page.
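A toy version of that scheme, not PyBench itself, is shown below: each micro-test is timed over several rounds, a per-test average is taken, and the averages are summed into one total. The test bodies and round count here are arbitrary stand-ins.

```python
# Toy illustration of PyBench's average-then-total approach; not PyBench itself.
import time

def builtin_function_calls():
    # Stand-in for PyBench's BuiltinFunctionCalls-style micro-test.
    for _ in range(100_000):
        len("x"); abs(-1); min(1, 2)

def nested_for_loops():
    # Stand-in for PyBench's NestedForLoops-style micro-test.
    for i in range(300):
        for j in range(300):
            pass

def average_ms(test, rounds=20):
    """Run a micro-test for several rounds and return its average time in ms."""
    times = []
    for _ in range(rounds):
        start = time.perf_counter()
        test()
        times.append(time.perf_counter() - start)
    return sum(times) / rounds * 1000

total = sum(average_ms(t) for t in (builtin_function_calls, nested_for_loops))
print(f"Total for average test times: {total:.0f} ms")
```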

PyBench 2018-02-16, Total For Average Test Times (milliseconds, fewer is better):
  c6i.2xlarge        1000   (SE +/- 0.33, N = 3; runs min/avg/max: 1000 / 1000.33 / 1001)
  m7i-flex.2xlarge   736    (SE +/- 3.18, N = 3; runs min/avg/max: 732 / 735.67 / 742)
  c7a.2xlarge        887    (SE +/- 0.67, N = 3; runs min/avg/max: 886 / 886.67 / 888)
  r7a.xlarge         887    (SE +/- 1.33, N = 3; runs min/avg/max: 886 / 887.33 / 890)
  m7i.2xlarge        815    (SE +/- 2.33, N = 3; runs min/avg/max: 811 / 814.67 / 819)