Amazon EC2 Graviton3 Benchmark Comparison

Amazon AWS Graviton3 benchmarks by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2205260-PTS-GRAVITON42
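
As a minimal sketch of reproducing this comparison locally (assuming the Phoronix Test Suite is already installed from your distribution or phoronix-test-suite.com; the result ID is the one quoted above):

  # Fetch this public result file and run the same test selection,
  # merging your own system into the comparison
  phoronix-test-suite benchmark 2205260-PTS-GRAVITON42
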
Test categories represented in this comparison:

BLAS (Basic Linear Algebra Sub-Routine) Tests 2 Tests
C++ Boost Tests 2 Tests
Chess Test Suite 6 Tests
Timed Code Compilation 7 Tests
C/C++ Compiler Tests 15 Tests
Compression Tests 2 Tests
CPU Massive 22 Tests
Creator Workloads 7 Tests
Cryptography 2 Tests
Fortran Tests 4 Tests
Go Language Tests 2 Tests
HPC - High Performance Computing 14 Tests
Imaging 2 Tests
Common Kernel Benchmarks 3 Tests
Linear Algebra 2 Tests
Machine Learning 3 Tests
Molecular Dynamics 4 Tests
MPI Benchmarks 7 Tests
Multi-Core 23 Tests
NVIDIA GPU Compute 3 Tests
OpenMPI Tests 10 Tests
Programmer / Developer System Benchmarks 12 Tests
Python Tests 6 Tests
Raytracing 2 Tests
Renderers 2 Tests
Scientific Computing 8 Tests
Server 5 Tests
Server CPU Tests 16 Tests
Single-Threaded 3 Tests

Test Runs

Result Identifier       Date         Test Duration
a1.4xlarge Graviton     May 25 2022  17 Hours, 42 Minutes
c6g.4xlarge Graviton2   May 25 2022  9 Hours, 58 Minutes
c7g.4xlarge Graviton3   May 24 2022  7 Hours, 43 Minutes
c6a.4xlarge EPYC        May 26 2022  11 Hours, 43 Minutes
c6i.4xlarge Xeon        May 26 2022  9 Hours, 40 Minutes

Amazon EC2 Graviton3 Benchmark Comparison - System Details

a1.4xlarge Graviton: ARMv8 Cortex-A72 (16 Cores), Amazon EC2 a1.4xlarge (1.0 BIOS), Amazon Device 0200 chipset
c6g.4xlarge Graviton2: ARMv8 Neoverse-N1 (16 Cores), Amazon EC2 c6g.4xlarge (1.0 BIOS)
c7g.4xlarge Graviton3: ARMv8 Neoverse-V1 (16 Cores), Amazon EC2 c7g.4xlarge (1.0 BIOS)
c6a.4xlarge EPYC: AMD EPYC 7R13 (8 Cores / 16 Threads), Amazon EC2 c6a.4xlarge (1.0 BIOS), Intel 440FX 82441FX PMC chipset
c6i.4xlarge Xeon: Intel Xeon Platinum 8375C (8 Cores / 16 Threads), Amazon EC2 c6i.4xlarge (1.0 BIOS)

Common configuration: 32GB memory, 193GB Amazon Elastic Block Store, Amazon Elastic network, Ubuntu 22.04, Linux kernel 5.15.0-1004-aws (aarch64 on the Graviton instances, x86_64 on the EPYC and Xeon instances), GCC 11.2.0, ext4 file-system, amazon system layer.

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: The three Graviton (aarch64) instances used GCC configured with: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v. The EPYC and Xeon (x86_64) instances used GCC configured with: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Java Details: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)

Python Details: Python 3.10.4

Security Details:
- a1.4xlarge Graviton: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Not affected + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Branch predictor hardening BHB + srbds: Not affected + tsx_async_abort: Not affected
- c6g.4xlarge Graviton2 and c7g.4xlarge Graviton3: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
- c6a.4xlarge EPYC: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- c6i.4xlarge Xeon: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Processor Details: c6a.4xlarge EPYC: CPU Microcode: 0xa001144; c6i.4xlarge Xeon: CPU Microcode: 0xd000331

Logarithmic Result Overview (chart): relative results for the five instances across every benchmark in this comparison.

Condensed results table: all benchmark results for the five instances in a single view; the individual per-test results are presented below.

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.
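
As a rough illustration of what this profile is timing, a manual equivalent looks roughly like the following (a hedged sketch only; the exact CMake options the test profile uses are not listed in this result file, and the build directory name is hypothetical):

  # From an llvm-project checkout: configure with the Ninja generator, then time the build
  cmake -S llvm -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
  time ninja -C build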

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 1784.60 (SE +/- 0.34, N = 3; Min 1784.16 / Max 1785.27)
c6a.4xlarge EPYC: 760.34 (SE +/- 0.05, N = 3; Min 760.25 / Max 760.43)
c6g.4xlarge Graviton2: 682.98 (SE +/- 0.49, N = 3; Min 682.05 / Max 683.7)
c6i.4xlarge Xeon: 685.70 (SE +/- 0.12, N = 3; Min 685.49 / Max 685.91)
c7g.4xlarge Graviton3: 544.93 (SE +/- 5.19, N = 3; Min 535.72 / Max 553.68)

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 17.3 - Time To Compile (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 1765.91 (SE +/- 1.80, N = 3; Min 1762.78 / Max 1769.01)
c6a.4xlarge EPYC: 664.35 (SE +/- 0.26, N = 3; Min 664.08 / Max 664.86)
c6g.4xlarge Graviton2: 628.40 (SE +/- 0.37, N = 3; Min 627.82 / Max 629.09)
c6i.4xlarge Xeon: 604.62 (SE +/- 0.42, N = 3; Min 603.88 / Max 605.34)
c7g.4xlarge Graviton3: 497.58 (SE +/- 2.06, N = 3; Min 493.85 / Max 500.97)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, More Is Better)
a1.4xlarge Graviton: 135 (SE +/- 0.88, N = 3; Min 134 / Max 137)
c6a.4xlarge EPYC: 1091 (SE +/- 12.82, N = 9; Min 1024 / Max 1144)
c6g.4xlarge Graviton2: 864 (SE +/- 10.22, N = 4; Min 840 / Max 890)
c6i.4xlarge Xeon: 1397 (SE +/- 12.41, N = 9; Min 1345 / Max 1452)
c7g.4xlarge Graviton3: 1103 (SE +/- 6.44, N = 3; Min 1090 / Max 1111)
1. (CXX) g++ options: -flto -pthread

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research. Gem5 is widely used for computer architecture research within the industry, academia, and more. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2 - Time To Compile (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 1155.62 (SE +/- 0.78, N = 3; Min 1154.47 / Max 1157.1)
c6a.4xlarge EPYC: 515.20 (SE +/- 0.79, N = 3; Min 514.12 / Max 516.74)
c6g.4xlarge Graviton2: 488.81 (SE +/- 0.53, N = 3; Min 487.79 / Max 489.55)
c6i.4xlarge Xeon: 469.94 (SE +/- 0.59, N = 3; Min 469.21 / Max 471.11)
c7g.4xlarge Graviton3: 391.17 (SE +/- 1.33, N = 3; Min 389.16 / Max 393.69)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, More Is Better)
a1.4xlarge Graviton: 128 (SE +/- 0.67, N = 3; Min 127 / Max 129)
c6a.4xlarge EPYC: 1001 (SE +/- 11.74, N = 9; Min 943 / Max 1052)
c6g.4xlarge Graviton2: 834 (SE +/- 12.00, N = 3; Min 819 / Max 858)
c6i.4xlarge Xeon: 1466 (SE +/- 13.37, N = 3; Min 1447 / Max 1492)
c7g.4xlarge Graviton3: 1189 (SE +/- 9.70, N = 3; Min 1171 / Max 1204)
1. (CXX) g++ options: -flto -pthread

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s, More Is Better)
a1.4xlarge Graviton: 1293.80 (SE +/- 2.51, N = 3; Min 1288.84 / Max 1296.88)
c6a.4xlarge EPYC: 8094.79 (SE +/- 24.63, N = 3; Min 8057.91 / Max 8141.52)
c6g.4xlarge Graviton2: 2356.16 (SE +/- 0.57, N = 3; Min 2355.3 / Max 2357.24)
c6i.4xlarge Xeon: 9563.22 (SE +/- 73.65, N = 3; Min 9433.84 / Max 9688.88)
c7g.4xlarge Graviton3: 4467.19 (SE +/- 9.61, N = 3; Min 4449.83 / Max 4483.01)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, More Is Better)
a1.4xlarge Graviton: 3148.18 (SE +/- 3.44, N = 3; Min 3141.34 / Max 3152.19)
c6a.4xlarge EPYC: 13134.46 (SE +/- 98.45, N = 3; Min 12949.99 / Max 13286.34)
c6g.4xlarge Graviton2: 6449.11 (SE +/- 3.20, N = 3; Min 6444.55 / Max 6455.29)
c6i.4xlarge Xeon: 13888.40 (SE +/- 22.04, N = 3; Min 13862.56 / Max 13932.24)
c7g.4xlarge Graviton3: 10339.53 (SE +/- 7.36, N = 3; Min 10325.26 / Max 10349.81)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

SecureMark

SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks, More Is Better)
a1.4xlarge Graviton: 74356 (SE +/- 59.40, N = 3; Min 74239.26 / Max 74432.21)
c6a.4xlarge EPYC: 213288 (SE +/- 3310.19, N = 9; Min 198187.25 / Max 230349.17)
c6g.4xlarge Graviton2: 120301 (SE +/- 23.07, N = 3; Min 120260.21 / Max 120340.04)
c6i.4xlarge Xeon: 230549 (SE +/- 864.34, N = 3; Min 229225.86 / Max 232173.88)
c7g.4xlarge Graviton3: 183708 (SE +/- 773.26, N = 3; Min 182165.75 / Max 184575.7)
1. (CC) gcc options: -pedantic -O3

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 0 (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 768.30 (SE +/- 0.58, N = 3; Min 767.54 / Max 769.45)
c6a.4xlarge EPYC: 195.53 (SE +/- 0.62, N = 3; Min 194.74 / Max 196.75)
c6g.4xlarge Graviton2: 406.94 (SE +/- 0.13, N = 3; Min 406.75 / Max 407.19)
c6i.4xlarge Xeon: 204.99 (SE +/- 0.33, N = 3; Min 204.36 / Max 205.49)
c7g.4xlarge Graviton3: 256.84 (SE +/- 0.18, N = 3; Min 256.51 / Max 257.1)
1. (CXX) g++ options: -O3 -fPIC -lm

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
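
As an illustration only, running one of the ISCAS 85 netlists through ngspice in batch mode would look something like the sketch below (the netlist filename is hypothetical; -b selects batch mode and -o writes the output log, both standard ngspice options):

  # Batch-mode simulation of the C7552 benchmark circuit (filename assumed)
  ngspice -b c7552.cir -o c7552.log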

Ngspice 34 - Circuit: C7552 (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 480.79 (SE +/- 1.19, N = 3; Min 478.62 / Max 482.72)
c6a.4xlarge EPYC: 180.36 (SE +/- 0.66, N = 3; Min 179.41 / Max 181.63)
c6g.4xlarge Graviton2: 255.21 (SE +/- 2.40, N = 7; Min 242.65 / Max 261.23)
c6i.4xlarge Xeon: 161.08 (SE +/- 0.33, N = 3; Min 160.42 / Max 161.44)
c7g.4xlarge Graviton3: 191.29 (SE +/- 1.94, N = 3; Min 188.31 / Max 194.94)
1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1 - Input: Carbon Nanotube (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 769.35 (SE +/- 5.37, N = 3; Min 763.32 / Max 780.07)
c6a.4xlarge EPYC: 302.96 (SE +/- 0.17, N = 3; Min 302.63 / Max 303.2)
c6g.4xlarge Graviton2: 215.53 (SE +/- 0.13, N = 3; Min 215.37 / Max 215.79)
c6i.4xlarge Xeon: 202.11 (SE +/- 0.24, N = 3; Min 201.8 / Max 202.58)
c7g.4xlarge Graviton3: 155.18 (SE +/- 0.08, N = 3; Min 155.01 / Max 155.29)
1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better)
a1.4xlarge Graviton: 2558.12 (SE +/- 0.15, N = 3; Min 2557.84 / Max 2558.37)
c6a.4xlarge EPYC: 25140.55 (SE +/- 18.06, N = 3; Min 25107.3 / Max 25169.4)
c6g.4xlarge Graviton2: 5133.89 (SE +/- 0.90, N = 3; Min 5132.39 / Max 5135.49)
c6i.4xlarge Xeon: 38136.77 (SE +/- 160.86, N = 3; Min 37926.17 / Max 38452.69)
c7g.4xlarge Graviton3: 7730.41 (SE +/- 1.96, N = 3; Min 7728.06 / Max 7734.31)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 644.79 (SE +/- 0.49, N = 3; Min 644.13 / Max 645.74)
c6a.4xlarge EPYC: 120.64 (SE +/- 0.35, N = 3; Min 120.17 / Max 121.31)
c6g.4xlarge Graviton2: 384.75 (SE +/- 0.11, N = 3; Min 384.53 / Max 384.88)
c6i.4xlarge Xeon: 134.92 (SE +/- 1.43, N = 3; Min 132.1 / Max 136.68)
c7g.4xlarge Graviton3: 251.40 (SE +/- 0.24, N = 3; Min 251.04 / Max 251.85)
Additional CPU feature flags recorded for the two x86_64 runs: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm; -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mabm
1. (CC) gcc options: -O3 -std=c99 -pedantic -lm

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, More Is Better)
a1.4xlarge Graviton: 339.20 (SE +/- 0.24, N = 3; Min 338.94 / Max 339.67)
c6a.4xlarge EPYC: 466.21 (SE +/- 0.06, N = 3; Min 466.09 / Max 466.3)
c6g.4xlarge Graviton2: 558.88 (SE +/- 0.23, N = 3; Min 558.51 / Max 559.3)
c6i.4xlarge Xeon: 1103.22 (SE +/- 19.93, N = 9; Min 1030.08 / Max 1180.25)
c7g.4xlarge Graviton3: 934.72 (SE +/- 0.39, N = 3; Min 934.01 / Max 935.36)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34 - Circuit: C2670 (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 473.90 (SE +/- 3.48, N = 3; Min 467.68 / Max 479.7)
c6a.4xlarge EPYC: 245.89 (SE +/- 1.17, N = 3; Min 244.38 / Max 248.18)
c6g.4xlarge Graviton2: 263.72 (SE +/- 0.91, N = 3; Min 262.1 / Max 265.25)
c6i.4xlarge Xeon: 147.89 (SE +/- 1.80, N = 4; Min 143.57 / Max 152.37)
c7g.4xlarge Graviton3: 198.22 (SE +/- 0.86, N = 3; Min 197.24 / Max 199.94)
1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
a1.4xlarge Graviton: 2312.17 (SE +/- 2.20, N = 3; Min 2308 / Max 2315.5)
c6a.4xlarge EPYC: 5617 (SE +/- 75.29, N = 12; Min 5386 / Max 5986.5)
c6g.4xlarge Graviton2: 6947.5 (SE +/- 3.50, N = 3; Min 6944 / Max 6954.5)
c6i.4xlarge Xeon: 7944.42 (SE +/- 322.41, N = 12; Min 6856 / Max 9074.5)
c7g.4xlarge Graviton3: 7990.17 (SE +/- 2.40, N = 3; Min 7985.5 / Max 7993.5)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
a1.4xlarge Graviton: 164.5 (SE +/- 0.50, N = 3; Min 163.5 / Max 165)
c6a.4xlarge EPYC: 1192.42 (SE +/- 82.60, N = 12; Min 924.5 / Max 1524.5)
c6g.4xlarge Graviton2: 333.83 (SE +/- 0.17, N = 3; Min 333.5 / Max 334)
c6i.4xlarge Xeon: 1373.75 (SE +/- 91.51, N = 12; Min 1191 / Max 1918.5)
c7g.4xlarge Graviton3: 608.5 (SE +/- 0.00, N = 3; Min 608.5 / Max 608.5)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
a1.4xlarge Graviton: 0.316 (SE +/- 0.000, N = 3; Min 0.32 / Max 0.32)
c6a.4xlarge EPYC: 1.004 (SE +/- 0.002, N = 3; Min 1 / Max 1.01)
c6g.4xlarge Graviton2: 0.781 (SE +/- 0.001, N = 3; Min 0.78 / Max 0.78)
c6i.4xlarge Xeon: 1.452 (SE +/- 0.001, N = 3; Min 1.45 / Max 1.45)
c7g.4xlarge Graviton3: 1.128 (SE +/- 0.002, N = 3; Min 1.13 / Max 1.13)
1. (CXX) g++ options: -O3

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 360.30 (SE +/- 0.07, N = 3; Min 360.17 / Max 360.37)
c6a.4xlarge EPYC: 224.33 (SE +/- 0.03, N = 3; Min 224.3 / Max 224.38)
c6g.4xlarge Graviton2: 215.67 (SE +/- 0.01, N = 3; Min 215.65 / Max 215.69)
c6i.4xlarge Xeon: 281.39 (SE +/- 0.14, N = 3; Min 281.12 / Max 281.58)
c7g.4xlarge Graviton3: 143.33 (SE +/- 0.15, N = 3; Min 143.14 / Max 143.64)
1. (CXX) g++ options: -O2 -lOpenCL

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 2 (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 449.02 (SE +/- 0.29, N = 3; Min 448.45 / Max 449.4)
c6a.4xlarge EPYC: 93.95 (SE +/- 0.44, N = 3; Min 93.42 / Max 94.82)
c6g.4xlarge Graviton2: 238.21 (SE +/- 0.12, N = 3; Min 237.98 / Max 238.4)
c6i.4xlarge Xeon: 97.74 (SE +/- 0.26, N = 3; Min 97.23 / Max 98.12)
c7g.4xlarge Graviton3: 141.70 (SE +/- 0.11, N = 3; Min 141.5 / Max 141.88)
1. (CXX) g++ options: -O3 -fPIC -lm

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: NASNet Mobile (Microseconds, Fewer Is Better)
a1.4xlarge Graviton: 30986.70 (SE +/- 49.84, N = 3; Min 30906.7 / Max 31078.2)
c6a.4xlarge EPYC: 9266.86 (SE +/- 23.44, N = 3; Min 9235.81 / Max 9312.81)
c6g.4xlarge Graviton2: 14985.40 (SE +/- 203.15, N = 15; Min 13965.4 / Max 16307.5)
c6i.4xlarge Xeon: 10900.60 (SE +/- 166.62, N = 14; Min 10663.9 / Max 13062.1)
c7g.4xlarge Graviton3: 11591.90 (SE +/- 121.56, N = 15; Min 10847.8 / Max 12395.4)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
a1.4xlarge Graviton: 9.5 (SE +/- 0.00, N = 3; Min 9.5 / Max 9.5)
c6a.4xlarge EPYC: 65.08 (SE +/- 5.55, N = 12; Min 46.5 / Max 84)
c6g.4xlarge Graviton2: 27.5 (SE +/- 0.00, N = 3; Min 27.5 / Max 27.5)
c6i.4xlarge Xeon: 138.83 (SE +/- 0.60, N = 3; Min 138 / Max 140)
c7g.4xlarge Graviton3: 38 (SE +/- 0.00, N = 3; Min 38 / Max 38)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better)
a1.4xlarge Graviton: 15331550 (SE +/- 106812.26, N = 3; Min 15140045 / Max 15509284)
c6a.4xlarge EPYC: 26187688 (SE +/- 303648.79, N = 3; Min 25653010 / Max 26704421)
c6g.4xlarge Graviton2: 26540482 (SE +/- 359309.26, N = 3; Min 26061970 / Max 27244043)
c6i.4xlarge Xeon: 23746200 (SE +/- 325631.00, N = 3; Min 23100009 / Max 24139540)
c7g.4xlarge Graviton3: 32134123 (SE +/- 104795.40, N = 3; Min 32023095 / Max 32343588)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
a1.4xlarge Graviton: 115.17 (SE +/- 0.88, N = 3; Min 113.5 / Max 116.5)
c6a.4xlarge EPYC: 487.5 (SE +/- 0.58, N = 3; Min 486.5 / Max 488.5)
c6g.4xlarge Graviton2: 321.67 (SE +/- 0.17, N = 3; Min 321.5 / Max 322)
c6i.4xlarge Xeon: 773.25 (SE +/- 50.92, N = 12; Min 632.5 / Max 1004)
c7g.4xlarge Graviton3: 407.17 (SE +/- 0.17, N = 3; Min 407 / Max 407.5)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
a1.4xlarge Graviton: 757 (SE +/- 0.50, N = 3; Min 756.5 / Max 758)
c6a.4xlarge EPYC: 3696.04 (SE +/- 234.97, N = 12; Min 3124.5 / Max 4905)
c6g.4xlarge Graviton2: 2072.33 (SE +/- 1.74, N = 3; Min 2069.5 / Max 2075.5)
c6i.4xlarge Xeon: 3449.5 (SE +/- 1.61, N = 3; Min 3446.5 / Max 3452)
c7g.4xlarge Graviton3: 2817.33 (SE +/- 1.86, N = 3; Min 2815 / Max 2821)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
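
A comparable manual measurement with the same built-in tool would be along these lines (a sketch, not the exact arguments this test profile passes; -evp selects the EVP interface and -multi forks one worker per vCPU, both standard "openssl speed" options):

  # Multi-process SHA256 throughput test across all 16 vCPUs
  openssl speed -evp sha256 -multi 16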

OpenSSL 3.0 - Algorithm: SHA256 (byte/s, More Is Better)
a1.4xlarge Graviton: 6785689517 (SE +/- 12563225.46, N = 3; Min 6760580260 / Max 6799049020)
c6a.4xlarge EPYC: 11691403353 (SE +/- 8616254.20, N = 3; Min 11674247470 / Max 11701387090)
c6g.4xlarge Graviton2: 10723184083 (SE +/- 47755430.47, N = 3; Min 10627684700 / Max 10772216060)
c6i.4xlarge Xeon: 7096993937 (SE +/- 606684.16, N = 3; Min 7096258150 / Max 7098197390)
c7g.4xlarge Graviton3: 13722045973 (SE +/- 7739237.92, N = 3; Min 13712096220 / Max 13737289210)
Additional flag recorded for the two x86_64 runs: -m64
1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code and offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 353.91 (SE +/- 1.89, N = 3; Min 351.57 / Max 357.65)
c6a.4xlarge EPYC: 150.99 (SE +/- 0.87, N = 3; Min 149.64 / Max 152.62)
c6g.4xlarge Graviton2: 142.28 (SE +/- 0.70, N = 3; Min 140.89 / Max 143.17)
c6i.4xlarge Xeon: 136.80 (SE +/- 0.69, N = 3; Min 135.42 / Max 137.5)
c7g.4xlarge Graviton3: 115.02 (SE +/- 0.64, N = 3; Min 113.8 / Max 115.97)

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
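
For context, a Bombardier run against a local web server looks roughly like the following (a sketch with an assumed flag usage and a hypothetical URL, not the exact invocation the test profile generates; -c sets the number of concurrent connections and -d the test duration):

  # 500 concurrent connections for a fixed duration against the local web server
  bombardier -c 500 -d 30s http://localhost:8080/test.html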

Apache HTTP Server 2.4.48 - Concurrent Requests: 500 (Requests Per Second, More Is Better)
a1.4xlarge Graviton: 20133.49 (SE +/- 93.64, N = 3; Min 19971.62 / Max 20295.99)
c6a.4xlarge EPYC: 81995.64 (SE +/- 636.46, N = 13; Min 74657.82 / Max 83405.1)
c6g.4xlarge Graviton2: 50077.81 (SE +/- 578.32, N = 3; Min 48925.08 / Max 50736.49)
c6i.4xlarge Xeon: 91746.57 (SE +/- 833.50, N = 7; Min 86751.45 / Max 92771.95)
c7g.4xlarge Graviton3: 73546.32 (SE +/- 89.82, N = 3; Min 73405.22 / Max 73713.17)
1. (CC) gcc options: -shared -fPIC -O2

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 104.76 (SE +/- 2.00, N = 15; Min 97.09 / Max 116.7)
c6a.4xlarge EPYC: 69.35 (SE +/- 0.77, N = 5; Min 68.48 / Max 72.44)
c6g.4xlarge Graviton2: 62.32 (SE +/- 0.03, N = 3; Min 62.29 / Max 62.38)
c6i.4xlarge Xeon: 92.55 (SE +/- 0.04, N = 3; Min 92.49 / Max 92.63)
c7g.4xlarge Graviton3: 38.52 (SE +/- 0.02, N = 3; Min 38.49 / Max 38.55)
1. (CC) gcc options: -lm -lpthread -O3

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
a1.4xlarge Graviton: 3.77834 (SE +/- 0.00065, N = 3; Min 3.78 / Max 3.78)
c6a.4xlarge EPYC: 5.06042 (SE +/- 0.00225, N = 3; Min 5.06 / Max 5.06)
c6g.4xlarge Graviton2: 19.72180 (SE +/- 0.01639, N = 3; Min 19.7 / Max 19.75)
c6i.4xlarge Xeon: 8.66031 (SE +/- 0.04033, N = 3; Min 8.58 / Max 8.7)
c7g.4xlarge Graviton3: 26.30580 (SE +/- 0.03738, N = 3; Min 26.26 / Max 26.38)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Exhaustive (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 277.77 (SE +/- 0.07, N = 3; Min 277.66 / Max 277.89)
c6a.4xlarge EPYC: 72.39 (SE +/- 0.03, N = 3; Min 72.36 / Max 72.44)
c6g.4xlarge Graviton2: 159.20 (SE +/- 0.00, N = 3; Min 159.2 / Max 159.21)
c6i.4xlarge Xeon: 69.64 (SE +/- 0.04, N = 3; Min 69.58 / Max 69.71)
c7g.4xlarge Graviton3: 139.38 (SE +/- 0.01, N = 3; Min 139.36 / Max 139.39)
1. (CXX) g++ options: -O3 -flto -pthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
a1.4xlarge Graviton: 5724.66 (SE +/- 20.90, N = 3; Min 5686.62 / Max 5758.7)
c6a.4xlarge EPYC: 3847.96 (SE +/- 53.31, N = 15; Min 3543.81 / Max 4160)
c6g.4xlarge Graviton2: 1980.24 (SE +/- 14.44, N = 3; Min 1956.49 / Max 2006.34)
c6i.4xlarge Xeon: 3967.39 (SE +/- 80.05, N = 12; Min 3442.24 / Max 4294.78)
c7g.4xlarge Graviton3: 1502.95 (SE +/- 17.76, N = 3; Min 1468.14 / Max 1526.49)

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 93.80 (SE +/- 0.94, N = 15; Min 89.04 / Max 100.81)
c6a.4xlarge EPYC: 49.44 (SE +/- 0.18, N = 3; Min 49.24 / Max 49.8)
c6g.4xlarge Graviton2: 51.05 (SE +/- 0.00, N = 3; Min 51.04 / Max 51.05)
c6i.4xlarge Xeon: 52.78 (SE +/- 0.12, N = 3; Min 52.63 / Max 53.01)
c7g.4xlarge Graviton3: 37.86 (SE +/- 0.01, N = 3; Min 37.84 / Max 37.89)
Additional flag recorded for two of the runs: -march=native
1. (CXX) g++ options: -pipe -O3 -ffast-math -R/usr/lib -lXpm -lSM -lICE -lX11 -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.
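
For interpreting the GFLOP/s figures below: a double-precision multiply of two N x N matrices performs roughly 2*N^3 floating-point operations (the conventional DGEMM flop count; the matrix size N used by this test profile is not stated in this result file), so the sustained rate is approximately

  GFLOP/s ~= 2 * N^3 / (elapsed seconds * 10^9)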

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, More Is Better)
a1.4xlarge Graviton: 0.891391 (SE +/- 0.002370, N = 3; Min 0.89 / Max 0.9)
c6a.4xlarge EPYC: 2.432432 (SE +/- 0.023324, N = 6; Min 2.32 / Max 2.48)
c6g.4xlarge Graviton2: 4.785123 (SE +/- 0.007139, N = 3; Min 4.77 / Max 4.8)
c6i.4xlarge Xeon: 2.230545 (SE +/- 0.003819, N = 3; Min 2.22 / Max 2.24)
c7g.4xlarge Graviton3: 5.853864 (SE +/- 0.016350, N = 3; Min 5.83 / Max 5.89)
1. (CC) gcc options: -O3 -march=native -fopenmp

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D (Total Mop/s, More Is Better)
a1.4xlarge Graviton: 197.57 (SE +/- 0.31, N = 3; Min 197.2 / Max 198.19)
c6a.4xlarge EPYC: 541.35 (SE +/- 0.47, N = 3; Min 540.42 / Max 541.83)
c6g.4xlarge Graviton2: 372.76 (SE +/- 0.20, N = 3; Min 372.52 / Max 373.15)
c6i.4xlarge Xeon: 861.57 (SE +/- 2.14, N = 3; Min 857.51 / Max 864.76)
c7g.4xlarge Graviton3: 1041.90 (SE +/- 2.29, N = 3; Min 1038.58 / Max 1046.3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 7.4.2 - Time To Compile (Seconds, Fewer Is Better)
a1.4xlarge Graviton: 196.03 (SE +/- 0.08, N = 3; Min 195.88 / Max 196.15)
c6a.4xlarge EPYC: 67.08 (SE +/- 0.05, N = 3; Min 67.01 / Max 67.18)
c6g.4xlarge Graviton2: 88.90 (SE +/- 0.31, N = 3; Min 88.57 / Max 89.52)
c6i.4xlarge Xeon: 64.34 (SE +/- 0.09, N = 3; Min 64.18 / Max 64.49)
c7g.4xlarge Graviton3: 69.48 (SE +/- 0.11, N = 3; Min 69.32 / Max 69.7)

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48 - Concurrent Requests: 1000 (Requests Per Second, More Is Better)
a1.4xlarge Graviton: 19278.68 (SE +/- 98.61, N = 3; Min 19082.25 / Max 19392.14)
c6a.4xlarge EPYC: 71537.11 (SE +/- 397.88, N = 3; Min 70744.64 / Max 71995.87)
c6g.4xlarge Graviton2: 46629.45 (SE +/- 276.10, N = 3; Min 46348.82 / Max 47181.62)
c6i.4xlarge Xeon: 79830.96 (SE +/- 335.63, N = 3; Min 79188.28 / Max 80320.15)
c7g.4xlarge Graviton3: 72719.33 (SE +/- 83.83, N = 3; Min 72567.8 / Max 72857.22)
1. (CC) gcc options: -shared -fPIC -O2

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This test profile uses the Golang "Bombardier" program to generate HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1, Concurrent Requests: 100 (Requests Per Second; more is better; N = 3):
a1.4xlarge Graviton 143155.48; c6a.4xlarge EPYC 388010.76; c6g.4xlarge Graviton2 307349.36; c6i.4xlarge Xeon 356302.84; c7g.4xlarge Graviton3 345710.87
1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx 1.21.1, Concurrent Requests: 200 (Requests Per Second; more is better; N = 3):
a1.4xlarge Graviton 141436.20; c6a.4xlarge EPYC 390932.79; c6g.4xlarge Graviton2 308938.67; c6i.4xlarge Xeon 356829.93; c7g.4xlarge Graviton3 352380.98
1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx 1.21.1, Concurrent Requests: 1000 (Requests Per Second; more is better; N = 3):
a1.4xlarge Graviton 138205.11; c6a.4xlarge EPYC 388657.76; c6g.4xlarge Graviton2 308213.13; c6i.4xlarge Xeon 347345.49; c7g.4xlarge Graviton3 346814.75
1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx 1.21.1, Concurrent Requests: 500 (Requests Per Second; more is better; N = 3):
a1.4xlarge Graviton 139414.84; c6a.4xlarge EPYC 389030.11; c6g.4xlarge Graviton2 310596.58; c6i.4xlarge Xeon 351672.92; c7g.4xlarge Graviton3 346613.34
1. (CC) gcc options: -lcrypt -lz -O3 -march=native

Apache HTTP Server

This is a test of the Apache HTTPD web server. This test profile uses the Golang "Bombardier" program to generate HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48, Concurrent Requests: 200 (Requests Per Second; more is better; N = 3):
a1.4xlarge Graviton 20887.58; c6a.4xlarge EPYC 83070.00; c6g.4xlarge Graviton2 50059.97; c6i.4xlarge Xeon 94458.22; c7g.4xlarge Graviton3 73676.95
1. (CC) gcc options: -shared -fPIC -O2

Apache HTTP Server 2.4.48, Concurrent Requests: 100 (Requests Per Second; more is better; N = 3):
a1.4xlarge Graviton 18636.43; c6a.4xlarge EPYC 77567.69; c6g.4xlarge Graviton2 46995.35; c6i.4xlarge Xeon 86545.57; c7g.4xlarge Graviton3 67231.88
1. (CC) gcc options: -shared -fPIC -O2

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 193 Cells Per Direction (Seconds; fewer is better; N = 3):
a1.4xlarge Graviton 182.58; c6a.4xlarge EPYC 110.77; c6g.4xlarge Graviton2 41.02; c6i.4xlarge Xeon 69.22; c7g.4xlarge Graviton3 29.13
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
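
As a rough illustration of the kind of work this test does (a minimal sketch, not the actual m-queens source), the C++ code below counts N-queens solutions with the usual bitmask recursion and parallelizes over the first-row column with OpenMP, matching the -fopenmp flag shown in the compile options; the board size and function names here are illustrative only.

    // Sketch of an OpenMP N-queens solution counter; build with: g++ -O2 -fopenmp
    #include <cstdint>
    #include <cstdio>

    // Count completions of a partial placement. "cols" marks used columns,
    // "diag1"/"diag2" mark squares attacked along the two diagonals.
    static uint64_t solve(int n, uint32_t cols, uint32_t diag1, uint32_t diag2) {
        if (cols == (1u << n) - 1) return 1;                 // all n queens placed
        uint64_t count = 0;
        uint32_t free_cells = ~(cols | diag1 | diag2) & ((1u << n) - 1);
        while (free_cells) {
            uint32_t bit = free_cells & -free_cells;         // lowest free column
            free_cells -= bit;
            count += solve(n, cols | bit, (diag1 | bit) << 1, (diag2 | bit) >> 1);
        }
        return count;
    }

    int main() {
        const int n = 16;                                    // example board size
        uint64_t total = 0;
        // Each first-row column starts an independent subtree, so the loop
        // parallelizes cleanly across cores.
        #pragma omp parallel for reduction(+:total) schedule(dynamic)
        for (int c = 0; c < n; ++c) {
            uint32_t bit = 1u << c;
            total += solve(n, bit, bit << 1, bit >> 1);
        }
        std::printf("%d-queens solutions: %llu\n", n, (unsigned long long)total);
        return 0;
    }

Parallelizing over the outermost queen placement is what lets the solver use all 16 vCPUs on these instances.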

m-queens 1.2, Time To Solve (Seconds; fewer is better; N = 3):
a1.4xlarge Graviton 110.37; c6a.4xlarge EPYC 72.33; c6g.4xlarge Graviton2 75.22; c6i.4xlarge Xeon 91.23; c7g.4xlarge Graviton3 66.82
1. (CXX) g++ options: -fopenmp -O2 -march=native

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: CG.C (Total Mop/s; more is better; N = 3, except N = 6 for a1.4xlarge):
a1.4xlarge Graviton 1213.15; c6a.4xlarge EPYC 6169.22; c6g.4xlarge Graviton2 3520.86; c6i.4xlarge Xeon 9522.82; c7g.4xlarge Graviton3 6571.95
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
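
For context on what is being measured, simdjson's documented quick-start usage looks roughly like the following C++ snippet; the file name and the field accessed are placeholders rather than the exact inputs this test profile uses.

    // Minimal simdjson "On Demand" usage; build with: g++ -O3 main.cpp simdjson.cpp
    #include <cstdint>
    #include <iostream>
    #include "simdjson.h"

    int main() {
        simdjson::ondemand::parser parser;
        // "twitter.json" stands in for one of the benchmark's JSON documents.
        simdjson::padded_string json = simdjson::padded_string::load("twitter.json");
        simdjson::ondemand::document doc = parser.iterate(json);
        // Touch a field so the lazy parse actually runs; errors surface as
        // exceptions in this usage style.
        uint64_t count = uint64_t(doc["search_metadata"]["count"]);
        std::cout << count << " results" << std::endl;
        return 0;
    }

The GB/s figures below reflect how quickly documents of each type are parsed.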

simdjson 1.0, Throughput Test: PartialTweets (GB/s; more is better; N = 3):
a1.4xlarge Graviton 0.78; c6a.4xlarge EPYC 3.64; c6g.4xlarge Graviton2 1.51; c6i.4xlarge Xeon 3.71; c7g.4xlarge Graviton3 2.62
1. (CXX) g++ options: -O3

simdjson 1.0, Throughput Test: DistinctUserID (GB/s; more is better; N = 3):
a1.4xlarge Graviton 0.80; c6a.4xlarge EPYC 4.30; c6g.4xlarge Graviton2 1.53; c6i.4xlarge Xeon 4.30; c7g.4xlarge Graviton3 2.69
1. (CXX) g++ options: -O3

WebP Image Encode

This is a test of Google's libwebp using the cwebp image encode utility with a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
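
The "Quality 100, Lossless" settings map to libwebp's lossless mode at maximum quality, with "Highest Compression" selecting the slowest effort level (method 6). Below is a hedged sketch of the equivalent libwebp API calls, assuming an RGB buffer already decoded from the input JPEG; the helper function is illustrative and not part of cwebp.

    // Encode an RGB buffer (w x h, 3 bytes/pixel) to lossless WebP in memory.
    #include <cstdint>
    #include <webp/encode.h>

    size_t encode_lossless_webp(const uint8_t* rgb, int w, int h, uint8_t** out) {
        WebPConfig config;
        WebPConfigInit(&config);
        config.lossless = 1;     // lossless mode
        config.quality  = 100;   // quality 100
        config.method   = 6;     // slowest, highest-compression effort

        WebPPicture pic;
        WebPPictureInit(&pic);
        pic.width = w;
        pic.height = h;
        pic.use_argb = 1;                         // required for lossless encoding
        WebPPictureImportRGB(&pic, rgb, w * 3);   // stride in bytes

        WebPMemoryWriter writer;
        WebPMemoryWriterInit(&writer);
        pic.writer = WebPMemoryWrite;
        pic.custom_ptr = &writer;

        int ok = WebPEncode(&config, &pic);
        WebPPictureFree(&pic);
        if (!ok) { WebPMemoryWriterClear(&writer); return 0; }
        *out = writer.mem;                        // caller releases the buffer
        return writer.size;
    }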

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time in Seconds; fewer is better; N = 3):
a1.4xlarge Graviton 124.71; c6a.4xlarge EPYC 48.68; c6g.4xlarge Graviton2 66.15; c6i.4xlarge Xeon 41.81; c7g.4xlarge Graviton3 48.21
1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 (four of the five builds also link -ltiff)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0, Throughput Test: Kostya (GB/s; more is better; N = 3):
a1.4xlarge Graviton 0.63; c6a.4xlarge EPYC 2.80; c6g.4xlarge Graviton2 1.19; c6i.4xlarge Xeon 2.46; c7g.4xlarge Graviton3 1.94
1. (CXX) g++ options: -O3

Zstd Compression

This test measures the speed of compressing/decompressing a sample file (a FreeBSD disk image, FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
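
For reference, the "19, Long Mode" configuration corresponds to libzstd's advanced API with compression level 19 and long-distance matching enabled. The helper below is a minimal sketch of that configuration, not the test's own harness.

    // Compress a buffer at level 19 with long-distance matching ("long mode").
    #include <zstd.h>
    #include <cstdio>
    #include <vector>

    std::vector<char> compress_level19_long(const void* src, size_t srcSize) {
        std::vector<char> dst(ZSTD_compressBound(srcSize));
        ZSTD_CCtx* cctx = ZSTD_createCCtx();
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_enableLongDistanceMatching, 1);
        size_t written = ZSTD_compress2(cctx, dst.data(), dst.size(), src, srcSize);
        ZSTD_freeCCtx(cctx);
        if (ZSTD_isError(written)) {
            std::fprintf(stderr, "zstd: %s\n", ZSTD_getErrorName(written));
            return {};
        }
        dst.resize(written);
        return dst;
    }

Level 19 trades a large amount of compression-side CPU time for ratio, which is why the compression speeds below sit in the tens of MB/s while decompression remains far faster.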

Zstd Compression 1.5.0, Compression Level: 19, Long Mode - Decompression Speed (MB/s; more is better; N = 3):
a1.4xlarge Graviton 1213.9; c6a.4xlarge EPYC 2826.0; c6g.4xlarge Graviton2 2196.3; c6i.4xlarge Xeon 2666.1; c7g.4xlarge Graviton3 3240.6
1. (CC) gcc options: -O3 -pthread -lz (four of the five builds also link -llzma)

Zstd Compression 1.5.0, Compression Level: 19, Long Mode - Compression Speed (MB/s; more is better; N = 3):
a1.4xlarge Graviton 16.0; c6a.4xlarge EPYC 25.9; c6g.4xlarge Graviton2 31.0; c6i.4xlarge Xeon 33.8; c7g.4xlarge Graviton3 39.5
1. (CC) gcc options: -O3 -pthread -lz (four of the five builds also link -llzma)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
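
As a rough sketch of what "average inference time" means here, the following C++ fragment times a single CPU inference with the TensorFlow Lite interpreter API; the model path and thread count are placeholders, and the test profile's own harness and models differ.

    // Time one TensorFlow Lite CPU inference; the model path is a placeholder.
    #include <chrono>
    #include <cstdio>
    #include <memory>
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main() {
        auto model = tflite::FlatBufferModel::BuildFromFile("mobilenet_float.tflite");
        if (!model) return 1;
        tflite::ops::builtin::BuiltinOpResolver resolver;
        std::unique_ptr<tflite::Interpreter> interpreter;
        tflite::InterpreterBuilder(*model, resolver)(&interpreter);
        interpreter->SetNumThreads(16);          // e.g. one thread per vCPU
        interpreter->AllocateTensors();          // input tensors left zero-filled

        auto t0 = std::chrono::steady_clock::now();
        interpreter->Invoke();                   // one inference
        auto t1 = std::chrono::steady_clock::now();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        std::printf("inference: %lld us\n", (long long)us);
        return 0;
    }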

TensorFlow Lite 2022-05-18, Model: Inception V4 (Microseconds; fewer is better; N = 3):
a1.4xlarge Graviton 188910.0; c6a.4xlarge EPYC 44920.6; c6g.4xlarge Graviton2 46793.9; c6i.4xlarge Xeon 41185.7; c7g.4xlarge Graviton3 41855.1

TensorFlow Lite 2022-05-18, Model: Inception ResNet V2 (Microseconds; fewer is better; N = 3):
a1.4xlarge Graviton 171169.0; c6a.4xlarge EPYC 41366.6; c6g.4xlarge Graviton2 45955.7; c6i.4xlarge Xeon 41179.7; c7g.4xlarge Graviton3 40051.3

TensorFlow Lite 2022-05-18, Model: Mobilenet Float (Microseconds; fewer is better; N = 3):
a1.4xlarge Graviton 9990.15; c6a.4xlarge EPYC 2159.72; c6g.4xlarge Graviton2 2500.87; c6i.4xlarge Xeon 1965.07; c7g.4xlarge Graviton3 2156.60

TensorFlow Lite 2022-05-18, Model: SqueezeNet (Microseconds; fewer is better; N = 3):
a1.4xlarge Graviton 12014.70; c6a.4xlarge EPYC 3103.12; c6g.4xlarge Graviton2 3969.35; c6i.4xlarge Xeon 2983.93; c7g.4xlarge Graviton3 3257.94

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0, Algorithm: RSA4096 (verify/s; more is better; N = 3):
a1.4xlarge Graviton 45328.6; c6a.4xlarge EPYC 136784.2; c6g.4xlarge Graviton2 53951.5; c6i.4xlarge Xeon 140964.4; c7g.4xlarge Graviton3 178460.4
1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl (the two x86_64 builds additionally use -m64)

OpenSSL 3.0, Algorithm: RSA4096 (sign/s; more is better; N = 3):
a1.4xlarge Graviton 588.3; c6a.4xlarge EPYC 2088.9; c6g.4xlarge Graviton2 660.6; c6i.4xlarge Xeon 2161.3; c7g.4xlarge Graviton3 2546.4
1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl (the two x86_64 builds additionally use -m64)

Zstd Compression

This test measures the speed of compressing/decompressing a sample file (a FreeBSD disk image, FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0, Compression Level: 19 - Decompression Speed (MB/s; more is better; N = 3):
a1.4xlarge Graviton 1121.7; c6a.4xlarge EPYC 2907.5; c6g.4xlarge Graviton2 2051.6; c6i.4xlarge Xeon 2582.0; c7g.4xlarge Graviton3 3050.3
1. (CC) gcc options: -O3 -pthread -lz (four of the five builds also link -llzma)

Zstd Compression 1.5.0, Compression Level: 19 - Compression Speed (MB/s; more is better; N = 3):
a1.4xlarge Graviton 16.9; c6a.4xlarge EPYC 30.0; c6g.4xlarge Graviton2 34.6; c6i.4xlarge Xeon 38.1; c7g.4xlarge Graviton3 41.2
1. (CC) gcc options: -O3 -pthread -lz (four of the five builds also link -llzma)

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: FT.C (Total Mop/s; more is better; N = 3):
a1.4xlarge Graviton 2927.16; c6a.4xlarge EPYC 18299.96; c6g.4xlarge Graviton2 6244.48; c6i.4xlarge Xeon 20423.57; c7g.4xlarge Graviton3 11791.77
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0, Throughput Test: LargeRandom (GB/s; more is better; N = 3):
a1.4xlarge Graviton 0.30; c6a.4xlarge EPYC 0.95; c6g.4xlarge Graviton2 0.49; c6i.4xlarge Xeon 0.86; c7g.4xlarge Graviton3 0.70
1. (CXX) g++ options: -O3

WebP Image Encode

This is a test of Google's libwebp using the cwebp image encode utility with a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time in Seconds; fewer is better; N = 3, except N = 15 for c6a.4xlarge):
a1.4xlarge Graviton 61.80; c6a.4xlarge EPYC 26.71; c6g.4xlarge Graviton2 31.08; c6i.4xlarge Xeon 21.12; c7g.4xlarge Graviton3 22.77
1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 (four of the five builds also link -ltiff)

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine that can scale up to 512 CPU threads; this profile uses its built-in benchmark. Learn more via the OpenBenchmarking.org test page.

Stockfish 13, Total Time (Nodes Per Second; more is better; N = 3):
a1.4xlarge Graviton 10980430; c6a.4xlarge EPYC 23857623; c6g.4xlarge Graviton2 21679245; c6i.4xlarge Xeon 22081961; c7g.4xlarge Graviton3 27608891
1. (CXX) g++ options: -lgcov -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -flto -flto=jobserver (the EPYC build adds -m64 and SSE/AVX2 ISA flags; the Xeon build additionally enables AVX-512 flags)

Timed ImageMagick Compilation

This test times how long it takes to build ImageMagick. Learn more via the OpenBenchmarking.org test page.

Timed ImageMagick Compilation 6.9.0, Time To Compile (Seconds; fewer is better; N = 3):
a1.4xlarge Graviton 93.63; c6a.4xlarge EPYC 32.63; c6g.4xlarge Graviton2 40.33; c6i.4xlarge Xeon 29.74; c7g.4xlarge Graviton3 27.90

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1, PHP Benchmark Suite (Score; more is better; N = 3):
a1.4xlarge Graviton 241259; c6a.4xlarge EPYC 480741; c6g.4xlarge Graviton2 449855; c6i.4xlarge Xeon 828186; c7g.4xlarge Graviton3 666484

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP Streamcluster (Seconds; fewer is better; N = 3, except N = 15 for c6g.4xlarge and N = 12 for c7g.4xlarge):
a1.4xlarge Graviton 47.43; c6a.4xlarge EPYC 18.38; c6g.4xlarge Graviton2 15.48; c6i.4xlarge Xeon 23.51; c7g.4xlarge Graviton3 13.30
1. (CXX) g++ options: -O2 -lOpenCL

PyBench

This test profile reports the combined total of the average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total providing a rough estimate of Python's overall performance on a given system. This test profile runs PyBench for 20 rounds each time. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16, Total For Average Test Times (Milliseconds; fewer is better; N = 3):
a1.4xlarge Graviton 3452; c6a.4xlarge EPYC 1961; c6g.4xlarge Graviton2 1741; c6i.4xlarge Xeon 997; c7g.4xlarge Graviton3 1185

Zstd Compression

This test measures the speed of compressing/decompressing a sample file (a FreeBSD disk image, FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0, Compression Level: 3 - Compression Speed (MB/s; more is better; N = 3):
a1.4xlarge Graviton 633.9; c6a.4xlarge EPYC 2768.7; c6g.4xlarge Graviton2 2878.8; c6i.4xlarge Xeon 3440.6; c7g.4xlarge Graviton3 4639.1
1. (CC) gcc options: -O3 -pthread -lz (four of the five builds also link -llzma)

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41, Time To Compile (Seconds; fewer is better; N = 3):
a1.4xlarge Graviton 74.74; c6a.4xlarge EPYC 23.53; c6g.4xlarge Graviton2 34.20; c6i.4xlarge Xeon 22.53; c7g.4xlarge Graviton3 26.94

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 21.06, Test: Decompression Rating (MIPS; more is better; N = 3):
a1.4xlarge Graviton 40891; c6a.4xlarge EPYC 57318; c6g.4xlarge Graviton2 59445; c6i.4xlarge Xeon 45653; c7g.4xlarge Graviton3 73054
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 21.06, Test: Compression Rating (MIPS; more is better; N = 3):
a1.4xlarge Graviton 32498; c6a.4xlarge EPYC 62562; c6g.4xlarge Graviton2 71285; c6i.4xlarge Xeon 66631; c7g.4xlarge Graviton3 97824
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Tradebeans (msec; fewer is better; N = 4, except N = 11 for c6a.4xlarge and N = 20 for c6i.4xlarge):
a1.4xlarge Graviton 9045; c6a.4xlarge EPYC 3167; c6g.4xlarge Graviton2 4344; c6i.4xlarge Xeon 2928; c7g.4xlarge Graviton3 3203

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14, Test: CPU Stress (Bogo Ops/s; more is better; N = 3):
a1.4xlarge Graviton 2366.00; c6a.4xlarge EPYC 13304.50; c6g.4xlarge Graviton2 3404.94; c6i.4xlarge Xeon 12527.16; c7g.4xlarge Graviton3 5029.71
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109, Test: VoiceMark_100 (Voices; more is better; N = 3):
a1.4xlarge Graviton 331.07; c6a.4xlarge EPYC 663.07; c6g.4xlarge Graviton2 470.39; c6i.4xlarge Xeon 565.69; c7g.4xlarge Graviton3 675.64
1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14, Test: IO_uring (Bogo Ops/s; more is better; N = 3):
a1.4xlarge Graviton 918172.37; c6a.4xlarge EPYC 768723.46; c6g.4xlarge Graviton2 770521.81; c6i.4xlarge Xeon 1037943.37; c7g.4xlarge Graviton3 843015.78
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14, Test: Memory Copying (Bogo Ops/s; more is better; N = 3):
a1.4xlarge Graviton 798.24; c6a.4xlarge EPYC 3551.80; c6g.4xlarge Graviton2 2903.00; c6i.4xlarge Xeon 3150.49; c7g.4xlarge Graviton3 6693.32
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14, Test: Crypto (Bogo Ops/s; more is better; N = 3):
a1.4xlarge Graviton 11985.38; c6a.4xlarge EPYC 13556.06; c6g.4xlarge Graviton2 17924.18; c6i.4xlarge Xeon 10210.34; c7g.4xlarge Graviton3 23181.81
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14, Test: Vector Math (Bogo Ops/s; more is better; N = 3):
a1.4xlarge Graviton 27341.47; c6a.4xlarge EPYC 53787.61; c6g.4xlarge Graviton2 37753.89; c6i.4xlarge Xeon 40140.30; c7g.4xlarge Graviton3 55258.17
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 129 Cells Per Direction (Seconds; fewer is better; N = 3):
a1.4xlarge Graviton 53.77; c6a.4xlarge EPYC 28.28; c6g.4xlarge Graviton2 11.57; c6i.4xlarge Xeon 17.87; c7g.4xlarge Graviton3 8.02
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0, CoreMark Size 666 - Iterations Per Second (Iterations/Sec; more is better; N = 3):
a1.4xlarge Graviton 203869.40; c6a.4xlarge EPYC 345133.44; c6g.4xlarge Graviton2 315464.34; c6i.4xlarge Xeon 285378.84; c7g.4xlarge Graviton3 405413.86
1. (CC) gcc options: -O2 -lrt

N-Queens

This is a test of the OpenMP version of a test that solves the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.

N-Queens 1.0, Elapsed Time (Seconds; fewer is better; N = 3):
a1.4xlarge Graviton 32.29; c6a.4xlarge EPYC 16.38; c6g.4xlarge Graviton2 23.14; c6i.4xlarge Xeon 18.84; c7g.4xlarge Graviton3 21.54
1. (CC) gcc options: -static -fopenmp -O3 -march=native

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2, Preset: Thorough (Seconds; fewer is better; N = 3):
a1.4xlarge Graviton 33.5198; c6a.4xlarge EPYC 7.9818; c6g.4xlarge Graviton2 16.5222; c6i.4xlarge Xeon 7.2625; c7g.4xlarge Graviton3 13.9248
1. (CXX) g++ options: -O3 -flto -pthread

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP CFD Solver (Seconds; fewer is better; N = 3):
a1.4xlarge Graviton 41.45; c6a.4xlarge EPYC 21.79; c6g.4xlarge Graviton2 17.04; c6i.4xlarge Xeon 20.45; c7g.4xlarge Graviton3 10.48
1. (CXX) g++ options: -O2 -lOpenCL

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: MG.C (Total Mop/s; more is better; N = 3):
a1.4xlarge Graviton 3266.36; c6a.4xlarge EPYC 16826.43; c6g.4xlarge Graviton2 6720.68; c6i.4xlarge Xeon 26298.81; c7g.4xlarge Graviton3 13481.61
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better; N = 3):
a1.4xlarge Graviton 165513333; c6a.4xlarge EPYC 509746667; c6g.4xlarge Graviton2 262890000; c6i.4xlarge Xeon 373100000; c7g.4xlarge Graviton3 383606667
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Tradesoap (msec; fewer is better; N = 4):
a1.4xlarge Graviton 11182; c6a.4xlarge EPYC 4052; c6g.4xlarge Graviton2 4506; c6i.4xlarge Xeon 3815; c7g.4xlarge Graviton3 3524

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10, Encoder Speed: 6, Lossless (Seconds; fewer is better; N = 3):
a1.4xlarge Graviton 33.99; c6a.4xlarge EPYC 16.39; c6g.4xlarge Graviton2 16.52; c6i.4xlarge Xeon 17.53; c7g.4xlarge Graviton3 11.91
1. (CXX) g++ options: -O3 -fPIC -lm

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit; more is better; N = 3):
a1.4xlarge Graviton 186716933; c6a.4xlarge EPYC 267670700; c6g.4xlarge Graviton2 932652900; c6i.4xlarge Xeon 661364767; c7g.4xlarge Graviton3 1258807333
1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: H2 (msec; fewer is better; N = 4, except N = 5 for c7g.4xlarge):
a1.4xlarge Graviton 6740; c6a.4xlarge EPYC 3019; c6g.4xlarge Graviton2 3964; c6i.4xlarge Xeon 2921; c7g.4xlarge Graviton3 2951

DaCapo Benchmark 9.12-MR1, Java Test: Jython (msec; fewer is better; N = 4):
a1.4xlarge Graviton 12997; c6a.4xlarge EPYC 4616; c6g.4xlarge Graviton2 5626; c6i.4xlarge Xeon 4013; c7g.4xlarge Graviton3 3940

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics proxy application. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s; more is better; N = 3):
a1.4xlarge Graviton 2328.27; c6a.4xlarge EPYC 5452.11; c6g.4xlarge Graviton2 6016.16; c6i.4xlarge Xeon 8112.37; c7g.4xlarge Graviton3 10940.94
1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: Rhodopsin Protein (ns/day; more is better; N = 3, except N = 12 for c6a.4xlarge):
a1.4xlarge Graviton 3.245; c6a.4xlarge EPYC 5.067; c6g.4xlarge Graviton2 7.935; c6i.4xlarge Xeon 6.220; c7g.4xlarge Graviton3 11.291
1. (CXX) g++ options: -O3 -lm

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81, AI Chess Performance (Nodes Per Second; more is better; N = 5):
a1.4xlarge Graviton 538500; c6a.4xlarge EPYC 1442631; c6g.4xlarge Graviton2 872313; c6i.4xlarge Xeon 1272596; c7g.4xlarge Graviton3 1370094
1. (CC) gcc options: -O3 -march=native

94 Results Shown

Timed LLVM Compilation
Timed Node.js Compilation
LeelaChessZero
Timed Gem5 Compilation
LeelaChessZero
NAS Parallel Benchmarks:
  SP.C
  BT.C
SecureMark
libavif avifenc
Ngspice
GPAW
NAS Parallel Benchmarks
Timed MrBayes Analysis
NAS Parallel Benchmarks
Ngspice
ONNX Runtime:
  GPT-2 - CPU - Standard
  ArcFace ResNet-100 - CPU - Standard
GROMACS
Rodinia
libavif avifenc
TensorFlow Lite
ONNX Runtime
asmFish
ONNX Runtime:
  bertsquad-12 - CPU - Standard
  super-resolution-10 - CPU - Standard
OpenSSL
Build2
Apache HTTP Server
C-Ray
High Performance Conjugate Gradient
ASTC Encoder
TensorFlow Lite
POV-Ray
ACES DGEMM
NAS Parallel Benchmarks
Timed PHP Compilation
Apache HTTP Server
nginx:
  100
  200
  1000
  500
Apache HTTP Server:
  200
  100
Xcompact3d Incompact3d
m-queens
NAS Parallel Benchmarks
simdjson:
  PartialTweets
  DistinctUserID
WebP Image Encode
simdjson
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
TensorFlow Lite:
  Inception V4
  Inception ResNet V2
  Mobilenet Float
  SqueezeNet
OpenSSL:
  RSA4096:
    verify/s
    sign/s
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
NAS Parallel Benchmarks
simdjson
WebP Image Encode
Stockfish
Timed ImageMagick Compilation
PHPBench
Rodinia
PyBench
Zstd Compression
Timed Apache Compilation
7-Zip Compression:
  Decompression Rating
  Compression Rating
DaCapo Benchmark
Stress-NG
Google SynthMark
Stress-NG:
  IO_uring
  Memory Copying
  Crypto
  Vector Math
Xcompact3d Incompact3d
Coremark
N-Queens
ASTC Encoder
Rodinia
NAS Parallel Benchmarks
Liquid-DSP
DaCapo Benchmark
libavif avifenc
Algebraic Multi-Grid Benchmark
DaCapo Benchmark:
  H2
  Jython
LULESH
LAMMPS Molecular Dynamics Simulator
TSCP