Amazon EC2 c7g.4xlarge Graviton3 Tests

Graviton3 benchmarks by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2205256-NE-2205240NE24

The tests in this result file fall within the following suites/categories:

BLAS (Basic Linear Algebra Sub-Routine) Tests 2 Tests
C++ Boost Tests 3 Tests
Chess Test Suite 6 Tests
Timed Code Compilation 7 Tests
C/C++ Compiler Tests 15 Tests
Compression Tests 2 Tests
CPU Massive 22 Tests
Creator Workloads 7 Tests
Cryptography 2 Tests
Fortran Tests 4 Tests
Go Language Tests 2 Tests
HPC - High Performance Computing 14 Tests
Imaging 2 Tests
Common Kernel Benchmarks 3 Tests
Linear Algebra 2 Tests
Machine Learning 3 Tests
Molecular Dynamics 4 Tests
MPI Benchmarks 7 Tests
Multi-Core 23 Tests
NVIDIA GPU Compute 3 Tests
OpenMPI Tests 10 Tests
Programmer / Developer System Benchmarks 12 Tests
Python Tests 6 Tests
Raytracing 2 Tests
Renderers 2 Tests
Scientific Computing 8 Tests
Server 5 Tests
Server CPU Tests 16 Tests
Single-Threaded 3 Tests

Run Management

Result Identifier | Date Run | Test Duration
c7g.4xlarge | May 24 2022 | 8 Hours, 9 Minutes
a1.4xlarge | May 25 2022 | 20 Hours, 31 Minutes


System Details

c7g.4xlarge: ARMv8 Neoverse-V1 (16 Cores) Processor, Amazon EC2 c7g.4xlarge (1.0 BIOS) Motherboard, Amazon Device 0200 Chipset, 32GB Memory, 193GB Amazon Elastic Block Store Disk, Amazon Elastic Network, Ubuntu 22.04 OS, 5.15.0-1004-aws (aarch64) Kernel, GCC 11.2.0 Compiler, ext4 File-System, amazon System Layer
a1.4xlarge: ARMv8 Cortex-A72 (16 Cores) Processor, Amazon EC2 a1.4xlarge (1.0 BIOS) Motherboard; the remaining components are shared with the c7g.4xlarge configuration above

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Details: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Details: Python 3.10.4
Security Details:
  c7g.4xlarge: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
  a1.4xlarge: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Not affected + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Branch predictor hardening BHB + srbds: Not affected + tsx_async_abort: Not affected

c7g.4xlarge vs. a1.4xlarge Comparison (Phoronix Test Suite)

[Summary graphic omitted: it charts the per-test percentage difference between the two instances, with the c7g.4xlarge ahead across the board, in many cases by several hundred percent. The underlying numbers appear in the per-test results below.]

Amazon EC2 c7g.4xlarge Graviton3 Tests - Result Overview

[Flattened overview table omitted: it lists every benchmark in this comparison alongside the paired c7g.4xlarge and a1.4xlarge results. The same data is broken out test by test in the sections that follow.]

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28, Backend: Eigen (Nodes Per Second, More Is Better)
  a1.4xlarge: 128 (SE +/- 0.67, N = 3; Min: 127 / Avg: 128.33 / Max: 129)
  c7g.4xlarge: 1189 (SE +/- 9.70, N = 3; Min: 1171 / Avg: 1189.33 / Max: 1204)
  1. (CXX) g++ options: -flto -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14, Test: Memory Copying (Bogo Ops/s, More Is Better)
  a1.4xlarge: 798.24 (SE +/- 0.91, N = 3; Min: 796.83 / Avg: 798.24 / Max: 799.93)
  c7g.4xlarge: 6693.32 (SE +/- 3.52, N = 3; Min: 6686.28 / Avg: 6693.32 / Max: 6696.97)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28, Backend: BLAS (Nodes Per Second, More Is Better)
  a1.4xlarge: 135 (SE +/- 0.88, N = 3; Min: 134 / Avg: 135.33 / Max: 137)
  c7g.4xlarge: 1103 (SE +/- 6.44, N = 3; Min: 1090 / Avg: 1102.67 / Max: 1111)
  1. (CXX) g++ options: -flto -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
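
As a rough illustration of what the compression-level settings exercise, here is a minimal sketch using the third-party zstandard Python binding. The binding, the synthetic payload, and the loop are assumptions for illustration only; the test profile itself drives the zstd command-line tool against the FreeBSD disk image.

    import time
    import zstandard  # third-party binding: pip install zstandard

    # Stand-in payload; the actual test compresses a FreeBSD disk image.
    data = b"Phoronix Test Suite sample data " * 2_000_000  # ~64 MB, reasonably compressible

    for level in (3, 19):
        compressor = zstandard.ZstdCompressor(level=level)
        start = time.perf_counter()
        compressed = compressor.compress(data)
        elapsed = time.perf_counter() - start
        print(f"level {level}: {len(data) / elapsed / 1e6:.0f} MB/s, "
              f"ratio {len(data) / len(compressed):.1f}")

    # Decompression speed is timed the same way in the test profile.
    assert zstandard.ZstdDecompressor().decompress(compressed) == data

Higher levels trade compression speed for ratio, which is why the level 3 and level 19 results below differ so sharply.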

Zstd Compression 1.5.0, Compression Level: 3 - Compression Speed (MB/s, More Is Better)
  a1.4xlarge: 633.9 (SE +/- 4.47, N = 3; Min: 626.2 / Avg: 633.93 / Max: 641.7)
  c7g.4xlarge: 4639.1 (SE +/- 9.57, N = 3; Min: 4620 / Avg: 4639.13 / Max: 4649.1)
  -llzma; 1. (CC) gcc options: -O3 -pthread -lz

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
  a1.4xlarge: 3.77834 (SE +/- 0.00065, N = 3; Min: 3.78 / Avg: 3.78 / Max: 3.78)
  c7g.4xlarge: 26.30580 (SE +/- 0.03738, N = 3; Min: 26.26 / Avg: 26.31 / Max: 26.38)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better)
  a1.4xlarge: 186716933 (SE +/- 176548.39, N = 3; Min: 186372300 / Avg: 186716933.33 / Max: 186955800)
  c7g.4xlarge: 1258807333 (SE +/- 952437.28, N = 3; Min: 1256931000 / Avg: 1258807333.33 / Max: 1260030000)
  1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 129 Cells Per Direction (Seconds, Fewer Is Better)
  a1.4xlarge: 53.77062740 (SE +/- 0.02862870, N = 3; Min: 53.73 / Avg: 53.77 / Max: 53.83)
  c7g.4xlarge: 8.01671425 (SE +/- 0.01401446, N = 3; Min: 8 / Avg: 8.02 / Max: 8.04)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0, Sustained Floating-Point Rate (GFLOP/s, More Is Better)
  a1.4xlarge: 0.891391 (SE +/- 0.002370, N = 3; Min: 0.89 / Avg: 0.89 / Max: 0.9)
  c7g.4xlarge: 5.853864 (SE +/- 0.016350, N = 3; Min: 5.83 / Avg: 5.85 / Max: 5.89)
  1. (CC) gcc options: -O3 -march=native -fopenmp

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 193 Cells Per Direction (Seconds, Fewer Is Better)
  a1.4xlarge: 182.58 (SE +/- 0.15, N = 3; Min: 182.36 / Avg: 182.58 / Max: 182.88)
  c7g.4xlarge: 29.13 (SE +/- 0.03, N = 3; Min: 29.1 / Avg: 29.13 / Max: 29.18)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: CG.C (Total Mop/s, More Is Better)
  a1.4xlarge: 1213.15 (SE +/- 11.79, N = 6; Min: 1179 / Avg: 1213.15 / Max: 1248.52)
  c7g.4xlarge: 6571.95 (SE +/- 17.12, N = 3; Min: 6551.05 / Avg: 6571.95 / Max: 6605.88)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks 3.4, Test / Class: IS.D (Total Mop/s, More Is Better)
  a1.4xlarge: 197.57 (SE +/- 0.31, N = 3; Min: 197.2 / Avg: 197.57 / Max: 198.19)
  c7g.4xlarge: 1041.90 (SE +/- 2.29, N = 3; Min: 1038.58 / Avg: 1041.9 / Max: 1046.3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1, Input: Carbon Nanotube (Seconds, Fewer Is Better)
  a1.4xlarge: 769.35 (SE +/- 5.37, N = 3; Min: 763.32 / Avg: 769.35 / Max: 780.07)
  c7g.4xlarge: 155.18 (SE +/- 0.08, N = 3; Min: 155.01 / Avg: 155.18 / Max: 155.29)
  1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, More Is Better)
  a1.4xlarge: 2328.27 (SE +/- 6.27, N = 3; Min: 2316.57 / Avg: 2328.27 / Max: 2338.01)
  c7g.4xlarge: 10940.94 (SE +/- 76.73, N = 3; Min: 10787.69 / Avg: 10940.94 / Max: 11024.62)
  1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
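
For a sense of what "average inference time" means here, below is a minimal sketch using the TensorFlow Lite Python interpreter. The model filename, the random input, and the 50-iteration loop are assumptions for illustration; the test profile uses the TensorFlow Lite benchmark tooling rather than this script.

    import time
    import numpy as np
    import tensorflow as tf  # assumes the TensorFlow pip package is installed

    interpreter = tf.lite.Interpreter(model_path="mobilenet_float.tflite")  # hypothetical model file
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    # Feed random data of the expected input shape (float32 model assumed) and average the latency.
    dummy = np.random.random_sample(inp["shape"]).astype(np.float32)
    times = []
    for _ in range(50):
        interpreter.set_tensor(inp["index"], dummy)
        start = time.perf_counter()
        interpreter.invoke()
        times.append((time.perf_counter() - start) * 1e6)  # microseconds, matching the units below
    print(f"average inference time: {sum(times) / len(times):.1f} us")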

TensorFlow Lite 2022-05-18, Model: Mobilenet Float (Microseconds, Fewer Is Better)
  a1.4xlarge: 9990.15 (SE +/- 113.94, N = 3; Min: 9831.11 / Avg: 9990.15 / Max: 10211)
  c7g.4xlarge: 2156.60 (SE +/- 19.61, N = 3; Min: 2129.52 / Avg: 2156.6 / Max: 2194.7)

TensorFlow Lite 2022-05-18, Model: Inception V4 (Microseconds, Fewer Is Better)
  a1.4xlarge: 188910.0 (SE +/- 1746.17, N = 3; Min: 185496 / Avg: 188910 / Max: 191254)
  c7g.4xlarge: 41855.1 (SE +/- 210.27, N = 3; Min: 41440.3 / Avg: 41855.1 / Max: 42122.5)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
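
A rough Python equivalent of what the profile wraps is shown below; the core count and the use of subprocess are assumptions for illustration, and the openssl binary is assumed to be on PATH.

    import subprocess

    # Run OpenSSL's built-in RSA benchmark across 16 workers (both instances expose 16 cores).
    result = subprocess.run(
        ["openssl", "speed", "-multi", "16", "rsa4096"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # reports RSA4096 sign/s and verify/s, the figures graphed below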

OpenSSL 3.0, Algorithm: RSA4096 (sign/s, More Is Better)
  a1.4xlarge: 588.3 (SE +/- 0.12, N = 3; Min: 588.1 / Avg: 588.3 / Max: 588.5)
  c7g.4xlarge: 2546.4 (SE +/- 0.23, N = 3; Min: 2546 / Avg: 2546.4 / Max: 2546.8)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
  a1.4xlarge: 171169.0 (SE +/- 825.35, N = 3; Min: 170102 / Avg: 171168.67 / Max: 172793)
  c7g.4xlarge: 40051.3 (SE +/- 305.31, N = 3; Min: 39503.5 / Avg: 40051.33 / Max: 40558.8)

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: MG.C (Total Mop/s, More Is Better)
  a1.4xlarge: 3266.36 (SE +/- 1.64, N = 3; Min: 3263.37 / Avg: 3266.36 / Max: 3269.01)
  c7g.4xlarge: 13481.61 (SE +/- 4.69, N = 3; Min: 13472.59 / Avg: 13481.61 / Max: 13488.33)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks 3.4, Test / Class: FT.C (Total Mop/s, More Is Better)
  a1.4xlarge: 2927.16 (SE +/- 1.73, N = 3; Min: 2923.87 / Avg: 2927.16 / Max: 2929.76)
  c7g.4xlarge: 11791.77 (SE +/- 1.17, N = 3; Min: 11789.44 / Avg: 11791.77 / Max: 11792.99)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: 20k Atoms (ns/day, More Is Better)
  a1.4xlarge: 2.885 (SE +/- 0.074, N = 3; Min: 2.74 / Avg: 2.89 / Max: 2.96)
  c7g.4xlarge: 11.425
  1. (CXX) g++ options: -O3 -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP CFD Solver (Seconds, Fewer Is Better)
  a1.4xlarge: 41.45 (SE +/- 0.08, N = 3; Min: 41.31 / Avg: 41.45 / Max: 41.6)
  c7g.4xlarge: 10.48 (SE +/- 0.02, N = 3; Min: 10.44 / Avg: 10.48 / Max: 10.51)
  1. (CXX) g++ options: -O2 -lOpenCL

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0, Algorithm: RSA4096 (verify/s, More Is Better)
  a1.4xlarge: 45328.6 (SE +/- 63.75, N = 3; Min: 45201.2 / Avg: 45328.63 / Max: 45396)
  c7g.4xlarge: 178460.4 (SE +/- 82.61, N = 3; Min: 178358.2 / Avg: 178460.37 / Max: 178623.9)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: Mobilenet Quant (Microseconds, Fewer Is Better)
  a1.4xlarge: 5724.66 (SE +/- 20.90, N = 3; Min: 5686.62 / Avg: 5724.66 / Max: 5758.7)
  c7g.4xlarge: 1502.95 (SE +/- 17.76, N = 3; Min: 1468.14 / Avg: 1502.95 / Max: 1526.49)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
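
As a minimal sketch of CPU inference with the onnxruntime Python API: the model filename, the single-input assumption, and the float32 dummy tensor are illustrative assumptions, not how the test profile itself invokes the runtime.

    import numpy as np
    import onnxruntime as ort  # pip install onnxruntime

    # Hypothetical single-input model; the test profile pulls models such as GPT-2,
    # bertsquad-12 and ArcFace ResNet-100 from the ONNX Zoo.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    inp = session.get_inputs()[0]

    # Replace dynamic dimensions with 1 so a dummy tensor can be built (dtype assumed float32).
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    dummy = np.random.random_sample(shape).astype(np.float32)

    outputs = session.run(None, {inp.name: dummy})
    print([o.shape for o in outputs])

The results below are reported as inferences per minute with the CPU execution provider and the standard executor.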

ONNX Runtime 1.11, Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  a1.4xlarge: 10 (SE +/- 0.00, N = 3; Min: 9.5 / Avg: 9.5 / Max: 9.5)
  c7g.4xlarge: 38 (SE +/- 0.00, N = 3; Min: 38 / Avg: 38 / Max: 38)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48, Concurrent Requests: 1000 (Requests Per Second, More Is Better)
  a1.4xlarge: 19278.68 (SE +/- 98.61, N = 3; Min: 19082.25 / Avg: 19278.68 / Max: 19392.14)
  c7g.4xlarge: 72719.33 (SE +/- 83.83, N = 3; Min: 72567.8 / Avg: 72719.33 / Max: 72857.22)
  1. (CC) gcc options: -shared -fPIC -O2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  a1.4xlarge: 757 (SE +/- 0.50, N = 3; Min: 756.5 / Avg: 757 / Max: 758)
  c7g.4xlarge: 2817 (SE +/- 1.86, N = 3; Min: 2815 / Avg: 2817.33 / Max: 2821)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11, Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  a1.4xlarge: 165 (SE +/- 0.50, N = 3; Min: 163.5 / Avg: 164.5 / Max: 165)
  c7g.4xlarge: 609 (SE +/- 0.00, N = 3; Min: 608.5 / Avg: 608.5 / Max: 608.5)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: SqueezeNet (Microseconds, Fewer Is Better)
  a1.4xlarge: 12014.70 (SE +/- 46.48, N = 3; Min: 11923.3 / Avg: 12014.7 / Max: 12075.1)
  c7g.4xlarge: 3257.94 (SE +/- 22.07, N = 3; Min: 3216.26 / Avg: 3257.94 / Max: 3291.38)

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48, Concurrent Requests: 500 (Requests Per Second, More Is Better)
  a1.4xlarge: 20133.49 (SE +/- 93.64, N = 3; Min: 19971.62 / Avg: 20133.49 / Max: 20295.99)
  c7g.4xlarge: 73546.32 (SE +/- 89.82, N = 3; Min: 73405.22 / Avg: 73546.32 / Max: 73713.17)
  1. (CC) gcc options: -shared -fPIC -O2

Apache HTTP Server 2.4.48, Concurrent Requests: 100 (Requests Per Second, More Is Better)
  a1.4xlarge: 18636.43 (SE +/- 28.97, N = 3; Min: 18584 / Avg: 18636.43 / Max: 18683.99)
  c7g.4xlarge: 67231.88 (SE +/- 38.09, N = 3; Min: 67187.11 / Avg: 67231.88 / Max: 67307.65)
  1. (CC) gcc options: -shared -fPIC -O2

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1, Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  a1.4xlarge: 0.316 (SE +/- 0.000, N = 3; Min: 0.32 / Avg: 0.32 / Max: 0.32)
  c7g.4xlarge: 1.128 (SE +/- 0.002, N = 3; Min: 1.13 / Avg: 1.13 / Max: 1.13)
  1. (CXX) g++ options: -O3

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 17.3, Time To Compile (Seconds, Fewer Is Better)
  a1.4xlarge: 1765.91 (SE +/- 1.80, N = 3; Min: 1762.78 / Avg: 1765.91 / Max: 1769.01)
  c7g.4xlarge: 497.58 (SE +/- 2.06, N = 3; Min: 493.85 / Avg: 497.58 / Max: 500.97)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  a1.4xlarge: 115 (SE +/- 0.88, N = 3; Min: 113.5 / Avg: 115.17 / Max: 116.5)
  c7g.4xlarge: 407 (SE +/- 0.17, N = 3; Min: 407 / Avg: 407.17 / Max: 407.5)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48, Concurrent Requests: 200 (Requests Per Second, More Is Better)
  a1.4xlarge: 20887.58 (SE +/- 59.55, N = 3; Min: 20769.87 / Avg: 20887.58 / Max: 20962.15)
  c7g.4xlarge: 73676.95 (SE +/- 649.31, N = 3; Min: 72788.14 / Avg: 73676.95 / Max: 74941.3)
  1. (CC) gcc options: -shared -fPIC -O2

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: Rhodopsin Protein (ns/day, More Is Better)
  a1.4xlarge: 3.245 (SE +/- 0.040, N = 3; Min: 3.17 / Avg: 3.25 / Max: 3.3)
  c7g.4xlarge: 11.291 (SE +/- 0.060, N = 3; Min: 11.17 / Avg: 11.29 / Max: 11.36)
  1. (CXX) g++ options: -O3 -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  a1.4xlarge: 2312 (SE +/- 2.20, N = 3; Min: 2308 / Avg: 2312.17 / Max: 2315.5)
  c7g.4xlarge: 7990 (SE +/- 2.40, N = 3; Min: 7985.5 / Avg: 7990.17 / Max: 7993.5)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: SP.C (Total Mop/s, More Is Better)
  a1.4xlarge: 1293.80 (SE +/- 2.51, N = 3; Min: 1288.84 / Avg: 1293.8 / Max: 1296.88)
  c7g.4xlarge: 4467.19 (SE +/- 9.61, N = 3; Min: 4449.83 / Avg: 4467.19 / Max: 4483.01)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0, Throughput Test: DistinctUserID (GB/s, More Is Better)
  a1.4xlarge: 0.80 (SE +/- 0.00, N = 3; Min: 0.8 / Avg: 0.8 / Max: 0.8)
  c7g.4xlarge: 2.69 (SE +/- 0.00, N = 3; Min: 2.69 / Avg: 2.69 / Max: 2.69)
  1. (CXX) g++ options: -O3

simdjson 1.0, Throughput Test: PartialTweets (GB/s, More Is Better)
  a1.4xlarge: 0.78 (SE +/- 0.00, N = 3; Min: 0.78 / Avg: 0.78 / Max: 0.78)
  c7g.4xlarge: 2.62 (SE +/- 0.00, N = 3; Min: 2.62 / Avg: 2.62 / Max: 2.62)
  1. (CXX) g++ options: -O3

Timed ImageMagick Compilation

This test times how long it takes to build ImageMagick. Learn more via the OpenBenchmarking.org test page.

Timed ImageMagick Compilation 6.9.0, Time To Compile (Seconds, Fewer Is Better)
  a1.4xlarge: 93.63 (SE +/- 0.27, N = 3; Min: 93.22 / Avg: 93.63 / Max: 94.13)
  c7g.4xlarge: 27.90 (SE +/- 0.13, N = 3; Min: 27.67 / Avg: 27.9 / Max: 28.12)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Jython (msec, Fewer Is Better)
  a1.4xlarge: 12997 (SE +/- 48.38, N = 4; Min: 12878 / Avg: 12997.25 / Max: 13093)
  c7g.4xlarge: 3940 (SE +/- 6.99, N = 4; Min: 3927 / Avg: 3940.25 / Max: 3960)

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: BT.C (Total Mop/s, More Is Better)
  a1.4xlarge: 3148.18 (SE +/- 3.44, N = 3; Min: 3141.34 / Avg: 3148.18 / Max: 3152.19)
  c7g.4xlarge: 10339.53 (SE +/- 7.36, N = 3; Min: 10325.26 / Avg: 10339.53 / Max: 10349.81)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0, Build System: Ninja (Seconds, Fewer Is Better)
  a1.4xlarge: 1784.60 (SE +/- 0.34, N = 3; Min: 1784.16 / Avg: 1784.6 / Max: 1785.27)
  c7g.4xlarge: 544.93 (SE +/- 5.19, N = 3; Min: 535.72 / Avg: 544.93 / Max: 553.68)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Tradesoap (msec, Fewer Is Better)
  a1.4xlarge: 11182 (SE +/- 71.92, N = 4; Min: 10986 / Avg: 11181.5 / Max: 11320)
  c7g.4xlarge: 3524 (SE +/- 14.95, N = 4; Min: 3487 / Avg: 3523.75 / Max: 3551)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10, Encoder Speed: 2 (Seconds, Fewer Is Better)
  a1.4xlarge: 449.02 (SE +/- 0.29, N = 3; Min: 448.45 / Avg: 449.02 / Max: 449.4)
  c7g.4xlarge: 141.70 (SE +/- 0.11, N = 3; Min: 141.5 / Avg: 141.7 / Max: 141.88)
  1. (CXX) g++ options: -O3 -fPIC -lm

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0, Throughput Test: Kostya (GB/s, More Is Better)
  a1.4xlarge: 0.63 (SE +/- 0.00, N = 3; Min: 0.63 / Avg: 0.63 / Max: 0.63)
  c7g.4xlarge: 1.94 (SE +/- 0.00, N = 3; Min: 1.94 / Avg: 1.94 / Max: 1.94)
  1. (CXX) g++ options: -O3

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code offering Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13, Time To Compile (Seconds, Fewer Is Better)
  a1.4xlarge: 353.91 (SE +/- 1.89, N = 3; Min: 351.57 / Avg: 353.91 / Max: 357.65)
  c7g.4xlarge: 115.02 (SE +/- 0.64, N = 3; Min: 113.8 / Avg: 115.02 / Max: 115.97)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10, Encoder Speed: 6 (Seconds, Fewer Is Better)
  a1.4xlarge: 28.778 (SE +/- 0.064, N = 3; Min: 28.68 / Avg: 28.78 / Max: 28.9)
  c7g.4xlarge: 9.385 (SE +/- 0.025, N = 3; Min: 9.34 / Avg: 9.38 / Max: 9.41)
  1. (CXX) g++ options: -O3 -fPIC -lm

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: LU.C (Total Mop/s, More Is Better)
  a1.4xlarge: 2558.12 (SE +/- 0.15, N = 3; Min: 2557.84 / Avg: 2558.12 / Max: 2558.37)
  c7g.4xlarge: 7730.41 (SE +/- 1.96, N = 3; Min: 7728.06 / Avg: 7730.41 / Max: 7734.31)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 21.06, Test: Compression Rating (MIPS, More Is Better)
  a1.4xlarge: 32498 (SE +/- 91.00, N = 3; Min: 32380 / Avg: 32498 / Max: 32677)
  c7g.4xlarge: 97824 (SE +/- 159.36, N = 3; Min: 97563 / Avg: 97824.33 / Max: 98113)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10, Encoder Speed: 0 (Seconds, Fewer Is Better)
  a1.4xlarge: 768.30 (SE +/- 0.58, N = 3; Min: 767.54 / Avg: 768.3 / Max: 769.45)
  c7g.4xlarge: 256.84 (SE +/- 0.18, N = 3; Min: 256.51 / Avg: 256.84 / Max: 257.1)
  1. (CXX) g++ options: -O3 -fPIC -lm

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research. Gem5 is widely used for computer architecture research within the industry, academia, and more. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2, Time To Compile (Seconds, Fewer Is Better)
  a1.4xlarge: 1155.62 (SE +/- 0.78, N = 3; Min: 1154.47 / Avg: 1155.62 / Max: 1157.1)
  c7g.4xlarge: 391.17 (SE +/- 1.33, N = 3; Min: 389.16 / Avg: 391.17 / Max: 393.69)

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
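
PyBench-style micro-timing can be approximated with the standard-library timeit module; the sketch below is an illustration in that spirit, not the PyBench harness itself, and the two statements are made-up stand-ins for tests like BuiltinFunctionCalls and NestedForLoops.

    import timeit

    # Two micro-benchmarks in the spirit of PyBench's categories.
    tests = {
        "builtin_calls": "len('phoronix'); abs(-3); min(1, 2, 3)",
        "nested_loops": "x = 0\nfor i in range(100):\n    for j in range(10):\n        x += 1",
    }

    for name, stmt in tests.items():
        # repeat() returns one total per round; report the best per-iteration time in microseconds.
        best = min(timeit.repeat(stmt, number=10_000, repeat=5))
        print(f"{name}: {best / 10_000 * 1e6:.2f} us per iteration")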

PyBench 2018-02-16, Total For Average Test Times (Milliseconds, Fewer Is Better)
  a1.4xlarge: 3452 (SE +/- 18.15, N = 3; Min: 3416 / Avg: 3452 / Max: 3474)
  c7g.4xlarge: 1185 (SE +/- 0.33, N = 3; Min: 1184 / Avg: 1184.67 / Max: 1185)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10, Encoder Speed: 6, Lossless (Seconds, Fewer Is Better)
  a1.4xlarge: 33.99 (SE +/- 0.31, N = 3; Min: 33.65 / Avg: 33.99 / Max: 34.6)
  c7g.4xlarge: 11.91 (SE +/- 0.01, N = 3; Min: 11.89 / Avg: 11.91 / Max: 11.92)
  1. (CXX) g++ options: -O3 -fPIC -lm

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Tradebeans (msec, Fewer Is Better)
  a1.4xlarge: 9045 (SE +/- 44.35, N = 4; Min: 8913 / Avg: 9045 / Max: 9102)
  c7g.4xlarge: 3203 (SE +/- 26.73, N = 4; Min: 3141 / Avg: 3202.5 / Max: 3264)

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 7.4.2, Time To Compile (Seconds, Fewer Is Better)
  a1.4xlarge: 196.03 (SE +/- 0.08, N = 3; Min: 195.88 / Avg: 196.03 / Max: 196.15)
  c7g.4xlarge: 69.48 (SE +/- 0.11, N = 3; Min: 69.32 / Avg: 69.48 / Max: 69.7)

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41, Time To Compile (Seconds, Fewer Is Better)
  a1.4xlarge: 74.74 (SE +/- 0.01, N = 3; Min: 74.72 / Avg: 74.74 / Max: 74.76)
  c7g.4xlarge: 26.94 (SE +/- 0.05, N = 3; Min: 26.87 / Avg: 26.94 / Max: 27.04)

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to benchmark various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1, PHP Benchmark Suite (Score, More Is Better)
  a1.4xlarge: 241259 (SE +/- 816.27, N = 3; Min: 239636 / Avg: 241259.33 / Max: 242221)
  c7g.4xlarge: 666484 (SE +/- 525.83, N = 3; Min: 665522 / Avg: 666484 / Max: 667333)

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: EP.D (Total Mop/s, More Is Better)
  a1.4xlarge: 339.20 (SE +/- 0.24, N = 3; Min: 338.94 / Avg: 339.2 / Max: 339.67)
  c7g.4xlarge: 934.72 (SE +/- 0.39, N = 3; Min: 934.01 / Avg: 934.72 / Max: 935.36)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0, Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
  a1.4xlarge: 1121.7 (SE +/- 4.74, N = 3; Min: 1115.1 / Avg: 1121.7 / Max: 1130.9)
  c7g.4xlarge: 3050.3 (SE +/- 7.75, N = 3; Min: 3042.5 / Avg: 3050.3 / Max: 3065.8)
  -llzma; 1. (CC) gcc options: -O3 -pthread -lz

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
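
For context on the "lossless" and "highest compression" settings below, here is a rough Pillow-based sketch. Pillow's WebP writer, the sample filename, and the parameter choices are assumptions for illustration only; the test profile itself drives the cwebp utility.

    import time
    from PIL import Image  # pip install Pillow

    img = Image.open("sample.jpg")  # hypothetical stand-in for the 6000x4000 test JPEG

    # method=6 is the slowest/highest-effort encoder setting, akin to cwebp -m 6.
    settings = [
        ("quality 100, lossless", {"lossless": True, "quality": 100}),
        ("quality 100, lossless, highest compression", {"lossless": True, "quality": 100, "method": 6}),
    ]
    for i, (label, kwargs) in enumerate(settings):
        start = time.perf_counter()
        img.save(f"out-{i}.webp", format="WEBP", **kwargs)
        print(f"{label}: {time.perf_counter() - start:.2f} s")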

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better)
  a1.4xlarge: 61.80 (SE +/- 0.06, N = 3; Min: 61.68 / Avg: 61.8 / Max: 61.88)
  c7g.4xlarge: 22.77 (SE +/- 0.09, N = 3; Min: 22.67 / Avg: 22.77 / Max: 22.94)
  -ltiff; 1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: NASNet Mobile (Microseconds, Fewer Is Better)
  a1.4xlarge: 30986.7 (SE +/- 49.84, N = 3; Min: 30906.7 / Avg: 30986.67 / Max: 31078.2)
  c7g.4xlarge: 11591.9 (SE +/- 121.56, N = 15; Min: 10847.8 / Avg: 11591.94 / Max: 12395.4)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0, Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
  a1.4xlarge: 1213.9 (SE +/- 15.28, N = 3; Min: 1184 / Avg: 1213.9 / Max: 1234.3)
  c7g.4xlarge: 3240.6 (SE +/- 6.93, N = 3; Min: 3229.8 / Avg: 3240.57 / Max: 3253.5)
  -llzma; 1. (CC) gcc options: -O3 -pthread -lz

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better)
  a1.4xlarge: 15.209 (SE +/- 0.010, N = 3; Min: 15.19 / Avg: 15.21 / Max: 15.22)
  c7g.4xlarge: 5.765 (SE +/- 0.021, N = 3; Min: 5.73 / Avg: 5.76 / Max: 5.8)
  1. (CXX) g++ options: -O3 -fPIC -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better)
  a1.4xlarge: 124.71 (SE +/- 0.09, N = 3; Min: 124.55 / Avg: 124.71 / Max: 124.84)
  c7g.4xlarge: 48.21 (SE +/- 0.01, N = 3; Min: 48.2 / Avg: 48.21 / Max: 48.22)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 (also noted: -ltiff)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
  a1.4xlarge: 1364.6 (SE +/- 21.59, N = 3; Min: 1321.4 / Avg: 1364.57 / Max: 1387.3)
  c7g.4xlarge: 3508.5 (SE +/- 2.07, N = 3; Min: 3504.5 / Avg: 3508.47 / Max: 3511.5)
  1. (CC) gcc options: -O3 -pthread -lz (also noted: -llzma)

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, Fewer Is Better)
  a1.4xlarge: 644.79 (SE +/- 0.49, N = 3; Min: 644.13 / Avg: 644.79 / Max: 645.74)
  c7g.4xlarge: 251.40 (SE +/- 0.24, N = 3; Min: 251.04 / Avg: 251.4 / Max: 251.85)
  1. (CC) gcc options: -O3 -std=c99 -pedantic -lm

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance (Nodes Per Second, More Is Better)
  a1.4xlarge: 538500 (SE +/- 196.86, N = 5; Min: 537869 / Avg: 538499.8 / Max: 538921)
  c7g.4xlarge: 1370094 (SE +/- 0.00, N = 5; Min: 1370094 / Avg: 1370094 / Max: 1370094)
  1. (CC) gcc options: -O3 -march=native

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 13 - Total Time (Nodes Per Second, More Is Better)
  a1.4xlarge: 10980430 (SE +/- 123749.22, N = 3; Min: 10738520 / Avg: 10980430 / Max: 11146676)
  c7g.4xlarge: 27608891 (SE +/- 153578.64, N = 3; Min: 27303905 / Avg: 27608891 / Max: 27792957)
  1. (CXX) g++ options: -lgcov -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -flto -flto=jobserver

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better)
  a1.4xlarge: 360.30 (SE +/- 0.07, N = 3; Min: 360.17 / Avg: 360.3 / Max: 360.37)
  c7g.4xlarge: 143.33 (SE +/- 0.15, N = 3; Min: 143.14 / Avg: 143.33 / Max: 143.64)
  1. (CXX) g++ options: -O2 -lOpenCL

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34 - Circuit: C7552 (Seconds, Fewer Is Better)
  a1.4xlarge: 480.79 (SE +/- 1.19, N = 3; Min: 478.62 / Avg: 480.79 / Max: 482.72)
  c7g.4xlarge: 191.29 (SE +/- 1.94, N = 3; Min: 188.31 / Avg: 191.29 / Max: 194.94)
  1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1 - Concurrent Requests: 1000 (Requests Per Second, More Is Better)
  a1.4xlarge: 138205.11 (SE +/- 66.96, N = 3; Min: 138094.88 / Avg: 138205.11 / Max: 138326.1)
  c7g.4xlarge: 346814.75 (SE +/- 1410.11, N = 3; Min: 344622.05 / Avg: 346814.75 / Max: 349447.11)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx 1.21.1 - Concurrent Requests: 200 (Requests Per Second, More Is Better)
  a1.4xlarge: 141436.20 (SE +/- 133.96, N = 3; Min: 141169.18 / Avg: 141436.2 / Max: 141588.71)
  c7g.4xlarge: 352380.98 (SE +/- 3986.77, N = 3; Min: 344424.56 / Avg: 352380.98 / Max: 356811.55)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx 1.21.1 - Concurrent Requests: 500 (Requests Per Second, More Is Better)
  a1.4xlarge: 139414.84 (SE +/- 141.15, N = 3; Min: 139196.57 / Avg: 139414.84 / Max: 139679.01)
  c7g.4xlarge: 346613.34 (SE +/- 1017.52, N = 3; Min: 344614.99 / Avg: 346613.34 / Max: 347945.69)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time (Seconds, Fewer Is Better)
  a1.4xlarge: 93.80 (SE +/- 0.94, N = 15; Min: 89.04 / Avg: 93.8 / Max: 100.81)
  c7g.4xlarge: 37.86 (SE +/- 0.01, N = 3; Min: 37.84 / Avg: 37.86 / Max: 37.89)
  1. (CXX) g++ options: -pipe -O3 -ffast-math -R/usr/lib -lXpm -lSM -lICE -lX11 -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

SecureMark

SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks, More Is Better)
  a1.4xlarge: 74356 (SE +/- 59.40, N = 3; Min: 74239.26 / Avg: 74356.36 / Max: 74432.21)
  c7g.4xlarge: 183708 (SE +/- 773.26, N = 3; Min: 182165.75 / Avg: 183708.29 / Max: 184575.7)
  1. (CC) gcc options: -pedantic -O3

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better)
  a1.4xlarge: 16.0 (SE +/- 0.00, N = 3; Min: 16 / Avg: 16 / Max: 16)
  c7g.4xlarge: 39.5 (SE +/- 0.23, N = 3; Min: 39 / Avg: 39.47 / Max: 39.7)
  1. (CC) gcc options: -O3 -pthread -lz (also noted: -llzma)

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
  a1.4xlarge: 16.9 (SE +/- 0.03, N = 3; Min: 16.9 / Avg: 16.93 / Max: 17)
  c7g.4xlarge: 41.2 (SE +/- 0.00, N = 3; Min: 41.2 / Avg: 41.2 / Max: 41.2)
  1. (CC) gcc options: -O3 -pthread -lz (also noted: -llzma)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1 - Concurrent Requests: 100 (Requests Per Second, More Is Better)
  a1.4xlarge: 143155.48 (SE +/- 22.67, N = 3; Min: 143113.1 / Avg: 143155.48 / Max: 143190.63)
  c7g.4xlarge: 345710.87 (SE +/- 2009.97, N = 3; Min: 341701.14 / Avg: 345710.87 / Max: 347963.74)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Thorough (Seconds, Fewer Is Better)
  a1.4xlarge: 33.52 (SE +/- 0.01, N = 3; Min: 33.51 / Avg: 33.52 / Max: 33.53)
  c7g.4xlarge: 13.92 (SE +/- 0.00, N = 3; Min: 13.92 / Avg: 13.92 / Max: 13.93)
  1. (CXX) g++ options: -O3 -flto -pthread

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34 - Circuit: C2670 (Seconds, Fewer Is Better)
  a1.4xlarge: 473.90 (SE +/- 3.48, N = 3; Min: 467.68 / Avg: 473.9 / Max: 479.7)
  c7g.4xlarge: 198.22 (SE +/- 0.86, N = 3; Min: 197.24 / Avg: 198.22 / Max: 199.94)
  1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0 - Throughput Test: LargeRandom (GB/s, More Is Better)
  a1.4xlarge: 0.3 (SE +/- 0.00, N = 3; Min: 0.3 / Avg: 0.3 / Max: 0.3)
  c7g.4xlarge: 0.7 (SE +/- 0.00, N = 3; Min: 0.7 / Avg: 0.7 / Max: 0.7)
  1. (CXX) g++ options: -O3
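
As a minimal illustration of the parsing work simdjson is doing here, the DOM API can be used as follows; the inline JSON is illustrative, while the benchmark parses much larger generated documents:

    // Minimal simdjson DOM-API usage; the inline document stands in for the benchmark's inputs.
    #include <simdjson.h>
    #include <cstdint>
    #include <iostream>
    #include <string_view>

    int main() {
        using namespace simdjson;
        dom::parser parser;
        auto json = R"({"user":{"id":42,"name":"graviton"}})"_padded;

        dom::element doc = parser.parse(json);          // throws simdjson_error on parse failure
        int64_t id = doc["user"]["id"];
        std::string_view name = doc["user"]["name"];
        std::cout << "id=" << id << " name=" << name << "\n";
        return 0;
    }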

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  a1.4xlarge: 165513333 (SE +/- 8819.17, N = 3; Min: 165500000 / Avg: 165513333.33 / Max: 165530000)
  c7g.4xlarge: 383606667 (SE +/- 400097.21, N = 3; Min: 382810000 / Avg: 383606666.67 / Max: 384070000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
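
As a hedged sketch of the kind of work this profile measures, the snippet below pushes a 256-sample buffer through a length-57 FIR filter using Liquid-DSP's firfilt interface; the filter design parameters are illustrative assumptions rather than the benchmark's exact configuration:

    // Hedged sketch of FIR filtering with Liquid-DSP's firfilt interface; design
    // parameters (cutoff, stop-band attenuation) are assumptions for illustration.
    #include <liquid/liquid.h>
    #include <complex>
    #include <cstdio>
    #include <vector>

    int main() {
        const unsigned int h_len = 57;     // filter length matching this test profile
        const unsigned int buf_len = 256;  // buffer length matching this test profile

        // Kaiser-windowed low-pass design: normalized cutoff 0.25, 60 dB stop-band, zero fractional delay.
        firfilt_crcf f = firfilt_crcf_create_kaiser(h_len, 0.25f, 60.0f, 0.0f);

        std::vector<std::complex<float>> x(buf_len, std::complex<float>(1.0f, 0.0f)), y(buf_len);
        for (unsigned int i = 0; i < buf_len; i++) {
            firfilt_crcf_push(f, x[i]);      // push one input sample
            firfilt_crcf_execute(f, &y[i]);  // compute one filtered output sample
        }

        std::printf("y[0] = %f%+fi\n", y[0].real(), y[0].imag());
        firfilt_crcf_destroy(f);
        return 0;
    }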

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better)
  a1.4xlarge: 6740 (SE +/- 63.66, N = 4; Min: 6626 / Avg: 6740 / Max: 6920)
  c7g.4xlarge: 2951 (SE +/- 32.57, N = 5; Min: 2868 / Avg: 2951 / Max: 3068)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: CPU Stress (Bogo Ops/s, More Is Better)
  a1.4xlarge: 2366.00 (SE +/- 0.16, N = 3; Min: 2365.79 / Avg: 2366 / Max: 2366.31)
  c7g.4xlarge: 5029.71 (SE +/- 0.41, N = 3; Min: 5028.91 / Avg: 5029.71 / Max: 5030.29)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better)
  a1.4xlarge: 15331550 (SE +/- 106812.26, N = 3; Min: 15140045 / Avg: 15331549.67 / Max: 15509284)
  c7g.4xlarge: 32134123 (SE +/- 104795.40, N = 3; Min: 32023095 / Avg: 32134123.33 / Max: 32343588)

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109 - Test: VoiceMark_100 (Voices, More Is Better)
  a1.4xlarge: 331.07 (SE +/- 0.00, N = 3; Min: 331.07 / Avg: 331.07 / Max: 331.07)
  c7g.4xlarge: 675.64 (SE +/- 0.32, N = 3; Min: 675.15 / Avg: 675.64 / Max: 676.25)
  1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: SHA256 (byte/s, More Is Better)
  a1.4xlarge: 6785689517 (SE +/- 12563225.46, N = 3; Min: 6760580260 / Avg: 6785689516.67 / Max: 6799049020)
  c7g.4xlarge: 13722045973 (SE +/- 7739237.92, N = 3; Min: 13712096220 / Avg: 13722045973.33 / Max: 13737289210)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl
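
As a minimal illustration of the SHA256 hashing being measured, OpenSSL 3.0's EVP interface can digest a buffer in one call; the buffer size here is arbitrary, whereas "openssl speed sha256" measures broadly this code path across many block sizes:

    // Minimal one-shot SHA256 with OpenSSL 3.0's EVP interface; the buffer contents
    // and size are arbitrary stand-ins for the data "openssl speed" hashes.
    #include <openssl/evp.h>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<unsigned char> buf(64 * 1024, 0xab);   // 64 KiB of dummy data
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int md_len = 0;

        if (!EVP_Digest(buf.data(), buf.size(), md, &md_len, EVP_sha256(), nullptr)) {
            std::fprintf(stderr, "digest failed\n");
            return 1;
        }

        std::printf("SHA256 of %zu bytes: ", buf.size());
        for (unsigned int i = 0; i < md_len; i++) std::printf("%02x", md[i]);
        std::printf("\n");
        return 0;
    }

Link with -lcrypto.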

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Vector Math (Bogo Ops/s, More Is Better)
  a1.4xlarge: 27341.47 (SE +/- 0.49, N = 3; Min: 27340.68 / Avg: 27341.47 / Max: 27342.37)
  c7g.4xlarge: 55258.17 (SE +/- 17.05, N = 3; Min: 55237.21 / Avg: 55258.17 / Max: 55291.94)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 - Test: Matrix Math (Bogo Ops/s, More Is Better)
  a1.4xlarge: 7356.85 (SE +/- 40.28, N = 3; Min: 7276.52 / Avg: 7356.85 / Max: 7402.25)
  c7g.4xlarge: 80088.74 (SE +/- 3.18, N = 3; Min: 80082.86 / Avg: 80088.74 / Max: 80093.79)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Exhaustive (Seconds, Fewer Is Better)
  a1.4xlarge: 277.77 (SE +/- 0.07, N = 3; Min: 277.66 / Avg: 277.77 / Max: 277.89)
  c7g.4xlarge: 139.38 (SE +/- 0.01, N = 3; Min: 139.36 / Avg: 139.38 / Max: 139.39)
  1. (CXX) g++ options: -O3 -flto -pthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
  a1.4xlarge: 203869.40 (SE +/- 116.54, N = 3; Min: 203704.88 / Avg: 203869.4 / Max: 204094.65)
  c7g.4xlarge: 405413.86 (SE +/- 3211.91, N = 3; Min: 399077.13 / Avg: 405413.86 / Max: 409495.17)
  1. (CC) gcc options: -O2 -lrt" -lrt

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Crypto (Bogo Ops/s, More Is Better)
  a1.4xlarge: 11985.38 (SE +/- 6.29, N = 3; Min: 11977.75 / Avg: 11985.38 / Max: 11997.86)
  c7g.4xlarge: 23181.81 (SE +/- 32.01, N = 3; Min: 23119.13 / Avg: 23181.81 / Max: 23224.4)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
  a1.4xlarge: 17.888 (SE +/- 0.072, N = 3; Min: 17.81 / Avg: 17.89 / Max: 18.03)
  c7g.4xlarge: 9.346 (SE +/- 0.007, N = 3; Min: 9.34 / Avg: 9.35 / Max: 9.36)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 (also noted: -ltiff)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 21.06 - Test: Decompression Rating (MIPS, More Is Better)
  a1.4xlarge: 40891 (SE +/- 31.21, N = 3; Min: 40833 / Avg: 40891 / Max: 40940)
  c7g.4xlarge: 73054 (SE +/- 12.88, N = 3; Min: 73037 / Avg: 73053.67 / Max: 73079)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

m-queens 1.2 - Time To Solve (Seconds, Fewer Is Better)
  a1.4xlarge: 110.37 (SE +/- 0.01, N = 3; Min: 110.36 / Avg: 110.37 / Max: 110.38)
  c7g.4xlarge: 66.82 (SE +/- 0.00, N = 3; Min: 66.82 / Avg: 66.82 / Max: 66.83)
  1. (CXX) g++ options: -fopenmp -O2 -march=native

N-Queens

This is a test of the OpenMP version of a test that solves the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.

N-Queens 1.0 - Elapsed Time (Seconds, Fewer Is Better)
  a1.4xlarge: 32.29 (SE +/- 0.00, N = 3; Min: 32.28 / Avg: 32.29 / Max: 32.29)
  c7g.4xlarge: 21.54 (SE +/- 0.00, N = 3; Min: 21.54 / Avg: 21.54 / Max: 21.54)
  1. (CC) gcc options: -static -fopenmp -O3 -march=native
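
Both N-queens profiles above parallelize the search with OpenMP. As an illustrative sketch (not the benchmarks' own source), a bitmask solver can split the work across first-row placements like this; the board size is reduced for the example, whereas the N-Queens test uses 18:

    // Illustrative OpenMP N-queens counter (not the benchmarks' own source): bitmask
    // search parallelized over first-row column placements with a sum reduction.
    #include <cstdio>

    // Count completions given occupied columns and diagonals for the remaining rows.
    static long long solve(int n, int row, unsigned cols, unsigned diag1, unsigned diag2) {
        if (row == n) return 1;
        long long count = 0;
        unsigned free_cells = ~(cols | diag1 | diag2) & ((1u << n) - 1);
        while (free_cells) {
            unsigned bit = free_cells & (0u - free_cells);   // lowest available column
            free_cells ^= bit;
            count += solve(n, row + 1, cols | bit, (diag1 | bit) << 1, (diag2 | bit) >> 1);
        }
        return count;
    }

    int main() {
        const int n = 14;   // reduced board for illustration; the N-Queens test uses 18
        long long total = 0;

        // Each first-row placement roots an independent subtree, so the loop parallelizes cleanly.
        #pragma omp parallel for reduction(+:total) schedule(dynamic)
        for (int col = 0; col < n; col++) {
            unsigned bit = 1u << col;
            total += solve(n, 1, bit, bit << 1, bit >> 1);
        }

        std::printf("%d-queens solutions: %lld\n", n, total);
        return 0;
    }

Compile with -fopenmp, as the benchmarks' own compiler options above show, so the pragma takes effect.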

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: IO_uring (Bogo Ops/s, More Is Better)
  a1.4xlarge: 918172.37 (SE +/- 3840.04, N = 3; Min: 912809.75 / Avg: 918172.37 / Max: 925614.93)
  c7g.4xlarge: 843015.78 (SE +/- 614.16, N = 3; Min: 841810.62 / Avg: 843015.78 / Max: 843823.92)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, More Is Better)
  c7g.4xlarge: 2512.7 (SE +/- 0.15, N = 3; Min: 2512.5 / Avg: 2512.73 / Max: 2513)
  1. (CXX) g++ options: -O3 -march=native -rdynamic

a1.4xlarge: The test run did not produce a result (reported three times).

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: CPU Cache (Bogo Ops/s, More Is Better)
  a1.4xlarge: 464.85 (SE +/- 3.73, N = 3; Min: 458.31 / Avg: 464.85 / Max: 471.24)
  c7g.4xlarge: 64.31 (SE +/- 3.64, N = 12; Min: 40.19 / Avg: 64.31 / Max: 82.06)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better)
  a1.4xlarge: 104.76 (SE +/- 2.00, N = 15; Min: 97.09 / Avg: 104.76 / Max: 116.7)
  c7g.4xlarge: 38.52 (SE +/- 0.02, N = 3; Min: 38.49 / Avg: 38.52 / Max: 38.55)
  1. (CC) gcc options: -lm -lpthread -O3

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, Fewer Is Better)
  a1.4xlarge: 47.43 (SE +/- 0.02, N = 3; Min: 47.4 / Avg: 47.43 / Max: 47.47)
  c7g.4xlarge: 13.30 (SE +/- 0.33, N = 12; Min: 11.89 / Avg: 13.3 / Max: 14.87)
  1. (CXX) g++ options: -O2 -lOpenCL

102 Results Shown

LeelaChessZero
Stress-NG
LeelaChessZero
Zstd Compression
High Performance Conjugate Gradient
Algebraic Multi-Grid Benchmark
Xcompact3d Incompact3d
ACES DGEMM
Xcompact3d Incompact3d
NAS Parallel Benchmarks:
  CG.C
  IS.D
GPAW
LULESH
TensorFlow Lite:
  Mobilenet Float
  Inception V4
OpenSSL
TensorFlow Lite
NAS Parallel Benchmarks:
  MG.C
  FT.C
LAMMPS Molecular Dynamics Simulator
Rodinia
OpenSSL
TensorFlow Lite
ONNX Runtime
Apache HTTP Server
ONNX Runtime:
  super-resolution-10 - CPU - Standard
  ArcFace ResNet-100 - CPU - Standard
TensorFlow Lite
Apache HTTP Server:
  500
  100
GROMACS
Timed Node.js Compilation
ONNX Runtime
Apache HTTP Server
LAMMPS Molecular Dynamics Simulator
ONNX Runtime
NAS Parallel Benchmarks
simdjson:
  DistinctUserID
  PartialTweets
Timed ImageMagick Compilation
DaCapo Benchmark
NAS Parallel Benchmarks
Timed LLVM Compilation
DaCapo Benchmark
libavif avifenc
simdjson
Build2
libavif avifenc
NAS Parallel Benchmarks
7-Zip Compression
libavif avifenc
Timed Gem5 Compilation
PyBench
libavif avifenc
DaCapo Benchmark
Timed PHP Compilation
Timed Apache Compilation
PHPBench
NAS Parallel Benchmarks
Zstd Compression
WebP Image Encode
TensorFlow Lite
Zstd Compression
libavif avifenc
WebP Image Encode
Zstd Compression
Timed MrBayes Analysis
TSCP
Stockfish
Rodinia
Ngspice
nginx:
  1000
  200
  500
POV-Ray
SecureMark
Zstd Compression:
  19, Long Mode - Compression Speed
  19 - Compression Speed
nginx
ASTC Encoder
Ngspice
simdjson
Liquid-DSP
DaCapo Benchmark
Stress-NG
asmFish
Google SynthMark
OpenSSL
Stress-NG:
  Vector Math
  Matrix Math
ASTC Encoder
Coremark
Stress-NG
WebP Image Encode
7-Zip Compression
m-queens
N-Queens
Stress-NG
QuantLib
Stress-NG
C-Ray
Rodinia