Amazon EC2 c7g.4xlarge AWS Graviton3

Graviton3 benchmarks by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2205269-NE-2205259NE55
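
As a minimal sketch of running that comparison on a fresh Ubuntu 22.04 instance (assuming the phoronix-test-suite package is available from the distribution archive; otherwise the upstream package from phoronix-test-suite.com can be used instead):

  # install the Phoronix Test Suite (package name assumed from Ubuntu's archive)
  sudo apt-get update && sudo apt-get install -y phoronix-test-suite
  # fetch this result file, install the required test profiles, run them locally,
  # and merge the new numbers into the comparison
  phoronix-test-suite benchmark 2205269-NE-2205259NE55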

This result file includes tests from the following categories:

BLAS (Basic Linear Algebra Sub-Routine) Tests: 2 tests
C++ Boost Tests: 3 tests
Chess Test Suite: 6 tests
Timed Code Compilation: 7 tests
C/C++ Compiler Tests: 15 tests
Compression Tests: 2 tests
CPU Massive: 22 tests
Creator Workloads: 7 tests
Cryptography: 2 tests
Fortran Tests: 4 tests
Go Language Tests: 2 tests
HPC - High Performance Computing: 14 tests
Imaging: 2 tests
Common Kernel Benchmarks: 3 tests
Linear Algebra: 2 tests
Machine Learning: 3 tests
Molecular Dynamics: 4 tests
MPI Benchmarks: 7 tests
Multi-Core: 23 tests
NVIDIA GPU Compute: 3 tests
OpenMPI Tests: 10 tests
Programmer / Developer System Benchmarks: 12 tests
Python Tests: 6 tests
Raytracing: 2 tests
Renderers: 2 tests
Scientific Computing: 8 tests
Server: 5 tests
Server CPU Tests: 16 tests
Single-Threaded: 3 tests

Result Runs

c7g.4xlarge: run May 24 2022, test duration 7 Hours, 56 Minutes
c6g.4xlarge Graviton2: run May 25 2022, test duration 10 Hours, 11 Minutes
c6i.4xlarge Xeon: run May 26 2022, test duration 9 Hours, 54 Minutes



System Details

c7g.4xlarge:
  Processor: ARMv8 Neoverse-V1 (16 Cores)
  Motherboard: Amazon EC2 c7g.4xlarge (1.0 BIOS)
  Chipset: Amazon Device 0200
  Memory: 32GB
  Disk: 193GB Amazon Elastic Block Store
  Network: Amazon Elastic
  OS: Ubuntu 22.04
  Kernel: 5.15.0-1004-aws (aarch64)
  Compiler: GCC 11.2.0
  File-System: ext4
  System Layer: amazon

c6g.4xlarge Graviton2:
  Processor: ARMv8 Neoverse-N1 (16 Cores)
  Motherboard: Amazon EC2 c6g.4xlarge (1.0 BIOS)
  Other details shared with c7g.4xlarge (Amazon Device 0200 chipset, 32GB memory, 193GB Amazon Elastic Block Store, Amazon Elastic network, Ubuntu 22.04, 5.15.0-1004-aws (aarch64), GCC 11.2.0, ext4, amazon system layer)

c6i.4xlarge Xeon:
  Processor: Intel Xeon Platinum 8375C (8 Cores / 16 Threads)
  Motherboard: Amazon EC2 c6i.4xlarge (1.0 BIOS)
  Chipset: Intel 440FX 82441FX PMC
  Kernel: 5.15.0-1004-aws (x86_64)
  Vulkan: 1.2.204
  Other details shared with c7g.4xlarge (32GB memory, 193GB Amazon Elastic Block Store, Amazon Elastic network, Ubuntu 22.04, GCC 11.2.0, ext4, amazon system layer)

Kernel Details: Transparent Huge Pages: madvise

Compiler Details:
  c7g.4xlarge: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
  c6g.4xlarge Graviton2: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
  c6i.4xlarge Xeon: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Java Details: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)

Python Details: Python 3.10.4

Security Details:
  c7g.4xlarge: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
  c6g.4xlarge Graviton2: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
  c6i.4xlarge Xeon: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Processor Details: c6i.4xlarge Xeon: CPU Microcode: 0xd000331

Result Overview (Phoronix Test Suite): normalized comparison of c7g.4xlarge, c6g.4xlarge Graviton2, and c6i.4xlarge Xeon across NAS Parallel Benchmarks, High Performance Conjugate Gradient, Timed MrBayes Analysis, ACES DGEMM, OpenSSL, ONNX Runtime, C-Ray, Xcompact3d Incompact3d, ASTC Encoder, simdjson, SecureMark, Algebraic Multi-Grid Benchmark, Rodinia, GROMACS, PHPBench, LULESH, Apache HTTP Server, LAMMPS Molecular Dynamics Simulator, PyBench, LeelaChessZero, Ngspice, TSCP, 7-Zip Compression, Timed Apache Compilation, WebP Image Encode, libavif avifenc, Stress-NG, Liquid-DSP, QuantLib, Timed ImageMagick Compilation, Google SynthMark, Coremark, Zstd Compression, POV-Ray, GPAW, Timed PHP Compilation, m-queens, DaCapo Benchmark, asmFish, Stockfish, Timed Node.js Compilation, Timed LLVM Compilation, Timed Gem5 Compilation, Build2, N-Queens, TensorFlow Lite, and nginx.

Amazon EC2 c7g.4xlarge AWS Graviton3: the detailed per-test results for c7g.4xlarge, c6g.4xlarge Graviton2, and c6i.4xlarge Xeon follow below.

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: LU.C (Total Mop/s; more is better):
  c6i.4xlarge Xeon: 38136.77 (SE +/- 160.86, N = 3; min 37926.17 / max 38452.69)
  c7g.4xlarge: 7730.41 (SE +/- 1.96, N = 3; min 7728.06 / max 7734.31)
  c6g.4xlarge Graviton2: 5133.89 (SE +/- 0.90, N = 3; min 5132.39 / max 5135.49)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  c6i.4xlarge Xeon: 138.83 (SE +/- 0.60, N = 3; min 138 / max 140)
  c7g.4xlarge: 38 (SE +/- 0.00, N = 3)
  c6g.4xlarge Graviton2: 27.5 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: SP.C (Total Mop/s; more is better):
  c6i.4xlarge Xeon: 9563.22 (SE +/- 73.65, N = 3; min 9433.84 / max 9688.88)
  c7g.4xlarge: 4467.19 (SE +/- 9.61, N = 3; min 4449.83 / max 4483.01)
  c6g.4xlarge Graviton2: 2356.16 (SE +/- 0.57, N = 3; min 2355.3 / max 2357.24)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

NAS Parallel Benchmarks 3.4, Test / Class: MG.C (Total Mop/s; more is better):
  c6i.4xlarge Xeon: 26298.81 (SE +/- 184.24, N = 3; min 26062.84 / max 26661.9)
  c7g.4xlarge: 13481.61 (SE +/- 4.69, N = 3; min 13472.59 / max 13488.33)
  c6g.4xlarge Graviton2: 6720.68 (SE +/- 1.39, N = 3; min 6719.25 / max 6723.47)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
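
The RSA and SHA-256 numbers in this file come from the test profile's automated runs; roughly equivalent measurements can be taken directly with the openssl speed tool. A minimal sketch, where the 16-process -multi count and 10-second duration are assumptions rather than the profile's exact parameters:

  # RSA 4096-bit sign/verify throughput across 16 processes
  openssl speed -seconds 10 -multi 16 rsa4096
  # SHA-256 throughput via the EVP interface
  openssl speed -seconds 10 -multi 16 -evp sha256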

OpenSSL 3.0, Algorithm: RSA4096 (sign/s; more is better):
  c7g.4xlarge: 2546.4 (SE +/- 0.23, N = 3; min 2546 / max 2546.8)
  c6i.4xlarge Xeon: 2161.3 (SE +/- 4.47, N = 3; min 2152.4 / max 2165.9)
  c6g.4xlarge Graviton2: 660.6 (SE +/- 0.03, N = 3; min 660.5 / max 660.6)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl (c6i.4xlarge Xeon additionally built with -m64)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
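
As a rough illustration, the individual stressors reported in this file can also be exercised directly from the stress-ng command line; a minimal sketch, where the 60-second duration and 16-worker count are assumptions and the test profile's exact stressor options may differ:

  # CPU stressor across 16 workers with a bogo-ops/s summary
  stress-ng --cpu 16 --timeout 60s --metrics-brief
  # matrix math and memory copying stressors, reported the same way
  stress-ng --matrix 16 --timeout 60s --metrics-brief
  stress-ng --memcpy 16 --timeout 60s --metrics-brief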

Stress-NG 0.14, Test: CPU Stress (Bogo Ops/s; more is better):
  c6i.4xlarge Xeon: 12527.16 (SE +/- 155.66, N = 3; min 12365.65 / max 12838.41)
  c7g.4xlarge: 5029.71 (SE +/- 0.41, N = 3; min 5028.91 / max 5030.29)
  c6g.4xlarge Graviton2: 3404.94 (SE +/- 0.54, N = 3; min 3404.14 / max 3405.97)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0, Algorithm: RSA4096 (verify/s; more is better):
  c7g.4xlarge: 178460.4 (SE +/- 82.61, N = 3; min 178358.2 / max 178623.9)
  c6i.4xlarge Xeon: 140964.4 (SE +/- 47.94, N = 3; min 140874.2 / max 141037.7)
  c6g.4xlarge Graviton2: 53951.5 (SE +/- 3.30, N = 3; min 53945.2 / max 53956.3)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl (c6i.4xlarge Xeon additionally built with -m64)

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: FT.C (Total Mop/s; more is better):
  c6i.4xlarge Xeon: 20423.57 (SE +/- 40.24, N = 3; min 20343.5 / max 20470.62)
  c7g.4xlarge: 11791.77 (SE +/- 1.17, N = 3; min 11789.44 / max 11792.99)
  c6g.4xlarge Graviton2: 6244.48 (SE +/- 1.10, N = 3; min 6242.29 / max 6245.7)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Labs focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s; more is better):
  c7g.4xlarge: 26.30580 (SE +/- 0.03738, N = 3; min 26.26 / max 26.38)
  c6g.4xlarge Graviton2: 19.72180 (SE +/- 0.01639, N = 3; min 19.7 / max 19.75)
  c6i.4xlarge Xeon: 8.66031 (SE +/- 0.04033, N = 3; min 8.58 / max 8.7)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7, Primate Phylogeny Analysis (Seconds; fewer is better):
  c6i.4xlarge Xeon: 134.92 (SE +/- 1.43, N = 3; min 132.1 / max 136.68)
  c7g.4xlarge: 251.40 (SE +/- 0.24, N = 3; min 251.04 / max 251.85)
  c6g.4xlarge Graviton2: 384.75 (SE +/- 0.11, N = 3; min 384.53 / max 384.88)
  1. (CC) gcc options: -O3 -std=c99 -pedantic -lm (c6i.4xlarge Xeon additionally built with: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mabm)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0, Throughput Test: DistinctUserID (GB/s; more is better):
  c6i.4xlarge Xeon: 4.30 (SE +/- 0.00, N = 3; min 4.29 / max 4.3)
  c7g.4xlarge: 2.69 (SE +/- 0.00, N = 3)
  c6g.4xlarge Graviton2: 1.53 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: IS.D (Total Mop/s; more is better):
  c7g.4xlarge: 1041.90 (SE +/- 2.29, N = 3; min 1038.58 / max 1046.3)
  c6i.4xlarge Xeon: 861.57 (SE +/- 2.14, N = 3; min 857.51 / max 864.76)
  c6g.4xlarge Graviton2: 372.76 (SE +/- 0.20, N = 3; min 372.52 / max 373.15)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

NAS Parallel Benchmarks 3.4, Test / Class: CG.C (Total Mop/s; more is better):
  c6i.4xlarge Xeon: 9522.82 (SE +/- 66.44, N = 3; min 9400.77 / max 9629.37)
  c7g.4xlarge: 6571.95 (SE +/- 17.12, N = 3; min 6551.05 / max 6605.88)
  c6g.4xlarge Graviton2: 3520.86 (SE +/- 9.95, N = 3; min 3501.43 / max 3534.33)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0, Sustained Floating-Point Rate (GFLOP/s; more is better):
  c7g.4xlarge: 5.853864 (SE +/- 0.016350, N = 3; min 5.83 / max 5.89)
  c6g.4xlarge Graviton2: 4.785123 (SE +/- 0.007139, N = 3; min 4.77 / max 4.8)
  c6i.4xlarge Xeon: 2.230545 (SE +/- 0.003819, N = 3; min 2.22 / max 2.24)
  1. (CC) gcc options: -O3 -march=native -fopenmp

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0, Throughput Test: PartialTweets (GB/s; more is better):
  c6i.4xlarge Xeon: 3.71 (SE +/- 0.00, N = 3; min 3.7 / max 3.71)
  c7g.4xlarge: 2.62 (SE +/- 0.00, N = 3)
  c6g.4xlarge Graviton2: 1.51 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
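
For orientation, the encoder speed levels in these results map to avifenc's --speed setting (0 is the slowest, highest-effort mode and 10 the fastest). A minimal sketch of comparable invocations, where the file names and 16-thread job count are illustrative assumptions:

  # speed 2 encode of a JPEG source to AVIF
  avifenc --speed 2 --jobs 16 input.jpg output.avif
  # speed 6 lossless encode, mirroring the "6, Lossless" result variant
  avifenc --speed 6 --lossless --jobs 16 input.jpg output-lossless.avif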

libavif avifenc 0.10, Encoder Speed: 2 (Seconds; fewer is better):
  c6i.4xlarge Xeon: 97.74 (SE +/- 0.26, N = 3; min 97.23 / max 98.12)
  c7g.4xlarge: 141.70 (SE +/- 0.11, N = 3; min 141.5 / max 141.88)
  c6g.4xlarge Graviton2: 238.21 (SE +/- 0.12, N = 3; min 237.98 / max 238.4)
  1. (CXX) g++ options: -O3 -fPIC -lm

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1, Total Time - 4K, 16 Rays Per Pixel (Seconds; fewer is better):
  c7g.4xlarge: 38.52 (SE +/- 0.02, N = 3; min 38.49 / max 38.55)
  c6g.4xlarge Graviton2: 62.32 (SE +/- 0.03, N = 3; min 62.29 / max 62.38)
  c6i.4xlarge Xeon: 92.55 (SE +/- 0.04, N = 3; min 92.49 / max 92.63)
  1. (CC) gcc options: -lm -lpthread -O3

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 193 Cells Per Direction (Seconds; fewer is better):
  c7g.4xlarge: 29.13 (SE +/- 0.03, N = 3; min 29.1 / max 29.18)
  c6g.4xlarge Graviton2: 41.02 (SE +/- 0.01, N = 3; min 41.01 / max 41.05)
  c6i.4xlarge Xeon: 69.22 (SE +/- 0.14, N = 3; min 68.94 / max 69.38)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14, Test: Memory Copying (Bogo Ops/s; more is better):
  c7g.4xlarge: 6693.32 (SE +/- 3.52, N = 3; min 6686.28 / max 6696.97)
  c6i.4xlarge Xeon: 3150.49 (SE +/- 0.94, N = 3; min 3148.99 / max 3152.22)
  c6g.4xlarge Graviton2: 2903.00 (SE +/- 3.75, N = 3; min 2895.63 / max 2907.88)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
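
The presets in these results correspond to astcenc's quality switches. A minimal sketch of comparable invocations, where the binary name, 6x6 block footprint, and file names are assumptions and may differ from what the test profile uses:

  # compress an LDR image to ASTC with the -thorough preset, then with -exhaustive
  astcenc -cl input.png output-thorough.astc 6x6 -thorough
  astcenc -cl input.png output-exhaustive.astc 6x6 -exhaustive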

ASTC Encoder 3.2, Preset: Exhaustive (Seconds; fewer is better):
  c6i.4xlarge Xeon: 69.64 (SE +/- 0.04, N = 3; min 69.58 / max 69.71)
  c7g.4xlarge: 139.38 (SE +/- 0.01, N = 3; min 139.36 / max 139.39)
  c6g.4xlarge Graviton2: 159.20 (SE +/- 0.00, N = 3; min 159.2 / max 159.21)
  1. (CXX) g++ options: -O3 -flto -pthread

ASTC Encoder 3.2, Preset: Thorough (Seconds; fewer is better):
  c6i.4xlarge Xeon: 7.2625 (SE +/- 0.0001, N = 3; min 7.26 / max 7.26)
  c7g.4xlarge: 13.9248 (SE +/- 0.0011, N = 3; min 13.92 / max 13.93)
  c6g.4xlarge Graviton2: 16.5222 (SE +/- 0.0064, N = 3; min 16.51 / max 16.53)
  1. (CXX) g++ options: -O3 -flto -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14, Test: Crypto (Bogo Ops/s; more is better):
  c7g.4xlarge: 23181.81 (SE +/- 32.01, N = 3; min 23119.13 / max 23224.4)
  c6g.4xlarge Graviton2: 17924.18 (SE +/- 92.83, N = 3; min 17748.62 / max 18064.26)
  c6i.4xlarge Xeon: 10210.34 (SE +/- 5.89, N = 3; min 10199.14 / max 10219.1)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 129 Cells Per Direction (Seconds; fewer is better):
  c7g.4xlarge: 8.01671425 (SE +/- 0.01401446, N = 3; min 8 / max 8.04)
  c6g.4xlarge Graviton2: 11.57335470 (SE +/- 0.01351889, N = 3; min 11.55 / max 11.59)
  c6i.4xlarge Xeon: 17.86827720 (SE +/- 0.09619197, N = 3; min 17.69 / max 18.01)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: BT.C (Total Mop/s; more is better):
  c6i.4xlarge Xeon: 13888.40 (SE +/- 22.04, N = 3; min 13862.56 / max 13932.24)
  c7g.4xlarge: 10339.53 (SE +/- 7.36, N = 3; min 10325.26 / max 10349.81)
  c6g.4xlarge Graviton2: 6449.11 (SE +/- 3.20, N = 3; min 6444.55 / max 6455.29)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0, Throughput Test: Kostya (GB/s; more is better):
  c6i.4xlarge Xeon: 2.46 (SE +/- 0.00, N = 3; min 2.46 / max 2.47)
  c7g.4xlarge: 1.94 (SE +/- 0.00, N = 3)
  c6g.4xlarge Graviton2: 1.19 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14, Test: Matrix Math (Bogo Ops/s; more is better):
  c7g.4xlarge: 80088.74 (SE +/- 3.18, N = 3; min 80082.86 / max 80093.79)
  c6g.4xlarge Graviton2: 64084.08 (SE +/- 2.76, N = 3; min 64078.65 / max 64087.71)
  c6i.4xlarge Xeon: 39878.86 (SE +/- 35.16, N = 3; min 39831 / max 39947.4)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10, Encoder Speed: 0 (Seconds; fewer is better):
  c6i.4xlarge Xeon: 204.99 (SE +/- 0.33, N = 3; min 204.36 / max 205.49)
  c7g.4xlarge: 256.84 (SE +/- 0.18, N = 3; min 256.51 / max 257.1)
  c6g.4xlarge Graviton2: 406.94 (SE +/- 0.13, N = 3; min 406.75 / max 407.19)
  1. (CXX) g++ options: -O3 -fPIC -lm

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: EP.D (Total Mop/s; more is better):
  c6i.4xlarge Xeon: 1103.22 (SE +/- 19.93, N = 9; min 1030.08 / max 1180.25)
  c7g.4xlarge: 934.72 (SE +/- 0.39, N = 3; min 934.01 / max 935.36)
  c6g.4xlarge Graviton2: 558.88 (SE +/- 0.23, N = 3; min 558.51 / max 559.3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP LavaMD (Seconds; fewer is better):
  c7g.4xlarge: 143.33 (SE +/- 0.15, N = 3; min 143.14 / max 143.64)
  c6g.4xlarge Graviton2: 215.67 (SE +/- 0.01, N = 3; min 215.65 / max 215.69)
  c6i.4xlarge Xeon: 281.39 (SE +/- 0.14, N = 3; min 281.12 / max 281.58)
  1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1, Test: OpenMP CFD Solver (Seconds; fewer is better):
  c7g.4xlarge: 10.48 (SE +/- 0.02, N = 3; min 10.44 / max 10.51)
  c6g.4xlarge Graviton2: 17.04 (SE +/- 0.05, N = 3; min 16.97 / max 17.12)
  c6i.4xlarge Xeon: 20.45 (SE +/- 0.02, N = 3; min 20.42 / max 20.49)
  1. (CXX) g++ options: -O2 -lOpenCL

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0, Algorithm: SHA256 (byte/s; more is better):
  c7g.4xlarge: 13722045973 (SE +/- 7739237.92, N = 3; min 13712096220 / max 13737289210)
  c6g.4xlarge Graviton2: 10723184083 (SE +/- 47755430.47, N = 3; min 10627684700 / max 10772216060)
  c6i.4xlarge Xeon: 7096993937 (SE +/- 606684.16, N = 3; min 7096258150 / max 7098197390)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl (c6i.4xlarge Xeon additionally built with -m64)

SecureMark

SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

SecureMark 1.0.4, Benchmark: SecureMark-TLS (marks; more is better):
  c6i.4xlarge Xeon: 230549 (SE +/- 864.34, N = 3; min 229225.86 / max 232173.88)
  c7g.4xlarge: 183708 (SE +/- 773.26, N = 3; min 182165.75 / max 184575.7)
  c6g.4xlarge Graviton2: 120301 (SE +/- 23.07, N = 3; min 120260.21 / max 120340.04)
  1. (CC) gcc options: -pedantic -O3

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit; more is better):
  c7g.4xlarge: 1258807333 (SE +/- 952437.28, N = 3; min 1256931000 / max 1260030000)
  c6g.4xlarge Graviton2: 932652900 (SE +/- 3420043.89, N = 3; min 927742900 / max 939232100)
  c6i.4xlarge Xeon: 661364767 (SE +/- 5114517.12, N = 3; min 654941100 / max 671470600)
  1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
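
As a sketch of what the harness does, the Bombardier load generator is pointed at a running httpd instance with a chosen connection count; the URL, port, and 30-second duration below are assumptions rather than the profile's exact parameters:

  # 200 concurrent connections against a local Apache httpd for 30 seconds
  bombardier -c 200 -d 30s http://127.0.0.1:8080/test.html
  # repeat with 100, 500, or 1000 connections to mirror the other result variants
  bombardier -c 1000 -d 30s http://127.0.0.1:8080/test.html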

Apache HTTP Server 2.4.48, Concurrent Requests: 200 (Requests Per Second; more is better):
  c6i.4xlarge Xeon: 94458.22 (SE +/- 615.05, N = 3; min 93478.69 / max 95592.38)
  c7g.4xlarge: 73676.95 (SE +/- 649.31, N = 3; min 72788.14 / max 74941.3)
  c6g.4xlarge Graviton2: 50059.97 (SE +/- 112.65, N = 3; min 49842.68 / max 50220.18)
  1. (CC) gcc options: -shared -fPIC -O2

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1, Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day; more is better):
  c6i.4xlarge Xeon: 1.452 (SE +/- 0.001, N = 3; min 1.45 / max 1.45)
  c7g.4xlarge: 1.128 (SE +/- 0.002, N = 3; min 1.13 / max 1.13)
  c6g.4xlarge Graviton2: 0.781 (SE +/- 0.001, N = 3; min 0.78 / max 0.78)
  1. (CXX) g++ options: -O3

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48, Concurrent Requests: 100 (Requests Per Second; more is better):
  c6i.4xlarge Xeon: 86545.57 (SE +/- 389.13, N = 3; min 85770.92 / max 86997.78)
  c7g.4xlarge: 67231.88 (SE +/- 38.09, N = 3; min 67187.11 / max 67307.65)
  c6g.4xlarge Graviton2: 46995.35 (SE +/- 93.03, N = 3; min 46816.56 / max 47129.33)
  1. (CC) gcc options: -shared -fPIC -O2

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1, PHP Benchmark Suite (Score; more is better):
  c6i.4xlarge Xeon: 828186 (SE +/- 983.65, N = 3; min 826631 / max 830007)
  c7g.4xlarge: 666484 (SE +/- 525.83, N = 3; min 665522 / max 667333)
  c6g.4xlarge Graviton2: 449855 (SE +/- 743.13, N = 3; min 448715 / max 451251)

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48, Concurrent Requests: 500 (Requests Per Second; more is better):
  c6i.4xlarge Xeon: 91746.57 (SE +/- 833.50, N = 7; min 86751.45 / max 92771.95)
  c7g.4xlarge: 73546.32 (SE +/- 89.82, N = 3; min 73405.22 / max 73713.17)
  c6g.4xlarge Graviton2: 50077.81 (SE +/- 578.32, N = 3; min 48925.08 / max 50736.49)
  1. (CC) gcc options: -shared -fPIC -O2

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s; more is better):
  c7g.4xlarge: 10940.94 (SE +/- 76.73, N = 3; min 10787.69 / max 11024.62)
  c6i.4xlarge Xeon: 8112.37 (SE +/- 14.20, N = 3; min 8085.86 / max 8134.43)
  c6g.4xlarge Graviton2: 6016.16 (SE +/- 4.88, N = 3; min 6009.82 / max 6025.75)
  1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: Rhodopsin Protein (ns/day; more is better):
  c7g.4xlarge: 11.291 (SE +/- 0.060, N = 3; min 11.17 / max 11.36)
  c6g.4xlarge Graviton2: 7.935 (SE +/- 0.014, N = 3; min 7.91 / max 7.96)
  c6i.4xlarge Xeon: 6.220 (SE +/- 0.009, N = 3; min 6.21 / max 6.24)
  1. (CXX) g++ options: -O3 -lm

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C2670 (Seconds; fewer is better):
  c6i.4xlarge Xeon: 147.89 (SE +/- 1.80, N = 4; min 143.57 / max 152.37)
  c7g.4xlarge: 198.22 (SE +/- 0.86, N = 3; min 197.24 / max 199.94)
  c6g.4xlarge Graviton2: 263.72 (SE +/- 0.91, N = 3; min 262.1 / max 265.25)
  1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28, Backend: Eigen (Nodes Per Second; more is better):
  c6i.4xlarge Xeon: 1466 (SE +/- 13.37, N = 3; min 1447 / max 1492)
  c7g.4xlarge: 1189 (SE +/- 9.70, N = 3; min 1171 / max 1204)
  c6g.4xlarge Graviton2: 834 (SE +/- 12.00, N = 3; min 819 / max 858)
  1. (CXX) g++ options: -flto -pthread

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0, Throughput Test: LargeRandom (GB/s; more is better):
  c6i.4xlarge Xeon: 0.86 (SE +/- 0.00, N = 3)
  c7g.4xlarge: 0.70 (SE +/- 0.00, N = 3)
  c6g.4xlarge Graviton2: 0.49 (SE +/- 0.00, N = 3; min 0.48 / max 0.49)
  1. (CXX) g++ options: -O3

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16, Total For Average Test Times (Milliseconds; fewer is better):
  c6i.4xlarge Xeon: 997 (SE +/- 3.84, N = 3; min 993 / max 1005)
  c7g.4xlarge: 1185 (SE +/- 0.33, N = 3; min 1184 / max 1185)
  c6g.4xlarge Graviton2: 1741 (SE +/- 1.67, N = 3; min 1739 / max 1744)

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48, Concurrent Requests: 1000 (Requests Per Second; more is better):
  c6i.4xlarge Xeon: 79830.96 (SE +/- 335.63, N = 3; min 79188.28 / max 80320.15)
  c7g.4xlarge: 72719.33 (SE +/- 83.83, N = 3; min 72567.8 / max 72857.22)
  c6g.4xlarge Graviton2: 46629.45 (SE +/- 276.10, N = 3; min 46348.82 / max 47181.62)
  1. (CC) gcc options: -shared -fPIC -O2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  c6i.4xlarge Xeon: 3449.5 (SE +/- 1.61, N = 3; min 3446.5 / max 3452)
  c7g.4xlarge: 2817.33 (SE +/- 1.86, N = 3; min 2815 / max 2821)
  c6g.4xlarge Graviton2: 2072.33 (SE +/- 1.74, N = 3; min 2069.5 / max 2075.5)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28, Backend: BLAS (Nodes Per Second; more is better):
  c6i.4xlarge Xeon: 1397 (SE +/- 12.41, N = 9; min 1345 / max 1452)
  c7g.4xlarge: 1103 (SE +/- 6.44, N = 3; min 1090 / max 1111)
  c6g.4xlarge Graviton2: 864 (SE +/- 10.22, N = 4; min 840 / max 890)
  1. (CXX) g++ options: -flto -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
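
zstd's built-in benchmark mode gives a quick approximation of these compression/decompression speed numbers; a minimal sketch, noting that the test profile drives zstd through its own harness rather than -b:

  # level 3 benchmark (compression and decompression MB/s)
  zstd -b3 FreeBSD-12.2-RELEASE-amd64-memstick.img
  # level 19, and level 19 with long-distance matching ("long mode")
  zstd -b19 FreeBSD-12.2-RELEASE-amd64-memstick.img
  zstd -b19 --long FreeBSD-12.2-RELEASE-amd64-memstick.img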

Zstd Compression 1.5.0, Compression Level: 3 - Compression Speed (MB/s; more is better):
  c7g.4xlarge: 4639.1 (SE +/- 9.57, N = 3; min 4620 / max 4649.1)
  c6i.4xlarge Xeon: 3440.6 (SE +/- 29.53, N = 3; min 3408.7 / max 3499.6)
  c6g.4xlarge Graviton2: 2888.3 (SE +/- 6.37, N = 3; min 2876.8 / max 2898.8)
  1. (CC) gcc options: -O3 -pthread -lz (additional per-configuration flag noted: -llzma)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
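
The integrated benchmark referenced above can be run directly from the 7-Zip command line; a minimal sketch, where the explicit 16-thread switch is an assumption since 7-Zip autodetects the thread count by default:

  # built-in LZMA benchmark reporting compression and decompression MIPS
  7z b
  # same benchmark pinned to 16 threads
  7z b -mmt16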

7-Zip Compression 21.06, Test: Decompression Rating (MIPS; more is better):
  c7g.4xlarge: 73054 (SE +/- 12.88, N = 3; min 73037 / max 73079)
  c6g.4xlarge Graviton2: 59445 (SE +/- 239.68, N = 3; min 58966 / max 59689)
  c6i.4xlarge Xeon: 45653 (SE +/- 35.00, N = 3; min 45595 / max 45716)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C7552 (Seconds; fewer is better):
  c6i.4xlarge Xeon: 161.08 (SE +/- 0.33, N = 3; min 160.42 / max 161.44)
  c7g.4xlarge: 191.29 (SE +/- 1.94, N = 3; min 188.31 / max 194.94)
  c6g.4xlarge Graviton2: 255.21 (SE +/- 2.40, N = 7; min 242.65 / max 261.23)
  1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
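The quality/lossless/method combinations exercised in the graphs below can be approximated from Python as well; this sketch assumes the Pillow imaging library and an arbitrary input JPEG rather than the test profile's own cwebp invocation:

    import time
    from PIL import Image

    # Hypothetical input; the test profile feeds cwebp a 6000x4000 JPEG sample.
    img = Image.open("sample.jpg")

    settings = {
        "Quality 100, Highest Compression": dict(quality=100, method=6),
        "Quality 100, Lossless": dict(lossless=True, quality=100),
        "Quality 100, Lossless, Highest Compression": dict(lossless=True, quality=100, method=6),
    }

    for name, kwargs in settings.items():
        start = time.perf_counter()
        img.save("out.webp", "WEBP", **kwargs)  # method=6 is the slowest, highest-effort mode
        print(f"{name}: {time.perf_counter() - start:.2f} s")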

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression [Encode Time - Seconds, Fewer Is Better]
  c6i.4xlarge Xeon:       41.81 (SE +/- 0.34, N = 3; Min: 41.46 / Avg: 41.81 / Max: 42.48)
  c7g.4xlarge:            48.21 (SE +/- 0.01, N = 3; Min: 48.2 / Avg: 48.21 / Max: 48.22)
  c6g.4xlarge Graviton2:  66.15 (SE +/- 0.01, N = 3; Min: 66.13 / Avg: 66.15 / Max: 66.16)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 (two of the results additionally report -ltiff)

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance [Nodes Per Second, More Is Better]
  c7g.4xlarge:            1370094 (SE +/- 0.00, N = 5; Min: 1370094 / Avg: 1370094 / Max: 1370094)
  c6i.4xlarge Xeon:       1272596 (SE +/- 1099.67, N = 5; Min: 1269073 / Avg: 1272595.8 / Max: 1274949)
  c6g.4xlarge Graviton2:   872313 (SE +/- 338.27, N = 5; Min: 871484 / Avg: 872312.6 / Max: 872865)
  1. (CC) gcc options: -O3 -march=native

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile [Seconds, Fewer Is Better]
  c6i.4xlarge Xeon:       22.53 (SE +/- 0.05, N = 3; Min: 22.42 / Avg: 22.53 / Max: 22.59)
  c7g.4xlarge:            26.94 (SE +/- 0.05, N = 3; Min: 26.87 / Avg: 26.94 / Max: 27.04)
  c6g.4xlarge Graviton2:  34.20 (SE +/- 0.01, N = 3; Min: 34.18 / Avg: 34.2 / Max: 34.23)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed [MB/s, More Is Better]
  c7g.4xlarge:            3050.3 (SE +/- 7.75, N = 3; Min: 3042.5 / Avg: 3050.3 / Max: 3065.8)
  c6i.4xlarge Xeon:       2582.0 (SE +/- 24.18, N = 3; Min: 2533.7 / Avg: 2581.97 / Max: 2608.7)
  c6g.4xlarge Graviton2:  2051.6 (SE +/- 12.10, N = 3; Min: 2035.9 / Avg: 2051.6 / Max: 2075.4)
  1. (CC) gcc options: -O3 -pthread -lz (two of the results additionally report -llzma)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans [msec, Fewer Is Better]
  c6i.4xlarge Xeon:       2928 (SE +/- 19.24, N = 20; Min: 2827 / Avg: 2928.45 / Max: 3106)
  c7g.4xlarge:            3203 (SE +/- 26.73, N = 4; Min: 3141 / Avg: 3202.5 / Max: 3264)
  c6g.4xlarge Graviton2:  4344 (SE +/- 40.13, N = 4; Min: 4257 / Avg: 4343.5 / Max: 4428)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression [Encode Time - Seconds, Fewer Is Better]
  c6i.4xlarge Xeon:        8.270 (SE +/- 0.021, N = 3; Min: 8.24 / Avg: 8.27 / Max: 8.31)
  c7g.4xlarge:             9.346 (SE +/- 0.007, N = 3; Min: 9.34 / Avg: 9.35 / Max: 9.36)
  c6g.4xlarge Graviton2:  12.248 (SE +/- 0.043, N = 3; Min: 12.2 / Avg: 12.25 / Max: 12.33)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 (two of the results additionally report -ltiff)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed [MB/s, More Is Better]
  c7g.4xlarge:            3240.6 (SE +/- 6.93, N = 3; Min: 3229.8 / Avg: 3240.57 / Max: 3253.5)
  c6i.4xlarge Xeon:       2666.1 (SE +/- 7.82, N = 3; Min: 2657.8 / Avg: 2666.07 / Max: 2681.7)
  c6g.4xlarge Graviton2:  2196.3 (SE +/- 2.93, N = 3; Min: 2193.3 / Avg: 2196.33 / Max: 2202.2)
  1. (CC) gcc options: -O3 -pthread -lz (two of the results additionally report -llzma)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
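The encoder-speed settings in the results below map to avifenc command-line options; a sketch of invoking it from Python follows. The flag spellings (-s for speed, -l for lossless) are assumptions about avifenc's CLI and should be verified against avifenc --help.

    import subprocess
    import time

    def encode(speed, lossless):
        # Flag names are assumed; file paths are hypothetical.
        cmd = ["avifenc", "-s", str(speed)]
        if lossless:
            cmd.append("-l")
        cmd += ["sample.jpg", "out.avif"]
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - start

    for speed, lossless in [(6, False), (6, True), (10, True)]:
        print(f"speed {speed}, lossless={lossless}: {encode(speed, lossless):.2f} s")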

libavif avifenc 0.10 - Encoder Speed: 6, Lossless [Seconds, Fewer Is Better]
  c7g.4xlarge:            11.91 (SE +/- 0.01, N = 3; Min: 11.89 / Avg: 11.91 / Max: 11.92)
  c6g.4xlarge Graviton2:  16.52 (SE +/- 0.17, N = 3; Min: 16.18 / Avg: 16.52 / Max: 16.7)
  c6i.4xlarge Xeon:       17.53 (SE +/- 0.03, N = 3; Min: 17.49 / Avg: 17.53 / Max: 17.58)
  1. (CXX) g++ options: -O3 -fPIC -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless [Encode Time - Seconds, Fewer Is Better]
  c6i.4xlarge Xeon:       21.12 (SE +/- 0.03, N = 3; Min: 21.07 / Avg: 21.12 / Max: 21.17)
  c7g.4xlarge:            22.77 (SE +/- 0.09, N = 3; Min: 22.67 / Avg: 22.77 / Max: 22.94)
  c6g.4xlarge Graviton2:  31.08 (SE +/- 0.02, N = 3; Min: 31.05 / Avg: 31.08 / Max: 31.11)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 (two of the results additionally report -ltiff)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 21.06 - Test: Compression Rating [MIPS, More Is Better]
  c7g.4xlarge:            97824 (SE +/- 159.36, N = 3; Min: 97563 / Avg: 97824.33 / Max: 98113)
  c6g.4xlarge Graviton2:  71285 (SE +/- 44.77, N = 3; Min: 71213 / Avg: 71284.67 / Max: 71367)
  c6i.4xlarge Xeon:       66631 (SE +/- 174.34, N = 3; Min: 66455 / Avg: 66631.33 / Max: 66980)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Vector Math [Bogo Ops/s, More Is Better]
  c7g.4xlarge:            55258.17 (SE +/- 17.05, N = 3; Min: 55237.21 / Avg: 55258.17 / Max: 55291.94)
  c6i.4xlarge Xeon:       40140.30 (SE +/- 28.50, N = 3; Min: 40101.89 / Avg: 40140.3 / Max: 40195.98)
  c6g.4xlarge Graviton2:  37753.89 (SE +/- 15.72, N = 3; Min: 37737.89 / Avg: 37753.89 / Max: 37785.32)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
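The option string in the result below (16 threads, buffer length 256, filter length 57) describes a FIR-filtering throughput workload. Purely as an illustration of the samples-per-second metric, and not of liquid-dsp's C API, here is a single-threaded NumPy analogue:

    import time
    import numpy as np

    FILTER_LEN = 57      # matches the benchmark's filter length
    BUFFER_LEN = 256     # matches the benchmark's buffer length

    taps = np.random.standard_normal(FILTER_LEN)
    buf = np.random.standard_normal(BUFFER_LEN)

    runs = 20000
    start = time.perf_counter()
    for _ in range(runs):
        np.convolve(buf, taps)   # one filtered buffer per iteration
    elapsed = time.perf_counter() - start

    print(f"{runs * BUFFER_LEN / elapsed:,.0f} samples/s (single-threaded NumPy analogue)")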

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 [samples/s, More Is Better]
  c7g.4xlarge:            383606667 (SE +/- 400097.21, N = 3; Min: 382810000 / Avg: 383606666.67 / Max: 384070000)
  c6i.4xlarge Xeon:       373100000 (SE +/- 41633.32, N = 3; Min: 373020000 / Avg: 373100000 / Max: 373160000)
  c6g.4xlarge Graviton2:  262890000 (SE +/- 35118.85, N = 3; Min: 262820000 / Avg: 262890000 / Max: 262930000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 [MFLOPS, More Is Better]
  c6i.4xlarge Xeon:       2533.0 (SE +/- 6.22, N = 3; Min: 2521.4 / Avg: 2533 / Max: 2542.7)
  c7g.4xlarge:            2512.7 (SE +/- 0.15, N = 3; Min: 2512.5 / Avg: 2512.73 / Max: 2513)
  c6g.4xlarge Graviton2:  1742.4 (SE +/- 5.40, N = 3; Min: 1737 / Avg: 1742.4 / Max: 1753.2)
  1. (CXX) g++ options: -O3 -march=native -rdynamic

Timed ImageMagick Compilation

This test times how long it takes to build ImageMagick. Learn more via the OpenBenchmarking.org test page.

Timed ImageMagick Compilation 6.9.0 - Time To Compile [Seconds, Fewer Is Better]
  c7g.4xlarge:            27.90 (SE +/- 0.13, N = 3; Min: 27.67 / Avg: 27.9 / Max: 28.12)
  c6i.4xlarge Xeon:       29.74 (SE +/- 0.07, N = 3; Min: 29.63 / Avg: 29.74 / Max: 29.86)
  c6g.4xlarge Graviton2:  40.33 (SE +/- 0.22, N = 3; Min: 39.91 / Avg: 40.33 / Max: 40.64)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 10, Lossless [Seconds, Fewer Is Better]
  c7g.4xlarge:            5.765 (SE +/- 0.021, N = 3; Min: 5.73 / Avg: 5.76 / Max: 5.8)
  c6i.4xlarge Xeon:       8.265 (SE +/- 0.010, N = 3; Min: 8.25 / Avg: 8.27 / Max: 8.28)
  c6g.4xlarge Graviton2:  8.311 (SE +/- 0.026, N = 3; Min: 8.27 / Avg: 8.31 / Max: 8.36)
  1. (CXX) g++ options: -O3 -fPIC -lm

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109 - Test: VoiceMark_100 [Voices, More Is Better]
  c7g.4xlarge:            675.64 (SE +/- 0.32, N = 3; Min: 675.15 / Avg: 675.64 / Max: 676.25)
  c6i.4xlarge Xeon:       565.69 (SE +/- 2.00, N = 3; Min: 563.66 / Avg: 565.69 / Max: 569.69)
  c6g.4xlarge Graviton2:  470.39 (SE +/- 0.33, N = 3; Min: 470.05 / Avg: 470.39 / Max: 471.05)
  1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython [msec, Fewer Is Better]
  c7g.4xlarge:            3940 (SE +/- 6.99, N = 4; Min: 3927 / Avg: 3940.25 / Max: 3960)
  c6i.4xlarge Xeon:       4013 (SE +/- 24.07, N = 4; Min: 3955 / Avg: 4013 / Max: 4063)
  c6g.4xlarge Graviton2:  5626 (SE +/- 23.29, N = 4; Min: 5587 / Avg: 5625.5 / Max: 5693)

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second [Iterations/Sec, More Is Better]
  c7g.4xlarge:            405413.86 (SE +/- 3211.91, N = 3; Min: 399077.13 / Avg: 405413.86 / Max: 409495.17)
  c6g.4xlarge Graviton2:  315464.34 (SE +/- 49.84, N = 3; Min: 315395.23 / Avg: 315464.34 / Max: 315561.11)
  c6i.4xlarge Xeon:       285378.84 (SE +/- 80.93, N = 3; Min: 285243.13 / Avg: 285378.84 / Max: 285523.09)
  1. (CC) gcc options: -O2 -lrt" -lrt

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time [Seconds, Fewer Is Better]
  c7g.4xlarge:            37.86 (SE +/- 0.01, N = 3; Min: 37.84 / Avg: 37.86 / Max: 37.89)
  c6g.4xlarge Graviton2:  51.05 (SE +/- 0.00, N = 3; Min: 51.04 / Avg: 51.05 / Max: 51.05)
  c6i.4xlarge Xeon:       52.78 (SE +/- 0.12, N = 3; Min: 52.63 / Avg: 52.78 / Max: 53.01)
  1. (CXX) g++ options: -pipe -O3 -ffast-math -R/usr/lib -lXpm -lSM -lICE -lX11 -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system (one of the results additionally reports -march=native)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6 [Seconds, Fewer Is Better]
  c7g.4xlarge:             9.385 (SE +/- 0.025, N = 3; Min: 9.34 / Avg: 9.38 / Max: 9.41)
  c6i.4xlarge Xeon:       12.984 (SE +/- 0.025, N = 3; Min: 12.96 / Avg: 12.98 / Max: 13.03)
  c6g.4xlarge Graviton2:  13.046 (SE +/- 0.006, N = 3; Min: 13.04 / Avg: 13.05 / Max: 13.06)
  1. (CXX) g++ options: -O3 -fPIC -lm

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
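For orientation only, a GPAW calculation is driven from Python through ASE; the sketch below assumes GPAW and ASE are installed and uses a tiny stand-in system with illustrative parameters, nothing like the benchmark's carbon-nanotube input:

    from ase.build import molecule
    from gpaw import GPAW

    # Tiny stand-in system; the benchmark itself runs a carbon nanotube input.
    atoms = molecule("H2O")
    atoms.center(vacuum=3.0)

    # Illustrative settings only, not the test profile's parameters.
    atoms.calc = GPAW(mode="fd", xc="PBE", txt="h2o.txt")
    print(f"Total energy: {atoms.get_potential_energy():.4f} eV")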

GPAW 22.1 - Input: Carbon Nanotube [Seconds, Fewer Is Better]
  c7g.4xlarge:            155.18 (SE +/- 0.08, N = 3; Min: 155.01 / Avg: 155.18 / Max: 155.29)
  c6i.4xlarge Xeon:       202.11 (SE +/- 0.24, N = 3; Min: 201.8 / Avg: 202.11 / Max: 202.58)
  c6g.4xlarge Graviton2:  215.53 (SE +/- 0.13, N = 3; Min: 215.37 / Avg: 215.53 / Max: 215.79)
  1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 7.4.2 - Time To Compile [Seconds, Fewer Is Better]
  c6i.4xlarge Xeon:       64.34 (SE +/- 0.09, N = 3; Min: 64.18 / Avg: 64.34 / Max: 64.49)
  c7g.4xlarge:            69.48 (SE +/- 0.11, N = 3; Min: 69.32 / Avg: 69.48 / Max: 69.7)
  c6g.4xlarge Graviton2:  88.90 (SE +/- 0.31, N = 3; Min: 88.57 / Avg: 88.9 / Max: 89.52)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
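Average inference time of a .tflite model can be measured directly with the TensorFlow Lite Python interpreter; the model path below is hypothetical and the iteration count arbitrary:

    import time
    import numpy as np
    import tensorflow as tf

    # Hypothetical model path; the test profile supplies its own models.
    interpreter = tf.lite.Interpreter(model_path="mobilenet_v1_1.0_224.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    # Random data shaped like the model input, then time repeated invocations.
    dummy = np.random.random_sample(tuple(inp["shape"])).astype(np.float32)
    times = []
    for _ in range(50):
        interpreter.set_tensor(inp["index"], dummy)
        start = time.perf_counter()
        interpreter.invoke()
        times.append((time.perf_counter() - start) * 1e6)

    print(f"Average inference time: {np.mean(times):.1f} microseconds")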

TensorFlow Lite 2022-05-18 - Model: NASNet Mobile [Microseconds, Fewer Is Better]
  c6i.4xlarge Xeon:       10900.6 (SE +/- 166.62, N = 14; Min: 10663.9 / Avg: 10900.6 / Max: 13062.1)
  c7g.4xlarge:            11591.9 (SE +/- 121.56, N = 15; Min: 10847.8 / Avg: 11591.94 / Max: 12395.4)
  c6g.4xlarge Graviton2:  14985.4 (SE +/- 203.15, N = 15; Min: 13965.4 / Avg: 14985.42 / Max: 16307.5)

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
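m-queens itself is a C++/OpenMP solver; purely to illustrate how the search parallelizes over first-row queen placements, here is a small Python analogue using multiprocessing with a much smaller board than the benchmark solves:

    from multiprocessing import Pool

    def count_from(first_col, n):
        """Count solutions with the row-0 queen fixed in column first_col."""
        def solve(row, cols, diag1, diag2):
            if row == n:
                return 1
            total = 0
            for col in range(n):
                if (cols & (1 << col) or diag1 & (1 << (row + col))
                        or diag2 & (1 << (row - col + n))):
                    continue
                total += solve(row + 1, cols | (1 << col),
                               diag1 | (1 << (row + col)),
                               diag2 | (1 << (row - col + n)))
            return total
        return solve(1, 1 << first_col, 1 << first_col, 1 << (n - first_col))

    if __name__ == "__main__":
        n = 10  # the benchmark works on far larger boards
        with Pool() as pool:
            counts = pool.starmap(count_from, [(c, n) for c in range(n)])
        print(f"{n}-queens solutions: {sum(counts)}")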

m-queens 1.2 - Time To Solve [Seconds, Fewer Is Better]
  c7g.4xlarge:            66.82 (SE +/- 0.00, N = 3; Min: 66.82 / Avg: 66.82 / Max: 66.83)
  c6g.4xlarge Graviton2:  75.22 (SE +/- 0.00, N = 3; Min: 75.22 / Avg: 75.22 / Max: 75.23)
  c6i.4xlarge Xeon:       91.23 (SE +/- 0.06, N = 3; Min: 91.17 / Avg: 91.23 / Max: 91.34)
  1. (CXX) g++ options: -fopenmp -O2 -march=native

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 [msec, Fewer Is Better]
  c6i.4xlarge Xeon:       2921 (SE +/- 32.93, N = 4; Min: 2835 / Avg: 2921.25 / Max: 2986)
  c7g.4xlarge:            2951 (SE +/- 32.57, N = 5; Min: 2868 / Avg: 2951 / Max: 3068)
  c6g.4xlarge Graviton2:  3964 (SE +/- 45.89, N = 4; Min: 3843 / Avg: 3964 / Max: 4056)

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth [Nodes/second, More Is Better]
  c7g.4xlarge:            32134123 (SE +/- 104795.40, N = 3; Min: 32023095 / Avg: 32134123.33 / Max: 32343588)
  c6g.4xlarge Graviton2:  26540482 (SE +/- 359309.26, N = 3; Min: 26061970 / Avg: 26540482 / Max: 27244043)
  c6i.4xlarge Xeon:       23746200 (SE +/- 325631.00, N = 3; Min: 23100009 / Avg: 23746200.33 / Max: 24139540)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: IO_uring [Bogo Ops/s, More Is Better]
  c6i.4xlarge Xeon:       1037943.37 (SE +/- 405.56, N = 3; Min: 1037132.46 / Avg: 1037943.37 / Max: 1038364.88)
  c7g.4xlarge:             843015.78 (SE +/- 614.16, N = 3; Min: 841810.62 / Avg: 843015.78 / Max: 843823.92)
  c6g.4xlarge Graviton2:   770521.81 (SE +/- 2395.13, N = 3; Min: 767318.01 / Avg: 770521.81 / Max: 775207.81)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: SqueezeNet [Microseconds, Fewer Is Better]
  c6i.4xlarge Xeon:       2983.93 (SE +/- 3.54, N = 3; Min: 2978.25 / Avg: 2983.93 / Max: 2990.42)
  c7g.4xlarge:            3257.94 (SE +/- 22.07, N = 3; Min: 3216.26 / Avg: 3257.94 / Max: 3291.38)
  c6g.4xlarge Graviton2:  3969.35 (SE +/- 37.23, N = 3; Min: 3901.04 / Avg: 3969.35 / Max: 4029.15)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap [msec, Fewer Is Better]
  c7g.4xlarge:            3524 (SE +/- 14.95, N = 4; Min: 3487 / Avg: 3523.75 / Max: 3551)
  c6i.4xlarge Xeon:       3815 (SE +/- 24.39, N = 4; Min: 3747 / Avg: 3814.5 / Max: 3863)
  c6g.4xlarge Graviton2:  4506 (SE +/- 27.95, N = 4; Min: 4428 / Avg: 4506.25 / Max: 4560)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed [MB/s, More Is Better]
  c7g.4xlarge:            39.5 (SE +/- 0.23, N = 3; Min: 39 / Avg: 39.47 / Max: 39.7)
  c6i.4xlarge Xeon:       33.8 (SE +/- 0.10, N = 3; Min: 33.6 / Avg: 33.8 / Max: 33.9)
  c6g.4xlarge Graviton2:  31.0 (SE +/- 0.03, N = 3; Min: 30.9 / Avg: 30.97 / Max: 31)
  1. (CC) gcc options: -O3 -pthread -lz (two of the results additionally report -llzma)

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
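Stockfish's built-in bench command reports a nodes-per-second figure similar to what is graphed below; a minimal way to capture it, assuming a stockfish binary on the PATH and that the summary contains a "Nodes/second" line as in recent releases:

    import re
    import subprocess

    # Capture both streams; Stockfish has historically printed the bench summary to stderr.
    proc = subprocess.run(["stockfish", "bench"], capture_output=True, text=True)
    output = proc.stdout + proc.stderr

    match = re.search(r"Nodes/second\s*:\s*(\d+)", output)
    if match:
        print(f"{int(match.group(1)):,} nodes per second")
    else:
        print("No Nodes/second line found in the bench output")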

Stockfish 13 - Total Time [Nodes Per Second, More Is Better]
  c7g.4xlarge:            27608891 (SE +/- 153578.64, N = 3; Min: 27303905 / Avg: 27608891 / Max: 27792957)
  c6i.4xlarge Xeon:       22081961 (SE +/- 242448.39, N = 3; Min: 21829335 / Avg: 22081960.67 / Max: 22566713)
  c6g.4xlarge Graviton2:  21679245 (SE +/- 292329.99, N = 3; Min: 21327596 / Avg: 21679245.33 / Max: 22259579)
  1. (CXX) g++ options: -lgcov -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -flto -flto=jobserver (one of the results additionally reports -m64 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float [Microseconds, Fewer Is Better]
  c6i.4xlarge Xeon:       1965.07 (SE +/- 1.81, N = 3; Min: 1961.49 / Avg: 1965.07 / Max: 1967.34)
  c7g.4xlarge:            2156.60 (SE +/- 19.61, N = 3; Min: 2129.52 / Avg: 2156.6 / Max: 2194.7)
  c6g.4xlarge Graviton2:  2500.87 (SE +/- 28.63, N = 3; Min: 2462.74 / Avg: 2500.87 / Max: 2556.93)

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 17.3 - Time To Compile [Seconds, Fewer Is Better]
  c7g.4xlarge:            497.58 (SE +/- 2.06, N = 3; Min: 493.85 / Avg: 497.58 / Max: 500.97)
  c6i.4xlarge Xeon:       604.62 (SE +/- 0.42, N = 3; Min: 603.88 / Avg: 604.62 / Max: 605.34)
  c6g.4xlarge Graviton2:  628.40 (SE +/- 0.37, N = 3; Min: 627.82 / Avg: 628.4 / Max: 629.09)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja [Seconds, Fewer Is Better]
  c7g.4xlarge:            544.93 (SE +/- 5.19, N = 3; Min: 535.72 / Avg: 544.93 / Max: 553.68)
  c6g.4xlarge Graviton2:  682.98 (SE +/- 0.49, N = 3; Min: 682.05 / Avg: 682.98 / Max: 683.7)
  c6i.4xlarge Xeon:       685.70 (SE +/- 0.12, N = 3; Min: 685.49 / Avg: 685.7 / Max: 685.91)

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research that is widely used across industry and academia. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2 - Time To Compile [Seconds, Fewer Is Better]
  c7g.4xlarge:            391.17 (SE +/- 1.33, N = 3; Min: 389.16 / Avg: 391.17 / Max: 393.69)
  c6i.4xlarge Xeon:       469.94 (SE +/- 0.59, N = 3; Min: 469.21 / Avg: 469.94 / Max: 471.11)
  c6g.4xlarge Graviton2:  488.81 (SE +/- 0.53, N = 3; Min: 487.79 / Avg: 488.81 / Max: 489.55)

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like functionality. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile [Seconds, Fewer Is Better]
  c7g.4xlarge:            115.02 (SE +/- 0.64, N = 3; Min: 113.8 / Avg: 115.02 / Max: 115.97)
  c6i.4xlarge Xeon:       136.80 (SE +/- 0.69, N = 3; Min: 135.42 / Avg: 136.8 / Max: 137.5)
  c6g.4xlarge Graviton2:  142.28 (SE +/- 0.70, N = 3; Min: 140.89 / Avg: 142.28 / Max: 143.17)

N-Queens

This is a test of an OpenMP-based solver for the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.

N-Queens 1.0 - Elapsed Time [Seconds, Fewer Is Better]
  c6i.4xlarge Xeon:       18.84 (SE +/- 0.00, N = 3; Min: 18.84 / Avg: 18.84 / Max: 18.84)
  c7g.4xlarge:            21.54 (SE +/- 0.00, N = 3; Min: 21.54 / Avg: 21.54 / Max: 21.54)
  c6g.4xlarge Graviton2:  23.14 (SE +/- 0.00, N = 3; Min: 23.13 / Avg: 23.14 / Max: 23.14)
  1. (CC) gcc options: -static -fopenmp -O3 -march=native

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed [MB/s, More Is Better]
  c7g.4xlarge:            41.2 (SE +/- 0.00, N = 3; Min: 41.2 / Avg: 41.2 / Max: 41.2)
  c6i.4xlarge Xeon:       38.1 (SE +/- 0.40, N = 3; Min: 37.3 / Avg: 38.1 / Max: 38.5)
  c6g.4xlarge Graviton2:  34.6 (SE +/- 0.06, N = 3; Min: 34.5 / Avg: 34.6 / Max: 34.7)
  1. (CC) gcc options: -O3 -pthread -lz (two of the results additionally report -llzma)

Zstd Compression 1.5.0 - Compression Level: 3 - Decompression Speed [MB/s, More Is Better]
  c7g.4xlarge:            3508.5 (SE +/- 2.07, N = 3; Min: 3504.5 / Avg: 3508.47 / Max: 3511.5)
  c6i.4xlarge Xeon:       2996.8 (SE +/- 2.87, N = 3; Min: 2991.6 / Avg: 2996.83 / Max: 3001.5)
  1. (CC) gcc options: -O3 -pthread -lz (one of the results additionally reports -llzma)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
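The measurement model (a fixed test duration with a configurable number of concurrent clients) can be mimicked in a few lines; this is only a rough Python analogue of what Bombardier does, against a hypothetical local endpoint:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/"   # hypothetical nginx endpoint
    CLIENTS = 100                    # concurrent clients, as in the first graph below
    DURATION = 10                    # seconds

    def worker(deadline):
        done = 0
        while time.perf_counter() < deadline:
            with urlopen(URL) as resp:
                resp.read()
            done += 1
        return done

    deadline = time.perf_counter() + DURATION
    with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        totals = list(pool.map(worker, [deadline] * CLIENTS))

    print(f"{sum(totals) / DURATION:,.0f} requests per second")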

nginx 1.21.1 - Concurrent Requests: 100 [Requests Per Second, More Is Better]
  c6i.4xlarge Xeon:       356302.84 (SE +/- 1727.81, N = 3; Min: 354138.78 / Avg: 356302.84 / Max: 359718.03)
  c7g.4xlarge:            345710.87 (SE +/- 2009.97, N = 3; Min: 341701.14 / Avg: 345710.87 / Max: 347963.74)
  c6g.4xlarge Graviton2:  307349.36 (SE +/- 3992.58, N = 3; Min: 302113.55 / Avg: 307349.36 / Max: 315188.56)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx 1.21.1 - Concurrent Requests: 200 [Requests Per Second, More Is Better]
  c6i.4xlarge Xeon:       356829.93 (SE +/- 1582.66, N = 3; Min: 353665.2 / Avg: 356829.93 / Max: 358465.71)
  c7g.4xlarge:            352380.98 (SE +/- 3986.77, N = 3; Min: 344424.56 / Avg: 352380.98 / Max: 356811.55)
  c6g.4xlarge Graviton2:  308938.67 (SE +/- 1347.28, N = 3; Min: 306245.77 / Avg: 308938.67 / Max: 310367.17)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2 [Microseconds, Fewer Is Better]
  c7g.4xlarge:            40051.3 (SE +/- 305.31, N = 3; Min: 39503.5 / Avg: 40051.33 / Max: 40558.8)
  c6i.4xlarge Xeon:       41179.7 (SE +/- 110.01, N = 3; Min: 41054 / Avg: 41179.67 / Max: 41398.9)
  c6g.4xlarge Graviton2:  45955.7 (SE +/- 336.95, N = 3; Min: 45554.1 / Avg: 45955.73 / Max: 46625.2)

TensorFlow Lite 2022-05-18 - Model: Inception V4 [Microseconds, Fewer Is Better]
  c6i.4xlarge Xeon:       41185.7 (SE +/- 75.14, N = 3; Min: 41107.5 / Avg: 41185.67 / Max: 41335.9)
  c7g.4xlarge:            41855.1 (SE +/- 210.27, N = 3; Min: 41440.3 / Avg: 41855.1 / Max: 42122.5)
  c6g.4xlarge Graviton2:  46793.9 (SE +/- 197.89, N = 3; Min: 46548.4 / Avg: 46793.9 / Max: 47185.5)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1 - Concurrent Requests: 500 [Requests Per Second, More Is Better]
  c6i.4xlarge Xeon:       351672.92 (SE +/- 1620.39, N = 3; Min: 349589.65 / Avg: 351672.92 / Max: 354864.44)
  c7g.4xlarge:            346613.34 (SE +/- 1017.52, N = 3; Min: 344614.99 / Avg: 346613.34 / Max: 347945.69)
  c6g.4xlarge Graviton2:  310596.58 (SE +/- 3783.68, N = 3; Min: 303433 / Avg: 310596.58 / Max: 316290.47)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx 1.21.1 - Concurrent Requests: 1000 [Requests Per Second, More Is Better]
  c6i.4xlarge Xeon:       347345.49 (SE +/- 2637.25, N = 3; Min: 344132.5 / Avg: 347345.49 / Max: 352574.53)
  c7g.4xlarge:            346814.75 (SE +/- 1410.11, N = 3; Min: 344622.05 / Avg: 346814.75 / Max: 349447.11)
  c6g.4xlarge Graviton2:  308213.13 (SE +/- 1677.89, N = 3; Min: 306510.05 / Avg: 308213.13 / Max: 311568.78)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
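Inferences per minute for any of these models can be reproduced with the onnxruntime Python package; the model file below is hypothetical and symbolic input dimensions are simply replaced with 1:

    import time
    import numpy as np
    import onnxruntime as ort

    # Hypothetical model file; the test profile downloads its models itself.
    session = ort.InferenceSession("super-resolution-10.onnx",
                                   providers=["CPUExecutionProvider"])

    inp = session.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    data = np.random.random_sample(shape).astype(np.float32)

    runs = 60
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, {inp.name: data})
    elapsed = time.perf_counter() - start

    print(f"{runs / (elapsed / 60):.1f} inferences per minute")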

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard [Inferences Per Minute, More Is Better]
  c6i.4xlarge Xeon:       1374 (SE +/- 91.51, N = 12; Min: 1191 / Avg: 1373.75 / Max: 1918.5)
  c7g.4xlarge:             609 (SE +/- 0.00, N = 3; Min: 608.5 / Avg: 608.5 / Max: 608.5)
  c6g.4xlarge Graviton2:   334 (SE +/- 0.17, N = 3; Min: 333.5 / Avg: 333.83 / Max: 334)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard [Inferences Per Minute, More Is Better]
  c6i.4xlarge Xeon:       773 (SE +/- 50.92, N = 12; Min: 632.5 / Avg: 773.25 / Max: 1004)
  c7g.4xlarge:            407 (SE +/- 0.17, N = 3; Min: 407 / Avg: 407.17 / Max: 407.5)
  c6g.4xlarge Graviton2:  322 (SE +/- 0.17, N = 3; Min: 321.5 / Avg: 321.67 / Max: 322)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard [Inferences Per Minute, More Is Better]
  c7g.4xlarge:            7990 (SE +/- 2.40, N = 3; Min: 7985.5 / Avg: 7990.17 / Max: 7993.5)
  c6i.4xlarge Xeon:       7944 (SE +/- 322.41, N = 12; Min: 6856 / Avg: 7944.42 / Max: 9074.5)
  c6g.4xlarge Graviton2:  6948 (SE +/- 3.50, N = 3; Min: 6944 / Avg: 6947.5 / Max: 6954.5)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: CPU Cache [Bogo Ops/s, More Is Better]
  c7g.4xlarge:            64.31 (SE +/- 3.64, N = 12; Min: 40.19 / Avg: 64.31 / Max: 82.06)
  c6g.4xlarge Graviton2:  37.19 (SE +/- 0.97, N = 15; Min: 30.31 / Avg: 37.19 / Max: 42.27)
  c6i.4xlarge Xeon:       17.40 (SE +/- 0.30, N = 15; Min: 15.73 / Avg: 17.4 / Max: 19.2)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant [Microseconds, Fewer Is Better]
  c7g.4xlarge:            1502.95 (SE +/- 17.76, N = 3; Min: 1468.14 / Avg: 1502.95 / Max: 1526.49)
  c6g.4xlarge Graviton2:  1980.24 (SE +/- 14.44, N = 3; Min: 1956.49 / Avg: 1980.24 / Max: 2006.34)
  c6i.4xlarge Xeon:       3967.39 (SE +/- 80.05, N = 12; Min: 3442.24 / Avg: 3967.39 / Max: 4294.78)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Streamcluster [Seconds, Fewer Is Better]
  c7g.4xlarge:            13.30 (SE +/- 0.33, N = 12; Min: 11.89 / Avg: 13.3 / Max: 14.87)
  c6g.4xlarge Graviton2:  15.48 (SE +/- 0.26, N = 15; Min: 14.26 / Avg: 15.48 / Max: 17.08)
  c6i.4xlarge Xeon:       23.51 (SE +/- 0.07, N = 3; Min: 23.39 / Avg: 23.51 / Max: 23.63)
  1. (CXX) g++ options: -O2 -lOpenCL

101 Results Shown

NAS Parallel Benchmarks
ONNX Runtime
NAS Parallel Benchmarks:
  SP.C
  MG.C
OpenSSL
Stress-NG
OpenSSL
NAS Parallel Benchmarks
High Performance Conjugate Gradient
Timed MrBayes Analysis
simdjson
NAS Parallel Benchmarks:
  IS.D
  CG.C
ACES DGEMM
simdjson
libavif avifenc
C-Ray
Xcompact3d Incompact3d
Stress-NG
ASTC Encoder:
  Exhaustive
  Thorough
Stress-NG
Xcompact3d Incompact3d
NAS Parallel Benchmarks
simdjson
Stress-NG
libavif avifenc
NAS Parallel Benchmarks
Rodinia:
  OpenMP LavaMD
  OpenMP CFD Solver
OpenSSL
SecureMark
Algebraic Multi-Grid Benchmark
Apache HTTP Server
GROMACS
Apache HTTP Server
PHPBench
Apache HTTP Server
LULESH
LAMMPS Molecular Dynamics Simulator
Ngspice
LeelaChessZero
simdjson
PyBench
Apache HTTP Server
ONNX Runtime
LeelaChessZero
Zstd Compression
7-Zip Compression
Ngspice
WebP Image Encode
TSCP
Timed Apache Compilation
Zstd Compression
DaCapo Benchmark
WebP Image Encode
Zstd Compression
libavif avifenc
WebP Image Encode
7-Zip Compression
Stress-NG
Liquid-DSP
QuantLib
Timed ImageMagick Compilation
libavif avifenc
Google SynthMark
DaCapo Benchmark
Coremark
POV-Ray
libavif avifenc
GPAW
Timed PHP Compilation
TensorFlow Lite
m-queens
DaCapo Benchmark
asmFish
Stress-NG
TensorFlow Lite
DaCapo Benchmark
Zstd Compression
Stockfish
TensorFlow Lite
Timed Node.js Compilation
Timed LLVM Compilation
Timed Gem5 Compilation
Build2
N-Queens
Zstd Compression:
  19 - Compression Speed
  3 - Decompression Speed
nginx:
  100
  200
TensorFlow Lite:
  Inception ResNet V2
  Inception V4
nginx:
  500
  1000
ONNX Runtime:
  ArcFace ResNet-100 - CPU - Standard
  bertsquad-12 - CPU - Standard
  GPT-2 - CPU - Standard
Stress-NG
TensorFlow Lite
Rodinia