Amazon AWS Graviton3 benchmarks by Michael Larabel.
a1.4xlarge Graviton Processor: ARMv8 Cortex-A72 (16 Cores), Motherboard: Amazon EC2 a1.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Not affected + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Branch predictor hardening BHB + srbds: Not affected + tsx_async_abort: Not affected
c6g.4xlarge Graviton2: Changed Processor to ARMv8 Neoverse-N1 (16 Cores).
Changed Motherboard to Amazon EC2 c6g.4xlarge (1.0 BIOS).
Security Change: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
c7g.4xlarge Graviton3: Changed Processor to ARMv8 Neoverse-V1 (16 Cores).
Changed Motherboard to Amazon EC2 c7g.4xlarge (1.0 BIOS).
c6a.4xlarge EPYC Processor: AMD EPYC 7R13 (8 Cores / 16 Threads), Motherboard: Amazon EC2 c6a.4xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (x86_64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: CPU Microcode: 0xa001144
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
c6i.4xlarge Xeon: Changed Processor to Intel Xeon Platinum 8375C (8 Cores / 16 Threads).
Changed Motherboard to Amazon EC2 c6i.4xlarge (1.0 BIOS).
Processor Change: CPU Microcode: 0xd000331
Security Change: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Amazon EC2 Graviton3 Benchmark Comparison

[Logarithmic result-overview chart and consolidated side-by-side results table (via OpenBenchmarking.org): the a1.4xlarge Graviton, c6g.4xlarge Graviton2, c7g.4xlarge Graviton3, c6a.4xlarge EPYC, and c6i.4xlarge Xeon instances compared across all tested workloads, from HPC codes (HPCG, AMG, NPB, LULESH, GROMACS) through compilation, web-server, compression, and machine-learning benchmarks. The individual per-test results follow below.]
Stress-NG Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Stress-NG 0.14, Test: Memory Copying (Bogo Ops/s, More Is Better):
a1.4xlarge Graviton: 798.24 (SE +/- 0.91, N = 3)
c6a.4xlarge EPYC: 3551.80 (SE +/- 11.57, N = 3)
c6g.4xlarge Graviton2: 2903.00 (SE +/- 3.75, N = 3)
c6i.4xlarge Xeon: 3150.49 (SE +/- 0.94, N = 3)
c7g.4xlarge Graviton3: 6693.32 (SE +/- 3.52, N = 3)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
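The memory-copying stressor hammers large buffer copies. A rough Python sketch of a memory-copy bandwidth measurement, purely for illustration (stress-ng itself is C and reports bogo-ops/s rather than GB/s; the buffer size and repetition count here are arbitrary):

```python
# Time repeated full-buffer copies and derive an approximate copy bandwidth.
import time

size = 64 * 1024 * 1024           # 64 MiB working set (arbitrary choice)
src = bytearray(size)
dst = bytearray(size)

t0 = time.perf_counter()
reps = 10
for _ in range(reps):
    dst[:] = src                  # one full buffer copy per iteration
elapsed = time.perf_counter() - t0

gb_per_s = size * reps / elapsed / 1e9
print(f"{gb_per_s:.2f} GB/s copy bandwidth")
```

The absolute number depends on the interpreter and cache behavior, so it is only a sketch of the measurement, not of stress-ng's methodology.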
NAS Parallel Benchmarks NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4, Test / Class: MG.C (Total Mop/s, More Is Better):
a1.4xlarge Graviton: 3266.36 (SE +/- 1.64, N = 3)
c6a.4xlarge EPYC: 16826.43 (SE +/- 30.62, N = 3)
c6g.4xlarge Graviton2: 6720.68 (SE +/- 1.39, N = 3)
c6i.4xlarge Xeon: 26298.81 (SE +/- 184.24, N = 3)
c7g.4xlarge Graviton3: 13481.61 (SE +/- 4.69, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

NAS Parallel Benchmarks 3.4, Test / Class: CG.C (Total Mop/s, More Is Better):
a1.4xlarge Graviton: 1213.15 (SE +/- 11.79, N = 6)
c6a.4xlarge EPYC: 6169.22 (SE +/- 81.25, N = 3)
c6g.4xlarge Graviton2: 3520.86 (SE +/- 9.95, N = 3)
c6i.4xlarge Xeon: 9522.82 (SE +/- 66.44, N = 3)
c7g.4xlarge Graviton3: 6571.95 (SE +/- 17.12, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

NAS Parallel Benchmarks 3.4, Test / Class: SP.C (Total Mop/s, More Is Better):
a1.4xlarge Graviton: 1293.80 (SE +/- 2.51, N = 3)
c6a.4xlarge EPYC: 8094.79 (SE +/- 24.63, N = 3)
c6g.4xlarge Graviton2: 2356.16 (SE +/- 0.57, N = 3)
c6i.4xlarge Xeon: 9563.22 (SE +/- 73.65, N = 3)
c7g.4xlarge Graviton3: 4467.19 (SE +/- 9.61, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Zstd Compression 1.5.0, Compression Level: 3 - Compression Speed (MB/s, More Is Better):
a1.4xlarge Graviton: 633.9 (SE +/- 4.47, N = 3)
c6a.4xlarge EPYC: 2768.7 (SE +/- 18.93, N = 3)
c6g.4xlarge Graviton2: 2878.8 (SE +/- 3.74, N = 3)
c6i.4xlarge Xeon: 3440.6 (SE +/- 29.53, N = 3)
c7g.4xlarge Graviton3: 4639.1 (SE +/- 9.57, N = 3)
1. (CC) gcc options: -O3 -pthread -lz (four of the five configurations additionally: -llzma)
NAS Parallel Benchmarks
NAS Parallel Benchmarks 3.4, Test / Class: FT.C (Total Mop/s, More Is Better):
a1.4xlarge Graviton: 2927.16 (SE +/- 1.73, N = 3)
c6a.4xlarge EPYC: 18299.96 (SE +/- 45.90, N = 3)
c6g.4xlarge Graviton2: 6244.48 (SE +/- 1.10, N = 3)
c6i.4xlarge Xeon: 20423.57 (SE +/- 40.24, N = 3)
c7g.4xlarge Graviton3: 11791.77 (SE +/- 1.17, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
High Performance Conjugate Gradient HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads, in contrast to HPCC. Learn more via the OpenBenchmarking.org test page.
High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better):
a1.4xlarge Graviton: 3.77834 (SE +/- 0.00065, N = 3)
c6a.4xlarge EPYC: 5.06042 (SE +/- 0.00225, N = 3)
c6g.4xlarge Graviton2: 19.72180 (SE +/- 0.01639, N = 3)
c6i.4xlarge Xeon: 8.66031 (SE +/- 0.04033, N = 3)
c7g.4xlarge Graviton3: 26.30580 (SE +/- 0.03738, N = 3)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi
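HPCG's core kernel is the conjugate-gradient iteration: sparse matrix-vector products interleaved with vector updates, which is why the benchmark is so sensitive to memory bandwidth. A minimal unpreconditioned CG sketch in Python/NumPy, as an illustration of the method only (the dense 1-D Laplacian here is a hypothetical stand-in for HPCG's sparse 3-D grid operator, and none of this is HPCG's actual code):

```python
# Unpreconditioned conjugate gradient for a symmetric positive-definite system.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p         # step along the search direction
        r -= alpha * Ap        # update the residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD system: a 1-D Laplacian, the simplest analogue of a grid problem.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # residual norm after convergence
```

Each iteration is dominated by the matrix-vector product, so for realistic sparse operators the achievable GFLOP/s is bounded by memory bandwidth rather than peak FLOPS, matching the spread seen in the chart above.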
Algebraic Multi-Grid Benchmark AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.
Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better):
a1.4xlarge Graviton: 186716933 (SE +/- 176548.39, N = 3)
c6a.4xlarge EPYC: 267670700 (SE +/- 103921.81, N = 3)
c6g.4xlarge Graviton2: 932652900 (SE +/- 3420043.89, N = 3)
c6i.4xlarge Xeon: 661364767 (SE +/- 5114517.12, N = 3)
c7g.4xlarge Graviton3: 1258807333 (SE +/- 952437.28, N = 3)
1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 129 Cells Per Direction (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 53.77062740 (SE +/- 0.02862870, N = 3)
c6a.4xlarge EPYC: 28.27976610 (SE +/- 0.03718674, N = 3)
c6g.4xlarge Graviton2: 11.57335470 (SE +/- 0.01351889, N = 3)
c6i.4xlarge Xeon: 17.86827720 (SE +/- 0.09619197, N = 3)
c7g.4xlarge Graviton3: 8.01671425 (SE +/- 0.01401446, N = 3)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
ACES DGEMM This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.
ACES DGEMM 1.0, Sustained Floating-Point Rate (GFLOP/s, More Is Better):
a1.4xlarge Graviton: 0.891391 (SE +/- 0.002370, N = 3)
c6a.4xlarge EPYC: 2.432432 (SE +/- 0.023324, N = 6)
c6g.4xlarge Graviton2: 4.785123 (SE +/- 0.007139, N = 3)
c6i.4xlarge Xeon: 2.230545 (SE +/- 0.003819, N = 3)
c7g.4xlarge Graviton3: 5.853864 (SE +/- 0.016350, N = 3)
1. (CC) gcc options: -O3 -march=native -fopenmp
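A DGEMM benchmark times the general matrix-matrix update C = alpha*A*B + beta*C and reports GFLOP/s from the 2n^3 floating-point operations of the product. A hedged NumPy sketch of that accounting, not the ACES DGEMM code itself (which is multi-threaded C), with an arbitrary matrix size:

```python
# Time a general matrix-matrix update and convert to GFLOP/s.
import time
import numpy as np

def dgemm_gflops(n=512, alpha=1.0, beta=1.0):
    rng = np.random.default_rng(0)
    A = rng.random((n, n))
    B = rng.random((n, n))
    C = rng.random((n, n))
    t0 = time.perf_counter()
    C = alpha * (A @ B) + beta * C     # the DGEMM operation being timed
    elapsed = time.perf_counter() - t0
    flops = 2.0 * n ** 3               # multiply-adds of the matrix product
    return flops / elapsed / 1e9, C

gflops, C = dgemm_gflops()
print(f"{gflops:.2f} GFLOP/s")
```

NumPy delegates the product to whatever BLAS it was built against, so this measures the library rather than hand-written loops; the 2n^3 operation count is the standard convention for DGEMM rates.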
Xcompact3d Incompact3d
Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 193 Cells Per Direction (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 182.58 (SE +/- 0.15, N = 3)
c6a.4xlarge EPYC: 110.77 (SE +/- 0.12, N = 3)
c6g.4xlarge Graviton2: 41.02 (SE +/- 0.01, N = 3)
c6i.4xlarge Xeon: 69.22 (SE +/- 0.14, N = 3)
c7g.4xlarge Graviton3: 29.13 (SE +/- 0.03, N = 3)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
Stress-NG
Stress-NG 0.14, Test: CPU Stress (Bogo Ops/s, More Is Better):
a1.4xlarge Graviton: 2366.00 (SE +/- 0.16, N = 3)
c6a.4xlarge EPYC: 13304.50 (SE +/- 37.60, N = 3)
c6g.4xlarge Graviton2: 3404.94 (SE +/- 0.54, N = 3)
c6i.4xlarge Xeon: 12527.16 (SE +/- 155.66, N = 3)
c7g.4xlarge Graviton3: 5029.71 (SE +/- 0.41, N = 3)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
simdjson This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects such as Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
simdjson 1.0, Throughput Test: DistinctUserID (GB/s, More Is Better):
a1.4xlarge Graviton: 0.80 (SE +/- 0.00, N = 3)
c6a.4xlarge EPYC: 4.30 (SE +/- 0.01, N = 3)
c6g.4xlarge Graviton2: 1.53 (SE +/- 0.00, N = 3)
c6i.4xlarge Xeon: 4.30 (SE +/- 0.00, N = 3)
c7g.4xlarge Graviton3: 2.69 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3
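The GB/s figures above are simply bytes parsed divided by wall time. A rough Python sketch of that measurement using the standard-library json module (simdjson itself is SIMD-accelerated C++; this only shows how the throughput number is derived, and the generated document is a made-up example):

```python
# Measure JSON parsing throughput as bytes-parsed-per-second.
import json
import time

# Synthetic document: a list of small records, serialized once to bytes.
doc = json.dumps([{"id": i, "name": f"user{i}"} for i in range(10000)]).encode()

iterations = 50
t0 = time.perf_counter()
for _ in range(iterations):
    parsed = json.loads(doc)          # parse the full document each pass
elapsed = time.perf_counter() - t0

gb_per_s = len(doc) * iterations / elapsed / 1e9
print(f"parsed {len(doc)} bytes x{iterations}: {gb_per_s:.3f} GB/s")
```

A pure-Python parser will land far below simdjson's multi-GB/s results; the point is only the throughput arithmetic.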
Timed MrBayes Analysis This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
Timed MrBayes Analysis 3.2.7, Primate Phylogeny Analysis (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 644.79 (SE +/- 0.49, N = 3)
c6a.4xlarge EPYC: 120.64 (SE +/- 0.35, N = 3) [-mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm]
c6g.4xlarge Graviton2: 384.75 (SE +/- 0.11, N = 3)
c6i.4xlarge Xeon: 134.92 (SE +/- 1.43, N = 3) [-mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mabm]
1. (CC) gcc options: -O3 -std=c99 -pedantic -lm
NAS Parallel Benchmarks
NAS Parallel Benchmarks 3.4, Test / Class: IS.D (Total Mop/s, More Is Better):
a1.4xlarge Graviton: 197.57 (SE +/- 0.31, N = 3)
c6a.4xlarge EPYC: 541.35 (SE +/- 0.47, N = 3)
c6g.4xlarge Graviton2: 372.76 (SE +/- 0.20, N = 3)
c6i.4xlarge Xeon: 861.57 (SE +/- 2.14, N = 3)
c7g.4xlarge Graviton3: 1041.90 (SE +/- 2.29, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18, Model: Mobilenet Float (Microseconds, Fewer Is Better):
a1.4xlarge Graviton: 9990.15 (SE +/- 113.94, N = 3)
c6a.4xlarge EPYC: 2159.72 (SE +/- 1.03, N = 3)
c6g.4xlarge Graviton2: 2500.87 (SE +/- 28.63, N = 3)
c6i.4xlarge Xeon: 1965.07 (SE +/- 1.81, N = 3)
c7g.4xlarge Graviton3: 2156.60 (SE +/- 19.61, N = 3)
GPAW GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
GPAW 22.1, Input: Carbon Nanotube (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 769.35 (SE +/- 5.37, N = 3)
c6a.4xlarge EPYC: 302.96 (SE +/- 0.17, N = 3)
c6g.4xlarge Graviton2: 215.53 (SE +/- 0.13, N = 3)
c6i.4xlarge Xeon: 202.11 (SE +/- 0.24, N = 3)
c7g.4xlarge Graviton3: 155.18 (SE +/- 0.08, N = 3)
1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi
libavif avifenc This is a test of the AOMedia libavif library, testing the encoding of a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
libavif avifenc 0.10, Encoder Speed: 2 (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 449.02 (SE +/- 0.29, N = 3)
c6a.4xlarge EPYC: 93.95 (SE +/- 0.44, N = 3)
c6g.4xlarge Graviton2: 238.21 (SE +/- 0.12, N = 3)
c6i.4xlarge Xeon: 97.74 (SE +/- 0.26, N = 3)
c7g.4xlarge Graviton3: 141.70 (SE +/- 0.11, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm
simdjson
simdjson 1.0, Throughput Test: PartialTweets (GB/s, More Is Better):
a1.4xlarge Graviton: 0.78 (SE +/- 0.00, N = 3)
c6a.4xlarge EPYC: 3.64 (SE +/- 0.00, N = 3)
c6g.4xlarge Graviton2: 1.51 (SE +/- 0.00, N = 3)
c6i.4xlarge Xeon: 3.71 (SE +/- 0.00, N = 3)
c7g.4xlarge Graviton3: 2.62 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3
LULESH LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics proxy-application benchmark. Learn more via the OpenBenchmarking.org test page.
LULESH 2.0.3 (z/s, More Is Better):
a1.4xlarge Graviton: 2328.27 (SE +/- 6.27, N = 3)
c6a.4xlarge EPYC: 5452.11 (SE +/- 5.52, N = 3)
c6g.4xlarge Graviton2: 6016.16 (SE +/- 4.88, N = 3)
c6i.4xlarge Xeon: 8112.37 (SE +/- 14.20, N = 3)
c7g.4xlarge Graviton3: 10940.94 (SE +/- 76.73, N = 3)
1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi
Apache HTTP Server This is a test of the Apache HTTPD web server. This benchmark profile uses the Golang "Bombardier" program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
Apache HTTP Server 2.4.48, Concurrent Requests: 100 (Requests Per Second, More Is Better):
a1.4xlarge Graviton: 18636.43 (SE +/- 28.97, N = 3)
c6a.4xlarge EPYC: 77567.69 (SE +/- 211.56, N = 3)
c6g.4xlarge Graviton2: 46995.35 (SE +/- 93.03, N = 3)
c6i.4xlarge Xeon: 86545.57 (SE +/- 389.13, N = 3)
c7g.4xlarge Graviton3: 67231.88 (SE +/- 38.09, N = 3)
1. (CC) gcc options: -shared -fPIC -O2
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 3.2, Preset: Thorough (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 33.5198 (SE +/- 0.0061, N = 3)
c6a.4xlarge EPYC: 7.9818 (SE +/- 0.0154, N = 3)
c6g.4xlarge Graviton2: 16.5222 (SE +/- 0.0064, N = 3)
c6i.4xlarge Xeon: 7.2625 (SE +/- 0.0001, N = 3)
c7g.4xlarge Graviton3: 13.9248 (SE +/- 0.0011, N = 3)
1. (CXX) g++ options: -O3 -flto -pthread
GROMACS The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
GROMACS 2022.1, Implementation: MPI CPU — Input: water_GMX50_bare (Ns Per Day, More Is Better):
a1.4xlarge Graviton: 0.316 (SE +/- 0.000, N = 3)
c6a.4xlarge EPYC: 1.004 (SE +/- 0.002, N = 3)
c6g.4xlarge Graviton2: 0.781 (SE +/- 0.001, N = 3)
c6i.4xlarge Xeon: 1.452 (SE +/- 0.001, N = 3)
c7g.4xlarge Graviton3: 1.128 (SE +/- 0.002, N = 3)
1. (CXX) g++ options: -O3
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18, Model: Inception V4 (Microseconds, Fewer Is Better):
a1.4xlarge Graviton: 188910.0 (SE +/- 1746.17, N = 3)
c6a.4xlarge EPYC: 44920.6 (SE +/- 53.95, N = 3)
c6g.4xlarge Graviton2: 46793.9 (SE +/- 197.89, N = 3)
c6i.4xlarge Xeon: 41185.7 (SE +/- 75.14, N = 3)
c7g.4xlarge Graviton3: 41855.1 (SE +/- 210.27, N = 3)
Apache HTTP Server This is a test of the Apache HTTPD web server. This benchmark profile uses the Golang "Bombardier" program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
Apache HTTP Server 2.4.48, Concurrent Requests: 500 (Requests Per Second, More Is Better):
a1.4xlarge Graviton: 20133.49 (SE +/- 93.64, N = 3)
c6a.4xlarge EPYC: 81995.64 (SE +/- 636.46, N = 13)
c6g.4xlarge Graviton2: 50077.81 (SE +/- 578.32, N = 3)
c6i.4xlarge Xeon: 91746.57 (SE +/- 833.50, N = 7)
c7g.4xlarge Graviton3: 73546.32 (SE +/- 89.82, N = 3)
1. (CC) gcc options: -shared -fPIC -O2
Apache HTTP Server 2.4.48, Concurrent Requests: 200 (Requests Per Second, More Is Better):
a1.4xlarge Graviton: 20887.58 (SE +/- 59.55, N = 3)
c6a.4xlarge EPYC: 83070.00 (SE +/- 644.29, N = 3)
c6g.4xlarge Graviton2: 50059.97 (SE +/- 112.65, N = 3)
c6i.4xlarge Xeon: 94458.22 (SE +/- 615.05, N = 3)
c7g.4xlarge Graviton3: 73676.95 (SE +/- 649.31, N = 3)
1. (CC) gcc options: -shared -fPIC -O2
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
simdjson 1.0, Throughput Test: Kostya (GB/s, More Is Better):
a1.4xlarge Graviton: 0.63 (SE +/- 0.00, N = 3)
c6a.4xlarge EPYC: 2.80 (SE +/- 0.00, N = 3)
c6g.4xlarge Graviton2: 1.19 (SE +/- 0.00, N = 3)
c6i.4xlarge Xeon: 2.46 (SE +/- 0.00, N = 3)
c7g.4xlarge Graviton3: 1.94 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3
NAS Parallel Benchmarks NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4, Test / Class: BT.C (Total Mop/s, More Is Better):
a1.4xlarge Graviton: 3148.18 (SE +/- 3.44, N = 3)
c6a.4xlarge EPYC: 13134.46 (SE +/- 98.45, N = 3)
c6g.4xlarge Graviton2: 6449.11 (SE +/- 3.20, N = 3)
c6i.4xlarge Xeon: 13888.40 (SE +/- 22.04, N = 3)
c7g.4xlarge Graviton3: 10339.53 (SE +/- 7.36, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
OpenSSL 3.0, Algorithm: RSA4096 (sign/s, More Is Better):
a1.4xlarge Graviton: 588.3 (SE +/- 0.12, N = 3)
c6a.4xlarge EPYC: 2088.9 (SE +/- 1.40, N = 3)
c6g.4xlarge Graviton2: 660.6 (SE +/- 0.03, N = 3)
c6i.4xlarge Xeon: 2161.3 (SE +/- 4.47, N = 3)
c7g.4xlarge Graviton3: 2546.4 (SE +/- 0.23, N = 3)
Additional build flag on some configurations: -m64
1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl
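These RSA figures come from OpenSSL's built-in benchmark. A minimal invocation looks like the following (both flags are standard `openssl speed` options, though the exact output format varies by OpenSSL version):

```shell
# Benchmark RSA-4096 sign/verify, one second per operation type
openssl speed -seconds 1 rsa4096

# Multi-process run closer to the 16-core instance results above
openssl speed -seconds 1 -multi 16 rsa4096
```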
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18, Model: Inception ResNet V2 (Microseconds, Fewer Is Better):
a1.4xlarge Graviton: 171169.0 (SE +/- 825.35, N = 3)
c6a.4xlarge EPYC: 41366.6 (SE +/- 27.66, N = 3)
c6g.4xlarge Graviton2: 45955.7 (SE +/- 336.95, N = 3)
c6i.4xlarge Xeon: 41179.7 (SE +/- 110.01, N = 3)
c7g.4xlarge Graviton3: 40051.3 (SE +/- 305.31, N = 3)
Apache HTTP Server This is a test of the Apache HTTPD web server. This benchmark profile uses the Golang "Bombardier" program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
Apache HTTP Server 2.4.48, Concurrent Requests: 1000 (Requests Per Second, More Is Better):
a1.4xlarge Graviton: 19278.68 (SE +/- 98.61, N = 3)
c6a.4xlarge EPYC: 71537.11 (SE +/- 397.88, N = 3)
c6g.4xlarge Graviton2: 46629.45 (SE +/- 276.10, N = 3)
c6i.4xlarge Xeon: 79830.96 (SE +/- 335.63, N = 3)
c7g.4xlarge Graviton3: 72719.33 (SE +/- 83.83, N = 3)
1. (CC) gcc options: -shared -fPIC -O2
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18, Model: SqueezeNet (Microseconds, Fewer Is Better):
a1.4xlarge Graviton: 12014.70 (SE +/- 46.48, N = 3)
c6a.4xlarge EPYC: 3103.12 (SE +/- 1.37, N = 3)
c6g.4xlarge Graviton2: 3969.35 (SE +/- 37.23, N = 3)
c6i.4xlarge Xeon: 2983.93 (SE +/- 3.54, N = 3)
c7g.4xlarge Graviton3: 3257.94 (SE +/- 22.07, N = 3)
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 3.2, Preset: Exhaustive (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 277.77 (SE +/- 0.07, N = 3)
c6a.4xlarge EPYC: 72.39 (SE +/- 0.03, N = 3)
c6g.4xlarge Graviton2: 159.20 (SE +/- 0.00, N = 3)
c6i.4xlarge Xeon: 69.64 (SE +/- 0.04, N = 3)
c7g.4xlarge Graviton3: 139.38 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -flto -pthread
Rodinia Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
Rodinia 3.1, Test: OpenMP CFD Solver (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 41.45 (SE +/- 0.08, N = 3)
c6a.4xlarge EPYC: 21.79 (SE +/- 0.08, N = 3)
c6g.4xlarge Graviton2: 17.04 (SE +/- 0.05, N = 3)
c6i.4xlarge Xeon: 20.45 (SE +/- 0.02, N = 3)
c7g.4xlarge Graviton3: 10.48 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
OpenSSL 3.0, Algorithm: RSA4096 (verify/s, More Is Better):
a1.4xlarge Graviton: 45328.6 (SE +/- 63.75, N = 3)
c6a.4xlarge EPYC: 136784.2 (SE +/- 74.60, N = 3)
c6g.4xlarge Graviton2: 53951.5 (SE +/- 3.30, N = 3)
c6i.4xlarge Xeon: 140964.4 (SE +/- 47.94, N = 3)
c7g.4xlarge Graviton3: 178460.4 (SE +/- 82.61, N = 3)
Additional build flag on some configurations: -m64
1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl
libavif avifenc This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
libavif avifenc 0.10, Encoder Speed: 0 (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 768.30 (SE +/- 0.58, N = 3)
c6a.4xlarge EPYC: 195.53 (SE +/- 0.62, N = 3)
c6g.4xlarge Graviton2: 406.94 (SE +/- 0.13, N = 3)
c6i.4xlarge Xeon: 204.99 (SE +/- 0.33, N = 3)
c7g.4xlarge Graviton3: 256.84 (SE +/- 0.18, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm
Timed Node.js Compilation This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, which itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.
Timed Node.js Compilation 17.3, Time To Compile (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 1765.91 (SE +/- 1.80, N = 3)
c6a.4xlarge EPYC: 664.35 (SE +/- 0.26, N = 3)
c6g.4xlarge Graviton2: 628.40 (SE +/- 0.37, N = 3)
c6i.4xlarge Xeon: 604.62 (SE +/- 0.42, N = 3)
c7g.4xlarge Graviton3: 497.58 (SE +/- 2.06, N = 3)
PyBench This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
PyBench 2018-02-16, Total For Average Test Times (Milliseconds, Fewer Is Better):
a1.4xlarge Graviton: 3452 (SE +/- 18.15, N = 3)
c6a.4xlarge EPYC: 1961 (SE +/- 1.53, N = 3)
c6g.4xlarge Graviton2: 1741 (SE +/- 1.67, N = 3)
c6i.4xlarge Xeon: 997 (SE +/- 3.84, N = 3)
c7g.4xlarge Graviton3: 1185 (SE +/- 0.33, N = 3)
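PyBench sums per-function average times into a single total. A hedged micro-version of that idea with the stdlib `timeit` module (the two workload functions are illustrative stand-ins, not PyBench's actual tests):

```python
import timeit

def builtin_function_calls():
    # Stand-in for PyBench's BuiltinFunctionCalls-style workload
    len("hello"); abs(-3); min(1, 2)

def nested_for_loops():
    # Stand-in for PyBench's NestedForLoops-style workload
    total = 0
    for i in range(10):
        for j in range(10):
            total += i * j
    return total

ROUNDS = 5
total_ms = 0.0
for name, fn in [("BuiltinFunctionCalls", builtin_function_calls),
                 ("NestedForLoops", nested_for_loops)]:
    # Average over ROUNDS timed repetitions, reported in milliseconds
    avg_s = sum(timeit.repeat(fn, number=1000, repeat=ROUNDS)) / ROUNDS
    total_ms += avg_s * 1000
    print(f"{name}: {avg_s * 1000:.3f} ms")
print(f"Total: {total_ms:.3f} ms")
```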
PHPBench PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.
PHPBench 0.8.1, PHP Benchmark Suite (Score, More Is Better):
a1.4xlarge Graviton: 241259 (SE +/- 816.27, N = 3)
c6a.4xlarge EPYC: 480741 (SE +/- 2681.41, N = 3)
c6g.4xlarge Graviton2: 449855 (SE +/- 743.13, N = 3)
c6i.4xlarge Xeon: 828186 (SE +/- 983.65, N = 3)
c7g.4xlarge Graviton3: 666484 (SE +/- 525.83, N = 3)
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18, Model: NASNet Mobile (Microseconds, Fewer Is Better):
a1.4xlarge Graviton: 30986.70 (SE +/- 49.84, N = 3)
c6a.4xlarge EPYC: 9266.86 (SE +/- 23.44, N = 3)
c6g.4xlarge Graviton2: 14985.40 (SE +/- 203.15, N = 15)
c6i.4xlarge Xeon: 10900.60 (SE +/- 166.62, N = 14)
c7g.4xlarge Graviton3: 11591.90 (SE +/- 121.56, N = 15)
NAS Parallel Benchmarks NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4, Test / Class: EP.D (Total Mop/s, More Is Better):
a1.4xlarge Graviton: 339.20 (SE +/- 0.24, N = 3)
c6a.4xlarge EPYC: 466.21 (SE +/- 0.06, N = 3)
c6g.4xlarge Graviton2: 558.88 (SE +/- 0.23, N = 3)
c6i.4xlarge Xeon: 1103.22 (SE +/- 19.93, N = 9)
c7g.4xlarge Graviton3: 934.72 (SE +/- 0.39, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
Ngspice Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
Ngspice 34, Circuit: C2670 (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 473.90 (SE +/- 3.48, N = 3)
c6a.4xlarge EPYC: 245.89 (SE +/- 1.17, N = 3)
c6g.4xlarge Graviton2: 263.72 (SE +/- 0.91, N = 3)
c6i.4xlarge Xeon: 147.89 (SE +/- 1.80, N = 4)
c7g.4xlarge Graviton3: 198.22 (SE +/- 0.86, N = 3)
1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
simdjson 1.0, Throughput Test: LargeRandom (GB/s, More Is Better):
a1.4xlarge Graviton: 0.30 (SE +/- 0.00, N = 3)
c6a.4xlarge EPYC: 0.95 (SE +/- 0.00, N = 3)
c6g.4xlarge Graviton2: 0.49 (SE +/- 0.00, N = 3)
c6i.4xlarge Xeon: 0.86 (SE +/- 0.00, N = 3)
c7g.4xlarge Graviton3: 0.70 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3
SecureMark SecureMark is an objective, standardized benchmarking framework for measuring the efficiency of cryptographic processing solutions developed by EEMBC. SecureMark-TLS is benchmarking Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.
SecureMark 1.0.4, Benchmark: SecureMark-TLS (marks, More Is Better):
a1.4xlarge Graviton: 74356 (SE +/- 59.40, N = 3)
c6a.4xlarge EPYC: 213288 (SE +/- 3310.19, N = 9)
c6g.4xlarge Graviton2: 120301 (SE +/- 23.07, N = 3)
c6i.4xlarge Xeon: 230549 (SE +/- 864.34, N = 3)
c7g.4xlarge Graviton3: 183708 (SE +/- 773.26, N = 3)
1. (CC) gcc options: -pedantic -O3
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
Liquid-DSP 2021.01.31, Threads: 16 — Buffer Length: 256 — Filter Length: 57 (samples/s, More Is Better):
a1.4xlarge Graviton: 165513333 (SE +/- 8819.17, N = 3)
c6a.4xlarge EPYC: 509746667 (SE +/- 489364.67, N = 3)
c6g.4xlarge Graviton2: 262890000 (SE +/- 35118.85, N = 3)
c6i.4xlarge Xeon: 373100000 (SE +/- 41633.32, N = 3)
c7g.4xlarge Graviton3: 383606667 (SE +/- 400097.21, N = 3)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
Build2 This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code and features Cargo-like features. Learn more via the OpenBenchmarking.org test page.
Build2 0.13, Time To Compile (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 353.91 (SE +/- 1.89, N = 3)
c6a.4xlarge EPYC: 150.99 (SE +/- 0.87, N = 3)
c6g.4xlarge Graviton2: 142.28 (SE +/- 0.70, N = 3)
c6i.4xlarge Xeon: 136.80 (SE +/- 0.69, N = 3)
c7g.4xlarge Graviton3: 115.02 (SE +/- 0.64, N = 3)
7-Zip Compression This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
7-Zip Compression 21.06, Test: Compression Rating (MIPS, More Is Better):
a1.4xlarge Graviton: 32498 (SE +/- 91.00, N = 3)
c6a.4xlarge EPYC: 62562 (SE +/- 16.02, N = 3)
c6g.4xlarge Graviton2: 71285 (SE +/- 44.77, N = 3)
c6i.4xlarge Xeon: 66631 (SE +/- 174.34, N = 3)
c7g.4xlarge Graviton3: 97824 (SE +/- 159.36, N = 3)
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Ngspice Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
Ngspice 34, Circuit: C7552 (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 480.79 (SE +/- 1.19, N = 3)
c6a.4xlarge EPYC: 180.36 (SE +/- 0.66, N = 3)
c6g.4xlarge Graviton2: 255.21 (SE +/- 2.40, N = 7)
c6i.4xlarge Xeon: 161.08 (SE +/- 0.33, N = 3)
c7g.4xlarge Graviton3: 191.29 (SE +/- 1.94, N = 3)
1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better):
a1.4xlarge Graviton: 124.71 (SE +/- 0.09, N = 3)
c6a.4xlarge EPYC: 48.68 (SE +/- 0.06, N = 3)
c6g.4xlarge Graviton2: 66.15 (SE +/- 0.01, N = 3)
c6i.4xlarge Xeon: 41.81 (SE +/- 0.34, N = 3)
c7g.4xlarge Graviton3: 48.21 (SE +/- 0.01, N = 3)
Additional build flag on some configurations: -ltiff
1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16
Timed Gem5 Compilation This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research. Gem5 is widely used for computer architecture research within the industry, academia, and more. Learn more via the OpenBenchmarking.org test page.
Timed Gem5 Compilation 21.2, Time To Compile (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 1155.62 (SE +/- 0.78, N = 3)
c6a.4xlarge EPYC: 515.20 (SE +/- 0.79, N = 3)
c6g.4xlarge Graviton2: 488.81 (SE +/- 0.53, N = 3)
c6i.4xlarge Xeon: 469.94 (SE +/- 0.59, N = 3)
c7g.4xlarge Graviton3: 391.17 (SE +/- 1.33, N = 3)
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better):
a1.4xlarge Graviton: 61.80 (SE +/- 0.06, N = 3)
c6a.4xlarge EPYC: 26.71 (SE +/- 0.17, N = 15)
c6g.4xlarge Graviton2: 31.08 (SE +/- 0.02, N = 3)
c6i.4xlarge Xeon: 21.12 (SE +/- 0.03, N = 3)
c7g.4xlarge Graviton3: 22.77 (SE +/- 0.09, N = 3)
Additional build flag on some configurations: -ltiff
1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16
libavif avifenc This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
libavif avifenc 0.10, Encoder Speed: 6, Lossless (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 33.99 (SE +/- 0.31, N = 3)
c6a.4xlarge EPYC: 16.39 (SE +/- 0.12, N = 3)
c6g.4xlarge Graviton2: 16.52 (SE +/- 0.17, N = 3)
c6i.4xlarge Xeon: 17.53 (SE +/- 0.03, N = 3)
c7g.4xlarge Graviton3: 11.91 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm
nginx This is a benchmark of the lightweight Nginx HTTP(S) web server. This benchmark profile uses the Golang "Bombardier" program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
nginx 1.21.1, Concurrent Requests: 1000 (Requests Per Second, More Is Better):
a1.4xlarge Graviton: 138205.11 (SE +/- 66.96, N = 3)
c6a.4xlarge EPYC: 388657.76 (SE +/- 781.49, N = 3)
c6g.4xlarge Graviton2: 308213.13 (SE +/- 1677.89, N = 3)
c6i.4xlarge Xeon: 347345.49 (SE +/- 2637.25, N = 3)
c7g.4xlarge Graviton3: 346814.75 (SE +/- 1410.11, N = 3)
1. (CC) gcc options: -lcrypt -lz -O3 -march=native
nginx 1.21.1, Concurrent Requests: 500 (Requests Per Second, More Is Better):
a1.4xlarge Graviton: 139414.84 (SE +/- 141.15, N = 3)
c6a.4xlarge EPYC: 389030.11 (SE +/- 771.95, N = 3)
c6g.4xlarge Graviton2: 310596.58 (SE +/- 3783.68, N = 3)
c6i.4xlarge Xeon: 351672.92 (SE +/- 1620.39, N = 3)
c7g.4xlarge Graviton3: 346613.34 (SE +/- 1017.52, N = 3)
1. (CC) gcc options: -lcrypt -lz -O3 -march=native
nginx 1.21.1, Concurrent Requests: 200 (Requests Per Second, More Is Better):
a1.4xlarge Graviton: 141436.20 (SE +/- 133.96, N = 3)
c6a.4xlarge EPYC: 390932.79 (SE +/- 1242.81, N = 3)
c6g.4xlarge Graviton2: 308938.67 (SE +/- 1347.28, N = 3)
c6i.4xlarge Xeon: 356829.93 (SE +/- 1582.66, N = 3)
c7g.4xlarge Graviton3: 352380.98 (SE +/- 3986.77, N = 3)
1. (CC) gcc options: -lcrypt -lz -O3 -march=native
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Zstd Compression 1.5.0, Compression Level: 19 — Decompression Speed (MB/s, More Is Better):
a1.4xlarge Graviton: 1121.7 (SE +/- 4.74, N = 3)
c6a.4xlarge EPYC: 2907.5 (SE +/- 3.25, N = 3)
c6g.4xlarge Graviton2: 2051.6 (SE +/- 12.10, N = 3)
c6i.4xlarge Xeon: 2582.0 (SE +/- 24.18, N = 3)
c7g.4xlarge Graviton3: 3050.3 (SE +/- 7.75, N = 3)
Additional build flag on some configurations: -llzma
1. (CC) gcc options: -O3 -pthread -lz
nginx This is a benchmark of the lightweight Nginx HTTP(S) web server. This benchmark profile uses the Golang "Bombardier" program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
nginx 1.21.1, Concurrent Requests: 100 (Requests Per Second, More Is Better):
a1.4xlarge Graviton: 143155.48 (SE +/- 22.67, N = 3)
c6a.4xlarge EPYC: 388010.76 (SE +/- 436.72, N = 3)
c6g.4xlarge Graviton2: 307349.36 (SE +/- 3992.58, N = 3)
c6i.4xlarge Xeon: 356302.84 (SE +/- 1727.81, N = 3)
c7g.4xlarge Graviton3: 345710.87 (SE +/- 2009.97, N = 3)
1. (CC) gcc options: -lcrypt -lz -O3 -march=native
TSCP This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.
TSCP 1.81, AI Chess Performance (Nodes Per Second, More Is Better):
a1.4xlarge Graviton: 538500 (SE +/- 196.86, N = 5)
c6a.4xlarge EPYC: 1442631 (SE +/- 4180.17, N = 5)
c6g.4xlarge Graviton2: 872313 (SE +/- 338.27, N = 5)
c6i.4xlarge Xeon: 1272596 (SE +/- 1099.67, N = 5)
c7g.4xlarge Graviton3: 1370094 (SE +/- 0.00, N = 5)
1. (CC) gcc options: -O3 -march=native
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Zstd Compression 1.5.0, Compression Level: 19, Long Mode — Decompression Speed (MB/s, More Is Better):
a1.4xlarge Graviton: 1213.9 (SE +/- 15.28, N = 3)
c6a.4xlarge EPYC: 2826.0 (SE +/- 6.53, N = 3)
c6g.4xlarge Graviton2: 2196.3 (SE +/- 2.93, N = 3)
c6i.4xlarge Xeon: 2666.1 (SE +/- 7.82, N = 3)
c7g.4xlarge Graviton3: 3240.6 (SE +/- 6.93, N = 3)
Additional build flag on some configurations: -llzma
1. (CC) gcc options: -O3 -pthread -lz
Stockfish This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
Stockfish 13, Total Time (Nodes Per Second, More Is Better):
a1.4xlarge Graviton: 10980430 (SE +/- 123749.22, N = 3)
c6a.4xlarge EPYC: 23857623 (SE +/- 149731.77, N = 3)
c6g.4xlarge Graviton2: 21679245 (SE +/- 292329.99, N = 3)
c6i.4xlarge Xeon: 22081961 (SE +/- 242448.39, N = 3)
c7g.4xlarge Graviton3: 27608891 (SE +/- 153578.64, N = 3)
Additional build flags on some configurations: -m64 -msse -msse3 -mpopcnt -mavx2 -msse4.1 -mssse3 -msse2 -mbmi2; -m64 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2
1. (CXX) g++ options: -lgcov -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -flto -flto=jobserver
Rodinia Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
Rodinia 3.1, Test: OpenMP LavaMD (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 360.30 (SE +/- 0.07, N = 3)
c6a.4xlarge EPYC: 224.33 (SE +/- 0.03, N = 3)
c6g.4xlarge Graviton2: 215.67 (SE +/- 0.01, N = 3)
c6i.4xlarge Xeon: 281.39 (SE +/- 0.14, N = 3)
c7g.4xlarge Graviton3: 143.33 (SE +/- 0.15, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL
POV-Ray This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.
POV-Ray 3.7.0.7, Trace Time (Seconds, Fewer Is Better):
a1.4xlarge Graviton: 93.80 (SE +/- 0.94, N = 15)
c6a.4xlarge EPYC: 49.44 (SE +/- 0.18, N = 3)
c6g.4xlarge Graviton2: 51.05 (SE +/- 0.00, N = 3)
c6i.4xlarge Xeon: 52.78 (SE +/- 0.12, N = 3)
c7g.4xlarge Graviton3: 37.86 (SE +/- 0.01, N = 3)
Additional build flag on some configurations: -march=native
1. (CXX) g++ options: -pipe -O3 -ffast-math -R/usr/lib -lXpm -lSM -lICE -lX11 -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Zstd Compression 1.5.0, Compression Level: 19, Long Mode — Compression Speed (MB/s, More Is Better):
a1.4xlarge Graviton: 16.0 (SE +/- 0.00, N = 3)
c6a.4xlarge EPYC: 25.9 (SE +/- 0.27, N = 3)
c6g.4xlarge Graviton2: 31.0 (SE +/- 0.03, N = 3)
c6i.4xlarge Xeon: 33.8 (SE +/- 0.10, N = 3)
c7g.4xlarge Graviton3: 39.5 (SE +/- 0.23, N = 3)
Additional build flag on some configurations: -llzma
1. (CC) gcc options: -O3 -pthread -lz
Zstd Compression 1.5.0, Compression Level: 19 — Compression Speed (MB/s, More Is Better):
a1.4xlarge Graviton: 16.9 (SE +/- 0.03, N = 3)
c6a.4xlarge EPYC: 30.0 (SE +/- 0.21, N = 3)
c6g.4xlarge Graviton2: 34.6 (SE +/- 0.06, N = 3)
c6i.4xlarge Xeon: 38.1 (SE +/- 0.40, N = 3)
c7g.4xlarge Graviton3: 41.2 (SE +/- 0.00, N = 3)
Additional build flag on some configurations: -llzma
1. (CC) gcc options: -O3 -pthread -lz
Stress-NG Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Stress-NG 0.14, Test: Crypto (Bogo Ops/s, More Is Better):
a1.4xlarge Graviton: 11985.38 (SE +/- 6.29, N = 3)
c6a.4xlarge EPYC: 13556.06 (SE +/- 3.93, N = 3)
c6g.4xlarge Graviton2: 17924.18 (SE +/- 92.83, N = 3)
c6i.4xlarge Xeon: 10210.34 (SE +/- 5.89, N = 3)
c7g.4xlarge Graviton3: 23181.81 (SE +/- 32.01, N = 3)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
asmFish This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.
asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better):
a1.4xlarge Graviton: 15331550 (SE +/- 106812.26, N = 3)
c6a.4xlarge EPYC: 26187688 (SE +/- 303648.79, N = 3)
c6g.4xlarge Graviton2: 26540482 (SE +/- 359309.26, N = 3)
c6i.4xlarge Xeon: 23746200 (SE +/- 325631.00, N = 3)
c7g.4xlarge Graviton3: 32134123 (SE +/- 104795.40, N = 3)
Google SynthMark SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.
Google SynthMark 20201109, Test: VoiceMark_100 (Voices, More Is Better):
a1.4xlarge Graviton: 331.07 (SE +/- 0.00, N = 3)
c6a.4xlarge EPYC: 663.07 (SE +/- 7.09, N = 3)
c6g.4xlarge Graviton2: 470.39 (SE +/- 0.33, N = 3)
c6i.4xlarge Xeon: 565.69 (SE +/- 2.00, N = 3)
c7g.4xlarge Graviton3: 675.64 (SE +/- 0.32, N = 3)
1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
OpenSSL 3.0, Algorithm: SHA256 (byte/s, More Is Better):
a1.4xlarge Graviton: 6785689517 (SE +/- 12563225.46, N = 3)
c6a.4xlarge EPYC: 11691403353 (SE +/- 8616254.20, N = 3)
c6g.4xlarge Graviton2: 10723184083 (SE +/- 47755430.47, N = 3)
c6i.4xlarge Xeon: 7096993937 (SE +/- 606684.16, N = 3)
c7g.4xlarge Graviton3: 13722045973 (SE +/- 7739237.92, N = 3)
Additional build flag on some configurations: -m64
1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl
Stress-NG
Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Stress-NG 0.14, Test: Vector Math (Bogo Ops/s; more is better):
  a1.4xlarge Graviton:    27341.47  (SE +/- 0.49, N = 3)
  c6a.4xlarge EPYC:       53787.61  (SE +/- 2.46, N = 3)
  c6g.4xlarge Graviton2:  37753.89  (SE +/- 15.72, N = 3)
  c6i.4xlarge Xeon:       40140.30  (SE +/- 28.50, N = 3)
  c7g.4xlarge Graviton3:  55258.17  (SE +/- 17.05, N = 3)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
NAS Parallel Benchmarks
NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4, Test/Class: LU.C (Total Mop/s; more is better):
  a1.4xlarge Graviton:     2558.12  (SE +/- 0.15, N = 3)
  c6a.4xlarge EPYC:       25140.55  (SE +/- 18.06, N = 3)
  c6g.4xlarge Graviton2:   5133.89  (SE +/- 0.90, N = 3)
  c6i.4xlarge Xeon:       38136.77  (SE +/- 160.86, N = 3)
  c7g.4xlarge Graviton3:   7730.41  (SE +/- 1.96, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2
LeelaChessZero
LeelaChessZero (lc0 / lczero) is a chess engine driven by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
LeelaChessZero 0.28, Backend: Eigen (Nodes Per Second; more is better):
  a1.4xlarge Graviton:     128  (SE +/- 0.67, N = 3)
  c6a.4xlarge EPYC:       1001  (SE +/- 11.74, N = 9)
  c6g.4xlarge Graviton2:   834  (SE +/- 12.00, N = 3)
  c6i.4xlarge Xeon:       1466  (SE +/- 13.37, N = 3)
  c7g.4xlarge Graviton3:  1189  (SE +/- 9.70, N = 3)
LeelaChessZero 0.28, Backend: BLAS (Nodes Per Second; more is better):
  a1.4xlarge Graviton:     135  (SE +/- 0.88, N = 3)
  c6a.4xlarge EPYC:       1091  (SE +/- 12.82, N = 9)
  c6g.4xlarge Graviton2:   864  (SE +/- 10.22, N = 4)
  c6i.4xlarge Xeon:       1397  (SE +/- 12.41, N = 9)
  c7g.4xlarge Graviton3:  1103  (SE +/- 6.44, N = 3)
1. (CXX) g++ options: -flto -pthread
Coremark
This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.
Coremark 1.0, CoreMark Size 666 - Iterations Per Second (Iterations/Sec; more is better):
  a1.4xlarge Graviton:    203869.40  (SE +/- 116.54, N = 3)
  c6a.4xlarge EPYC:       345133.44  (SE +/- 2163.00, N = 3)
  c6g.4xlarge Graviton2:  315464.34  (SE +/- 49.84, N = 3)
  c6i.4xlarge Xeon:       285378.84  (SE +/- 80.93, N = 3)
  c7g.4xlarge Graviton3:  405413.86  (SE +/- 3211.91, N = 3)
1. (CC) gcc options: -O2 -lrt
N-Queens
This is the OpenMP version of a solver for the N-queens problem, run with a board size of 18. Learn more via the OpenBenchmarking.org test page.
N-Queens 1.0, Elapsed Time (Seconds; fewer is better):
  a1.4xlarge Graviton:    32.29  (SE +/- 0.00, N = 3)
  c6a.4xlarge EPYC:       16.38  (SE +/- 0.00, N = 3)
  c6g.4xlarge Graviton2:  23.14  (SE +/- 0.00, N = 3)
  c6i.4xlarge Xeon:       18.84  (SE +/- 0.00, N = 3)
  c7g.4xlarge Graviton3:  21.54  (SE +/- 0.00, N = 3)
1. (CC) gcc options: -static -fopenmp -O3 -march=native
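The benchmark's own OpenMP source is not reproduced here, but the recursion it parallelizes can be sketched as a single-threaded bitmask backtracking counter in Python (the benchmark solves the far larger 18x18 board; a small board suffices to illustrate):

```python
def count_nqueens(n: int) -> int:
    """Count solutions to the n-queens problem by depth-first backtracking.

    Bitmasks track attacked columns and both diagonal directions, so each
    row placement is a few integer operations.
    """
    def place(row: int, cols: int, diag1: int, diag2: int) -> int:
        if row == n:
            return 1  # all n queens placed without conflict
        total = 0
        # Bits set in `free` are the columns still legal in this row.
        free = ((1 << n) - 1) & ~(cols | diag1 | diag2)
        while free:
            bit = free & -free          # lowest available column
            free ^= bit
            total += place(row + 1,
                           cols | bit,
                           (diag1 | bit) << 1,   # diagonal shifts with the row
                           (diag2 | bit) >> 1)
        return total

    return place(0, 0, 0, 0)

print(count_nqueens(8))  # the classic 8x8 board has 92 solutions
```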
7-Zip Compression
This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
7-Zip Compression 21.06, Test: Decompression Rating (MIPS; more is better):
  a1.4xlarge Graviton:    40891  (SE +/- 31.21, N = 3)
  c6a.4xlarge EPYC:       57318  (SE +/- 142.56, N = 3)
  c6g.4xlarge Graviton2:  59445  (SE +/- 239.68, N = 3)
  c6i.4xlarge Xeon:       45653  (SE +/- 35.00, N = 3)
  c7g.4xlarge Graviton3:  73054  (SE +/- 12.88, N = 3)
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
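7-Zip's integrated benchmark (invoked as `7z b`) rates LZMA compression and decompression throughput. As a loose illustration of that workload, using Python's standard lzma module as a stand-in rather than 7-Zip itself:

```python
import lzma
import os
import time

# Build ~1 MiB of compressible data (a random block repeated), compress it
# with LZMA, then time the decompression pass -- the direction rated above.
data = os.urandom(1 << 18) * 4
packed = lzma.compress(data)

start = time.perf_counter()
out = lzma.decompress(packed)
elapsed = time.perf_counter() - start

assert out == data  # round-trip must be lossless
print(f"decompressed {len(data) / 1e6:.1f} MB in {elapsed * 1e3:.1f} ms")
```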
m-queens
A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
m-queens 1.2, Time To Solve (Seconds; fewer is better):
  a1.4xlarge Graviton:    110.37  (SE +/- 0.01, N = 3)
  c6a.4xlarge EPYC:        72.33  (SE +/- 0.02, N = 3)
  c6g.4xlarge Graviton2:   75.22  (SE +/- 0.00, N = 3)
  c6i.4xlarge Xeon:        91.23  (SE +/- 0.06, N = 3)
  c7g.4xlarge Graviton3:   66.82  (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -fopenmp -O2 -march=native
Stress-NG
Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Stress-NG 0.14, Test: IO_uring (Bogo Ops/s; more is better):
  a1.4xlarge Graviton:     918172.37  (SE +/- 3840.04, N = 3)
  c6a.4xlarge EPYC:        768723.46  (SE +/- 713.03, N = 3)
  c6g.4xlarge Graviton2:   770521.81  (SE +/- 2395.13, N = 3)
  c6i.4xlarge Xeon:       1037943.37  (SE +/- 405.56, N = 3)
  c7g.4xlarge Graviton3:   843015.78  (SE +/- 614.16, N = 3)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
ONNX Runtime
ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
ONNX Runtime 1.11, Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton:     757  (SE +/- 0.50, N = 3)
  c6a.4xlarge EPYC:       3696  (SE +/- 234.97, N = 12)
  c6g.4xlarge Graviton2:  2072  (SE +/- 1.74, N = 3)
  c6i.4xlarge Xeon:       3450  (SE +/- 1.61, N = 3)
  c7g.4xlarge Graviton3:  2817  (SE +/- 1.86, N = 3)
ONNX Runtime 1.11, Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton:     165  (SE +/- 0.50, N = 3)
  c6a.4xlarge EPYC:       1192  (SE +/- 82.60, N = 12)
  c6g.4xlarge Graviton2:   334  (SE +/- 0.17, N = 3)
  c6i.4xlarge Xeon:       1374  (SE +/- 91.51, N = 12)
  c7g.4xlarge Graviton3:   609  (SE +/- 0.00, N = 3)
ONNX Runtime 1.11, Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton:     10  (SE +/- 0.00, N = 3)
  c6a.4xlarge EPYC:        65  (SE +/- 5.55, N = 12)
  c6g.4xlarge Graviton2:   28  (SE +/- 0.00, N = 3)
  c6i.4xlarge Xeon:       139  (SE +/- 0.60, N = 3)
  c7g.4xlarge Graviton3:   38  (SE +/- 0.00, N = 3)
ONNX Runtime 1.11, Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton:    115  (SE +/- 0.88, N = 3)
  c6a.4xlarge EPYC:       488  (SE +/- 0.58, N = 3)
  c6g.4xlarge Graviton2:  322  (SE +/- 0.17, N = 3)
  c6i.4xlarge Xeon:       773  (SE +/- 50.92, N = 12)
  c7g.4xlarge Graviton3:  407  (SE +/- 0.17, N = 3)
ONNX Runtime 1.11, Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton:    2312  (SE +/- 2.20, N = 3)
  c6a.4xlarge EPYC:       5617  (SE +/- 75.29, N = 12)
  c6g.4xlarge Graviton2:  6948  (SE +/- 3.50, N = 3)
  c6i.4xlarge Xeon:       7944  (SE +/- 322.41, N = 12)
  c7g.4xlarge Graviton3:  7990  (SE +/- 2.40, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
TensorFlow Lite
This is a benchmark of the TensorFlow Lite implementation, focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18, Model: Mobilenet Quant (Microseconds; fewer is better):
  a1.4xlarge Graviton:    5724.66  (SE +/- 20.90, N = 3)
  c6a.4xlarge EPYC:       3847.96  (SE +/- 53.31, N = 15)
  c6g.4xlarge Graviton2:  1980.24  (SE +/- 14.44, N = 3)
  c6i.4xlarge Xeon:       3967.39  (SE +/- 80.05, N = 12)
  c7g.4xlarge Graviton3:  1502.95  (SE +/- 17.76, N = 3)
C-Ray
This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. The test is multi-threaded (16 threads per core); by default it shoots 8 rays per pixel for anti-aliasing and generates a 1600 x 1200 image, while this run renders at 4K with 16 rays per pixel. Learn more via the OpenBenchmarking.org test page.
C-Ray 1.1, Total Time - 4K, 16 Rays Per Pixel (Seconds; fewer is better):
  a1.4xlarge Graviton:    104.76  (SE +/- 2.00, N = 15)
  c6a.4xlarge EPYC:        69.35  (SE +/- 0.77, N = 5)
  c6g.4xlarge Graviton2:   62.32  (SE +/- 0.03, N = 3)
  c6i.4xlarge Xeon:        92.55  (SE +/- 0.04, N = 3)
  c7g.4xlarge Graviton3:   38.52  (SE +/- 0.02, N = 3)
1. (CC) gcc options: -lm -lpthread -O3
Rodinia
Rodinia is a suite focused on accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile currently utilizes select OpenCL, NVIDIA CUDA, and OpenMP test binaries. Learn more via the OpenBenchmarking.org test page.
Rodinia 3.1, Test: OpenMP Streamcluster (Seconds; fewer is better):
  a1.4xlarge Graviton:    47.43  (SE +/- 0.02, N = 3)
  c6a.4xlarge EPYC:       18.38  (SE +/- 0.05, N = 3)
  c6g.4xlarge Graviton2:  15.48  (SE +/- 0.26, N = 15)
  c6i.4xlarge Xeon:       23.51  (SE +/- 0.07, N = 3)
  c7g.4xlarge Graviton3:  13.30  (SE +/- 0.33, N = 12)
1. (CXX) g++ options: -O2 -lOpenCL
a1.4xlarge Graviton
Processor: ARMv8 Cortex-A72 (16 Cores), Motherboard: Amazon EC2 a1.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Not affected + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Branch predictor hardening BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 25 May 2022 00:29 by user ubuntu.
c6g.4xlarge Graviton2
Processor: ARMv8 Neoverse-N1 (16 Cores), Motherboard: Amazon EC2 c6g.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 25 May 2022 13:01 by user ubuntu.
c7g.4xlarge Graviton3
Processor: ARMv8 Neoverse-V1 (16 Cores), Motherboard: Amazon EC2 c7g.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 24 May 2022 11:30 by user ubuntu.
c6a.4xlarge EPYC
Processor: AMD EPYC 7R13 (8 Cores / 16 Threads), Motherboard: Amazon EC2 c6a.4xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (x86_64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: CPU Microcode: 0xa001144
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 May 2022 00:31 by user ubuntu.
c6i.4xlarge Xeon
Processor: Intel Xeon Platinum 8375C (8 Cores / 16 Threads), Motherboard: Amazon EC2 c6i.4xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (x86_64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: CPU Microcode: 0xd000331
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 May 2022 00:32 by user ubuntu.