Amazon AWS Graviton3 benchmarks by Michael Larabel.
a1.4xlarge Graviton Processor: ARMv8 Cortex-A72 (16 Cores), Motherboard: Amazon EC2 a1.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Not affected + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Branch predictor hardening BHB + srbds: Not affected + tsx_async_abort: Not affected
c6g.4xlarge Graviton2 Changed Processor to ARMv8 Neoverse-N1 (16 Cores).
Changed Motherboard to Amazon EC2 c6g.4xlarge (1.0 BIOS).
Security Change: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
c7g.4xlarge Graviton3 Changed Processor to ARMv8 Neoverse-V1 (16 Cores).
Changed Motherboard to Amazon EC2 c7g.4xlarge (1.0 BIOS).
c6a.4xlarge EPYC Processor: AMD EPYC 7R13 (8 Cores / 16 Threads), Motherboard: Amazon EC2 c6a.4xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (x86_64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: CPU Microcode: 0xa001144
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
c6i.4xlarge Xeon Changed Processor to Intel Xeon Platinum 8375C (8 Cores / 16 Threads).
Changed Motherboard to Amazon EC2 c6i.4xlarge (1.0 BIOS).
Processor Change: CPU Microcode: 0xd000331
Security Change: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Logarithmic result overview (Phoronix Test Suite) across the five instances - a1.4xlarge Graviton, c6g.4xlarge Graviton2, c7g.4xlarge Graviton3, c6a.4xlarge EPYC, c6i.4xlarge Xeon - covering: High Performance Conjugate Gradient, Algebraic Multi-Grid Benchmark, ACES DGEMM, ONNX Runtime, Xcompact3d Incompact3d, NAS Parallel Benchmarks, Timed MrBayes Analysis, GPAW, LULESH, GROMACS, Apache HTTP Server, simdjson, TensorFlow Lite, ASTC Encoder, Timed Node.js Compilation, LAMMPS Molecular Dynamics Simulator, PyBench, PHPBench, libavif avifenc, Timed ImageMagick Compilation, Timed Apache Compilation, Timed LLVM Compilation, Rodinia, OpenSSL, Zstd Compression, SecureMark, Ngspice, Liquid-DSP, Build2, Timed PHP Compilation, DaCapo Benchmark, WebP Image Encode, Timed Gem5 Compilation, nginx, C-Ray, TSCP, Stockfish, POV-Ray, 7-Zip Compression, Stress-NG, asmFish, Google SynthMark, LeelaChessZero, Coremark, N-Queens, and m-queens.
Amazon EC2 Graviton3 Benchmark Comparison - complete side-by-side results table for all of the above tests across the five instances; the individual test results are presented graph-by-graph below.
Stress-NG Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Stress-NG 0.14 - Test: Memory Copying (Bogo Ops/s; more is better): c7g.4xlarge Graviton3: 6693.32 (SE +/- 3.52, N = 3); c6a.4xlarge EPYC: 3551.80 (SE +/- 11.57, N = 3); c6i.4xlarge Xeon: 3150.49 (SE +/- 0.94, N = 3); c6g.4xlarge Graviton2: 2903.00 (SE +/- 3.75, N = 3); a1.4xlarge Graviton: 798.24 (SE +/- 0.91, N = 3). 1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
NAS Parallel Benchmarks NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s; more is better): c6i.4xlarge Xeon: 26298.81 (SE +/- 184.24, N = 3); c6a.4xlarge EPYC: 16826.43 (SE +/- 30.62, N = 3); c7g.4xlarge Graviton3: 13481.61 (SE +/- 4.69, N = 3); c6g.4xlarge Graviton2: 6720.68 (SE +/- 1.39, N = 3); a1.4xlarge Graviton: 3266.36 (SE +/- 1.64, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s; more is better): c6i.4xlarge Xeon: 9522.82 (SE +/- 66.44, N = 3); c7g.4xlarge Graviton3: 6571.95 (SE +/- 17.12, N = 3); c6a.4xlarge EPYC: 6169.22 (SE +/- 81.25, N = 3); c6g.4xlarge Graviton2: 3520.86 (SE +/- 9.95, N = 3); a1.4xlarge Graviton: 1213.15 (SE +/- 11.79, N = 6). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s; more is better): c6i.4xlarge Xeon: 9563.22 (SE +/- 73.65, N = 3); c6a.4xlarge EPYC: 8094.79 (SE +/- 24.63, N = 3); c7g.4xlarge Graviton3: 4467.19 (SE +/- 9.61, N = 3); c6g.4xlarge Graviton2: 2356.16 (SE +/- 0.57, N = 3); a1.4xlarge Graviton: 1293.80 (SE +/- 2.51, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Zstd Compression 1.5.0 - Compression Level: 3 - Compression Speed (MB/s; more is better): c7g.4xlarge Graviton3: 4639.1 (SE +/- 9.57, N = 3); c6i.4xlarge Xeon: 3440.6 (SE +/- 29.53, N = 3); c6g.4xlarge Graviton2: 2888.3 (SE +/- 6.37, N = 3); c6a.4xlarge EPYC: 2784.0 (SE +/- 2.65, N = 3); a1.4xlarge Graviton: 633.9 (SE +/- 4.47, N = 3). -llzma -llzma -llzma -llzma 1. (CC) gcc options: -O3 -pthread -lz
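For context on what this test exercises, here is a minimal sketch of single-shot compression at level 3 with the zstd library's simple API; the 1 MiB buffer is purely illustrative, whereas the benchmark itself measures zstd on the FreeBSD disk image noted above.

// Illustrative only: one-shot zstd compression at level 3 via the simple API.
#include <zstd.h>
#include <vector>
#include <cstdio>

int main() {
    std::vector<char> src(1 << 20, 'x');               // 1 MiB stand-in for real input data
    size_t bound = ZSTD_compressBound(src.size());     // worst-case compressed size
    std::vector<char> dst(bound);
    size_t csize = ZSTD_compress(dst.data(), bound, src.data(), src.size(), 3); // level 3
    if (ZSTD_isError(csize)) {
        std::printf("error: %s\n", ZSTD_getErrorName(csize));
        return 1;
    }
    std::printf("compressed %zu -> %zu bytes\n", src.size(), csize);
    return 0;
}

Level 19 (and its long-distance-matching mode) trades much lower compression speed for a better ratio, which is why those results are reported separately further down.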
NAS Parallel Benchmarks NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s; more is better): c6i.4xlarge Xeon: 20423.57 (SE +/- 40.24, N = 3); c6a.4xlarge EPYC: 18299.96 (SE +/- 45.90, N = 3); c7g.4xlarge Graviton3: 11791.77 (SE +/- 1.17, N = 3); c6g.4xlarge Graviton2: 6244.48 (SE +/- 1.10, N = 3); a1.4xlarge Graviton: 2927.16 (SE +/- 1.73, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
High Performance Conjugate Gradient HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Labs aimed at supercomputer testing with modern, real-world workloads, in contrast to HPCC. Learn more via the OpenBenchmarking.org test page.
High Performance Conjugate Gradient 3.1 (GFLOP/s; more is better): c7g.4xlarge Graviton3: 26.30580 (SE +/- 0.03738, N = 3); c6g.4xlarge Graviton2: 19.72180 (SE +/- 0.01639, N = 3); c6i.4xlarge Xeon: 8.66031 (SE +/- 0.04033, N = 3); c6a.4xlarge EPYC: 5.06042 (SE +/- 0.00225, N = 3); a1.4xlarge Graviton: 3.77834 (SE +/- 0.00065, N = 3). 1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi
Algebraic Multi-Grid Benchmark AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.
Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit; more is better): c7g.4xlarge Graviton3: 1258807333 (SE +/- 952437.28, N = 3); c6g.4xlarge Graviton2: 932652900 (SE +/- 3420043.89, N = 3); c6i.4xlarge Xeon: 661364767 (SE +/- 5114517.12, N = 3); c6a.4xlarge EPYC: 267670700 (SE +/- 103921.81, N = 3); a1.4xlarge Graviton: 186716933 (SE +/- 176548.39, N = 3). 1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds; fewer is better): c7g.4xlarge Graviton3: 8.01671425 (SE +/- 0.01401446, N = 3); c6g.4xlarge Graviton2: 11.57335470 (SE +/- 0.01351889, N = 3); c6i.4xlarge Xeon: 17.86827720 (SE +/- 0.09619197, N = 3); c6a.4xlarge EPYC: 28.27976610 (SE +/- 0.03718674, N = 3); a1.4xlarge Graviton: 53.77062740 (SE +/- 0.02862870, N = 3). 1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
ACES DGEMM This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.
ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s; more is better): c7g.4xlarge Graviton3: 5.853864 (SE +/- 0.016350, N = 3); c6g.4xlarge Graviton2: 4.785123 (SE +/- 0.007139, N = 3); c6a.4xlarge EPYC: 2.432432 (SE +/- 0.023324, N = 6); c6i.4xlarge Xeon: 2.230545 (SE +/- 0.003819, N = 3); a1.4xlarge Graviton: 0.891391 (SE +/- 0.002370, N = 3). 1. (CC) gcc options: -O3 -march=native -fopenmp
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds; fewer is better): c7g.4xlarge Graviton3: 29.13 (SE +/- 0.03, N = 3); c6g.4xlarge Graviton2: 41.02 (SE +/- 0.01, N = 3); c6i.4xlarge Xeon: 69.22 (SE +/- 0.14, N = 3); c6a.4xlarge EPYC: 110.77 (SE +/- 0.12, N = 3); a1.4xlarge Graviton: 182.58 (SE +/- 0.15, N = 3). 1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
Stress-NG Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Stress-NG 0.14 - Test: CPU Stress (Bogo Ops/s; more is better): c6a.4xlarge EPYC: 13304.50 (SE +/- 37.60, N = 3); c6i.4xlarge Xeon: 12527.16 (SE +/- 155.66, N = 3); c7g.4xlarge Graviton3: 5029.71 (SE +/- 0.41, N = 3); c6g.4xlarge Graviton2: 3404.94 (SE +/- 0.54, N = 3); a1.4xlarge Graviton: 2366.00 (SE +/- 0.16, N = 3). 1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
simdjson 1.0 - Throughput Test: DistinctUserID (GB/s; more is better): c6i.4xlarge Xeon: 4.30 (SE +/- 0.00, N = 3); c6a.4xlarge EPYC: 4.30 (SE +/- 0.01, N = 3); c7g.4xlarge Graviton3: 2.69 (SE +/- 0.00, N = 3); c6g.4xlarge Graviton2: 1.53 (SE +/- 0.00, N = 3); a1.4xlarge Graviton: 0.80 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3
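As a rough illustration of the kind of work these throughput figures represent, the sketch below uses simdjson's On Demand API, closely following the project's own quickstart; the twitter.json file name is the canonical sample document and only a placeholder here, not part of this benchmark run.

// Minimal simdjson On Demand example (per the upstream quickstart).
#include "simdjson.h"
#include <cstdint>
#include <iostream>

int main() {
    simdjson::ondemand::parser parser;
    // Load a JSON document padded for SIMD parsing; "twitter.json" is a placeholder path.
    simdjson::padded_string json = simdjson::padded_string::load("twitter.json");
    simdjson::ondemand::document tweets = parser.iterate(json);
    std::cout << uint64_t(tweets["search_metadata"]["count"]) << " results." << std::endl;
    return 0;
}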
Timed MrBayes Analysis This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds; fewer is better): c6a.4xlarge EPYC: 120.64 (SE +/- 0.35, N = 3); c6i.4xlarge Xeon: 134.92 (SE +/- 1.43, N = 3); c7g.4xlarge Graviton3: 251.40 (SE +/- 0.24, N = 3); c6g.4xlarge Graviton2: 384.75 (SE +/- 0.11, N = 3); a1.4xlarge Graviton: 644.79 (SE +/- 0.49, N = 3). -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mabm 1. (CC) gcc options: -O3 -std=c99 -pedantic -lm
NAS Parallel Benchmarks NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 - Test / Class: IS.D (Total Mop/s; more is better): c7g.4xlarge Graviton3: 1041.90 (SE +/- 2.29, N = 3); c6i.4xlarge Xeon: 861.57 (SE +/- 2.14, N = 3); c6a.4xlarge EPYC: 541.35 (SE +/- 0.47, N = 3); c6g.4xlarge Graviton2: 372.76 (SE +/- 0.20, N = 3); a1.4xlarge Graviton: 197.57 (SE +/- 0.31, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds; fewer is better): c6i.4xlarge Xeon: 1965.07 (SE +/- 1.81, N = 3); c7g.4xlarge Graviton3: 2156.60 (SE +/- 19.61, N = 3); c6a.4xlarge EPYC: 2159.72 (SE +/- 1.03, N = 3); c6g.4xlarge Graviton2: 2500.87 (SE +/- 28.63, N = 3); a1.4xlarge Graviton: 9990.15 (SE +/- 113.94, N = 3).
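The TensorFlow Lite numbers above are average inference times. For orientation, a bare-bones C++ invocation of the TFLite interpreter looks roughly like the sketch below; the model path and input handling are placeholders, and this is not the test profile's actual harness.

// Hypothetical minimal TensorFlow Lite C++ inference; placeholder model path.
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include <memory>
#include <cstdio>

int main() {
    auto model = tflite::FlatBufferModel::BuildFromFile("mobilenet_float.tflite"); // placeholder
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    interpreter->AllocateTensors();
    // ... fill interpreter->typed_input_tensor<float>(0) with preprocessed image data ...
    if (interpreter->Invoke() != kTfLiteOk) {
        std::printf("inference failed\n");
        return 1;
    }
    const float* scores = interpreter->typed_output_tensor<float>(0); // classification scores
    (void)scores;
    return 0;
}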
GPAW GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
GPAW 22.1 - Input: Carbon Nanotube (Seconds; fewer is better): c7g.4xlarge Graviton3: 155.18 (SE +/- 0.08, N = 3); c6i.4xlarge Xeon: 202.11 (SE +/- 0.24, N = 3); c6g.4xlarge Graviton2: 215.53 (SE +/- 0.13, N = 3); c6a.4xlarge EPYC: 302.96 (SE +/- 0.17, N = 3); a1.4xlarge Graviton: 769.35 (SE +/- 5.37, N = 3). 1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi
libavif avifenc This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
libavif avifenc 0.10 - Encoder Speed: 2 (Seconds; fewer is better): c6a.4xlarge EPYC: 93.95 (SE +/- 0.44, N = 3); c6i.4xlarge Xeon: 97.74 (SE +/- 0.26, N = 3); c7g.4xlarge Graviton3: 141.70 (SE +/- 0.11, N = 3); c6g.4xlarge Graviton2: 238.21 (SE +/- 0.12, N = 3); a1.4xlarge Graviton: 449.02 (SE +/- 0.29, N = 3). 1. (CXX) g++ options: -O3 -fPIC -lm
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
simdjson 1.0 - Throughput Test: PartialTweets (GB/s; more is better): c6i.4xlarge Xeon: 3.71 (SE +/- 0.00, N = 3); c6a.4xlarge EPYC: 3.64 (SE +/- 0.00, N = 3); c7g.4xlarge Graviton3: 2.62 (SE +/- 0.00, N = 3); c6g.4xlarge Graviton2: 1.51 (SE +/- 0.00, N = 3); a1.4xlarge Graviton: 0.78 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3
LULESH LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.
LULESH 2.0.3 (z/s; more is better): c7g.4xlarge Graviton3: 10940.94 (SE +/- 76.73, N = 3); c6i.4xlarge Xeon: 8112.37 (SE +/- 14.20, N = 3); c6g.4xlarge Graviton2: 6016.16 (SE +/- 4.88, N = 3); c6a.4xlarge EPYC: 5452.11 (SE +/- 5.52, N = 3); a1.4xlarge Graviton: 2328.27 (SE +/- 6.27, N = 3). 1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi
Apache HTTP Server This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
Apache HTTP Server 2.4.48 - Concurrent Requests: 100 (Requests Per Second; more is better): c6i.4xlarge Xeon: 86545.57 (SE +/- 389.13, N = 3); c6a.4xlarge EPYC: 77567.69 (SE +/- 211.56, N = 3); c7g.4xlarge Graviton3: 67231.88 (SE +/- 38.09, N = 3); c6g.4xlarge Graviton2: 46995.35 (SE +/- 93.03, N = 3); a1.4xlarge Graviton: 18636.43 (SE +/- 28.97, N = 3). 1. (CC) gcc options: -shared -fPIC -O2
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 3.2 - Preset: Thorough (Seconds; fewer is better): c6i.4xlarge Xeon: 7.2625 (SE +/- 0.0001, N = 3); c6a.4xlarge EPYC: 7.9818 (SE +/- 0.0154, N = 3); c7g.4xlarge Graviton3: 13.9248 (SE +/- 0.0011, N = 3); c6g.4xlarge Graviton2: 16.5222 (SE +/- 0.0064, N = 3); a1.4xlarge Graviton: 33.5198 (SE +/- 0.0061, N = 3). 1. (CXX) g++ options: -O3 -flto -pthread
GROMACS The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day; more is better): c6i.4xlarge Xeon: 1.452 (SE +/- 0.001, N = 3); c7g.4xlarge Graviton3: 1.128 (SE +/- 0.002, N = 3); c6a.4xlarge EPYC: 1.004 (SE +/- 0.002, N = 3); c6g.4xlarge Graviton2: 0.781 (SE +/- 0.001, N = 3); a1.4xlarge Graviton: 0.316 (SE +/- 0.000, N = 3). 1. (CXX) g++ options: -O3
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds; fewer is better): c6i.4xlarge Xeon: 41185.7 (SE +/- 75.14, N = 3); c7g.4xlarge Graviton3: 41855.1 (SE +/- 210.27, N = 3); c6a.4xlarge EPYC: 44920.6 (SE +/- 53.95, N = 3); c6g.4xlarge Graviton2: 46793.9 (SE +/- 197.89, N = 3); a1.4xlarge Graviton: 188910.0 (SE +/- 1746.17, N = 3).
Apache HTTP Server This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
Apache HTTP Server 2.4.48 - Concurrent Requests: 500 (Requests Per Second; more is better): c6i.4xlarge Xeon: 91746.57 (SE +/- 833.50, N = 7); c6a.4xlarge EPYC: 81995.64 (SE +/- 636.46, N = 13); c7g.4xlarge Graviton3: 73546.32 (SE +/- 89.82, N = 3); c6g.4xlarge Graviton2: 50077.81 (SE +/- 578.32, N = 3); a1.4xlarge Graviton: 20133.49 (SE +/- 93.64, N = 3). 1. (CC) gcc options: -shared -fPIC -O2
Apache HTTP Server 2.4.48 - Concurrent Requests: 200 (Requests Per Second; more is better): c6i.4xlarge Xeon: 94458.22 (SE +/- 615.05, N = 3); c6a.4xlarge EPYC: 83070.00 (SE +/- 644.29, N = 3); c7g.4xlarge Graviton3: 73676.95 (SE +/- 649.31, N = 3); c6g.4xlarge Graviton2: 50059.97 (SE +/- 112.65, N = 3); a1.4xlarge Graviton: 20887.58 (SE +/- 59.55, N = 3). 1. (CC) gcc options: -shared -fPIC -O2
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
simdjson 1.0 - Throughput Test: Kostya (GB/s; more is better): c6a.4xlarge EPYC: 2.80 (SE +/- 0.00, N = 3); c6i.4xlarge Xeon: 2.46 (SE +/- 0.00, N = 3); c7g.4xlarge Graviton3: 1.94 (SE +/- 0.00, N = 3); c6g.4xlarge Graviton2: 1.19 (SE +/- 0.00, N = 3); a1.4xlarge Graviton: 0.63 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3
NAS Parallel Benchmarks NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s; more is better): c6i.4xlarge Xeon: 13888.40 (SE +/- 22.04, N = 3); c6a.4xlarge EPYC: 13134.46 (SE +/- 98.45, N = 3); c7g.4xlarge Graviton3: 10339.53 (SE +/- 7.36, N = 3); c6g.4xlarge Graviton2: 6449.11 (SE +/- 3.20, N = 3); a1.4xlarge Graviton: 3148.18 (SE +/- 3.44, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
OpenSSL 3.0 - Algorithm: RSA4096 (sign/s; more is better): c7g.4xlarge Graviton3: 2546.4 (SE +/- 0.23, N = 3); c6i.4xlarge Xeon: 2161.3 (SE +/- 4.47, N = 3); c6a.4xlarge EPYC: 2088.9 (SE +/- 1.40, N = 3); c6g.4xlarge Graviton2: 660.6 (SE +/- 0.03, N = 3); a1.4xlarge Graviton: 588.3 (SE +/- 0.12, N = 3). -m64 -m64 1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl
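For reference, the sign/s figure counts RSA-4096 private-key (signing) operations per second as reported by openssl speed. The sketch below shows one comparable signing call with the OpenSSL 3.0 EVP API; it is only an approximation of what the speed tool loops over, with error handling omitted.

// Rough sketch of one RSA-4096 sign with the OpenSSL 3.0 EVP API (not the speed tool itself).
// Error checking omitted for brevity; build with -lcrypto.
#include <openssl/evp.h>
#include <cstdio>

int main() {
    EVP_PKEY *key = EVP_PKEY_Q_keygen(nullptr, nullptr, "RSA", (size_t)4096); // fresh 4096-bit key
    unsigned char msg[32] = {0};      // stand-in message to be hashed and signed
    unsigned char sig[512];           // RSA-4096 signatures are 512 bytes
    size_t siglen = sizeof(sig);
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestSignInit(ctx, nullptr, EVP_sha256(), nullptr, key);
    EVP_DigestSign(ctx, sig, &siglen, msg, sizeof(msg));  // one "sign" operation
    std::printf("signature: %zu bytes\n", siglen);
    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(key);
    return 0;
}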
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2 (Microseconds; fewer is better): c7g.4xlarge Graviton3: 40051.3 (SE +/- 305.31, N = 3); c6i.4xlarge Xeon: 41179.7 (SE +/- 110.01, N = 3); c6a.4xlarge EPYC: 41366.6 (SE +/- 27.66, N = 3); c6g.4xlarge Graviton2: 45955.7 (SE +/- 336.95, N = 3); a1.4xlarge Graviton: 171169.0 (SE +/- 825.35, N = 3).
Apache HTTP Server This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
Apache HTTP Server 2.4.48 - Concurrent Requests: 1000 (Requests Per Second; more is better): c6i.4xlarge Xeon: 79830.96 (SE +/- 335.63, N = 3); c7g.4xlarge Graviton3: 72719.33 (SE +/- 83.83, N = 3); c6a.4xlarge EPYC: 71537.11 (SE +/- 397.88, N = 3); c6g.4xlarge Graviton2: 46629.45 (SE +/- 276.10, N = 3); a1.4xlarge Graviton: 19278.68 (SE +/- 98.61, N = 3). 1. (CC) gcc options: -shared -fPIC -O2
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18 - Model: SqueezeNet (Microseconds; fewer is better): c6i.4xlarge Xeon: 2983.93 (SE +/- 3.54, N = 3); c6a.4xlarge EPYC: 3103.12 (SE +/- 1.37, N = 3); c7g.4xlarge Graviton3: 3257.94 (SE +/- 22.07, N = 3); c6g.4xlarge Graviton2: 3969.35 (SE +/- 37.23, N = 3); a1.4xlarge Graviton: 12014.70 (SE +/- 46.48, N = 3).
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 3.2 - Preset: Exhaustive (Seconds; fewer is better): c6i.4xlarge Xeon: 69.64 (SE +/- 0.04, N = 3); c6a.4xlarge EPYC: 72.39 (SE +/- 0.03, N = 3); c7g.4xlarge Graviton3: 139.38 (SE +/- 0.01, N = 3); c6g.4xlarge Graviton2: 159.20 (SE +/- 0.00, N = 3); a1.4xlarge Graviton: 277.77 (SE +/- 0.07, N = 3). 1. (CXX) g++ options: -O3 -flto -pthread
Rodinia Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds; fewer is better): c7g.4xlarge Graviton3: 10.48 (SE +/- 0.02, N = 3); c6g.4xlarge Graviton2: 17.04 (SE +/- 0.05, N = 3); c6i.4xlarge Xeon: 20.45 (SE +/- 0.02, N = 3); c6a.4xlarge EPYC: 21.79 (SE +/- 0.08, N = 3); a1.4xlarge Graviton: 41.45 (SE +/- 0.08, N = 3). 1. (CXX) g++ options: -O2 -lOpenCL
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
OpenSSL 3.0 - Algorithm: RSA4096 (verify/s; more is better): c7g.4xlarge Graviton3: 178460.4 (SE +/- 82.61, N = 3); c6i.4xlarge Xeon: 140964.4 (SE +/- 47.94, N = 3); c6a.4xlarge EPYC: 136784.2 (SE +/- 74.60, N = 3); c6g.4xlarge Graviton2: 53951.5 (SE +/- 3.30, N = 3); a1.4xlarge Graviton: 45328.6 (SE +/- 63.75, N = 3). -m64 -m64 1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl
libavif avifenc This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
libavif avifenc 0.10 - Encoder Speed: 0 (Seconds; fewer is better): c6a.4xlarge EPYC: 195.53 (SE +/- 0.62, N = 3); c6i.4xlarge Xeon: 204.99 (SE +/- 0.33, N = 3); c7g.4xlarge Graviton3: 256.84 (SE +/- 0.18, N = 3); c6g.4xlarge Graviton2: 406.94 (SE +/- 0.13, N = 3); a1.4xlarge Graviton: 768.30 (SE +/- 0.58, N = 3). 1. (CXX) g++ options: -O3 -fPIC -lm
Timed Node.js Compilation This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.
Timed Node.js Compilation 17.3 - Time To Compile (Seconds; fewer is better): c7g.4xlarge Graviton3: 497.58 (SE +/- 2.06, N = 3); c6i.4xlarge Xeon: 604.62 (SE +/- 0.42, N = 3); c6g.4xlarge Graviton2: 628.40 (SE +/- 0.37, N = 3); c6a.4xlarge EPYC: 664.35 (SE +/- 0.26, N = 3); a1.4xlarge Graviton: 1765.91 (SE +/- 1.80, N = 3).
PyBench This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
PyBench 2018-02-16 - Total For Average Test Times (Milliseconds; fewer is better): c6i.4xlarge Xeon: 997 (SE +/- 3.84, N = 3); c7g.4xlarge Graviton3: 1185 (SE +/- 0.33, N = 3); c6g.4xlarge Graviton2: 1741 (SE +/- 1.67, N = 3); c6a.4xlarge EPYC: 1961 (SE +/- 1.53, N = 3); a1.4xlarge Graviton: 3452 (SE +/- 18.15, N = 3).
PHPBench PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.
PHPBench 0.8.1 - PHP Benchmark Suite (Score; more is better): c6i.4xlarge Xeon: 828186 (SE +/- 983.65, N = 3); c7g.4xlarge Graviton3: 666484 (SE +/- 525.83, N = 3); c6a.4xlarge EPYC: 480741 (SE +/- 2681.41, N = 3); c6g.4xlarge Graviton2: 449855 (SE +/- 743.13, N = 3); a1.4xlarge Graviton: 241259 (SE +/- 816.27, N = 3).
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18 - Model: NASNet Mobile (Microseconds; fewer is better): c6a.4xlarge EPYC: 9266.86 (SE +/- 23.44, N = 3); c6i.4xlarge Xeon: 10900.60 (SE +/- 166.62, N = 14); c7g.4xlarge Graviton3: 11591.90 (SE +/- 121.56, N = 15); c6g.4xlarge Graviton2: 14985.40 (SE +/- 203.15, N = 15); a1.4xlarge Graviton: 30986.70 (SE +/- 49.84, N = 3).
NAS Parallel Benchmarks NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s; more is better): c6i.4xlarge Xeon: 1103.22 (SE +/- 19.93, N = 9); c7g.4xlarge Graviton3: 934.72 (SE +/- 0.39, N = 3); c6g.4xlarge Graviton2: 558.88 (SE +/- 0.23, N = 3); c6a.4xlarge EPYC: 466.21 (SE +/- 0.06, N = 3); a1.4xlarge Graviton: 339.20 (SE +/- 0.24, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
Ngspice Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
Ngspice 34 - Circuit: C2670 (Seconds; fewer is better): c6i.4xlarge Xeon: 147.89 (SE +/- 1.80, N = 4); c7g.4xlarge Graviton3: 198.22 (SE +/- 0.86, N = 3); c6a.4xlarge EPYC: 245.89 (SE +/- 1.17, N = 3); c6g.4xlarge Graviton2: 263.72 (SE +/- 0.91, N = 3); a1.4xlarge Graviton: 473.90 (SE +/- 3.48, N = 3). 1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
simdjson 1.0 - Throughput Test: LargeRandom (GB/s; more is better): c6a.4xlarge EPYC: 0.95 (SE +/- 0.00, N = 3); c6i.4xlarge Xeon: 0.86 (SE +/- 0.00, N = 3); c7g.4xlarge Graviton3: 0.70 (SE +/- 0.00, N = 3); c6g.4xlarge Graviton2: 0.49 (SE +/- 0.00, N = 3); a1.4xlarge Graviton: 0.30 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3
SecureMark SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.
SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks; more is better): c6i.4xlarge Xeon: 230549 (SE +/- 864.34, N = 3); c6a.4xlarge EPYC: 213288 (SE +/- 3310.19, N = 9); c7g.4xlarge Graviton3: 183708 (SE +/- 773.26, N = 3); c6g.4xlarge Graviton2: 120301 (SE +/- 23.07, N = 3); a1.4xlarge Graviton: 74356 (SE +/- 59.40, N = 3). 1. (CC) gcc options: -pedantic -O3
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better): c6a.4xlarge EPYC: 509746667 (SE +/- 489364.67, N = 3); c7g.4xlarge Graviton3: 383606667 (SE +/- 400097.21, N = 3); c6i.4xlarge Xeon: 373100000 (SE +/- 41633.32, N = 3); c6g.4xlarge Graviton2: 262890000 (SE +/- 35118.85, N = 3); a1.4xlarge Graviton: 165513333 (SE +/- 8819.17, N = 3). 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
Build2 This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.
Build2 0.13 - Time To Compile (Seconds; fewer is better): c7g.4xlarge Graviton3: 115.02 (SE +/- 0.64, N = 3); c6i.4xlarge Xeon: 136.80 (SE +/- 0.69, N = 3); c6g.4xlarge Graviton2: 142.28 (SE +/- 0.70, N = 3); c6a.4xlarge EPYC: 150.99 (SE +/- 0.87, N = 3); a1.4xlarge Graviton: 353.91 (SE +/- 1.89, N = 3).
7-Zip Compression This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
7-Zip Compression 21.06 - Test: Compression Rating (MIPS; more is better): c7g.4xlarge Graviton3: 97824 (SE +/- 159.36, N = 3); c6g.4xlarge Graviton2: 71285 (SE +/- 44.77, N = 3); c6i.4xlarge Xeon: 66631 (SE +/- 174.34, N = 3); c6a.4xlarge EPYC: 62562 (SE +/- 16.02, N = 3); a1.4xlarge Graviton: 32498 (SE +/- 91.00, N = 3). 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Ngspice Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
Ngspice 34 - Circuit: C7552 (Seconds; fewer is better): c6i.4xlarge Xeon: 161.08 (SE +/- 0.33, N = 3); c6a.4xlarge EPYC: 180.36 (SE +/- 0.66, N = 3); c7g.4xlarge Graviton3: 191.29 (SE +/- 1.94, N = 3); c6g.4xlarge Graviton2: 255.21 (SE +/- 2.40, N = 7); a1.4xlarge Graviton: 480.79 (SE +/- 1.19, N = 3). 1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds; fewer is better): c6i.4xlarge Xeon: 41.81 (SE +/- 0.34, N = 3); c7g.4xlarge Graviton3: 48.21 (SE +/- 0.01, N = 3); c6a.4xlarge EPYC: 48.68 (SE +/- 0.06, N = 3); c6g.4xlarge Graviton2: 66.15 (SE +/- 0.01, N = 3); a1.4xlarge Graviton: 124.71 (SE +/- 0.09, N = 3). -ltiff -ltiff -ltiff -ltiff 1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16
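As a point of reference for what the lossless settings exercise, the sketch below calls libwebp's one-shot lossless RGBA encoder; the synthetic pixel buffer is purely illustrative, whereas the benchmark feeds the 6000x4000 JPEG sample through the cwebp tool.

// Illustrative libwebp lossless encode of a synthetic RGBA buffer (not the cwebp run itself).
#include <webp/encode.h>
#include <vector>
#include <cstdint>
#include <cstdio>

int main() {
    const int width = 640, height = 480, stride = width * 4;
    std::vector<uint8_t> rgba(static_cast<size_t>(stride) * height, 255); // solid-white placeholder image
    uint8_t* output = nullptr;
    size_t size = WebPEncodeLosslessRGBA(rgba.data(), width, height, stride, &output);
    if (size == 0) {
        std::printf("encode failed\n");
        return 1;
    }
    std::printf("lossless WebP: %zu bytes\n", size);
    WebPFree(output);
    return 0;
}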
Timed Gem5 Compilation This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research. Gem5 is widely used for computer architecture research within the industry, academia, and more. Learn more via the OpenBenchmarking.org test page.
Timed Gem5 Compilation 21.2 - Time To Compile (Seconds; fewer is better): c7g.4xlarge Graviton3: 391.17 (SE +/- 1.33, N = 3); c6i.4xlarge Xeon: 469.94 (SE +/- 0.59, N = 3); c6g.4xlarge Graviton2: 488.81 (SE +/- 0.53, N = 3); c6a.4xlarge EPYC: 515.20 (SE +/- 0.79, N = 3); a1.4xlarge Graviton: 1155.62 (SE +/- 0.78, N = 3).
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds; fewer is better): c6i.4xlarge Xeon: 21.12 (SE +/- 0.03, N = 3); c7g.4xlarge Graviton3: 22.77 (SE +/- 0.09, N = 3); c6a.4xlarge EPYC: 26.71 (SE +/- 0.17, N = 15); c6g.4xlarge Graviton2: 31.08 (SE +/- 0.02, N = 3); a1.4xlarge Graviton: 61.80 (SE +/- 0.06, N = 3). -ltiff -ltiff -ltiff -ltiff 1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16
libavif avifenc This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds; fewer is better): c7g.4xlarge Graviton3: 11.91 (SE +/- 0.01, N = 3); c6a.4xlarge EPYC: 16.39 (SE +/- 0.12, N = 3); c6g.4xlarge Graviton2: 16.52 (SE +/- 0.17, N = 3); c6i.4xlarge Xeon: 17.53 (SE +/- 0.03, N = 3); a1.4xlarge Graviton: 33.99 (SE +/- 0.31, N = 3). 1. (CXX) g++ options: -O3 -fPIC -lm
nginx This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
nginx 1.21.1 - Concurrent Requests: 1000 (Requests Per Second; more is better): c6a.4xlarge EPYC: 388657.76 (SE +/- 781.49, N = 3); c6i.4xlarge Xeon: 347345.49 (SE +/- 2637.25, N = 3); c7g.4xlarge Graviton3: 346814.75 (SE +/- 1410.11, N = 3); c6g.4xlarge Graviton2: 308213.13 (SE +/- 1677.89, N = 3); a1.4xlarge Graviton: 138205.11 (SE +/- 66.96, N = 3). 1. (CC) gcc options: -lcrypt -lz -O3 -march=native
nginx 1.21.1 - Concurrent Requests: 500 (Requests Per Second; more is better): c6a.4xlarge EPYC: 389030.11 (SE +/- 771.95, N = 3); c6i.4xlarge Xeon: 351672.92 (SE +/- 1620.39, N = 3); c7g.4xlarge Graviton3: 346613.34 (SE +/- 1017.52, N = 3); c6g.4xlarge Graviton2: 310596.58 (SE +/- 3783.68, N = 3); a1.4xlarge Graviton: 139414.84 (SE +/- 141.15, N = 3). 1. (CC) gcc options: -lcrypt -lz -O3 -march=native
nginx 1.21.1 - Concurrent Requests: 200 (Requests Per Second; more is better): c6a.4xlarge EPYC: 390932.79 (SE +/- 1242.81, N = 3); c6i.4xlarge Xeon: 356829.93 (SE +/- 1582.66, N = 3); c7g.4xlarge Graviton3: 352380.98 (SE +/- 3986.77, N = 3); c6g.4xlarge Graviton2: 308938.67 (SE +/- 1347.28, N = 3); a1.4xlarge Graviton: 141436.20 (SE +/- 133.96, N = 3). 1. (CC) gcc options: -lcrypt -lz -O3 -march=native
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s; more is better): c7g.4xlarge Graviton3: 3050.3 (SE +/- 7.75, N = 3); c6a.4xlarge EPYC: 2907.5 (SE +/- 3.25, N = 3); c6i.4xlarge Xeon: 2582.0 (SE +/- 24.18, N = 3); c6g.4xlarge Graviton2: 2051.6 (SE +/- 12.10, N = 3); a1.4xlarge Graviton: 1121.7 (SE +/- 4.74, N = 3). -llzma -llzma -llzma -llzma 1. (CC) gcc options: -O3 -pthread -lz
nginx This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
nginx 1.21.1 - Concurrent Requests: 100 (Requests Per Second; more is better): c6a.4xlarge EPYC: 388010.76 (SE +/- 436.72, N = 3); c6i.4xlarge Xeon: 356302.84 (SE +/- 1727.81, N = 3); c7g.4xlarge Graviton3: 345710.87 (SE +/- 2009.97, N = 3); c6g.4xlarge Graviton2: 307349.36 (SE +/- 3992.58, N = 3); a1.4xlarge Graviton: 143155.48 (SE +/- 22.67, N = 3). 1. (CC) gcc options: -lcrypt -lz -O3 -march=native
TSCP This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.
TSCP 1.81 - AI Chess Performance (OpenBenchmarking.org; Nodes Per Second, More Is Better)
  c6a.4xlarge EPYC: 1442631 (SE +/- 4180.17, N = 5)
  c7g.4xlarge Graviton3: 1370094 (SE +/- 0.00, N = 5)
  c6i.4xlarge Xeon: 1272596 (SE +/- 1099.67, N = 5)
  c6g.4xlarge Graviton2: 872313 (SE +/- 338.27, N = 5)
  a1.4xlarge Graviton: 538500 (SE +/- 196.86, N = 5)
  1. (CC) gcc options: -O3 -march=native
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (OpenBenchmarking.org; MB/s, More Is Better)
  c7g.4xlarge Graviton3: 3240.6 (SE +/- 6.93, N = 3)
  c6a.4xlarge EPYC: 2826.0 (SE +/- 6.53, N = 3)
  c6i.4xlarge Xeon: 2666.1 (SE +/- 7.82, N = 3)
  c6g.4xlarge Graviton2: 2196.3 (SE +/- 2.93, N = 3)
  a1.4xlarge Graviton: 1213.9 (SE +/- 15.28, N = 3)
  Per-configuration notes: -llzma -llzma -llzma -llzma
  1. (CC) gcc options: -O3 -pthread -lz
Stockfish This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
Stockfish 13 - Total Time (OpenBenchmarking.org; Nodes Per Second, More Is Better)
  c7g.4xlarge Graviton3: 27608891 (SE +/- 153578.64, N = 3)
  c6a.4xlarge EPYC: 23857623 (SE +/- 149731.77, N = 3)
  c6i.4xlarge Xeon: 22081961 (SE +/- 242448.39, N = 3)
  c6g.4xlarge Graviton2: 21679245 (SE +/- 292329.99, N = 3)
  a1.4xlarge Graviton: 10980430 (SE +/- 123749.22, N = 3)
  Per-configuration notes: -m64 -msse -msse3 -mpopcnt -mavx2 -msse4.1 -mssse3 -msse2 -mbmi2
  Per-configuration notes: -m64 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2
  1. (CXX) g++ options: -lgcov -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -flto -flto=jobserver
Rodinia Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
Rodinia 3.1 - Test: OpenMP LavaMD (OpenBenchmarking.org; Seconds, Fewer Is Better)
  c7g.4xlarge Graviton3: 143.33 (SE +/- 0.15, N = 3)
  c6g.4xlarge Graviton2: 215.67 (SE +/- 0.01, N = 3)
  c6a.4xlarge EPYC: 224.33 (SE +/- 0.03, N = 3)
  c6i.4xlarge Xeon: 281.39 (SE +/- 0.14, N = 3)
  a1.4xlarge Graviton: 360.30 (SE +/- 0.07, N = 3)
  1. (CXX) g++ options: -O2 -lOpenCL
POV-Ray This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.
POV-Ray 3.7.0.7 - Trace Time (OpenBenchmarking.org; Seconds, Fewer Is Better)
  c7g.4xlarge Graviton3: 37.86 (SE +/- 0.01, N = 3)
  c6a.4xlarge EPYC: 49.44 (SE +/- 0.18, N = 3)
  c6g.4xlarge Graviton2: 51.05 (SE +/- 0.00, N = 3)
  c6i.4xlarge Xeon: 52.78 (SE +/- 0.12, N = 3)
  a1.4xlarge Graviton: 93.80 (SE +/- 0.94, N = 15)
  Per-configuration notes: -march=native -march=native
  1. (CXX) g++ options: -pipe -O3 -ffast-math -R/usr/lib -lXpm -lSM -lICE -lX11 -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (OpenBenchmarking.org; MB/s, More Is Better)
  c7g.4xlarge Graviton3: 39.5 (SE +/- 0.23, N = 3)
  c6i.4xlarge Xeon: 33.8 (SE +/- 0.10, N = 3)
  c6g.4xlarge Graviton2: 31.0 (SE +/- 0.03, N = 3)
  c6a.4xlarge EPYC: 25.9 (SE +/- 0.27, N = 3)
  a1.4xlarge Graviton: 16.0 (SE +/- 0.00, N = 3)
  Per-configuration notes: -llzma -llzma -llzma -llzma
  1. (CC) gcc options: -O3 -pthread -lz
Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (OpenBenchmarking.org; MB/s, More Is Better)
  c7g.4xlarge Graviton3: 41.2 (SE +/- 0.00, N = 3)
  c6i.4xlarge Xeon: 38.1 (SE +/- 0.40, N = 3)
  c6g.4xlarge Graviton2: 34.6 (SE +/- 0.06, N = 3)
  c6a.4xlarge EPYC: 30.0 (SE +/- 0.21, N = 3)
  a1.4xlarge Graviton: 16.9 (SE +/- 0.03, N = 3)
  Per-configuration notes: -llzma -llzma -llzma -llzma
  1. (CC) gcc options: -O3 -pthread -lz
Stress-NG Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Stress-NG 0.14 - Test: Crypto (OpenBenchmarking.org; Bogo Ops/s, More Is Better)
  c7g.4xlarge Graviton3: 23181.81 (SE +/- 32.01, N = 3)
  c6g.4xlarge Graviton2: 17924.18 (SE +/- 92.83, N = 3)
  c6a.4xlarge EPYC: 13556.06 (SE +/- 3.93, N = 3)
  a1.4xlarge Graviton: 11985.38 (SE +/- 6.29, N = 3)
  c6i.4xlarge Xeon: 10210.34 (SE +/- 5.89, N = 3)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
asmFish This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.
asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (OpenBenchmarking.org; Nodes/second, More Is Better)
  c7g.4xlarge Graviton3: 32134123 (SE +/- 104795.40, N = 3)
  c6g.4xlarge Graviton2: 26540482 (SE +/- 359309.26, N = 3)
  c6a.4xlarge EPYC: 26187688 (SE +/- 303648.79, N = 3)
  c6i.4xlarge Xeon: 23746200 (SE +/- 325631.00, N = 3)
  a1.4xlarge Graviton: 15331550 (SE +/- 106812.26, N = 3)
Google SynthMark SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.
Google SynthMark 20201109 - Test: VoiceMark_100 (OpenBenchmarking.org; Voices, More Is Better)
  c7g.4xlarge Graviton3: 675.64 (SE +/- 0.32, N = 3)
  c6a.4xlarge EPYC: 663.07 (SE +/- 7.09, N = 3)
  c6i.4xlarge Xeon: 565.69 (SE +/- 2.00, N = 3)
  c6g.4xlarge Graviton2: 470.39 (SE +/- 0.33, N = 3)
  a1.4xlarge Graviton: 331.07 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
OpenSSL 3.0 - Algorithm: SHA256 (OpenBenchmarking.org; byte/s, More Is Better)
  c7g.4xlarge Graviton3: 13722045973 (SE +/- 7739237.92, N = 3)
  c6a.4xlarge EPYC: 11691403353 (SE +/- 8616254.20, N = 3)
  c6g.4xlarge Graviton2: 10723184083 (SE +/- 47755430.47, N = 3)
  c6i.4xlarge Xeon: 7096993937 (SE +/- 606684.16, N = 3)
  a1.4xlarge Graviton: 6785689517 (SE +/- 12563225.46, N = 3)
  Per-configuration notes: -m64 -m64
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl
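As a rough illustration of the code path the SHA256 result stresses, below is a minimal sketch of a one-shot SHA-256 digest through OpenSSL 3.0's EVP interface. The input buffer is a placeholder of my own; the benchmark itself drives the same primitives through the built-in "openssl speed" harness.

/* Minimal sketch: one-shot SHA-256 via OpenSSL 3.0's EVP interface
 * (compile with: gcc sha256_demo.c -lcrypto). */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void)
{
    const unsigned char msg[] = "hash me";
    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;

    /* EVP_Digest does init, update, and final in a single call. */
    if (!EVP_Digest(msg, strlen((const char *)msg), digest, &digest_len,
                    EVP_sha256(), NULL))
        return 1;

    for (unsigned int i = 0; i < digest_len; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}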
Stress-NG Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Stress-NG 0.14 - Test: Vector Math (OpenBenchmarking.org; Bogo Ops/s, More Is Better)
  c7g.4xlarge Graviton3: 55258.17 (SE +/- 17.05, N = 3)
  c6a.4xlarge EPYC: 53787.61 (SE +/- 2.46, N = 3)
  c6i.4xlarge Xeon: 40140.30 (SE +/- 28.50, N = 3)
  c6g.4xlarge Graviton2: 37753.89 (SE +/- 15.72, N = 3)
  a1.4xlarge Graviton: 27341.47 (SE +/- 0.49, N = 3)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
NAS Parallel Benchmarks NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (OpenBenchmarking.org; Total Mop/s, More Is Better)
  c6i.4xlarge Xeon: 38136.77 (SE +/- 160.86, N = 3)
  c6a.4xlarge EPYC: 25140.55 (SE +/- 18.06, N = 3)
  c7g.4xlarge Graviton3: 7730.41 (SE +/- 1.96, N = 3)
  c6g.4xlarge Graviton2: 5133.89 (SE +/- 0.90, N = 3)
  a1.4xlarge Graviton: 2558.12 (SE +/- 0.15, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.2
LeelaChessZero LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
LeelaChessZero 0.28 - Backend: Eigen (OpenBenchmarking.org; Nodes Per Second, More Is Better)
  c6i.4xlarge Xeon: 1466 (SE +/- 13.37, N = 3)
  c7g.4xlarge Graviton3: 1189 (SE +/- 9.70, N = 3)
  c6a.4xlarge EPYC: 1001 (SE +/- 11.74, N = 9)
  c6g.4xlarge Graviton2: 834 (SE +/- 12.00, N = 3)
  a1.4xlarge Graviton: 128 (SE +/- 0.67, N = 3)
  1. (CXX) g++ options: -flto -pthread
LeelaChessZero 0.28 - Backend: BLAS (OpenBenchmarking.org; Nodes Per Second, More Is Better)
  c6i.4xlarge Xeon: 1397 (SE +/- 12.41, N = 9)
  c7g.4xlarge Graviton3: 1103 (SE +/- 6.44, N = 3)
  c6a.4xlarge EPYC: 1091 (SE +/- 12.82, N = 9)
  c6g.4xlarge Graviton2: 864 (SE +/- 10.22, N = 4)
  a1.4xlarge Graviton: 135 (SE +/- 0.88, N = 3)
  1. (CXX) g++ options: -flto -pthread
Coremark This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.
Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (OpenBenchmarking.org; Iterations/Sec, More Is Better)
  c7g.4xlarge Graviton3: 405413.86 (SE +/- 3211.91, N = 3)
  c6a.4xlarge EPYC: 345133.44 (SE +/- 2163.00, N = 3)
  c6g.4xlarge Graviton2: 315464.34 (SE +/- 49.84, N = 3)
  c6i.4xlarge Xeon: 285378.84 (SE +/- 80.93, N = 3)
  a1.4xlarge Graviton: 203869.40 (SE +/- 116.54, N = 3)
  1. (CC) gcc options: -O2 -lrt" -lrt
N-Queens This is a test of the OpenMP version of a test that solves the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.
N-Queens 1.0 - Elapsed Time (OpenBenchmarking.org; Seconds, Fewer Is Better)
  c6a.4xlarge EPYC: 16.38 (SE +/- 0.00, N = 3)
  c6i.4xlarge Xeon: 18.84 (SE +/- 0.00, N = 3)
  c7g.4xlarge Graviton3: 21.54 (SE +/- 0.00, N = 3)
  c6g.4xlarge Graviton2: 23.14 (SE +/- 0.00, N = 3)
  a1.4xlarge Graviton: 32.29 (SE +/- 0.00, N = 3)
  1. (CC) gcc options: -static -fopenmp -O3 -march=native
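The N-Queens test is an OpenMP solver for an 18x18 board. As an illustrative sketch only (not the test's actual source), the code below shows the usual bitmask formulation with the first-row placements spread across threads; the board size here is kept small so it finishes quickly.

/* Illustrative sketch of an OpenMP N-queens counter in the spirit of this
 * test: first-row column choices are distributed across threads and each
 * thread counts solutions recursively with bitmasks.
 * Compile with: gcc -fopenmp -O3 nqueens_demo.c */
#include <stdio.h>
#include <omp.h>

static long solve(int n, unsigned cols, unsigned diag1, unsigned diag2)
{
    if (cols == (1u << n) - 1)          /* every column filled: one solution */
        return 1;

    long count = 0;
    unsigned free_cells = ~(cols | diag1 | diag2) & ((1u << n) - 1);
    while (free_cells) {
        unsigned bit = free_cells & -free_cells;   /* lowest free square */
        free_cells -= bit;
        count += solve(n, cols | bit, (diag1 | bit) << 1, (diag2 | bit) >> 1);
    }
    return count;
}

int main(void)
{
    const int n = 12;                    /* small board for a quick run; the test uses 18 */
    long total = 0;

    /* Parallelize over the queen placement in the first row. */
    #pragma omp parallel for reduction(+:total) schedule(dynamic)
    for (int col = 0; col < n; col++) {
        unsigned bit = 1u << col;
        total += solve(n, bit, bit << 1, bit >> 1);
    }

    printf("%d-queens solutions: %ld\n", n, total);
    return 0;
}

The m-queens result further down is the same problem solved by a different multi-threaded OpenMP implementation.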
7-Zip Compression This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
7-Zip Compression 21.06 - Test: Decompression Rating (OpenBenchmarking.org; MIPS, More Is Better)
  c7g.4xlarge Graviton3: 73054 (SE +/- 12.88, N = 3)
  c6g.4xlarge Graviton2: 59445 (SE +/- 239.68, N = 3)
  c6a.4xlarge EPYC: 57318 (SE +/- 142.56, N = 3)
  c6i.4xlarge Xeon: 45653 (SE +/- 35.00, N = 3)
  a1.4xlarge Graviton: 40891 (SE +/- 31.21, N = 3)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
m-queens A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
m-queens 1.2 - Time To Solve (OpenBenchmarking.org; Seconds, Fewer Is Better)
  c7g.4xlarge Graviton3: 66.82 (SE +/- 0.00, N = 3)
  c6a.4xlarge EPYC: 72.33 (SE +/- 0.02, N = 3)
  c6g.4xlarge Graviton2: 75.22 (SE +/- 0.00, N = 3)
  c6i.4xlarge Xeon: 91.23 (SE +/- 0.06, N = 3)
  a1.4xlarge Graviton: 110.37 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -fopenmp -O2 -march=native
Stress-NG Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Stress-NG 0.14 - Test: IO_uring (OpenBenchmarking.org; Bogo Ops/s, More Is Better)
  c6i.4xlarge Xeon: 1037943.37 (SE +/- 405.56, N = 3)
  a1.4xlarge Graviton: 918172.37 (SE +/- 3840.04, N = 3)
  c7g.4xlarge Graviton3: 843015.78 (SE +/- 614.16, N = 3)
  c6g.4xlarge Graviton2: 770521.81 (SE +/- 2395.13, N = 3)
  c6a.4xlarge EPYC: 768723.46 (SE +/- 713.03, N = 3)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
ONNX Runtime ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (OpenBenchmarking.org; Inferences Per Minute, More Is Better)
  c6a.4xlarge EPYC: 3696 (SE +/- 234.97, N = 12)
  c6i.4xlarge Xeon: 3450 (SE +/- 1.61, N = 3)
  c7g.4xlarge Graviton3: 2817 (SE +/- 1.86, N = 3)
  c6g.4xlarge Graviton2: 2072 (SE +/- 1.74, N = 3)
  a1.4xlarge Graviton: 757 (SE +/- 0.50, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (OpenBenchmarking.org; Inferences Per Minute, More Is Better)
  c6i.4xlarge Xeon: 1374 (SE +/- 91.51, N = 12)
  c6a.4xlarge EPYC: 1192 (SE +/- 82.60, N = 12)
  c7g.4xlarge Graviton3: 609 (SE +/- 0.00, N = 3)
  c6g.4xlarge Graviton2: 334 (SE +/- 0.17, N = 3)
  a1.4xlarge Graviton: 165 (SE +/- 0.50, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (OpenBenchmarking.org; Inferences Per Minute, More Is Better)
  c6i.4xlarge Xeon: 139 (SE +/- 0.60, N = 3)
  c6a.4xlarge EPYC: 65 (SE +/- 5.55, N = 12)
  c7g.4xlarge Graviton3: 38 (SE +/- 0.00, N = 3)
  c6g.4xlarge Graviton2: 28 (SE +/- 0.00, N = 3)
  a1.4xlarge Graviton: 10 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (OpenBenchmarking.org; Inferences Per Minute, More Is Better)
  c6i.4xlarge Xeon: 773 (SE +/- 50.92, N = 12)
  c6a.4xlarge EPYC: 488 (SE +/- 0.58, N = 3)
  c7g.4xlarge Graviton3: 407 (SE +/- 0.17, N = 3)
  c6g.4xlarge Graviton2: 322 (SE +/- 0.17, N = 3)
  a1.4xlarge Graviton: 115 (SE +/- 0.88, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (OpenBenchmarking.org; Inferences Per Minute, More Is Better)
  c7g.4xlarge Graviton3: 7990 (SE +/- 2.40, N = 3)
  c6i.4xlarge Xeon: 7944 (SE +/- 322.41, N = 12)
  c6g.4xlarge Graviton2: 6948 (SE +/- 3.50, N = 3)
  c6a.4xlarge EPYC: 5617 (SE +/- 75.29, N = 12)
  a1.4xlarge Graviton: 2312 (SE +/- 2.20, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (OpenBenchmarking.org; Microseconds, Fewer Is Better)
  c7g.4xlarge Graviton3: 1502.95 (SE +/- 17.76, N = 3)
  c6g.4xlarge Graviton2: 1980.24 (SE +/- 14.44, N = 3)
  c6a.4xlarge EPYC: 3847.96 (SE +/- 53.31, N = 15)
  c6i.4xlarge Xeon: 3967.39 (SE +/- 80.05, N = 12)
  a1.4xlarge Graviton: 5724.66 (SE +/- 20.90, N = 3)
C-Ray This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.
C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (OpenBenchmarking.org; Seconds, Fewer Is Better)
  c7g.4xlarge Graviton3: 38.52 (SE +/- 0.02, N = 3)
  c6g.4xlarge Graviton2: 62.32 (SE +/- 0.03, N = 3)
  c6a.4xlarge EPYC: 69.35 (SE +/- 0.77, N = 5)
  c6i.4xlarge Xeon: 92.55 (SE +/- 0.04, N = 3)
  a1.4xlarge Graviton: 104.76 (SE +/- 2.00, N = 15)
  1. (CC) gcc options: -lm -lpthread -O3
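C-Ray's runtime is dominated by per-ray floating-point math. As an illustrative sketch only (not C-Ray's actual source), the ray/sphere intersection below shows the kind of scalar floating-point work each ray performs.

/* Illustrative sketch of the ray/sphere intersection math that dominates a
 * simple raytracer's runtime (compile with: gcc ray_demo.c -lm). */
#include <math.h>
#include <stdio.h>

struct vec3 { double x, y, z; };

static double dot(struct vec3 a, struct vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Returns the distance t along the ray to the nearest hit, or -1.0 on a miss. */
static double ray_sphere(struct vec3 orig, struct vec3 dir,
                         struct vec3 center, double radius)
{
    struct vec3 oc = { orig.x - center.x, orig.y - center.y, orig.z - center.z };
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;   /* quadratic discriminant */
    if (disc < 0.0)
        return -1.0;                     /* no real roots: the ray misses */
    double t = (-b - sqrt(disc)) / (2.0 * a);
    return t >= 0.0 ? t : -1.0;
}

int main(void)
{
    struct vec3 origin = { 0, 0, 0 };
    struct vec3 dir = { 0, 0, -1 };      /* unit ray pointing down -Z */
    struct vec3 center = { 0, 0, -5 };

    printf("hit at t = %f\n", ray_sphere(origin, dir, center, 1.0));
    return 0;
}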
Rodinia Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
Rodinia 3.1 - Test: OpenMP Streamcluster (OpenBenchmarking.org; Seconds, Fewer Is Better)
  c7g.4xlarge Graviton3: 13.30 (SE +/- 0.33, N = 12)
  c6g.4xlarge Graviton2: 15.48 (SE +/- 0.26, N = 15)
  c6a.4xlarge EPYC: 18.38 (SE +/- 0.05, N = 3)
  c6i.4xlarge Xeon: 23.51 (SE +/- 0.07, N = 3)
  a1.4xlarge Graviton: 47.43 (SE +/- 0.02, N = 3)
  1. (CXX) g++ options: -O2 -lOpenCL
a1.4xlarge Graviton Processor: ARMv8 Cortex-A72 (16 Cores), Motherboard: Amazon EC2 a1.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Not affected + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Branch predictor hardening BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 25 May 2022 00:29 by user ubuntu.
c6g.4xlarge Graviton2 Processor: ARMv8 Neoverse-N1 (16 Cores), Motherboard: Amazon EC2 c6g.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 25 May 2022 13:01 by user ubuntu.
c7g.4xlarge Graviton3 Processor: ARMv8 Neoverse-V1 (16 Cores), Motherboard: Amazon EC2 c7g.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 24 May 2022 11:30 by user ubuntu.
c6a.4xlarge EPYC Processor: AMD EPYC 7R13 (8 Cores / 16 Threads), Motherboard: Amazon EC2 c6a.4xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (x86_64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: CPU Microcode: 0xa001144
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 May 2022 00:31 by user ubuntu.
c6i.4xlarge Xeon Processor: Intel Xeon Platinum 8375C (8 Cores / 16 Threads), Motherboard: Amazon EC2 c6i.4xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (x86_64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: CPU Microcode: 0xd000331
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 May 2022 00:32 by user ubuntu.