Tau T2A: 32 vCPUs vs. m6g.8xlarge benchmark comparison: Google Cloud and Amazon EC2 Arm instance testing on Ubuntu 22.04 via the Phoronix Test Suite.
Tau T2A: 32 vCPUs Processor: ARMv8 Neoverse-N1 (32 Cores), Motherboard: KVM Google Compute Engine, Memory: 128GB, Disk: 215GB nvme_card-pd, Network: Google Compute Engine Virtual
OS: Ubuntu 22.04, Kernel: 5.15.0-1016-gcp (aarch64), Compiler: GCC 12.0.1 20220319, File-System: ext4, System Layer: KVM
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
m6g.8xlarge Processor: ARMv8 Neoverse-N1 (32 Cores), Motherboard: Amazon EC2 m6g.8xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 128GB, Disk: 215GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1009-aws (aarch64), Compiler: GCC 12.0.1 20220319, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Tau T2A: 32 vCPUs vs. m6g.8xlarge Comparison (Phoronix Test Suite)
[Overview chart: relative percentage differences between the two instances across every test in this comparison, ranging from roughly 2% up to 1132.9%. The largest deltas include Stress-NG CPU Cache (1132.9%), PostgreSQL pgbench 100 - 250 - Read Write - Average Latency (153.7%), PostgreSQL pgbench 100 - 250 - Read Write (142.1%), and Facebook RocksDB Update Rand (99.3%). Per-test results follow below.]
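The percentages in the overview chart above are relative differences between the two instances' results, with the better result in the numerator. A minimal sketch of that calculation, using the Facebook RocksDB Update Random throughput numbers reported later in this comparison:

```python
# Sketch: how the overview chart's percentages are derived. Each bar is
# the relative difference between the two results, with the better
# result in the numerator. Values below are the RocksDB Update Random
# results from this comparison (ops/s, higher is better).
def relative_delta(better: float, worse: float) -> float:
    """Percentage by which `better` exceeds `worse`."""
    return (better / worse - 1.0) * 100.0

m6g = 421971   # m6g.8xlarge, ops/s
t2a = 211735   # Tau T2A: 32 vCPUs, ops/s
print(f"{relative_delta(m6g, t2a):.1f}%")  # ~99.3%, matching the chart
```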
Result summary (flattened side-by-side table; each system's values follow the test list below, in the same order):

Tests: aircrack-ng: cassandra: Writes spark: 1000000 - 100 - SHA-512 Benchmark Time spark: 1000000 - 100 - Calculate Pi Benchmark spark: 1000000 - 100 - Calculate Pi Benchmark Using Dataframe spark: 1000000 - 100 - Group By Test Time spark: 1000000 - 100 - Repartition Test Time spark: 1000000 - 100 - Inner Join Test Time spark: 1000000 - 100 - Broadcast Inner Join Test Time spark: 1000000 - 2000 - SHA-512 Benchmark Time spark: 1000000 - 2000 - Calculate Pi Benchmark spark: 1000000 - 2000 - Calculate Pi Benchmark Using Dataframe spark: 1000000 - 2000 - Group By Test Time spark: 1000000 - 2000 - Repartition Test Time spark: 1000000 - 2000 - Inner Join Test Time spark: 1000000 - 2000 - Broadcast Inner Join Test Time spark: 40000000 - 100 - SHA-512 Benchmark Time spark: 40000000 - 100 - Calculate Pi Benchmark spark: 40000000 - 100 - Calculate Pi Benchmark Using Dataframe spark: 40000000 - 100 - Group By Test Time spark: 40000000 - 100 - Repartition Test Time spark: 40000000 - 100 - Inner Join Test Time spark: 40000000 - 100 - Broadcast Inner Join Test Time spark: 40000000 - 2000 - SHA-512 Benchmark Time spark: 40000000 - 2000 - Calculate Pi Benchmark spark: 40000000 - 2000 - Calculate Pi Benchmark Using Dataframe spark: 40000000 - 2000 - Group By Test Time spark: 40000000 - 2000 - Repartition Test Time spark: 40000000 - 2000 - Inner Join Test Time spark: 40000000 - 2000 - Broadcast Inner Join Test Time askap: tConvolve MT - Gridding askap: tConvolve MT - Degridding askap: tConvolve MPI - Degridding askap: tConvolve MPI - Gridding askap: tConvolve OpenMP - Gridding askap: tConvolve OpenMP - Degridding askap: Hogbom Clean OpenMP astcenc: Medium astcenc: Thorough astcenc: Exhaustive blender: BMW27 - CPU-Only blender: Classroom - CPU-Only blender: Fishy Cat - CPU-Only coremark: CoreMark Size 666 - Iterations Per Second dacapobench: H2 dacapobench: Jython dacapobench: Tradesoap dacapobench: Tradebeans rocksdb: Rand Read rocksdb: Update Rand rocksdb: Read While Writing rocksdb: Read Rand Write Rand gpaw: Carbon Nanotube graph500: 26 graph500: 26 graph500: 26 graph500: 26 gromacs: MPI CPU - water_GMX50_bare hpcg: lammps: 20k Atoms lammps: Rhodopsin Protein avifenc: 0 avifenc: 2 avifenc: 6 avifenc: 6, Lossless avifenc: 10, Lossless npb: BT.C npb: CG.C npb: EP.D npb: FT.C npb: IS.D npb: LU.C npb: MG.C npb: SP.B npb: SP.C nginx: 500 nginx: 1000 openfoam: drivaerFastback, Medium Mesh Size - Mesh Time openfoam: drivaerFastback, Medium Mesh Size - Execution Time openssl: SHA256 openssl: RSA4096 openssl: RSA4096 pgbench: 100 - 100 - Read Only pgbench: 100 - 100 - Read Only - Average Latency pgbench: 100 - 250 - Read Only pgbench: 100 - 250 - Read Only - Average Latency pgbench: 100 - 100 - Read Write pgbench: 100 - 100 - Read Write - Average Latency pgbench: 100 - 250 - Read Write pgbench: 100 - 250 - Read Write - Average Latency pyhpc: CPU - Numpy - 16384 - Equation of State pyhpc: CPU - Numpy - 16384 - Isoneutral Mixing pyhpc: CPU - Numpy - 1048576 - Equation of State pyhpc: CPU - Numpy - 1048576 - Isoneutral Mixing pyhpc: CPU - Numpy - 4194304 - Equation of State pyhpc: CPU - Numpy - 4194304 - Isoneutral Mixing redis: GET redis: SET renaissance: Scala Dotty renaissance: Rand Forest renaissance: ALS Movie Lens renaissance: Apache Spark ALS renaissance: Apache Spark Bayes renaissance: Savina Reactors.IO renaissance: Apache Spark PageRank renaissance: Finagle HTTP Requests renaissance: In-Memory Database Shootout renaissance: Akka Unbalanced Cobwebbed Tree renaissance: Genetic Algorithm Using Jenetics + Futures spec-jbb2015: SPECjbb2015-Composite max-jOPS spec-jbb2015: SPECjbb2015-Composite critical-jOPS stress-ng: NUMA stress-ng: Futex stress-ng: CPU Cache stress-ng: CPU Stress stress-ng: Matrix Math stress-ng: Vector Math stress-ng: System V Message Passing sysbench: CPU tensorflow-lite: SqueezeNet tensorflow-lite: Inception V4 tensorflow-lite: NASNet Mobile tensorflow-lite: Mobilenet Float tensorflow-lite: Mobilenet Quant tensorflow-lite: Inception ResNet V2 build-ffmpeg: Time To Compile build-gem5: Time To Compile build-mplayer: Time To Compile tnn: CPU - DenseNet tnn: CPU - MobileNet v2 tnn: CPU - SqueezeNet v2 tnn: CPU - SqueezeNet v1.1 vpxenc: Speed 0 - Bosphorus 4K vpxenc: Speed 5 - Bosphorus 4K vpxenc: Speed 0 - Bosphorus 1080p vpxenc: Speed 5 - Bosphorus 1080p

Tau T2A: 32 vCPUs results: 33647.548 87819 4.79 69.77 4.79 6.72 2.01 2.13 1.68 4.96 69.92 4.80 6.72 2.60 2.87 2.12 46.30 69.57 4.76 27.64 24.36 30.32 31.98 39.22 69.79 4.78 22.84 22.22 28.66 26.55 4456.55 5522.07 3962.08 3899.28 7262.74 9181.24 996.700 5.9825 7.1619 68.6557 112.47 249.89 214.41 700917.944737 5175 5079 5015 5954 124704201 211735 2610992 1321827 130.353 477377000 508372000 124702000 169542000 1.718 22.0930 16.550 16.596 266.337 169.639 6.682 10.341 6.775 69530.64 21433.92 3265.68 52309.81 1822.77 87702.30 50939.05 34381.91 26843.58 235749.36 233484.08 206.4 994.53 25788919913 1570.2 128273.1 329539 0.304 312239 0.803 3383 29.559 2282 114.801 0.005 0.014 0.392 0.915 2.055 3.723 1926297.79 1411234.92 1871.7 1047.3 17606.6 4118.3 766.4 10705.9 5174.3 9430.8 6566.5 29296.7 3084.2 35075 22955 549.13 1437660.62 566.91 8209.47 151792.83 97749.08 6128517.10 108241.61 3853.90 31657.3 28372.8 2093.25 3550.65 33994.9 38.958 312.120 28.928 3056.897 322.768 95.473 301.150 2.13 6.99 4.99 12.11

m6g.8xlarge results: 28780.706 109525 4.14 82.60 5.24 5.61 1.89 1.83 1.63 4.45 82.62 5.29 6.11 2.37 2.63 2.03 41.76 82.34 5.24 26.83 25.28 31.44 32.92 34.77 82.32 5.26 21.52 23.05 29.52 27.21 4785.75 6515.57 4453.69 4550.23 8068.36 9626.54 1020.48 6.9189 8.2706 79.6164 126.12 274.81 233.99 587539.277646 4272 5604 3584 4673 104825086 421971 2839853 1737021 132.680 510761000 519646000 138918000 183940000 1.554 20.8354 14.635 14.824 313.273 199.973 7.755 11.727 7.285 67111.72 20938.75 2746.81 50732.78 1937.35 80791.93 49445.81 34983.68 27767.10 286801.67 282425.09 208.2 977.47 21748728233 1320.9 107872.3 364193 0.274 337416 0.742 5285 18.922 5524 45.258 0.006 0.016 0.347 0.915 1.862 3.660 1761869.13 1254760.84 1821.7 1084.5 16918.7 4229.8 828.8 12067.9 5297.4 6674.3 5783.7 30764.8 3167.9 41157 26638 603.25 1503894.12 45.98 6927.34 128190.24 82451.97 6581912.04 90962.22 3806.31 32600.7 29037.2 2230.11 4200.71 35336.4 40.613 316.278 30.838 3406.538 378.550 114.952 358.425 1.93 6.34 4.50 10.65
Aircrack-ng Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.
Aircrack-ng 1.7 (k/s, more is better): m6g.8xlarge = 28780.71 (SE +/- 4.44, N = 3); Tau T2A: 32 vCPUs = 33647.55 (SE +/- 287.54, N = 15). Compiler options: (CXX) g++ -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread
Apache Spark This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
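As a rough illustration of the per-row work behind the SHA-512 Benchmark Time results below, here is a stand-in sketch using only the Python standard library; the actual test drives Apache Spark through spark-submit and the pyspark-benchmark scripts, so this only shows the row-hashing operation being timed:

```python
# Stand-in sketch: hash every row of a generated dataset, the kind of
# work the SHA-512 benchmark times. The real test runs through Apache
# Spark (spark-submit + pyspark-benchmark); this uses only the stdlib
# and a hypothetical row format.
import hashlib
import time

rows = [f"row-{i}".encode() for i in range(100_000)]  # hypothetical generated data

start = time.perf_counter()
digests = [hashlib.sha512(r).hexdigest() for r in rows]
elapsed = time.perf_counter() - start

print(f"hashed {len(digests)} rows in {elapsed:.3f}s")
```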
Apache Spark 3.3, Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 4.79 (SE +/- 0.11, N = 15); m6g.8xlarge = 4.14 (SE +/- 0.02, N = 3).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, fewer is better): m6g.8xlarge = 82.60 (SE +/- 0.15, N = 3); Tau T2A: 32 vCPUs = 69.77 (SE +/- 0.06, N = 15).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better): m6g.8xlarge = 5.24 (SE +/- 0.04, N = 3); Tau T2A: 32 vCPUs = 4.79 (SE +/- 0.01, N = 15).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 100 - Group By Test Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 6.72 (SE +/- 0.23, N = 15); m6g.8xlarge = 5.61 (SE +/- 0.06, N = 3).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 100 - Repartition Test Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 2.01 (SE +/- 0.03, N = 15); m6g.8xlarge = 1.89 (SE +/- 0.03, N = 3).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 100 - Inner Join Test Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 2.13 (SE +/- 0.02, N = 15); m6g.8xlarge = 1.83 (SE +/- 0.01, N = 3).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 1.68 (SE +/- 0.03, N = 15); m6g.8xlarge = 1.63 (SE +/- 0.08, N = 3).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 4.96 (SE +/- 0.04, N = 15); m6g.8xlarge = 4.45 (SE +/- 0.03, N = 11).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, fewer is better): m6g.8xlarge = 82.62 (SE +/- 0.05, N = 11); Tau T2A: 32 vCPUs = 69.92 (SE +/- 0.06, N = 15).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better): m6g.8xlarge = 5.29 (SE +/- 0.01, N = 11); Tau T2A: 32 vCPUs = 4.80 (SE +/- 0.01, N = 15).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Group By Test Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 6.72 (SE +/- 0.05, N = 15); m6g.8xlarge = 6.11 (SE +/- 0.03, N = 11).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Repartition Test Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 2.60 (SE +/- 0.03, N = 15); m6g.8xlarge = 2.37 (SE +/- 0.03, N = 11).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 2.87 (SE +/- 0.04, N = 15); m6g.8xlarge = 2.63 (SE +/- 0.03, N = 11).
Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 2.12 (SE +/- 0.02, N = 15); m6g.8xlarge = 2.03 (SE +/- 0.05, N = 11).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 46.30 (SE +/- 0.45, N = 9); m6g.8xlarge = 41.76 (SE +/- 0.11, N = 3).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, fewer is better): m6g.8xlarge = 82.34 (SE +/- 0.23, N = 3); Tau T2A: 32 vCPUs = 69.57 (SE +/- 0.08, N = 9).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better): m6g.8xlarge = 5.24 (SE +/- 0.02, N = 3); Tau T2A: 32 vCPUs = 4.76 (SE +/- 0.01, N = 9).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Group By Test Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 27.64 (SE +/- 0.16, N = 9); m6g.8xlarge = 26.83 (SE +/- 0.31, N = 3).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Repartition Test Time (Seconds, fewer is better): m6g.8xlarge = 25.28 (SE +/- 0.11, N = 3); Tau T2A: 32 vCPUs = 24.36 (SE +/- 0.12, N = 9).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Inner Join Test Time (Seconds, fewer is better): m6g.8xlarge = 31.44 (SE +/- 0.34, N = 3); Tau T2A: 32 vCPUs = 30.32 (SE +/- 0.44, N = 9).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, fewer is better): m6g.8xlarge = 32.92 (SE +/- 0.39, N = 3); Tau T2A: 32 vCPUs = 31.98 (SE +/- 0.26, N = 9).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 39.22 (SE +/- 0.55, N = 12); m6g.8xlarge = 34.77 (SE +/- 0.37, N = 12).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, fewer is better): m6g.8xlarge = 82.32 (SE +/- 0.07, N = 12); Tau T2A: 32 vCPUs = 69.79 (SE +/- 0.11, N = 12).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better): m6g.8xlarge = 5.26 (SE +/- 0.01, N = 12); Tau T2A: 32 vCPUs = 4.78 (SE +/- 0.02, N = 12).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Group By Test Time (Seconds, fewer is better): Tau T2A: 32 vCPUs = 22.84 (SE +/- 0.32, N = 12); m6g.8xlarge = 21.52 (SE +/- 0.18, N = 12).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Repartition Test Time (Seconds, fewer is better): m6g.8xlarge = 23.05 (SE +/- 0.09, N = 12); Tau T2A: 32 vCPUs = 22.22 (SE +/- 0.24, N = 12).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time (Seconds, fewer is better): m6g.8xlarge = 29.52 (SE +/- 0.13, N = 12); Tau T2A: 32 vCPUs = 28.66 (SE +/- 0.19, N = 12).
Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time (Seconds, fewer is better): m6g.8xlarge = 27.21 (SE +/- 0.10, N = 12); Tau T2A: 32 vCPUs = 26.55 (SE +/- 0.17, N = 12).
ASKAP ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some previous ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.
ASKAP 1.0, Test: tConvolve MT - Gridding (Million Grid Points Per Second, more is better): Tau T2A: 32 vCPUs = 4456.55 (SE +/- 35.89, N = 15); m6g.8xlarge = 4785.75 (SE +/- 7.65, N = 3). Compiler options: (CXX) g++ -O3 -fstrict-aliasing -fopenmp
ASKAP 1.0, Test: tConvolve MT - Degridding (Million Grid Points Per Second, more is better): Tau T2A: 32 vCPUs = 5522.07 (SE +/- 80.56, N = 15); m6g.8xlarge = 6515.57 (SE +/- 4.40, N = 3). Compiler options: (CXX) g++ -O3 -fstrict-aliasing -fopenmp
ASKAP 1.0, Test: tConvolve MPI - Degridding (Mpix/sec, more is better): Tau T2A: 32 vCPUs = 3962.08 (SE +/- 54.84, N = 15); m6g.8xlarge = 4453.69 (SE +/- 6.31, N = 3). Compiler options: (CXX) g++ -O3 -fstrict-aliasing -fopenmp
ASKAP 1.0, Test: tConvolve MPI - Gridding (Mpix/sec, more is better): Tau T2A: 32 vCPUs = 3899.28 (SE +/- 42.99, N = 15); m6g.8xlarge = 4550.23 (SE +/- 6.58, N = 3). Compiler options: (CXX) g++ -O3 -fstrict-aliasing -fopenmp
ASKAP 1.0, Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, more is better): Tau T2A: 32 vCPUs = 7262.74 (SE +/- 66.63, N = 3); m6g.8xlarge = 8068.36 (SE +/- 0.00, N = 3). Compiler options: (CXX) g++ -O3 -fstrict-aliasing -fopenmp
ASKAP 1.0, Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, more is better): Tau T2A: 32 vCPUs = 9181.24 (SE +/- 0.00, N = 3); m6g.8xlarge = 9626.54 (SE +/- 117.40, N = 3). Compiler options: (CXX) g++ -O3 -fstrict-aliasing -fopenmp
ASKAP 1.0, Test: Hogbom Clean OpenMP (Iterations Per Second, more is better): Tau T2A: 32 vCPUs = 996.70 (SE +/- 3.30, N = 3); m6g.8xlarge = 1020.48 (SE +/- 6.01, N = 3). Compiler options: (CXX) g++ -O3 -fstrict-aliasing -fopenmp
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
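Independent of the Medium/Thorough/Exhaustive search presets timed here, ASTC's bitrate follows directly from its fixed block size: every block footprint compresses to 128 bits, a property of the ASTC format itself. A quick sketch of that arithmetic:

```python
import math

# ASTC encodes each block footprint of texels into a fixed 128-bit
# (16-byte) block, so compressed size depends only on the image and
# block dimensions, never on image content or encoder preset.
def astc_compressed_bytes(width: int, height: int, bx: int, by: int) -> int:
    blocks = math.ceil(width / bx) * math.ceil(height / by)
    return blocks * 16  # 128 bits = 16 bytes per block

def bits_per_texel(bx: int, by: int) -> float:
    return 128 / (bx * by)

print(f"6x6 blocks: {bits_per_texel(6, 6):.2f} bits/texel")
print(f"1024x1024 @ 6x6 blocks: {astc_compressed_bytes(1024, 1024, 6, 6)} bytes")
```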
ASTC Encoder 3.2, Preset: Medium (Seconds, fewer is better): m6g.8xlarge = 6.9189 (SE +/- 0.0111, N = 3); Tau T2A: 32 vCPUs = 5.9825 (SE +/- 0.0035, N = 3). Compiler options: (CXX) g++ -O3 -march=native -flto -pthread
ASTC Encoder 3.2, Preset: Thorough (Seconds, fewer is better): m6g.8xlarge = 8.2706 (SE +/- 0.0024, N = 3); Tau T2A: 32 vCPUs = 7.1619 (SE +/- 0.0033, N = 3). Compiler options: (CXX) g++ -O3 -march=native -flto -pthread
ASTC Encoder 3.2, Preset: Exhaustive (Seconds, fewer is better): m6g.8xlarge = 79.62 (SE +/- 0.02, N = 3); Tau T2A: 32 vCPUs = 68.66 (SE +/- 0.08, N = 3). Compiler options: (CXX) g++ -O3 -march=native -flto -pthread
Blender Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.
Blender, Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better): m6g.8xlarge = 126.12 (SE +/- 0.21, N = 3); Tau T2A: 32 vCPUs = 112.47 (SE +/- 0.10, N = 3).
Blender, Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better): m6g.8xlarge = 274.81 (SE +/- 0.46, N = 3); Tau T2A: 32 vCPUs = 249.89 (SE +/- 0.07, N = 3).
Blender, Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better): m6g.8xlarge = 233.99 (SE +/- 0.24, N = 3); Tau T2A: 32 vCPUs = 214.41 (SE +/- 0.42, N = 3).
Facebook RocksDB This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
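The "Read While Writing" workload measured below mixes concurrent point lookups with ongoing updates. A toy sketch of that access pattern, using an in-memory dict and hypothetical key names rather than RocksDB itself:

```python
# Toy sketch of the "Read While Writing" access pattern: a writer
# thread keeps updating keys while a reader thread issues point
# lookups. An in-memory dict stands in for RocksDB here; key names
# and counts are hypothetical.
import threading

store = {f"key-{i}": 0 for i in range(1000)}
lock = threading.Lock()
done = threading.Event()
reads = [0]  # single-slot counter, mutated only by the reader thread

def writer(n_updates: int) -> None:
    for i in range(n_updates):
        with lock:
            store[f"key-{i % 1000}"] = i
    done.set()

def reader() -> None:
    while not done.is_set():
        with lock:
            _ = store["key-0"]
        reads[0] += 1

w = threading.Thread(target=writer, args=(50_000,))
r = threading.Thread(target=reader)
w.start(); r.start()
w.join(); r.join()
print(f"writer finished; reader completed {reads[0]} lookups")
```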
Facebook RocksDB 7.0.1, Test: Random Read (Op/s, more is better): m6g.8xlarge = 104825086 (SE +/- 726664.50, N = 13); Tau T2A: 32 vCPUs = 124704201 (SE +/- 376574.31, N = 3). Compiler options: (CXX) g++ -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Facebook RocksDB 7.0.1, Test: Update Random (Op/s, more is better): Tau T2A: 32 vCPUs = 211735 (SE +/- 705.83, N = 3); m6g.8xlarge = 421971 (SE +/- 1212.60, N = 3). Compiler options: (CXX) g++ -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Facebook RocksDB 7.0.1, Test: Read While Writing (Op/s, more is better): Tau T2A: 32 vCPUs = 2610992 (SE +/- 32390.32, N = 12); m6g.8xlarge = 2839853 (SE +/- 25190.27, N = 7). Compiler options: (CXX) g++ -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Facebook RocksDB 7.0.1, Test: Read Random Write Random (Op/s, more is better): Tau T2A: 32 vCPUs = 1321827 (SE +/- 9643.50, N = 15); m6g.8xlarge = 1737021 (SE +/- 17104.02, N = 15). Compiler options: (CXX) g++ -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
GPAW GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
GPAW 22.1, Input: Carbon Nanotube (Seconds, fewer is better): m6g.8xlarge = 132.68 (SE +/- 0.03, N = 3); Tau T2A: 32 vCPUs = 130.35 (SE +/- 0.30, N = 3). Compiler options: (CC) gcc -shared -fwrapv -O2 -O3 -march=native -lxc -lblas -lmpi
Graph500 This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data intensive loads and commonly tested on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.
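The TEPS (traversed edges per second) figures reported below are edges examined by a graph traversal divided by its run time. A minimal sketch of that metric over a tiny toy graph (Graph500 itself uses scale-26 Kronecker graphs, vastly larger than this):

```python
# Sketch of the TEPS (traversed edges per second) metric Graph500
# reports: edges examined by a BFS divided by its run time. A tiny
# hand-written adjacency list stands in for Graph500's generated graph.
from collections import deque
import time

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def bfs_teps(adj, source):
    start = time.perf_counter()
    seen, queue, edges = {source}, deque([source]), 0
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            edges += 1  # count each adjacency scan (approximates edge traversals)
            if w not in seen:
                seen.add(w)
                queue.append(w)
    elapsed = time.perf_counter() - start
    return edges / elapsed, len(seen), edges

teps, visited, edges = bfs_teps(graph, 0)
print(f"visited {visited} vertices, scanned {edges} adjacencies")
```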
Graph500 3.0, Scale: 26 (bfs median_TEPS, more is better): Tau T2A: 32 vCPUs = 477377000; m6g.8xlarge = 510761000. Compiler options: (CC) gcc -fcommon -O3 -march=native -lpthread -lm -lmpi
Graph500 3.0, Scale: 26 (bfs max_TEPS, more is better): Tau T2A: 32 vCPUs = 508372000; m6g.8xlarge = 519646000. Compiler options: (CC) gcc -fcommon -O3 -march=native -lpthread -lm -lmpi
Graph500 3.0, Scale: 26 (sssp median_TEPS, more is better): Tau T2A: 32 vCPUs = 124702000; m6g.8xlarge = 138918000. Compiler options: (CC) gcc -fcommon -O3 -march=native -lpthread -lm -lmpi
Graph500 3.0, Scale: 26 (sssp max_TEPS, more is better): Tau T2A: 32 vCPUs = 169542000; m6g.8xlarge = 183940000. Compiler options: (CC) gcc -fcommon -O3 -march=native -lpthread -lm -lmpi
GROMACS The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
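GROMACS reports throughput in ns/day: the amount of simulated time covered per day of wall-clock time. A small sketch of that conversion, using illustrative numbers rather than values from this run:

```python
# Sketch of GROMACS' ns/day metric: simulated nanoseconds per wall-clock
# day, from the timestep size, step count, and measured wall time.
def ns_per_day(timestep_fs: float, steps: int, wall_seconds: float) -> float:
    simulated_ns = timestep_fs * steps * 1e-6   # femtoseconds -> nanoseconds
    days = wall_seconds / 86_400
    return simulated_ns / days

# Illustrative numbers (not from this benchmark run): 2 fs timestep,
# 10,000 steps, 1,000 s of wall time -> 0.02 ns simulated.
print(round(ns_per_day(2.0, 10_000, 1_000), 3))
```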
GROMACS 2022.1, Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better): m6g.8xlarge = 1.554 (SE +/- 0.001, N = 3); Tau T2A: 32 vCPUs = 1.718 (SE +/- 0.010, N = 3). Compiler options: (CXX) g++ -O3 -march=native
libavif avifenc 0.10, Encoder Speed: 2 (Seconds, fewer is better): m6g.8xlarge = 199.97 (SE +/- 0.14, N = 3); Tau T2A: 32 vCPUs = 169.64 (SE +/- 0.13, N = 3). Compiler options: (CXX) g++ -O3 -fPIC -march=native -lm
libavif avifenc 0.10, Encoder Speed: 6 (Seconds, fewer is better): m6g.8xlarge = 7.755 (SE +/- 0.039, N = 3); Tau T2A: 32 vCPUs = 6.682 (SE +/- 0.020, N = 3). Compiler options: (CXX) g++ -O3 -fPIC -march=native -lm
OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.10 Encoder Speed: 6, Lossless m6g.8xlarge Tau T2A: 32 vCPUs 3 6 9 12 15 SE +/- 0.14, N = 3 SE +/- 0.00, N = 3 11.73 10.34 1. (CXX) g++ options: -O3 -fPIC -march=native -lm
OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.10 Encoder Speed: 10, Lossless m6g.8xlarge Tau T2A: 32 vCPUs 2 4 6 8 10 SE +/- 0.019, N = 3 SE +/- 0.072, N = 3 7.285 6.775 1. (CXX) g++ options: -O3 -fPIC -march=native -lm
NAS Parallel Benchmarks NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 (Total Mop/s, more is better):
  BT.C: m6g.8xlarge = 67,111.72 (SE +/- 44.47, N = 3); Tau T2A: 32 vCPUs = 69,530.64 (SE +/- 272.46, N = 3)
  CG.C: m6g.8xlarge = 20,938.75 (SE +/- 28.45, N = 3); Tau T2A: 32 vCPUs = 21,433.92 (SE +/- 35.67, N = 3)
  EP.D: m6g.8xlarge = 2,746.81 (SE +/- 1.15, N = 3); Tau T2A: 32 vCPUs = 3,265.68 (SE +/- 2.04, N = 3)
  FT.C: m6g.8xlarge = 50,732.78 (SE +/- 144.89, N = 3); Tau T2A: 32 vCPUs = 52,309.81 (SE +/- 41.18, N = 3)
  IS.D: Tau T2A: 32 vCPUs = 1,822.77 (SE +/- 0.86, N = 3); m6g.8xlarge = 1,937.35 (SE +/- 5.23, N = 3)
  LU.C: m6g.8xlarge = 80,791.93 (SE +/- 115.81, N = 3); Tau T2A: 32 vCPUs = 87,702.30 (SE +/- 137.48, N = 3)
  MG.C: m6g.8xlarge = 49,445.81 (SE +/- 47.68, N = 3); Tau T2A: 32 vCPUs = 50,939.05 (SE +/- 31.40, N = 3)
  SP.B: Tau T2A: 32 vCPUs = 34,381.91 (SE +/- 38.20, N = 3); m6g.8xlarge = 34,983.68 (SE +/- 44.23, N = 3)
  SP.C: Tau T2A: 32 vCPUs = 26,843.58 (SE +/- 31.60, N = 3); m6g.8xlarge = 27,767.10 (SE +/- 32.63, N = 3)
  (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
nginx This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile uses the Golang "Bombardier" program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
nginx 1.21.1 (requests per second, more is better):
  Concurrent Requests: 500: Tau T2A: 32 vCPUs = 235,749.36 (SE +/- 245.82, N = 3); m6g.8xlarge = 286,801.67 (SE +/- 1454.63, N = 3)
  Concurrent Requests: 1000: Tau T2A: 32 vCPUs = 233,484.08 (SE +/- 479.91, N = 3); m6g.8xlarge = 282,425.09 (SE +/- 2142.75, N = 3)
  (CC) gcc options: -lcrypt -lz -O3 -march=native
OpenFOAM OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenFOAM 9, Input: drivaerFastback, Medium Mesh Size (seconds, fewer is better):
  Mesh Time: m6g.8xlarge = 208.2; Tau T2A: 32 vCPUs = 206.4
  Execution Time: Tau T2A: 32 vCPUs = 994.53; m6g.8xlarge = 977.47
  (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
OpenSSL 3.0 (more is better):
  SHA256 (byte/s): m6g.8xlarge = 21,748,728,233 (SE +/- 4842815.19, N = 3); Tau T2A: 32 vCPUs = 25,788,919,913 (SE +/- 119493320.18, N = 3)
  RSA4096 (sign/s): m6g.8xlarge = 1,320.9 (SE +/- 0.12, N = 3); Tau T2A: 32 vCPUs = 1,570.2 (SE +/- 0.06, N = 3)
  RSA4096 (verify/s): m6g.8xlarge = 107,872.3 (SE +/- 2.96, N = 3); Tau T2A: 32 vCPUs = 128,273.1 (SE +/- 29.86, N = 3)
  (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl
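RSA verification is far cheaper than signing because it uses the small public exponent, and the ratio can be read straight off the numbers above; a short Python sketch (values copied from the RSA4096 results) quantifying it for both instances:

```python
# RSA4096 results from the OpenSSL section above (operations per second).
tau_sign, tau_verify = 1570.2, 128273.1   # Tau T2A: 32 vCPUs
m6g_sign, m6g_verify = 1320.9, 107872.3   # m6g.8xlarge

# Verify/sign throughput ratio comes out near 82x on both systems,
# which suggests the asymmetry is algorithmic, not hardware-specific.
tau_ratio = tau_verify / tau_sign
m6g_ratio = m6g_verify / m6g_sign
print(f"Tau T2A verify/sign: {tau_ratio:.1f}x")      # ~81.7x
print(f"m6g.8xlarge verify/sign: {m6g_ratio:.1f}x")  # ~81.7x
```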
PostgreSQL pgbench This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.
PostgreSQL pgbench 14.0, Scaling Factor: 100:
  Clients: 100, Read Only (TPS, more is better): Tau T2A: 32 vCPUs = 329,539 (SE +/- 1811.74, N = 3); m6g.8xlarge = 364,193 (SE +/- 3146.40, N = 3)
  Clients: 100, Read Only, Average Latency (ms, fewer is better): Tau T2A: 32 vCPUs = 0.304 (SE +/- 0.002, N = 3); m6g.8xlarge = 0.274 (SE +/- 0.002, N = 3)
  Clients: 250, Read Only (TPS): Tau T2A: 32 vCPUs = 312,239 (SE +/- 4561.68, N = 12); m6g.8xlarge = 337,416 (SE +/- 3802.23, N = 4)
  Clients: 250, Read Only, Average Latency (ms): Tau T2A: 32 vCPUs = 0.803 (SE +/- 0.012, N = 12); m6g.8xlarge = 0.742 (SE +/- 0.008, N = 4)
  Clients: 100, Read Write (TPS): Tau T2A: 32 vCPUs = 3,383 (SE +/- 6.74, N = 3); m6g.8xlarge = 5,285 (SE +/- 10.72, N = 3)
  Clients: 100, Read Write, Average Latency (ms): Tau T2A: 32 vCPUs = 29.56 (SE +/- 0.06, N = 3); m6g.8xlarge = 18.92 (SE +/- 0.04, N = 3)
  Clients: 250, Read Write (TPS): Tau T2A: 32 vCPUs = 2,282 (SE +/- 154.25, N = 12); m6g.8xlarge = 5,524 (SE +/- 6.46, N = 3)
  Clients: 250, Read Write, Average Latency (ms): Tau T2A: 32 vCPUs = 114.80 (SE +/- 7.17, N = 12); m6g.8xlarge = 45.26 (SE +/- 0.05, N = 3)
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm
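The TPS and average-latency figures above are two views of the same run: with a fixed client count, Little's law gives throughput ≈ clients / average latency. A short Python sketch checking the 250-client read-only numbers for consistency:

```python
# 250-client read-only results from the pgbench section above.
clients = 250
tau_latency_ms, tau_tps = 0.803, 312_239   # Tau T2A: 32 vCPUs
m6g_latency_ms, m6g_tps = 0.742, 337_416   # m6g.8xlarge

# Little's law: throughput ~= concurrency / latency (latency in seconds).
tau_implied = clients / (tau_latency_ms / 1000.0)
m6g_implied = clients / (m6g_latency_ms / 1000.0)

# Both implied figures land within ~1% of the reported TPS.
print(f"Tau T2A implied TPS: {tau_implied:,.0f} (reported {tau_tps:,})")
print(f"m6g.8xlarge implied TPS: {m6g_implied:,.0f} (reported {m6g_tps:,})")
```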
PyHPC Benchmarks PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
PyHPC Benchmarks 3.0, Device: CPU, Backend: Numpy (seconds, fewer is better):
  Project Size: 16384, Equation of State: m6g.8xlarge = 0.006 (SE +/- 0.000, N = 3); Tau T2A: 32 vCPUs = 0.005 (SE +/- 0.000, N = 14)
  Project Size: 16384, Isoneutral Mixing: m6g.8xlarge = 0.016 (SE +/- 0.000, N = 3); Tau T2A: 32 vCPUs = 0.014 (SE +/- 0.000, N = 15)
  Project Size: 1048576, Equation of State: Tau T2A: 32 vCPUs = 0.392 (SE +/- 0.001, N = 3); m6g.8xlarge = 0.347 (SE +/- 0.001, N = 3)
  Project Size: 1048576, Isoneutral Mixing: m6g.8xlarge = 0.915 (SE +/- 0.006, N = 3); Tau T2A: 32 vCPUs = 0.915 (SE +/- 0.005, N = 3)
  Project Size: 4194304, Equation of State: Tau T2A: 32 vCPUs = 2.055 (SE +/- 0.002, N = 3); m6g.8xlarge = 1.862 (SE +/- 0.001, N = 3)
  Project Size: 4194304, Isoneutral Mixing: Tau T2A: 32 vCPUs = 3.723 (SE +/- 0.016, N = 3); m6g.8xlarge = 3.660 (SE +/- 0.021, N = 3)
Redis Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
Redis 6.0.9 (requests per second, more is better):
  GET: m6g.8xlarge = 1,761,869.13 (SE +/- 5705.98, N = 3); Tau T2A: 32 vCPUs = 1,926,297.79 (SE +/- 10764.67, N = 3)
  SET: m6g.8xlarge = 1,254,760.84 (SE +/- 6854.78, N = 3); Tau T2A: 32 vCPUs = 1,411,234.92 (SE +/- 9294.72, N = 3)
  (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native
Renaissance 0.14 (ms, fewer is better):
  Random Forest: m6g.8xlarge = 1084.5 (SE +/- 2.98, N = 3, min 958.08, max 1325.97); Tau T2A: 32 vCPUs = 1047.3 (SE +/- 12.92, N = 3, min 904.64, max 1280.13)
  ALS Movie Lens: Tau T2A: 32 vCPUs = 17606.6 (SE +/- 57.21, N = 3, min 17544.26, max 19037.24); m6g.8xlarge = 16918.7 (SE +/- 233.21, N = 3, min 16601.1, max 18787.37)
  Apache Spark ALS: m6g.8xlarge = 4229.8 (SE +/- 34.35, N = 9, min 4008.75, max 4594.43); Tau T2A: 32 vCPUs = 4118.3 (SE +/- 32.04, N = 3, min 3925.84, max 4358.22)
  Apache Spark Bayes: m6g.8xlarge = 828.8 (SE +/- 6.49, N = 15, min 538.38, max 1090.62); Tau T2A: 32 vCPUs = 766.4 (SE +/- 9.73, N = 3, min 495.95, max 1178.88)
  Savina Reactors.IO: m6g.8xlarge = 12067.9 (SE +/- 101.80, N = 3, min 11869.31, max 19424.22); Tau T2A: 32 vCPUs = 10705.9 (SE +/- 131.70, N = 4, min 10505.49, max 14847.21)
  Apache Spark PageRank: m6g.8xlarge = 5297.4 (SE +/- 49.78, N = 3, min 4909.93, max 5385.51); Tau T2A: 32 vCPUs = 5174.3 (SE +/- 77.61, N = 12, min 4316.47, max 6446.52)
  Finagle HTTP Requests: Tau T2A: 32 vCPUs = 9430.8 (SE +/- 122.37, N = 3, min 8793.75, max 9955.78); m6g.8xlarge = 6674.3 (SE +/- 15.10, N = 3, min 6412.79, max 6795.18)
  In-Memory Database Shootout: Tau T2A: 32 vCPUs = 6566.5 (SE +/- 37.00, N = 3, min 5609.26, max 13128.6); m6g.8xlarge = 5783.7 (SE +/- 39.12, N = 3, min 5350.98, max 6150.47)
  Akka Unbalanced Cobwebbed Tree: m6g.8xlarge = 30764.8 (SE +/- 1085.96, N = 6, min 23349.61, max 36054.32); Tau T2A: 32 vCPUs = 29296.7 (SE +/- 344.06, N = 4, min 20859.52, max 30225.51)
  Genetic Algorithm Using Jenetics + Futures: m6g.8xlarge = 3167.9 (SE +/- 14.17, N = 3, min 3061.76, max 3232.32); Tau T2A: 32 vCPUs = 3084.2 (SE +/- 8.81, N = 3, min 2993.8, max 3192.9)
Stress-NG 0.14 (Bogo Ops/s, more is better):
  Futex: Tau T2A: 32 vCPUs = 1,437,660.62 (SE +/- 15026.23, N = 3); m6g.8xlarge = 1,503,894.12 (SE +/- 17854.23, N = 15)
  CPU Cache: m6g.8xlarge = 45.98 (SE +/- 2.14, N = 12); Tau T2A: 32 vCPUs = 566.91 (SE +/- 0.28, N = 3)
  CPU Stress: m6g.8xlarge = 6,927.34 (SE +/- 0.61, N = 3); Tau T2A: 32 vCPUs = 8,209.47 (SE +/- 4.23, N = 3)
  Matrix Math: m6g.8xlarge = 128,190.24 (SE +/- 2.06, N = 3); Tau T2A: 32 vCPUs = 151,792.83 (SE +/- 9.80, N = 3)
  Vector Math: m6g.8xlarge = 82,451.97 (SE +/- 0.29, N = 3); Tau T2A: 32 vCPUs = 97,749.08 (SE +/- 190.70, N = 3)
  System V Message Passing: Tau T2A: 32 vCPUs = 6,128,517.10 (SE +/- 7551.56, N = 3); m6g.8xlarge = 6,581,912.04 (SE +/- 1469.54, N = 3)
  (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
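The Stress-NG CPU Cache result stands out: the Tau T2A's score is an order of magnitude above the m6g.8xlarge's, though the m6g's high run-to-run variance (N = 12 with a large SE) suggests treating the exact figure with caution. A short Python sketch quantifying the gap from the numbers above:

```python
# Stress-NG CPU Cache results from the section above (Bogo Ops/s, more is better).
m6g_cache = 45.98
tau_cache = 566.91

ratio = tau_cache / m6g_cache
print(f"Tau T2A outpaces m6g.8xlarge by {ratio:.1f}x on CPU Cache")  # ~12.3x
```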
Sysbench This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
Sysbench 1.0.20, Test: CPU (events per second, more is better):
  m6g.8xlarge = 90,962.22 (SE +/- 123.73, N = 3); Tau T2A: 32 vCPUs = 108,241.61 (SE +/- 23.77, N = 3)
  (CC) gcc options: -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18, Model: SqueezeNet (microseconds, fewer is better):
  Tau T2A: 32 vCPUs = 3,853.90 (SE +/- 31.57, N = 8); m6g.8xlarge = 3,806.31 (SE +/- 54.35, N = 15)
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.3, Target: CPU (ms, fewer is better):
  DenseNet: m6g.8xlarge = 3406.54 (SE +/- 5.54, N = 3, min 3340.13, max 3491.43); Tau T2A: 32 vCPUs = 3056.90 (SE +/- 6.90, N = 3, min 2928.19, max 3237.58)
  MobileNet v2: m6g.8xlarge = 378.55 (SE +/- 0.20, N = 3, min 377.31, max 380.26); Tau T2A: 32 vCPUs = 322.77 (SE +/- 0.05, N = 3, min 319.63, max 326.43)
  SqueezeNet v2: m6g.8xlarge = 114.95 (SE +/- 1.17, N = 3, min 113.55, max 117.72); Tau T2A: 32 vCPUs = 95.47 (SE +/- 0.00, N = 3, min 95.15, max 96.88)
  SqueezeNet v1.1: m6g.8xlarge = 358.43 (SE +/- 0.08, N = 3, min 357.7, max 359.41); Tau T2A: 32 vCPUs = 301.15 (SE +/- 0.07, N = 3, min 299.13, max 307.2)
  (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl
VP9 libvpx Encoding This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.
VP9 libvpx Encoding 1.10.0 (frames per second, more is better):
  Speed 0, Bosphorus 4K: m6g.8xlarge = 1.93 (SE +/- 0.00, N = 3); Tau T2A: 32 vCPUs = 2.13 (SE +/- 0.00, N = 3)
  Speed 5, Bosphorus 4K: m6g.8xlarge = 6.34 (SE +/- 0.00, N = 3); Tau T2A: 32 vCPUs = 6.99 (SE +/- 0.02, N = 3)
  Speed 0, Bosphorus 1080p: m6g.8xlarge = 4.50 (SE +/- 0.00, N = 3); Tau T2A: 32 vCPUs = 4.99 (SE +/- 0.01, N = 3)
  Speed 5, Bosphorus 1080p: m6g.8xlarge = 10.65 (SE +/- 0.02, N = 3); Tau T2A: 32 vCPUs = 12.11 (SE +/- 0.01, N = 3)
  (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11
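Across the four libvpx configurations the Tau T2A's lead is fairly uniform; a small Python sketch computing the geometric mean of the per-test speedups from the FPS figures above:

```python
from math import prod

# (m6g.8xlarge, Tau T2A) FPS pairs from the libvpx results above:
# Speed 0/4K, Speed 5/4K, Speed 0/1080p, Speed 5/1080p.
pairs = [(1.93, 2.13), (6.34, 6.99), (4.50, 4.99), (10.65, 12.11)]

speedups = [tau / m6g for m6g, tau in pairs]
geomean = prod(speedups) ** (1 / len(speedups))
print(f"geometric mean speedup: {geomean:.3f}x")  # ~1.11x
```

The geometric mean is the usual way to aggregate ratio-based results, since it is insensitive to which system is used as the baseline.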
Tau T2A: 32 vCPUs: testing initiated at 11 August 2022 21:42 by user michael_larabel. (System details as listed above.)
m6g.8xlarge: testing initiated at 18 August 2022 19:29 by user ubuntu. (System details as listed above.)