Tau T2A: 8 vCPUs
Processor: ARMv8 Neoverse-N1 (8 Cores), Motherboard: KVM Google Compute Engine, Memory: 32GB, Disk: 215GB nvme_card-pd, Network: Google Compute Engine Virtual
OS: Ubuntu 22.04, Kernel: 5.15.0-1013-gcp (aarch64), Compiler: GCC 12.0.1 20220319, File-System: ext4, System Layer: KVM
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Tau T2A: 16 vCPUs
Changed Processor to ARMv8 Neoverse-N1 (16 Cores).
Changed Memory to 64GB.
Tau T2A: 32 vCPUs
Processor: ARMv8 Neoverse-N1 (32 Cores), Motherboard: KVM Google Compute Engine, Memory: 128GB, Disk: 215GB nvme_card-pd, Network: Google Compute Engine Virtual
OS: Ubuntu 22.04, Kernel: 5.15.0-1016-gcp (aarch64), Compiler: GCC 12.0.1 20220319, File-System: ext4, System Layer: KVM
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
NAS Parallel Benchmarks
The NAS Parallel Benchmarks (NPB) are a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a choice of the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4, Total Mop/s (more is better):

Test / Class | 8 vCPUs | 16 vCPUs | 32 vCPUs
BT.C | 14368.29 (SE +/- 23.11, N=3) | 49125.93 (SE +/- 18.18, N=3) | 69530.64 (SE +/- 272.46, N=3)
CG.C | 6855.81 (SE +/- 49.63, N=15) | 12171.95 (SE +/- 171.15, N=3) | 21433.92 (SE +/- 35.67, N=3)
EP.D | 820.94 (SE +/- 0.56, N=3) | 1634.99 (SE +/- 1.03, N=3) | 3265.68 (SE +/- 2.04, N=3)
FT.C | 18574.23 (SE +/- 15.96, N=3) | 32644.85 (SE +/- 300.01, N=3) | 52309.81 (SE +/- 41.18, N=3)
IS.D | 1104.26 (SE +/- 1.14, N=3) | 1498.45 (SE +/- 14.70, N=3) | 1822.77 (SE +/- 0.86, N=3)
LU.C | 32029.14 (SE +/- 50.76, N=3) | 55447.31 (SE +/- 701.24, N=3) | 87702.30 (SE +/- 137.48, N=3)
MG.C | 27703.33 (SE +/- 46.23, N=3) | 33309.76 (SE +/- 102.49, N=3) | 50939.05 (SE +/- 31.40, N=3)
SP.B | 7338.98 (SE +/- 17.11, N=3) | 19552.45 (SE +/- 244.58, N=3) | 34381.91 (SE +/- 38.20, N=3)
SP.C | 7115.28 (SE +/- 39.91, N=3) | 19710.90 (SE +/- 112.56, N=3) | 26843.58 (SE +/- 31.60, N=3)

1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
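The EP.D kernel is "embarrassingly parallel" and is the cleanest scaling reference in the set. A quick sketch, using only the Total Mop/s figures reported above, of speedup and parallel efficiency relative to the 8 vCPU baseline:

```python
# EP.D (embarrassingly parallel) Total Mop/s from the table above.
ep_d = {8: 820.94, 16: 1634.99, 32: 3265.68}

def scaling(results, base_vcpus=8):
    """Speedup and parallel efficiency versus the baseline vCPU count."""
    base = results[base_vcpus]
    rows = {}
    for vcpus, mops in sorted(results.items()):
        speedup = mops / base
        efficiency = speedup / (vcpus / base_vcpus)
        rows[vcpus] = (speedup, efficiency)
        print(f"{vcpus:>2} vCPUs: speedup {speedup:.2f}x, efficiency {efficiency:.1%}")
    return rows

rows = scaling(ep_d)
```

EP.D lands at roughly 99% efficiency at 32 vCPUs, whereas kernels such as MG.C and IS.D fall well short of linear, reflecting their heavier communication and memory traffic.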
OpenFOAM
OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenFOAM 9, Input: drivaerFastback, Medium Mesh Size (Seconds, fewer is better):

Phase | 8 vCPUs | 16 vCPUs | 32 vCPUs
Mesh Time | 425.95 | 303.71 | 206.40
Execution Time | 2426.16 | 1534.72 | 994.53

1. (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ltransportModels -lspecie -lfiniteVolume -lfvModels -lmeshTools -lsampling -lfoamToVTK -ldynamicMesh -llagrangian -lfileFormats -lgenericPatchFields -lOpenFOAM -ldl -lm
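Since these are times rather than throughputs, speedup is the baseline time divided by the measured time. A small sketch using the execution times above:

```python
# drivaerFastback medium-mesh execution times in seconds, from the table above.
exec_time = {8: 2426.16, 16: 1534.72, 32: 994.53}

# For a fewer-is-better metric, speedup = baseline_time / measured_time.
speedup = {v: exec_time[8] / t for v, t in exec_time.items()}
for v in sorted(speedup):
    print(f"{v} vCPUs: {speedup[v]:.2f}x faster than the 8 vCPU baseline")
```

The solver gains only about 2.4x from a 4x increase in vCPUs, typical of a memory-bandwidth-bound CFD workload.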
Renaissance
Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
Renaissance 0.14 (ms, fewer is better):

Test | 8 vCPUs | 16 vCPUs | 32 vCPUs
Apache Spark Bayes | 2249.8 (SE +/- 42.77, N=15; min 1478.18, max 2434.18) | 1262.0 (SE +/- 7.45, N=3; min 877.37, max 1398.23) | 766.4 (SE +/- 9.73, N=3; min 495.95, max 1178.88)
Savina Reactors.IO | 26456.4 (SE +/- 435.60, N=9; min 13667.16, max 42318.14) | 15981.5 (SE +/- 583.25, N=12; min 12776.53, max 36273.51) | 10705.9 (SE +/- 131.70, N=4; min 10505.49, max 14847.21)
VP9 libvpx Encoding
This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.
VP9 libvpx Encoding 1.10.0 (Frames Per Second, more is better):

Speed / Input | 8 vCPUs | 16 vCPUs | 32 vCPUs
Speed 5, Bosphorus 4K | 6.11 (SE +/- 0.01, N=3) | 6.68 (SE +/- 0.01, N=3) | 6.99 (SE +/- 0.02, N=3)
Speed 0, Bosphorus 1080p | 4.65 (SE +/- 0.01, N=3) | 4.84 (SE +/- 0.01, N=3) | 4.99 (SE +/- 0.01, N=3)
Speed 5, Bosphorus 1080p | 11.27 (SE +/- 0.01, N=3) | 11.78 (SE +/- 0.02, N=3) | 12.11 (SE +/- 0.01, N=3)

1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11
libavif avifenc 0.10 (Seconds, fewer is better):

Encoder Speed | 8 vCPUs | 16 vCPUs | 32 vCPUs
2 | 245.60 (SE +/- 0.32, N=3) | 194.77 (SE +/- 0.47, N=3) | 169.64 (SE +/- 0.13, N=3)
6 | 20.273 (SE +/- 0.130, N=3) | 11.168 (SE +/- 0.037, N=3) | 6.682 (SE +/- 0.020, N=3)
6, Lossless | 23.31 (SE +/- 0.11, N=3) | 14.70 (SE +/- 0.19, N=3) | 10.34 (SE +/- 0.00, N=3)
10, Lossless | 9.997 (SE +/- 0.096, N=3) | 7.658 (SE +/- 0.065, N=8) | 6.775 (SE +/- 0.072, N=3)

1. (CXX) g++ options: -O3 -fPIC -march=native -lm
Aircrack-ng
Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.
Aircrack-ng 1.7 (k/s, more is better):

8 vCPUs: 8308.58 (SE +/- 103.85, N=15)
16 vCPUs: 16697.92 (SE +/- 192.97, N=15)
32 vCPUs: 33647.55 (SE +/- 287.54, N=15)

1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lpcre -lnl-3 -lnl-genl-3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread
OpenSSL
OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
OpenSSL 3.0 (more is better):

Algorithm | 8 vCPUs | 16 vCPUs | 32 vCPUs
SHA256 (byte/s) | 6456083507 (SE +/- 19026629.44, N=3) | 12926411527 (SE +/- 19283388.31, N=3) | 25788919913 (SE +/- 119493320.18, N=3)
RSA4096 (sign/s) | 393.7 (SE +/- 0.07, N=3) | 786.7 (SE +/- 0.06, N=3) | 1570.2 (SE +/- 0.06, N=3)
RSA4096 (verify/s) | 32136.8 (SE +/- 10.26, N=3) | 64247.0 (SE +/- 8.35, N=3) | 128273.1 (SE +/- 29.86, N=3)

1. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl
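The RSA4096 signing results are a textbook linear-scaling case. Dividing throughput by vCPU count, using the sign/s figures above, shows that per-vCPU throughput is essentially constant across instance sizes:

```python
# RSA4096 sign/s from the OpenSSL table above.
signs_per_sec = {8: 393.7, 16: 786.7, 32: 1570.2}

# Per-vCPU signing rate: flat across sizes means perfect scaling.
per_vcpu = {v: s / v for v, s in signs_per_sec.items()}
for v, rate in sorted(per_vcpu.items()):
    print(f"{v} vCPUs: {rate:.1f} signs/s per vCPU")
```

All three sizes land at roughly 49 signs/s per vCPU, which is expected: public-key operations are compute-bound with no shared state between worker threads.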
Apache Spark
This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
Apache Spark 3.3 (Seconds, fewer is better):

Row Count: 1000000, Partitions: 100
Benchmark | 8 vCPUs | 16 vCPUs | 32 vCPUs
SHA-512 Benchmark Time | 6.28 (SE +/- 0.03, N=3) | 4.93 (SE +/- 0.05, N=12) | 4.79 (SE +/- 0.11, N=15)
Calculate Pi Benchmark | 277.89 (SE +/- 0.17, N=3) | 137.76 (SE +/- 0.11, N=12) | 69.77 (SE +/- 0.06, N=15)
Calculate Pi Benchmark Using Dataframe | 15.92 (SE +/- 0.02, N=3) | 8.39 (SE +/- 0.02, N=12) | 4.79 (SE +/- 0.01, N=15)
Repartition Test Time | 4.58 (SE +/- 0.02, N=3) | 2.55 (SE +/- 0.03, N=12) | 2.01 (SE +/- 0.03, N=15)
Inner Join Test Time | 3.48 (SE +/- 0.04, N=3) | 2.22 (SE +/- 0.03, N=12) | 2.13 (SE +/- 0.02, N=15)
Broadcast Inner Join Test Time | 2.86 (SE +/- 0.03, N=3) | 1.79 (SE +/- 0.04, N=12) | 1.68 (SE +/- 0.03, N=15)

Row Count: 1000000, Partitions: 2000
Benchmark | 8 vCPUs | 16 vCPUs | 32 vCPUs
SHA-512 Benchmark Time | 8.09 (SE +/- 0.08, N=3) | 5.91 (SE +/- 0.05, N=3) | 4.96 (SE +/- 0.04, N=15)
Calculate Pi Benchmark | 278.47 (SE +/- 0.35, N=3) | 137.17 (SE +/- 0.20, N=3) | 69.92 (SE +/- 0.06, N=15)
Calculate Pi Benchmark Using Dataframe | 15.81 (SE +/- 0.07, N=3) | 8.29 (SE +/- 0.03, N=3) | 4.80 (SE +/- 0.01, N=15)
Group By Test Time | 8.85 (SE +/- 0.13, N=3) | 7.43 (SE +/- 0.10, N=3) | 6.72 (SE +/- 0.05, N=15)
Repartition Test Time | 5.67 (SE +/- 0.01, N=3) | 3.36 (SE +/- 0.01, N=3) | 2.60 (SE +/- 0.03, N=15)
Inner Join Test Time | 5.98 (SE +/- 0.09, N=3) | 3.65 (SE +/- 0.11, N=3) | 2.87 (SE +/- 0.04, N=15)
Broadcast Inner Join Test Time | 5.18 (SE +/- 0.07, N=3) | 2.65 (SE +/- 0.05, N=3) | 2.12 (SE +/- 0.02, N=15)

Row Count: 40000000, Partitions: 100
Benchmark | 8 vCPUs | 16 vCPUs | 32 vCPUs
SHA-512 Benchmark Time | 93.67 (SE +/- 0.85, N=3) | 51.40 (SE +/- 0.22, N=3) | 46.30 (SE +/- 0.45, N=9)
Calculate Pi Benchmark | 278.37 (SE +/- 0.14, N=3) | 136.85 (SE +/- 0.06, N=3) | 69.57 (SE +/- 0.08, N=9)
Calculate Pi Benchmark Using Dataframe | 15.65 (SE +/- 0.05, N=3) | 8.40 (SE +/- 0.01, N=3) | 4.76 (SE +/- 0.01, N=9)
Group By Test Time | 50.87 (SE +/- 0.53, N=3) | 35.64 (SE +/- 0.24, N=3) | 27.64 (SE +/- 0.16, N=9)
Repartition Test Time | 68.68 (SE +/- 0.25, N=3) | 37.22 (SE +/- 0.92, N=3) | 24.36 (SE +/- 0.12, N=9)
Inner Join Test Time | 80.02 (SE +/- 0.19, N=3) | 44.62 (SE +/- 0.72, N=3) | 30.32 (SE +/- 0.44, N=9)
Broadcast Inner Join Test Time | 80.26 (SE +/- 0.22, N=3) | 45.29 (SE +/- 0.44, N=3) | 31.98 (SE +/- 0.26, N=9)

Row Count: 40000000, Partitions: 2000
Benchmark | 8 vCPUs | 16 vCPUs | 32 vCPUs
SHA-512 Benchmark Time | 89.23 (SE +/- 0.52, N=3) | 51.55 (SE +/- 0.08, N=3) | 39.22 (SE +/- 0.55, N=12)
Calculate Pi Benchmark | 277.83 (SE +/- 0.13, N=3) | 137.04 (SE +/- 0.17, N=3) | 69.79 (SE +/- 0.11, N=12)
Calculate Pi Benchmark Using Dataframe | 15.71 (SE +/- 0.00, N=3) | 8.34 (SE +/- 0.03, N=3) | 4.78 (SE +/- 0.02, N=12)
Group By Test Time | 45.61 (SE +/- 0.57, N=3) | 30.70 (SE +/- 0.23, N=3) | 22.84 (SE +/- 0.32, N=12)
Repartition Test Time | 66.27 (SE +/- 0.36, N=3) | 35.45 (SE +/- 0.20, N=3) | 22.22 (SE +/- 0.24, N=12)
Inner Join Test Time | 78.00 (SE +/- 1.11, N=3) | 44.39 (SE +/- 0.73, N=3) | 28.66 (SE +/- 0.19, N=12)
Broadcast Inner Join Test Time | 74.71 (SE +/- 0.33, N=3) | 42.75 (SE +/- 0.40, N=3) | 26.55 (SE +/- 0.17, N=12)
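One way to read the flat small-dataset SHA-512 numbers is through a simple serial-fraction model. The sketch below fits t(n) = serial + parallel/n to just the 8 and 32 vCPU times from the 1,000,000-row / 100-partition SHA-512 results; a two-point fit is a rough illustration, not a rigorous decomposition:

```python
# SHA-512 benchmark times (seconds) at 1M rows / 100 partitions, from above.
t = {8: 6.28, 16: 4.93, 32: 4.79}

# Amdahl-style model t(n) = serial + parallel/n, solved from the
# two endpoints (8 and 32 vCPUs).
parallel = (t[8] - t[32]) / (1 / 8 - 1 / 32)
serial = t[8] - parallel / 8
predicted_16 = serial + parallel / 16

print(f"serial part ~{serial:.2f}s, parallelizable part ~{parallel:.2f}s at n=1")
print(f"predicted 16 vCPU time: {predicted_16:.2f}s (measured: {t[16]:.2f}s)")
```

The model attributes most of the runtime to a fixed per-job cost, consistent with Spark's startup and scheduling overhead dominating on a dataset this small; the 40M-row runs scale far better.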
ASKAP
ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some earlier ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.
ASKAP 1.0 (more is better):

Test | 8 vCPUs | 16 vCPUs | 32 vCPUs
tConvolve MT - Gridding (Million Grid Points Per Second) | 2360.63 (SE +/- 5.73, N=3) | 3789.01 (SE +/- 5.95, N=3) | 4456.55 (SE +/- 35.89, N=15)
tConvolve MT - Degridding (Million Grid Points Per Second) | 2196.74 (SE +/- 7.87, N=3) | 4083.16 (SE +/- 2.61, N=3) | 5522.07 (SE +/- 80.56, N=15)
tConvolve MPI - Degridding (Mpix/sec) | 1325.06 (SE +/- 24.91, N=15) | 2585.31 (SE +/- 12.80, N=3) | 3962.08 (SE +/- 54.84, N=15)
tConvolve MPI - Gridding (Mpix/sec) | 1977.89 (SE +/- 23.42, N=15) | 3343.25 (SE +/- 32.26, N=3) | 3899.28 (SE +/- 42.99, N=15)
tConvolve OpenMP - Gridding (Million Grid Points Per Second) | 2296.10 (SE +/- 29.91, N=3) | 3631.81 (SE +/- 43.40, N=3) | 7262.74 (SE +/- 66.63, N=3)
tConvolve OpenMP - Degridding (Million Grid Points Per Second) | 2421.43 (SE +/- 33.24, N=3) | 5023.70 (SE +/- 0.00, N=3) | 9181.24 (SE +/- 0.00, N=3)
Hogbom Clean OpenMP (Iterations Per Second) | 371.30 (SE +/- 1.21, N=3) | 645.16 (SE +/- 0.00, N=3) | 996.70 (SE +/- 3.30, N=3)

1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
Graph500
This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data intensive loads and commonly tested on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.
Graph500 3.0, Scale: 26 (TEPS, more is better):

Metric | 16 vCPUs | 32 vCPUs
bfs median_TEPS | 257478000 | 477377000
bfs max_TEPS | 262563000 | 508372000
sssp median_TEPS | 70750200 | 124702000
sssp max_TEPS | 95265500 | 169542000

1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Tau T2A: 8 vCPUs: The test quit with a non-zero exit status. E: mpirun noticed that process rank 2 with PID 0 on node instance-2 exited on signal 9 (Killed). No 8 vCPU results are available for this test.
GROMACS
The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
GROMACS 2022.1, Implementation: MPI CPU, Input: water_GMX50_bare (Ns Per Day, more is better):

8 vCPUs: 0.450 (SE +/- 0.000, N=3)
16 vCPUs: 0.880 (SE +/- 0.001, N=3)
32 vCPUs: 1.718 (SE +/- 0.010, N=3)

1. (CXX) g++ options: -O3 -march=native
TensorFlow Lite
This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18, Model: SqueezeNet (Microseconds, fewer is better):

8 vCPUs: 6618.32 (SE +/- 9.96, N=3)
16 vCPUs: 3955.89 (SE +/- 11.05, N=3)
32 vCPUs: 3853.90 (SE +/- 31.57, N=8)
PostgreSQL pgbench
This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
PostgreSQL pgbench 14.0, Scaling Factor: 100, Mode: Read Only:

Clients | Metric | 8 vCPUs | 16 vCPUs | 32 vCPUs
100 | TPS (more is better) | 54237 (SE +/- 663.61, N=3) | 157894 (SE +/- 697.61, N=3) | 329539 (SE +/- 1811.74, N=3)
100 | Average Latency, ms (fewer is better) | 1.844 (SE +/- 0.023, N=3) | 0.633 (SE +/- 0.003, N=3) | 0.304 (SE +/- 0.002, N=3)
250 | TPS (more is better) | 49628 (SE +/- 588.20, N=12) | 131607 (SE +/- 1418.81, N=3) | 312239 (SE +/- 4561.68, N=12)
250 | Average Latency, ms (fewer is better) | 5.045 (SE +/- 0.060, N=12) | 1.900 (SE +/- 0.021, N=3) | 0.803 (SE +/- 0.012, N=12)

1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm
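In a closed-loop benchmark like pgbench, the reported average latency, client count, and TPS are tied together by Little's law: latency is approximately clients / TPS. A quick consistency check against the numbers above:

```python
# (clients, TPS, reported average latency in ms) from the pgbench results above.
runs = [
    (100, 54237, 1.844),   # 8 vCPUs, 100 clients
    (100, 329539, 0.304),  # 32 vCPUs, 100 clients
    (250, 49628, 5.045),   # 8 vCPUs, 250 clients
]

# Little's law for a closed system: latency_ms ~= clients / TPS * 1000.
for clients, tps, reported_ms in runs:
    derived_ms = clients / tps * 1000
    print(f"{clients} clients @ {tps} TPS -> {derived_ms:.3f} ms "
          f"(reported {reported_ms} ms)")
```

The derived and reported latencies agree to within a few microseconds, so the latency rows carry no new information beyond the TPS rows; they are the same measurement viewed two ways.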
ASTC Encoder
ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 3.2 (Seconds, fewer is better):

Preset | 8 vCPUs | 16 vCPUs | 32 vCPUs
Medium | 9.0505 (SE +/- 0.0253, N=3) | 6.9449 (SE +/- 0.0194, N=3) | 5.9825 (SE +/- 0.0035, N=3)
Thorough | 29.0505 (SE +/- 0.0316, N=3) | 14.2146 (SE +/- 0.0106, N=3) | 7.1619 (SE +/- 0.0033, N=3)
Exhaustive | 276.88 (SE +/- 3.08, N=3) | 137.62 (SE +/- 0.04, N=3) | 68.66 (SE +/- 0.08, N=3)

1. (CXX) g++ options: -O3 -march=native -flto -pthread
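The spread in scaling across presets is worth quantifying. A short sketch computing the 8-to-32 vCPU speedup per preset from the times above:

```python
# (8 vCPU time, 32 vCPU time) in seconds per preset, from the table above.
presets = {
    "Medium": (9.0505, 5.9825),
    "Thorough": (29.0505, 7.1619),
    "Exhaustive": (276.88, 68.66),
}

# Fewer-is-better metric, so speedup = t_8 / t_32.
speedup32 = {name: t8 / t32 for name, (t8, t32) in presets.items()}
for name, s in speedup32.items():
    print(f"{name}: {s:.2f}x at 32 vCPUs")
```

The heavier presets scale essentially linearly (about 4x for 4x the vCPUs) while Medium gains only about 1.5x, presumably because its short runtime leaves fixed setup costs as a larger fraction of the total.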
Stress-NG
Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Stress-NG 0.14 (Bogo Ops/s, more is better):

Test | 8 vCPUs | 16 vCPUs | 32 vCPUs
Futex | 937451.99 (SE +/- 36001.77, N=15) | 1198681.87 (SE +/- 30917.20, N=15) | 1437660.62 (SE +/- 15026.23, N=3)
CPU Cache | 436.31 (SE +/- 2.30, N=3) | 551.25 (SE +/- 2.05, N=3) | 566.91 (SE +/- 0.28, N=3)
CPU Stress | 2065.53 (SE +/- 1.28, N=3) | 4116.96 (SE +/- 2.80, N=3) | 8209.47 (SE +/- 4.23, N=3)
Matrix Math | 38215.95 (SE +/- 25.04, N=3) | 76177.56 (SE +/- 10.44, N=3) | 151792.83 (SE +/- 9.80, N=3)
Vector Math | 24633.99 (SE +/- 6.31, N=3) | 49102.30 (SE +/- 27.43, N=3) | 97749.08 (SE +/- 190.70, N=3)
System V Message Passing | 4507844.17 (SE +/- 12538.93, N=3) | 5475267.36 (SE +/- 15929.45, N=3) | 6128517.10 (SE +/- 7551.56, N=3)

1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
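Scaling varies more across these stressors than in any other suite in this comparison. A sketch computing the 8-to-32 vCPU ratio per stressor from the Bogo Ops/s figures above:

```python
# (8 vCPU, 32 vCPU) Bogo Ops/s per stressor, from the table above.
bogo = {
    "CPU Stress": (2065.53, 8209.47),
    "Matrix Math": (38215.95, 151792.83),
    "Futex": (937451.99, 1437660.62),
    "System V Message Passing": (4507844.17, 6128517.10),
}

ratios = {name: v32 / v8 for name, (v8, v32) in bogo.items()}
for name, r in ratios.items():
    print(f"{name}: {r:.2f}x from 8 to 32 vCPUs")
```

Pure compute (CPU Stress, Matrix Math) quadruples, while the contended kernel paths (futex wake-ups, System V message queues) gain only 1.3-1.5x, since they serialize on shared kernel state rather than on CPU throughput.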
GPAW
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
GPAW 22.1, Input: Carbon Nanotube (Seconds, fewer is better):

8 vCPUs: 381.20 (SE +/- 0.63, N=3)
16 vCPUs: 208.97 (SE +/- 0.03, N=3)
32 vCPUs: 130.35 (SE +/- 0.30, N=3)

1. (CC) gcc options: -shared -fwrapv -O2 -O3 -march=native -lxc -lblas -lmpi
TNN
TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.3, Target: CPU (ms, fewer is better):

Model | 8 vCPUs | 16 vCPUs | 32 vCPUs
DenseNet | 3842.12 (SE +/- 9.75, N=3; min 3619.38, max 4060.16) | 3358.73 (SE +/- 12.43, N=3; min 3163.2, max 3575.85) | 3056.90 (SE +/- 6.90, N=3; min 2928.19, max 3237.58)
MobileNet v2 | 331.34 (SE +/- 0.78, N=3; min 327.36, max 339.94) | 328.89 (SE +/- 1.36, N=3; min 322.15, max 373.8) | 322.77 (SE +/- 0.05, N=3; min 319.63, max 326.43)

1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl
Sysbench
This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
Sysbench 1.0.20, Test: CPU (Events Per Second, more is better):

8 vCPUs: 27237.28 (SE +/- 6.95, N=3)
16 vCPUs: 54317.42 (SE +/- 12.70, N=3)
32 vCPUs: 108241.61 (SE +/- 23.77, N=3)

1. (CC) gcc options: -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm
Facebook RocksDB This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
Facebook RocksDB 7.0.1 - Test: Random Read (Op/s, More Is Better)
8 vCPUs: 31055689 (SE +/- 252880.06, N = 3)
16 vCPUs: 62048967 (SE +/- 735054.27, N = 3)
32 vCPUs: 124704201 (SE +/- 376574.31, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Facebook RocksDB 7.0.1 - Test: Read While Writing (Op/s, More Is Better)
8 vCPUs: 594702 (SE +/- 9067.95, N = 15)
16 vCPUs: 1264826 (SE +/- 20446.66, N = 15)
32 vCPUs: 2610992 (SE +/- 32390.32, N = 12)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Facebook RocksDB 7.0.1 - Test: Read Random Write Random (Op/s, More Is Better)
8 vCPUs: 548353 (SE +/- 4976.00, N = 15)
16 vCPUs: 884700 (SE +/- 1701.74, N = 3)
32 vCPUs: 1321827 (SE +/- 9643.50, N = 15)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
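The three RocksDB workloads scale quite differently, which a quick pass over the Op/s figures above makes explicit:

```python
# Scaling of the three RocksDB workloads relative to 8 vCPUs,
# computed from the Op/s results reported above.
ops = {
    "Random Read":              {8: 31055689, 16: 62048967, 32: 124704201},
    "Read While Writing":       {8: 594702, 16: 1264826, 32: 2610992},
    "Read Random Write Random": {8: 548353, 16: 884700, 32: 1321827},
}
for test, r in ops.items():
    print(test, {vcpus: round(n / r[8], 2) for vcpus, n in r.items()})
```

Pure random reads scale essentially linearly (2.00x / 4.02x), read-while-writing slightly better than linearly (2.13x / 4.39x, plausibly helped by the larger memory of the bigger instances), while the mixed read/write test scales worst (1.61x / 2.41x), as writes serialize on shared state.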
Blender Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.
Blender - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
8 vCPUs: 447.71 (SE +/- 0.04, N = 3)
16 vCPUs: 226.26 (SE +/- 0.50, N = 3)
32 vCPUs: 112.47 (SE +/- 0.10, N = 3)
Blender - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
8 vCPUs: 1016.66 (SE +/- 1.99, N = 3)
16 vCPUs: 506.10 (SE +/- 0.22, N = 3)
32 vCPUs: 249.89 (SE +/- 0.07, N = 3)
Blender - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
8 vCPUs: 841.18 (SE +/- 1.74, N = 3)
16 vCPUs: 426.04 (SE +/- 0.85, N = 3)
32 vCPUs: 214.41 (SE +/- 0.42, N = 3)
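Cycles CPU rendering parallelizes across tiles, so render time should drop nearly in proportion to core count. A sketch of the speedups from the render times above (seconds, lower is better):

```python
# Render-time speedup relative to 8 vCPUs for each Blender scene,
# from the wall-clock seconds reported above.
times = {
    "BMW27":     {8: 447.71, 16: 226.26, 32: 112.47},
    "Classroom": {8: 1016.66, 16: 506.10, 32: 249.89},
    "Fishy Cat": {8: 841.18, 16: 426.04, 32: 214.41},
}
for scene, t in times.items():
    print(scene, {vcpus: round(t[8] / s, 2) for vcpus, s in t.items()})
```

All three scenes land close to 2x at 16 vCPUs and 4x at 32 vCPUs, making Blender one of the best-scaling workloads in this comparison.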
Tau T2A: 8 vCPUs testing initiated at 10 August 2022 22:02 by user michael_larabel.
Tau T2A: 16 vCPUs testing initiated at 10 August 2022 00:35 by user michael_larabel.
Tau T2A: 32 vCPUs testing initiated at 11 August 2022 21:42 by user michael_larabel.