Tau T2A: 8 vCPUs
Processor: ARMv8 Neoverse-N1 (8 Cores), Motherboard: KVM Google Compute Engine, Memory: 32GB, Disk: 215GB nvme_card-pd, Network: Google Compute Engine Virtual
OS: Ubuntu 22.04, Kernel: 5.15.0-1013-gcp (aarch64), Compiler: GCC 12.0.1 20220319, File-System: ext4, System Layer: KVM
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Tau T2A: 16 vCPUs
Changed Processor to ARMv8 Neoverse-N1 (16 Cores) and Memory to 64GB.
Tau T2A: 32 vCPUs
Processor: ARMv8 Neoverse-N1 (32 Cores), Motherboard: KVM Google Compute Engine, Memory: 128GB, Disk: 215GB nvme_card-pd, Network: Google Compute Engine Virtual
OS: Ubuntu 22.04, Kernel: 5.15.0-1016-gcp (aarch64), Compiler: GCC 12.0.1 20220319, File-System: ext4, System Layer: KVM
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Graph500
This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data-intensive loads and commonly run on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.
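For context, the reference implementation is an MPI program; a hedged sketch of a hand-run invocation (the binary name is illustrative and varies by build of the reference code):

```shell
# Scale 26 means 2^26 vertices; memory demand grows quickly with scale,
# which is consistent with the 8 vCPU / 32GB runs below being killed on
# signal 9. Binary name is illustrative for the reference build.
mpirun -np "$(nproc)" ./graph500_reference_bfs_sssp 26
```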
Graph500 3.0, Scale: 26 — bfs max_TEPS (more is better): 32 vCPUs: 508,372,000; 16 vCPUs: 262,563,000
Graph500 3.0, Scale: 26 — bfs median_TEPS (more is better): 32 vCPUs: 477,377,000; 16 vCPUs: 257,478,000
Tau T2A: 8 vCPUs: The test quit with a non-zero exit status. E: mpirun noticed that process rank 2 with PID 0 on node instance-2 exited on signal 9 (Killed).
Compiled with: (CC) gcc -fcommon -O3 -march=native -lpthread -lm -lmpi
Stress-NG
Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
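A rough stand-alone equivalent of the stressors below ("0" workers means one per online CPU); the test profile drives these through the Phoronix Test Suite with its own durations:

```shell
# Stand-alone stress-ng invocations matching the tests reported below.
stress-ng --cpu 0 --metrics-brief --timeout 60s      # CPU Stress
stress-ng --matrix 0 --metrics-brief --timeout 60s   # Matrix Math
stress-ng --vecmath 0 --metrics-brief --timeout 60s  # Vector Math
stress-ng --futex 0 --metrics-brief --timeout 60s    # Futex
```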
Stress-NG 0.14 (Bogo Ops/s, more is better):
Test: Futex — 32 vCPUs: 1,437,660.62 ±15,026.23 (N=3); 16 vCPUs: 1,198,681.87 ±30,917.20 (N=15); 8 vCPUs: 937,451.99 ±36,001.77 (N=15)
Test: CPU Cache — 32 vCPUs: 566.91 ±0.28 (N=3); 16 vCPUs: 551.25 ±2.05 (N=3); 8 vCPUs: 436.31 ±2.30 (N=3)
Test: CPU Stress — 32 vCPUs: 8,209.47 ±4.23 (N=3); 16 vCPUs: 4,116.96 ±2.80 (N=3); 8 vCPUs: 2,065.53 ±1.28 (N=3)
Test: Matrix Math — 32 vCPUs: 151,792.83 ±9.80 (N=3); 16 vCPUs: 76,177.56 ±10.44 (N=3); 8 vCPUs: 38,215.95 ±25.04 (N=3)
Test: Vector Math — 32 vCPUs: 97,749.08 ±190.70 (N=3); 16 vCPUs: 49,102.30 ±27.43 (N=3); 8 vCPUs: 24,633.99 ±6.31 (N=3)
Test: System V Message Passing — 32 vCPUs: 6,128,517.10 ±7,551.56 (N=3); 16 vCPUs: 5,475,267.36 ±15,929.45 (N=3); 8 vCPUs: 4,507,844.17 ±12,538.93 (N=3)
Compiled with: (CC) gcc -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenSSL
OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
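The equivalent hand-run invocations look roughly like this: -multi spawns parallel worker processes, and -evp selects the EVP (hardware-accelerated) code path for SHA-256:

```shell
# "openssl speed" built-in benchmark, as wrapped by this test profile.
openssl speed -evp sha256 -multi "$(nproc)"
openssl speed rsa4096 -multi "$(nproc)"   # reports sign/s and verify/s
```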
OpenSSL 3.0, Algorithm: SHA256 (byte/s, more is better): 32 vCPUs: 25,788,919,913 ±119,493,320.18 (N=3); 16 vCPUs: 12,926,411,527 ±19,283,388.31 (N=3); 8 vCPUs: 6,456,083,507 ±19,026,629.44 (N=3). Compiled with: (CC) gcc -pthread -O3 -march=native -lssl -lcrypto -ldl
Sysbench
This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
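A stand-alone equivalent of the CPU sub-test, where each event is a prime-number computation up to --cpu-max-prime (the duration here is illustrative):

```shell
# Sysbench CPU sub-test across all vCPUs for 30 seconds.
sysbench cpu --threads="$(nproc)" --time=30 run
```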
Sysbench 1.0.20, Test: CPU (Events Per Second, more is better): 32 vCPUs: 108,241.61 ±23.77 (N=3); 16 vCPUs: 54,317.42 ±12.70 (N=3); 8 vCPUs: 27,237.28 ±6.95 (N=3). Compiled with: (CC) gcc -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm
VP9 libvpx Encoding
This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.
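A sketch of a comparable vpxenc encode at speed 5 (input/output file names are placeholders for the Bosphorus source clip):

```shell
# VP9 encode with vpxenc; --cpu-used trades quality for speed.
vpxenc --codec=vp9 --cpu-used=5 --threads="$(nproc)" \
  -o bosphorus_4k.webm bosphorus_4k.y4m
```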
VP9 libvpx Encoding 1.10.0 (Frames Per Second, more is better):
Speed 5, Input: Bosphorus 4K — 32 vCPUs: 6.99 ±0.02 (N=3); 16 vCPUs: 6.68 ±0.01 (N=3); 8 vCPUs: 6.11 ±0.01 (N=3)
Speed 0, Input: Bosphorus 1080p — 32 vCPUs: 4.99 ±0.01 (N=3); 16 vCPUs: 4.84 ±0.01 (N=3); 8 vCPUs: 4.65 ±0.01 (N=3)
Speed 5, Input: Bosphorus 1080p — 32 vCPUs: 12.11 ±0.01 (N=3); 16 vCPUs: 11.78 ±0.02 (N=3); 8 vCPUs: 11.27 ±0.01 (N=3)
Compiled with: (CXX) g++ -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11
ASKAP
ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.
ASKAP 1.0, Test: Hogbom Clean OpenMP (Iterations Per Second, more is better): 32 vCPUs: 996.70 ±3.30 (N=3); 16 vCPUs: 645.16 ±0.00 (N=3); 8 vCPUs: 371.30 ±1.21 (N=3). Compiled with: (CXX) g++ -O3 -fstrict-aliasing -fopenmp
Aircrack-ng
Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.
Aircrack-ng 1.7 (k/s, more is better): 32 vCPUs: 33,647.55 ±287.54 (N=15); 16 vCPUs: 16,697.92 ±192.97 (N=15); 8 vCPUs: 8,308.58 ±103.85 (N=15). Compiled with: (CXX) g++ -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lpcre -lnl-3 -lnl-genl-3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread
ASKAP (described above)
ASKAP 1.0 (Million Grid Points Per Second unless noted, more is better):
Test: tConvolve MT - Gridding — 32 vCPUs: 4,456.55 ±35.89 (N=15); 16 vCPUs: 3,789.01 ±5.95 (N=3); 8 vCPUs: 2,360.63 ±5.73 (N=3)
Test: tConvolve MT - Degridding — 32 vCPUs: 5,522.07 ±80.56 (N=15); 16 vCPUs: 4,083.16 ±2.61 (N=3); 8 vCPUs: 2,196.74 ±7.87 (N=3)
Test: tConvolve OpenMP - Gridding — 32 vCPUs: 7,262.74 ±66.63 (N=3); 16 vCPUs: 3,631.81 ±43.40 (N=3); 8 vCPUs: 2,296.10 ±29.91 (N=3)
Test: tConvolve OpenMP - Degridding — 32 vCPUs: 9,181.24 ±0.00 (N=3); 16 vCPUs: 5,023.70 ±0.00 (N=3); 8 vCPUs: 2,421.43 ±33.24 (N=3)
Test: tConvolve MPI - Degridding (Mpix/sec) — 32 vCPUs: 3,962.08 ±54.84 (N=15); 16 vCPUs: 2,585.31 ±12.80 (N=3); 8 vCPUs: 1,325.06 ±24.91 (N=15)
Test: tConvolve MPI - Gridding (Mpix/sec) — 32 vCPUs: 3,899.28 ±42.99 (N=15); 16 vCPUs: 3,343.25 ±32.26 (N=3); 8 vCPUs: 1,977.89 ±23.42 (N=15)
Compiled with: (CXX) g++ -O3 -fstrict-aliasing -fopenmp
GROMACS
GROMACS (GROningen MAchine for Chemical Simulations) is a molecular dynamics package, tested here with the water_GMX50 data set. This test profile allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
GROMACS 2022.1, Implementation: MPI CPU, Input: water_GMX50_bare (Ns Per Day, more is better): 32 vCPUs: 1.718 ±0.010 (N=3); 16 vCPUs: 0.880 ±0.001 (N=3); 8 vCPUs: 0.450 ±0.000 (N=3). Compiled with: (CXX) g++ -O3 -march=native
Facebook RocksDB
This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage, based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
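The profile drives RocksDB's bundled db_bench tool; a hand-run sketch of the Random Read case (the key count and other tuning here are illustrative, not the profile's exact settings):

```shell
# db_bench readrandom across all vCPUs; --num sets the key count.
./db_bench --benchmarks=readrandom --threads="$(nproc)" --num=1000000
```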
Facebook RocksDB 7.0.1 (Op/s, more is better):
Test: Random Read — 32 vCPUs: 124,704,201 ±376,574.31 (N=3); 16 vCPUs: 62,048,967 ±735,054.27 (N=3); 8 vCPUs: 31,055,689 ±252,880.06 (N=3)
Test: Read While Writing — 32 vCPUs: 2,610,992 ±32,390.32 (N=12); 16 vCPUs: 1,264,826 ±20,446.66 (N=15); 8 vCPUs: 594,702 ±9,067.95 (N=15)
Test: Read Random Write Random — 32 vCPUs: 1,321,827 ±9,643.50 (N=15); 16 vCPUs: 884,700 ±1,701.74 (N=3); 8 vCPUs: 548,353 ±4,976.00 (N=15)
Compiled with: (CXX) g++ -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenSSL (described above)
OpenSSL 3.0, Algorithm: RSA4096 (sign/s, more is better): 32 vCPUs: 1,570.2 ±0.06 (N=3); 16 vCPUs: 786.7 ±0.06 (N=3); 8 vCPUs: 393.7 ±0.07 (N=3). Compiled with: (CC) gcc -pthread -O3 -march=native -lssl -lcrypto -ldl
Graph500 (described above)
Graph500 3.0, Scale: 26 — sssp max_TEPS (more is better): 32 vCPUs: 169,542,000; 16 vCPUs: 95,265,500
Graph500 3.0, Scale: 26 — sssp median_TEPS (more is better): 32 vCPUs: 124,702,000; 16 vCPUs: 70,750,200
Tau T2A: 8 vCPUs: The test quit with a non-zero exit status. E: mpirun noticed that process rank 2 with PID 0 on node instance-2 exited on signal 9 (Killed).
Compiled with: (CC) gcc -fcommon -O3 -march=native -lpthread -lm -lmpi
NAS Parallel Benchmarks
NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.
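For reference, the MPI build of NPB is compiled per kernel and class and then launched with mpirun; a sketch under the NPB 3.4 source layout (paths may vary by setup):

```shell
# Build and run the BT kernel at Class C with the NPB 3.4 MPI suite
# (run from the NPB3.4-MPI directory; binary naming follows NPB convention).
make bt CLASS=C
mpirun -np 32 bin/bt.C.x
```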
NAS Parallel Benchmarks 3.4 (Total Mop/s, more is better):
Test / Class: BT.C — 32 vCPUs: 69,530.64 ±272.46 (N=3); 16 vCPUs: 49,125.93 ±18.18 (N=3); 8 vCPUs: 14,368.29 ±23.11 (N=3)
Test / Class: CG.C — 32 vCPUs: 21,433.92 ±35.67 (N=3); 16 vCPUs: 12,171.95 ±171.15 (N=3); 8 vCPUs: 6,855.81 ±49.63 (N=15)
Test / Class: EP.D — 32 vCPUs: 3,265.68 ±2.04 (N=3); 16 vCPUs: 1,634.99 ±1.03 (N=3); 8 vCPUs: 820.94 ±0.56 (N=3)
Test / Class: FT.C — 32 vCPUs: 52,309.81 ±41.18 (N=3); 16 vCPUs: 32,644.85 ±300.01 (N=3); 8 vCPUs: 18,574.23 ±15.96 (N=3)
Test / Class: IS.D — 32 vCPUs: 1,822.77 ±0.86 (N=3); 16 vCPUs: 1,498.45 ±14.70 (N=3); 8 vCPUs: 1,104.26 ±1.14 (N=3)
Test / Class: LU.C — 32 vCPUs: 87,702.30 ±137.48 (N=3); 16 vCPUs: 55,447.31 ±701.24 (N=3); 8 vCPUs: 32,029.14 ±50.76 (N=3)
Test / Class: MG.C — 32 vCPUs: 50,939.05 ±31.40 (N=3); 16 vCPUs: 33,309.76 ±102.49 (N=3); 8 vCPUs: 27,703.33 ±46.23 (N=3)
Test / Class: SP.B — 32 vCPUs: 34,381.91 ±38.20 (N=3); 16 vCPUs: 19,552.45 ±244.58 (N=3); 8 vCPUs: 7,338.98 ±17.11 (N=3)
Test / Class: SP.C — 32 vCPUs: 26,843.58 ±31.60 (N=3); 16 vCPUs: 19,710.90 ±112.56 (N=3); 8 vCPUs: 7,115.28 ±39.91 (N=3)
Compiled with: (F9X) gfortran -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
PostgreSQL pgbench
This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.
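A hand-run equivalent of the read-only cases below: initialize at scale factor 100, then run select-only (-S) with 100 clients (database name and duration are illustrative):

```shell
# Initialize and run a read-only pgbench workload.
createdb benchdb
pgbench -i -s 100 benchdb
pgbench -c 100 -j "$(nproc)" -S -T 60 benchdb
```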
PostgreSQL pgbench 14.0 (TPS, more is better):
Scaling Factor: 100, Clients: 100, Mode: Read Only — 32 vCPUs: 329,539 ±1,811.74 (N=3); 16 vCPUs: 157,894 ±697.61 (N=3); 8 vCPUs: 54,237 ±663.61 (N=3)
Scaling Factor: 100, Clients: 250, Mode: Read Only — 32 vCPUs: 312,239 ±4,561.68 (N=12); 16 vCPUs: 131,607 ±1,418.81 (N=3); 8 vCPUs: 49,628 ±588.20 (N=12)
Compiled with: (CC) gcc -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm
OpenSSL (described above)
OpenSSL 3.0, Algorithm: RSA4096 (verify/s, more is better): 32 vCPUs: 128,273.1 ±29.86 (N=3); 16 vCPUs: 64,247.0 ±8.35 (N=3); 8 vCPUs: 32,136.8 ±10.26 (N=3). Compiled with: (CC) gcc -pthread -O3 -march=native -lssl -lcrypto -ldl
TensorFlow Lite
This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
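TensorFlow Lite ships a benchmark_model tool that reports the same kind of average-inference-time figure; a hedged sketch (binary and model paths are illustrative, and flag support varies by TFLite version):

```shell
# benchmark_model comes from TensorFlow Lite's tools/benchmark; paths are
# illustrative. --num_threads pins the CPU thread count.
./benchmark_model --graph=squeezenet.tflite --num_threads="$(nproc)"
```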
TensorFlow Lite 2022-05-18, Model: SqueezeNet (Microseconds, fewer is better): 32 vCPUs: 3,853.90 ±31.57 (N=8); 16 vCPUs: 3,955.89 ±11.05 (N=3); 8 vCPUs: 6,618.32 ±9.96 (N=3)
Renaissance
Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
Renaissance 0.14 (ms, fewer is better):
Test: Apache Spark Bayes — 32 vCPUs: 766.4 ±9.73 (N=3; min 495.95 / max 1,178.88); 16 vCPUs: 1,262.0 ±7.45 (N=3; min 877.37 / max 1,398.23); 8 vCPUs: 2,249.8 ±42.77 (N=15; min 1,478.18 / max 2,434.18)
Test: Savina Reactors.IO — 32 vCPUs: 10,705.9 ±131.70 (N=4; min 10,505.49 / max 14,847.21); 16 vCPUs: 15,981.5 ±583.25 (N=12; min 12,776.53 / max 36,273.51); 8 vCPUs: 26,456.4 ±435.60 (N=9; min 13,667.16 / max 42,318.14)
PostgreSQL pgbench (described above)
PostgreSQL pgbench 14.0, Average Latency (ms, fewer is better):
Scaling Factor: 100, Clients: 100, Mode: Read Only — 32 vCPUs: 0.304 ±0.002 (N=3); 16 vCPUs: 0.633 ±0.003 (N=3); 8 vCPUs: 1.844 ±0.023 (N=3)
Scaling Factor: 100, Clients: 250, Mode: Read Only — 32 vCPUs: 0.803 ±0.012 (N=12); 16 vCPUs: 1.900 ±0.021 (N=3); 8 vCPUs: 5.045 ±0.060 (N=12)
Compiled with: (CC) gcc -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm
TNN
TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.3, Target: CPU (ms, fewer is better):
Model: DenseNet — 32 vCPUs: 3,056.90 ±6.90 (N=3; min 2,928.19 / max 3,237.58); 16 vCPUs: 3,358.73 ±12.43 (N=3; min 3,163.2 / max 3,575.85); 8 vCPUs: 3,842.12 ±9.75 (N=3; min 3,619.38 / max 4,060.16)
Model: MobileNet v2 — 32 vCPUs: 322.77 ±0.05 (N=3; min 319.63 / max 326.43); 16 vCPUs: 328.89 ±1.36 (N=3; min 322.15 / max 373.8); 8 vCPUs: 331.34 ±0.78 (N=3; min 327.36 / max 339.94)
Compiled with: (CXX) g++ -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl
OpenFOAM
OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics, or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenFOAM 9, Input: drivaerFastback, Medium Mesh Size (Seconds, fewer is better):
Mesh Time — 32 vCPUs: 206.40; 16 vCPUs: 303.71; 8 vCPUs: 425.95
Execution Time — 32 vCPUs: 994.53; 16 vCPUs: 1,534.72; 8 vCPUs: 2,426.16
Linked with: -lfoamToVTK -ldynamicMesh -llagrangian -lfileFormats -ltransportModels -lspecie -lfiniteVolume -lfvModels -lmeshTools -lsampling. Compiled with: (CXX) g++ -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lgenericPatchFields -lOpenFOAM -ldl -lm
libavif avifenc 0.10 (Seconds, fewer is better):
Encoder Speed: 2 — 32 vCPUs: 169.64 ±0.13 (N=3); 16 vCPUs: 194.77 ±0.47 (N=3); 8 vCPUs: 245.60 ±0.32 (N=3)
Encoder Speed: 6 — 32 vCPUs: 6.682 ±0.020 (N=3); 16 vCPUs: 11.168 ±0.037 (N=3); 8 vCPUs: 20.273 ±0.130 (N=3)
Encoder Speed: 6, Lossless — 32 vCPUs: 10.34 ±0.00 (N=3); 16 vCPUs: 14.70 ±0.19 (N=3); 8 vCPUs: 23.31 ±0.11 (N=3)
Encoder Speed: 10, Lossless — 32 vCPUs: 6.775 ±0.072 (N=3); 16 vCPUs: 7.658 ±0.065 (N=8); 8 vCPUs: 9.997 ±0.096 (N=3)
Compiled with: (CXX) g++ -O3 -fPIC -march=native -lm
Apache Spark
This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
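A rough hand-run sketch of driving the pyspark-benchmark scripts with spark-submit in local mode; the script names come from the linked repository, but the argument layout here is illustrative and should be checked against the checkout:

```shell
# Generate test data (row count / partitions mirror the cases below),
# then run one of the benchmark scripts. Arguments are illustrative.
spark-submit --master 'local[*]' generate-data.py /tmp/spark-data -r 1000000 -p 100
spark-submit --master 'local[*]' benchmark-shuffle.py /tmp/spark-data
```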
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time 32 vCPUs 16 vCPUs 8 vCPUs 2 4 6 8 10 SE +/- 0.11, N = 15 SE +/- 0.05, N = 12 SE +/- 0.03, N = 3 4.79 4.93 6.28
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark 32 vCPUs 16 vCPUs 8 vCPUs 60 120 180 240 300 SE +/- 0.06, N = 15 SE +/- 0.11, N = 12 SE +/- 0.17, N = 3 69.77 137.76 277.89
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe 32 vCPUs 16 vCPUs 8 vCPUs 4 8 12 16 20 SE +/- 0.01, N = 15 SE +/- 0.02, N = 12 SE +/- 0.02, N = 3 4.79 8.39 15.92
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Repartition Test Time 32 vCPUs 16 vCPUs 8 vCPUs 1.0305 2.061 3.0915 4.122 5.1525 SE +/- 0.03, N = 15 SE +/- 0.03, N = 12 SE +/- 0.02, N = 3 2.01 2.55 4.58
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Inner Join Test Time 32 vCPUs 16 vCPUs 8 vCPUs 0.783 1.566 2.349 3.132 3.915 SE +/- 0.02, N = 15 SE +/- 0.03, N = 12 SE +/- 0.04, N = 3 2.13 2.22 3.48
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time 32 vCPUs 16 vCPUs 8 vCPUs 0.6435 1.287 1.9305 2.574 3.2175 SE +/- 0.03, N = 15 SE +/- 0.04, N = 12 SE +/- 0.03, N = 3 1.68 1.79 2.86
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time 32 vCPUs 16 vCPUs 8 vCPUs 2 4 6 8 10 SE +/- 0.04, N = 15 SE +/- 0.05, N = 3 SE +/- 0.08, N = 3 4.96 5.91 8.09
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark 32 vCPUs 16 vCPUs 8 vCPUs 60 120 180 240 300 SE +/- 0.06, N = 15 SE +/- 0.20, N = 3 SE +/- 0.35, N = 3 69.92 137.17 278.47
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe 32 vCPUs 16 vCPUs 8 vCPUs 4 8 12 16 20 SE +/- 0.01, N = 15 SE +/- 0.03, N = 3 SE +/- 0.07, N = 3 4.80 8.29 15.81
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - Group By Test Time 32 vCPUs 16 vCPUs 8 vCPUs 2 4 6 8 10 SE +/- 0.05, N = 15 SE +/- 0.10, N = 3 SE +/- 0.13, N = 3 6.72 7.43 8.85
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - Repartition Test Time 32 vCPUs 16 vCPUs 8 vCPUs 1.2758 2.5516 3.8274 5.1032 6.379 SE +/- 0.03, N = 15 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 2.60 3.36 5.67
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time 32 vCPUs 16 vCPUs 8 vCPUs 1.3455 2.691 4.0365 5.382 6.7275 SE +/- 0.04, N = 15 SE +/- 0.11, N = 3 SE +/- 0.09, N = 3 2.87 3.65 5.98
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time 32 vCPUs 16 vCPUs 8 vCPUs 1.1655 2.331 3.4965 4.662 5.8275 SE +/- 0.02, N = 15 SE +/- 0.05, N = 3 SE +/- 0.07, N = 3 2.12 2.65 5.18
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time 32 vCPUs 16 vCPUs 8 vCPUs 20 40 60 80 100 SE +/- 0.45, N = 9 SE +/- 0.22, N = 3 SE +/- 0.85, N = 3 46.30 51.40 93.67
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark 32 vCPUs 16 vCPUs 8 vCPUs 60 120 180 240 300 SE +/- 0.08, N = 9 SE +/- 0.06, N = 3 SE +/- 0.14, N = 3 69.57 136.85 278.37
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe 32 vCPUs 16 vCPUs 8 vCPUs 4 8 12 16 20 SE +/- 0.01, N = 9 SE +/- 0.01, N = 3 SE +/- 0.05, N = 3 4.76 8.40 15.65
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - Group By Test Time 32 vCPUs 16 vCPUs 8 vCPUs 11 22 33 44 55 SE +/- 0.16, N = 9 SE +/- 0.24, N = 3 SE +/- 0.53, N = 3 27.64 35.64 50.87
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - Repartition Test Time 32 vCPUs 16 vCPUs 8 vCPUs 15 30 45 60 75 SE +/- 0.12, N = 9 SE +/- 0.92, N = 3 SE +/- 0.25, N = 3 24.36 37.22 68.68
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - Inner Join Test Time 32 vCPUs 16 vCPUs 8 vCPUs 20 40 60 80 100 SE +/- 0.44, N = 9 SE +/- 0.72, N = 3 SE +/- 0.19, N = 3 30.32 44.62 80.02
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time 32 vCPUs 16 vCPUs 8 vCPUs 20 40 60 80 100 SE +/- 0.26, N = 9 SE +/- 0.44, N = 3 SE +/- 0.22, N = 3 31.98 45.29 80.26
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 (OpenBenchmarking.org; Seconds, Fewer Is Better; SE over N = 12 runs for 32 vCPUs, N = 3 otherwise)

Test                                      32 vCPUs        16 vCPUs        8 vCPUs
SHA-512 Benchmark Time                    39.22 ±0.55     51.55 ±0.08     89.23 ±0.52
Calculate Pi Benchmark                    69.79 ±0.11    137.04 ±0.17    277.83 ±0.13
Calculate Pi Benchmark Using Dataframe     4.78 ±0.02      8.34 ±0.03     15.71 ±0.00
Group By Test Time                        22.84 ±0.32     30.70 ±0.23     45.61 ±0.57
Repartition Test Time                     22.22 ±0.24     35.45 ±0.20     66.27 ±0.36
Inner Join Test Time                      28.66 ±0.19     44.39 ±0.73     78.00 ±1.11
Broadcast Inner Join Test Time            26.55 ±0.17     42.75 ±0.40     74.71 ±0.33
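The CPU-bound Calculate Pi results scale almost perfectly with vCPU count. A quick plain-Python sketch, using the times copied from the Partitions: 100 results above, makes the scaling efficiency explicit:

```python
# Parallel-scaling check for the Calculate Pi Benchmark (Partitions: 100).
# Times in seconds, keyed by vCPU count, copied from the results above.
times = {8: 278.37, 16: 136.85, 32: 69.57}

def scaling_efficiency(base_vcpus, vcpus):
    """Speedup relative to the base shape, divided by the core ratio."""
    speedup = times[base_vcpus] / times[vcpus]
    return speedup / (vcpus / base_vcpus)

for v in (16, 32):
    speedup = times[8] / times[v]
    print(f"{v} vCPUs: {speedup:.2f}x speedup, "
          f"{scaling_efficiency(8, v):.0%} efficiency")
```

Going from 8 to 32 vCPUs yields roughly a 4x speedup on this test, i.e. near-100% scaling efficiency, as expected for an embarrassingly parallel Pi calculation.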
ASTC Encoder ASTC Encoder (astcenc) is the reference encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 3.2 (OpenBenchmarking.org; Seconds, Fewer Is Better; SE over N = 3 runs)

Preset       32 vCPUs         16 vCPUs          8 vCPUs
Medium       5.9825 ±0.0035    6.9449 ±0.0194    9.0505 ±0.0253
Thorough     7.1619 ±0.0033   14.2146 ±0.0106   29.0505 ±0.0316
Exhaustive  68.66   ±0.08    137.62   ±0.04    276.88   ±3.08

1. (CXX) g++ options: -O3 -march=native -flto -pthread
GPAW GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
GPAW 22.1 - Input: Carbon Nanotube (OpenBenchmarking.org; Seconds, Fewer Is Better; SE over N = 3 runs)

32 vCPUs: 130.35 ±0.30    16 vCPUs: 208.97 ±0.03    8 vCPUs: 381.20 ±0.63

1. (CC) gcc options: -shared -fwrapv -O2 -O3 -march=native -lxc -lblas -lmpi
Blender Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.
Blender - Compute: CPU-Only (OpenBenchmarking.org; Seconds, Fewer Is Better; SE over N = 3 runs)

Blend File   32 vCPUs        16 vCPUs        8 vCPUs
BMW27        112.47 ±0.10    226.26 ±0.50     447.71 ±0.04
Classroom    249.89 ±0.07    506.10 ±0.22    1016.66 ±1.99
Fishy Cat    214.41 ±0.42    426.04 ±0.85     841.18 ±1.74
Tau T2A: 8 vCPUs Processor: ARMv8 Neoverse-N1 (8 Cores), Motherboard: KVM Google Compute Engine, Memory: 32GB, Disk: 215GB nvme_card-pd, Network: Google Compute Engine Virtual
OS: Ubuntu 22.04, Kernel: 5.15.0-1013-gcp (aarch64), Compiler: GCC 12.0.1 20220319, File-System: ext4, System Layer: KVM
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 10 August 2022 22:02 by user michael_larabel.
Tau T2A: 16 vCPUs Processor: ARMv8 Neoverse-N1 (16 Cores), Motherboard: KVM Google Compute Engine, Memory: 64GB, Disk: 215GB nvme_card-pd, Network: Google Compute Engine Virtual
OS: Ubuntu 22.04, Kernel: 5.15.0-1013-gcp (aarch64), Compiler: GCC 12.0.1 20220319, File-System: ext4, System Layer: KVM
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 10 August 2022 00:35 by user michael_larabel.
Tau T2A: 32 vCPUs Processor: ARMv8 Neoverse-N1 (32 Cores), Motherboard: KVM Google Compute Engine, Memory: 128GB, Disk: 215GB nvme_card-pd, Network: Google Compute Engine Virtual
OS: Ubuntu 22.04, Kernel: 5.15.0-1016-gcp (aarch64), Compiler: GCC 12.0.1 20220319, File-System: ext4, System Layer: KVM
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 11 August 2022 21:42 by user michael_larabel.