Intel Xeon Gold 6226R testing with a Supermicro X11SPL-F v1.02 (3.1 BIOS) and ASPEED on Ubuntu 20.10 via the Phoronix Test Suite.
1:
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0x5003003
Python Notes: Python 3.8.6
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
2 3:
Processor: Intel Xeon Gold 6226R @ 3.90GHz (16 Cores / 32 Threads), Motherboard: Supermicro X11SPL-F v1.02 (3.1 BIOS), Chipset: Intel Sky Lake-E DMI3 Registers, Memory: 188GB, Disk: 280GB INTEL SSDPED1D280GA, Graphics: ASPEED, Network: 2 x Intel I210
OS: Ubuntu 20.10, Kernel: 5.11.0-rc4-max-boost-inv-patch (x86_64) 20210121, Desktop: GNOME Shell 3.38.1, Display Server: X Server 1.20.9, Compiler: GCC 10.2.0, File-System: ext4, Screen Resolution: 1024x768
Botan
Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.
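For orientation on what the per-algorithm throughput figures below represent, here is a minimal sketch of Botan's block-cipher API (assuming a Botan 2.x install linked via -lbotan-2, matching the compiler notes below; the key and block contents are placeholders):

    #include <botan/block_cipher.h>
    #include <cstdint>
    #include <memory>
    #include <vector>
    #include <iostream>

    int main() {
        // Look up an AES-256 block cipher implementation by name (Botan 2.x API).
        std::unique_ptr<Botan::BlockCipher> cipher = Botan::BlockCipher::create("AES-256");
        if (!cipher) return 1;

        // Placeholder key (maximum length for this cipher) and one data block.
        std::vector<uint8_t> key(cipher->maximum_keylength(), 0x42);
        std::vector<uint8_t> block(cipher->block_size(), 0x00);

        cipher->set_key(key.data(), key.size());
        cipher->encrypt(block.data());   // encrypt one block in place
        cipher->decrypt(block.data());   // decrypt it back

        std::cout << "block size: " << cipher->block_size() << " bytes\n";
        return 0;
    }

This only illustrates the API shape; the results below come from Botan's own throughput benchmarking over large buffers, not from this snippet.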
Botan 2.17.3 - Test: KASUMI (MiB/s, more is better): 1: 83.96, 2: 83.62, 3: 83.85 (SE +/- 0.31, N = 3)
Botan 2.17.3 - Test: KASUMI - Decrypt (MiB/s, more is better): 1: 81.21, 2: 81.25, 3: 81.31 (SE +/- 0.35, N = 3)
Botan 2.17.3 - Test: AES-256 (MiB/s, more is better): 1: 3647.61, 2: 3645.65, 3: 3633.67 (SE +/- 0.22, N = 3)
Botan 2.17.3 - Test: AES-256 - Decrypt (MiB/s, more is better): 1: 3659.31, 2: 3661.52, 3: 3651.74 (SE +/- 0.85, N = 3)
Botan 2.17.3 - Test: Twofish (MiB/s, more is better): 1: 332.67, 2: 333.14, 3: 332.45 (SE +/- 0.60, N = 3)
Botan 2.17.3 - Test: Twofish - Decrypt (MiB/s, more is better): 1: 332.02, 2: 332.57, 3: 332.24 (SE +/- 0.24, N = 3)
Botan 2.17.3 - Test: Blowfish (MiB/s, more is better): 1: 407.17, 2: 408.11, 3: 408.69 (SE +/- 0.24, N = 3)
Botan 2.17.3 - Test: Blowfish - Decrypt (MiB/s, more is better): 1: 403.44, 2: 404.45, 3: 404.02 (SE +/- 0.15, N = 3)
Botan 2.17.3 - Test: CAST-256 (MiB/s, more is better): 1: 128.32, 2: 128.92, 3: 129.01 (SE +/- 0.06, N = 3)
Botan 2.17.3 - Test: CAST-256 - Decrypt (MiB/s, more is better): 1: 128.82, 2: 129.41, 3: 129.02 (SE +/- 0.02, N = 3)
Botan 2.17.3 - Test: ChaCha20Poly1305 (MiB/s, more is better): 1: 670.06, 2: 662.07, 3: 663.90 (SE +/- 1.77, N = 3)
Botan 2.17.3 - Test: ChaCha20Poly1305 - Decrypt (MiB/s, more is better): 1: 662.73, 2: 657.53, 3: 658.13 (SE +/- 0.48, N = 3)
Compiler notes for all Botan results above: (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
Basis Universal 1.13 - Settings: UASTC Level 0 (Seconds, fewer is better): 1: 9.041, 2: 9.035, 3: 9.011 (SE +/- 0.006, N = 3)
Basis Universal 1.13 - Settings: UASTC Level 2 (Seconds, fewer is better): 1: 25.42, 2: 25.47, 3: 25.48 (SE +/- 0.04, N = 3)
Basis Universal 1.13 - Settings: UASTC Level 3 (Seconds, fewer is better): 1: 45.21, 2: 45.30, 3: 45.21 (SE +/- 0.05, N = 3)
Compiler notes for all Basis Universal results above: (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
Ngspice
Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
Ngspice 34 - Circuit: C2670 (Seconds, fewer is better): 1: 200.50, 2: 205.12, 3: 202.75 (SE +/- 1.40, N = 3)
Ngspice 34 - Circuit: C7552 (Seconds, fewer is better): 1: 171.36, 2: 171.17, 3: 173.74 (SE +/- 1.27, N = 3)
Compiler notes for all Ngspice results above: (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
ASTC Encoder
ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 2.4 - Preset: Medium (Seconds, fewer is better): 1: 6.7155, 2: 6.7135, 3: 6.7708 (SE +/- 0.0030, N = 3). (CXX) g++ options: -O3 -flto -pthread
JPEG XL
The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.
JPEG XL 0.3.3 - Input: PNG - Encode Speed: 5 (MP/s, more is better): 1: 40.73, 2: 40.79, 3: 41.09 (SE +/- 0.18, N = 3)
JPEG XL 0.3.3 - Input: PNG - Encode Speed: 7 (MP/s, more is better): 1: 7.16, 2: 7.12, 3: 7.06 (SE +/- 0.03, N = 3)
JPEG XL 0.3.3 - Input: PNG - Encode Speed: 8 (MP/s, more is better): 1: 0.63, 2: 0.64, 3: 0.63 (SE +/- 0.00, N = 3)
JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 5 (MP/s, more is better): 1: 43.46, 2: 42.36, 3: 42.76 (SE +/- 0.23, N = 3)
JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 7 (MP/s, more is better): 1: 41.56, 2: 42.04, 3: 41.91 (SE +/- 0.43, N = 3)
JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 8 (MP/s, more is better): 1: 20.75, 2: 20.30, 3: 20.69 (SE +/- 0.19, N = 3)
Compiler notes for all JPEG XL results above: (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie -ldl
JPEG XL Decoding
The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is suited for measuring JPEG XL decode performance to a PNG output file; the pts/jpexl test is for encode performance. Learn more via the OpenBenchmarking.org test page.
JPEG XL Decoding 0.3.3 - CPU Threads: 1 (MP/s, more is better): 1: 32.08, 2: 32.20, 3: 31.68 (SE +/- 0.20, N = 3)
Gcrypt Library
Libgcrypt is a general-purpose cryptographic library developed as part of the GnuPG project. This is a benchmark of libgcrypt's integrated benchmark, measuring the time to run the benchmark command with the cipher/MAC/hash repetition count set to 50, as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.
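As context for what libgcrypt's integrated benchmark exercises, a minimal use of the library's one-shot hashing API looks roughly like the following sketch (C API called from C++; the message is a placeholder and the program links against -lgcrypt):

    #include <gcrypt.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        // The version check is the documented way to trigger library initialization.
        if (!gcry_check_version(GCRYPT_VERSION)) {
            std::fprintf(stderr, "libgcrypt version mismatch\n");
            return 1;
        }
        gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);

        const char msg[] = "hello";
        unsigned char digest[32];  // SHA-256 output size

        // One-shot hash of the message buffer.
        gcry_md_hash_buffer(GCRY_MD_SHA256, digest, msg, std::strlen(msg));

        for (unsigned char b : digest) std::printf("%02x", b);
        std::printf("\n");
        return 0;
    }

The actual result below times the library's own benchmark command rather than any single call like this.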
Gcrypt Library 1.9 (Seconds, fewer is better): 1: 236.85, 2: 235.26, 3: 237.49 (SE +/- 0.71, N = 3). (CC) gcc options: -O2 -fvisibility=hidden -lgpg-error
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
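The harnesses below are benchdnn problem sets, not hand-written code; purely for orientation, a standalone use of the oneDNN 2.x C++ API (a forward ReLU eltwise primitive on the CPU engine, matching the desc-based API of the 2.1.2 release tested here) looks roughly like this sketch, where the tensor shape and data are arbitrary placeholders:

    #include <dnnl.hpp>
    #include <vector>

    int main() {
        using namespace dnnl;

        engine eng(engine::kind::cpu, 0);   // CPU engine, as in the results below
        stream strm(eng);

        // A small 1x256 f32 tensor (placeholder shape) backed by user memory.
        memory::desc md({1, 256}, memory::data_type::f32, memory::format_tag::nc);
        std::vector<float> data(256, -1.0f);
        memory mem(md, eng, data.data());

        // Describe and create a forward ReLU eltwise primitive (oneDNN 2.x API).
        eltwise_forward::desc relu_d(prop_kind::forward_inference,
                                     algorithm::eltwise_relu, md, 0.0f);
        eltwise_forward::primitive_desc relu_pd(relu_d, eng);
        eltwise_forward relu(relu_pd);

        // Execute in place on the stream and wait for completion.
        relu.execute(strm, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
        strm.wait();
        return 0;
    }

Building requires linking against the oneDNN library (typically -ldnnl).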
oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): 1: 2.55148, 2: 2.54274, 3: 2.54776 (SE +/- 0.00338, N = 3; MIN: 2.27 / 2.26 / 2.27)
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): 1: 3.42701, 2: 3.41717, 3: 3.40519 (SE +/- 0.00459, N = 3; MIN: 3.11 / 3.11 / 3.12)
oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): 1: 0.541364, 2: 0.538777, 3: 0.539335 (SE +/- 0.000426, N = 3; MIN: 0.47 / 0.47 / 0.48)
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): 1: 1.39215, 2: 1.40690, 3: 1.39261 (SE +/- 0.01012, N = 3; MIN: 1.22 / 1.21 / 1.21)
oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): 1: 5.88132, 2: 5.88065, 3: 5.85589 (SE +/- 0.01210, N = 3; MIN: 5.43 / 5.44 / 5.47)
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): 1: 2.83678, 2: 2.89756, 3: 2.88924 (SE +/- 0.00361, N = 3; MIN: 2.55 / 2.55 / 2.57)
oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): 1: 4.70048, 2: 4.68814, 3: 4.68043 (SE +/- 0.00914, N = 3; MIN: 4.25 / 4.24 / 4.27)
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): 1: 5.55575, 2: 4.50935, 3: 3.98082 (SE +/- 0.09201, N = 15; MIN: 3.26 / 3.22 / 3.27)
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): 1: 3.47238, 2: 3.40838, 3: 3.33580 (SE +/- 0.03666, N = 3; MIN: 3.17 / 3.17 / 3.17)
oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): 1: 4.61416, 2: 4.57869, 3: 4.62231 (SE +/- 0.01741, N = 3; MIN: 4.16 / 4.13 / 4.17)
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): 1: 0.581051, 2: 0.581814, 3: 0.579300 (SE +/- 0.001705, N = 3; MIN: 0.52 / 0.52 / 0.52)
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): 1: 0.906022, 2: 0.930175, 3: 0.903897 (SE +/- 0.013165, N = 3; MIN: 0.84 / 0.84 / 0.83)
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): 1: 1779.67, 2: 1774.74, 3: 1779.42 (SE +/- 5.17, N = 3; MIN: 1752.37 / 1741.55 / 1751.79)
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): 1: 1008.88, 2: 1003.28, 3: 991.16 (SE +/- 0.67, N = 3; MIN: 982.03 / 978.75 / 968.71)
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): 1: 1771.99, 2: 1785.72, 3: 1779.43 (SE +/- 1.23, N = 3; MIN: 1740.69 / 1752.23 / 1751.55)
oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): 1: 9.83488, 2: 9.84755, 3: 9.83093 (SE +/- 0.00857, N = 3; MIN: 9.08 / 9.08 / 9.11)
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): 1: 12.36, 2: 12.42, 3: 12.52 (SE +/- 0.07, N = 3; MIN: 11.45 / 11.45 / 11.45)
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): 1: 12.75, 2: 12.69, 3: 12.67 (SE +/- 0.02, N = 3; MIN: 12.41 / 12.41 / 12.37)
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): 1: 990.54, 2: 999.49, 3: 995.41 (SE +/- 4.21, N = 3; MIN: 959.7 / 966.55 / 971.26)
oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better): 1: 1.09663, 2: 1.09014, 3: 1.09422 (SE +/- 0.00360, N = 3; MIN: 0.95 / 0.94 / 0.95)
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): 1: 1764.31, 2: 1784.29, 3: 1786.55 (SE +/- 4.93, N = 3; MIN: 1737.4 / 1739.16 / 1755.21)
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): 1: 993.82, 2: 1005.65, 3: 1011.78 (SE +/- 0.52, N = 3; MIN: 968.8 / 980.08 / 985.37)
oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): 1: 0.597683, 2: 0.592321, 3: 0.597077 (SE +/- 0.002509, N = 3; MIN: 0.52 / 0.51 / 0.52)
oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): 1: 2.27034, 2: 2.26126, 3: 2.27207 (SE +/- 0.00606, N = 3; MIN: 2.01 / 1.97 / 2.01)
Compiler notes for all oneDNN results above: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Pennant 1.0.1 - Test: leblancbig (Hydro Cycle Time in Seconds, fewer is better): 1: 27.25, 2: 27.37, 3: 27.38 (SE +/- 0.01, N = 3). (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi
Xcompact3d Incompact3d
Xcompact3d Incompact3d is a Fortran/MPI-based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, fewer is better): 1: 14.08, 2: 13.83, 3: 14.12 (SE +/- 0.20, N = 3)
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds, fewer is better): 1: 52.50, 2: 52.57, 3: 52.58 (SE +/- 0.28, N = 3)
Compiler notes for all Xcompact3d Incompact3d results above: (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
Stockfish
This is a test of Stockfish, an advanced open-source C++11 chess engine that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
Stockfish 13 - Total Time (Nodes Per Second, more is better): 1: 38702741, 2: 40248549, 3: 41221711 (SE +/- 343002.27, N = 3). (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver
Zstd Compression
This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
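For reference, the kind of one-shot compression these figures relate to is exposed by libzstd's simple API; a minimal sketch (buffer contents and level are placeholders, and the benchmark itself drives the zstd tooling rather than this exact call) looks like:

    #include <zstd.h>
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        std::string input(1 << 20, 'a');                     // placeholder 1 MiB input
        size_t bound = ZSTD_compressBound(input.size());     // worst-case output size
        std::vector<char> compressed(bound);

        // One-shot compression at level 3, one of the levels benchmarked below.
        size_t csize = ZSTD_compress(compressed.data(), bound,
                                     input.data(), input.size(), 3);
        if (ZSTD_isError(csize)) {
            std::fprintf(stderr, "compress error: %s\n", ZSTD_getErrorName(csize));
            return 1;
        }

        std::vector<char> restored(input.size());
        size_t dsize = ZSTD_decompress(restored.data(), restored.size(),
                                       compressed.data(), csize);
        std::printf("compressed %zu -> %zu bytes, decompressed %zu bytes\n",
                    input.size(), csize, dsize);
        return 0;
    }

The "Long Mode" runs below additionally enable long-distance matching, which in libzstd is set through the advanced ZSTD_CCtx parameter API rather than this one-shot call.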
Zstd Compression 1.4.9 - Compression Level: 3 - Compression Speed (MB/s, more is better): 1: 2913.2, 2: 2935.0, 3: 2883.0 (SE +/- 17.00, N = 3)
Zstd Compression 1.4.9 - Compression Level: 3, Long Mode - Decompression Speed (MB/s, more is better): 1: 3303.5, 2: 3291.7, 3: 3318.7 (SE +/- 15.06, N = 3)
Zstd Compression 1.4.9 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better): 1: 3269.1, 2: 3254.0, 3: 3258.6 (SE +/- 16.76, N = 3)
Zstd Compression 1.4.9 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better): 1: 2729.2, 2: 2677.6, 3: 2721.4 (SE +/- 16.34, N = 3)
Compiler notes for all Zstd Compression results above: (CC) gcc options: -O3 -pthread -lz -llzma
AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): 1: 2.36, 2: 2.37, 3: 2.36 (SE +/- 0.01, N = 3)
AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): 1: 7.52, 2: 7.78, 3: 7.81 (SE +/- 0.01, N = 3)
AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): 1: 4.19, 2: 4.13, 3: 4.05 (SE +/- 0.05, N = 3)
AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): 1: 16.93, 2: 16.74, 3: 16.49 (SE +/- 0.17, N = 3)
AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): 1: 21.88, 2: 23.02, 3: 22.95 (SE +/- 0.09, N = 3)
AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): 1: 0.34, 2: 0.34, 3: 0.34 (SE +/- 0.00, N = 3)
AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): 1: 3.36, 2: 3.36, 3: 3.36 (SE +/- 0.00, N = 3)
AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): 1: 11.09, 2: 11.09, 3: 10.94 (SE +/- 0.01, N = 3)
AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): 1: 8.47, 2: 8.46, 3: 8.49 (SE +/- 0.01, N = 3)
AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): 1: 38.54, 2: 38.14, 3: 38.94 (SE +/- 0.41, N = 3)
AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): 1: 48.85, 2: 48.12, 3: 46.17 (SE +/- 0.69, N = 3)
Compiler notes for all AOM AV1 results above: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
SVT-VP9
This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): 1: 267.03, 2: 264.14, 3: 266.29 (SE +/- 1.43, N = 3)
SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): 1: 267.34, 2: 269.85, 3: 273.50 (SE +/- 2.98, N = 3)
SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): 1: 201.96, 2: 204.13, 3: 201.78 (SE +/- 0.48, N = 3)
Compiler notes for all SVT-VP9 results above: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
dav1d
Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
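For orientation, opening a decoder instance with dav1d's C API looks roughly like the sketch below; a real decode loop would then feed compressed OBU data with dav1d_send_data() and drain frames with dav1d_get_picture(), which is omitted here, and all settings are left at their defaults:

    #include <dav1d/dav1d.h>
    #include <cstdio>

    int main() {
        Dav1dSettings settings;
        dav1d_default_settings(&settings);   // populate default settings (thread counts etc.)

        Dav1dContext *ctx = nullptr;
        if (dav1d_open(&ctx, &settings) < 0) {
            std::fprintf(stderr, "failed to open dav1d decoder\n");
            return 1;
        }

        // ... feed OBUs via dav1d_send_data() and pull decoded frames with
        // dav1d_get_picture() here ...

        dav1d_close(&ctx);
        return 0;
    }

The benchmark itself runs the dav1d command-line tool over the sample clips named below rather than this snippet.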
dav1d 0.8.2 - Video Input: Summer Nature 4K (FPS, more is better): 1: 174.26 (MIN: 157.96 / MAX: 208.32), 2: 172.64 (MIN: 139.18 / MAX: 203.2), 3: 187.90 (MIN: 172.38 / MAX: 222.92) (SE +/- 0.57, N = 3)
dav1d 0.8.2 - Video Input: Summer Nature 1080p (FPS, more is better): 1: 269.99 (MIN: 246.57 / MAX: 294.05), 2: 271.02 (MIN: 239.91 / MAX: 295.67), 3: 317.53 (MIN: 291.94 / MAX: 371.15) (SE +/- 0.83, N = 3)
Compiler notes for all dav1d results above: (CC) gcc options: -pthread
SVT-HEVC
This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, more is better): 1: 10.41, 2: 10.38, 3: 10.43 (SE +/- 0.03, N = 3)
SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better): 1: 152.36, 2: 152.04, 3: 153.85 (SE +/- 0.36, N = 3)
SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better): 1: 307.85, 2: 311.16, 3: 312.50 (SE +/- 1.08, N = 3)
Compiler notes for all SVT-HEVC results above: (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
ViennaCL 1.7.1 - Test: CPU BLAS - sAXPY (GB/s, more is better): 1: 125, 2: 125, 3: 127 (SE +/- 0.58, N = 3)
ViennaCL 1.7.1 - Test: CPU BLAS - dCOPY (GB/s, more is better): 1: 71.7, 2: 71.7, 3: 72.3 (SE +/- 0.15, N = 3)
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-N (GB/s, more is better): 1: 99.6, 2: 99.8, 3: 99.3 (SE +/- 0.59, N = 3)
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NN (GFLOPs/s, more is better): 1: 57.3, 2: 55.7, 3: 57.1 (SE +/- 1.50, N = 3)
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NT (GFLOPs/s, more is better): 1: 56.2, 2: 55.5, 3: 50.9 (SE +/- 0.57, N = 3)
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TN (GFLOPs/s, more is better): 1: 58.2, 2: 57.7, 3: 54.8 (SE +/- 1.21, N = 3)
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TT (GFLOPs/s, more is better): 1: 59.2, 2: 57.9, 3: 54.7 (SE +/- 1.65, N = 3)
Compiler notes for all ViennaCL results above: (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
GNU Radio - Test: Signal Source (Cosine) (MiB/s, more is better): 1: 2257.7, 2: 2152.7, 3: 2193.4 (SE +/- 54.15, N = 3). GNU Radio version: 3.8.1.0
Liquid-DSP
LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
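The test parameters below reference a filter length of 57, which maps naturally onto liquid-dsp's FIR filter objects; purely as an illustration of that API (cutoff, attenuation, and input samples are placeholders, and the benchmark itself uses the library's own multi-threaded harness), a sketch looks like:

    #include <liquid/liquid.h>
    #include <complex>
    #include <vector>

    int main() {
        // Kaiser-windowed FIR filter: 57 taps (as in the test parameters below),
        // 0.25 normalized cutoff, 60 dB stop-band attenuation, no fractional delay.
        firfilt_crcf filter = firfilt_crcf_create_kaiser(57, 0.25f, 60.0f, 0.0f);

        std::vector<std::complex<float>> input(256, {1.0f, 0.0f});   // placeholder buffer
        std::vector<std::complex<float>> output(input.size());

        for (size_t i = 0; i < input.size(); ++i) {
            firfilt_crcf_push(filter, input[i]);       // shift one sample into the delay line
            firfilt_crcf_execute(filter, &output[i]);  // compute one filtered output sample
        }

        firfilt_crcf_destroy(filter);
        return 0;
    }

Building links against -lliquid, matching the compiler notes in the results below.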
Liquid-DSP 2021.01.31 - Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): 1: 49219000, 2: 48978333, 3: 49287000 (SE +/- 214667.44, N = 3)
Liquid-DSP 2021.01.31 - Threads: 2 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): 1: 88934000, 2: 90398333, 3: 89791000 (SE +/- 69848.25, N = 3)
Liquid-DSP 2021.01.31 - Threads: 4 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): 1: 177020000, 2: 178210000, 3: 178570000 (SE +/- 507181.76, N = 3)
Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): 1: 343710000, 2: 342623333, 3: 344060000 (SE +/- 416706.66, N = 3)
Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): 1: 638950000, 2: 638273333, 3: 636370000 (SE +/- 451897.24, N = 3)
Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): 1: 728810000, 2: 728556667, 3: 728240000 (SE +/- 131698.31, N = 3)
Compiler notes for all Liquid-DSP results above: (CC) gcc options: -O3 -pthread -lm -lc -lliquid
LuaRadio
LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.
LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters (MiB/s, more is better): 1: 857.6, 2: 853.9, 3: 851.8 (SE +/- 2.65, N = 3)
srsLTE
srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.
srsLTE 20.10.1 - Test: OFDM_Test (Samples / Second, more is better): 1: 112800000, 2: 113866667, 3: 114000000 (SE +/- 348010.22, N = 3)
srsLTE 20.10.1 - Test: PHY_DL_Test (eNb Mb/s, more is better): 1: 190.6, 2: 190.4, 3: 190.2 (SE +/- 0.29, N = 3)
srsLTE 20.10.1 - Test: PHY_DL_Test (UE Mb/s, more is better): 1: 78.3, 2: 77.8, 3: 77.7 (SE +/- 0.10, N = 3)
Compiler notes for all srsLTE results above: (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f
simdjson
This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
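For orientation, the parsing throughput measured below corresponds to simdjson's DOM interface; a minimal sketch against the 0.8-era API (assuming the single-header amalgamation is available and exceptions are enabled; the JSON string is a placeholder) looks like:

    #include "simdjson.h"
    #include <cstdint>
    #include <iostream>
    #include <string>

    int main() {
        simdjson::dom::parser parser;
        std::string json = R"({"name": "benchmark", "runs": 3})";

        // Parse from an in-memory buffer; parser.load("file.json") reads from disk instead.
        simdjson::dom::element doc = parser.parse(json.data(), json.size());

        // Field access and the numeric conversion throw simdjson::simdjson_error on
        // missing keys or type mismatches.
        int64_t runs = doc["runs"];
        std::cout << "runs = " << runs << std::endl;
        return 0;
    }

The throughput tests named below (Kostya, LargeRandom, PartialTweets, DistinctUserID) run simdjson's own benchmark corpus rather than this snippet.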
simdjson 0.8.2 - Throughput Test: Kostya (GB/s, more is better): 1: 2.34, 2: 2.32, 3: 2.37 (SE +/- 0.01, N = 3)
simdjson 0.8.2 - Throughput Test: LargeRandom (GB/s, more is better): 1: 0.83, 2: 0.83, 3: 0.83 (SE +/- 0.00, N = 3)
simdjson 0.8.2 - Throughput Test: PartialTweets (GB/s, more is better): 1: 3.38, 2: 3.38, 3: 3.38 (SE +/- 0.00, N = 3)
simdjson 0.8.2 - Throughput Test: DistinctUserID (GB/s, more is better): 1: 4.03, 2: 4.03, 3: 4.04 (SE +/- 0.00, N = 3)
Compiler notes for all simdjson results above: (CXX) g++ options: -O3 -pthread
1:
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0x5003003
Python Notes: Python 3.8.6
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
Testing initiated at 27 March 2021 18:28 by user pts.
2:
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0x5003003
Python Notes: Python 3.8.6
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
Testing initiated at 27 March 2021 20:44 by user pts.
3:
Processor: Intel Xeon Gold 6226R @ 3.90GHz (16 Cores / 32 Threads), Motherboard: Supermicro X11SPL-F v1.02 (3.1 BIOS), Chipset: Intel Sky Lake-E DMI3 Registers, Memory: 188GB, Disk: 280GB INTEL SSDPED1D280GA, Graphics: ASPEED, Network: 2 x Intel I210
OS: Ubuntu 20.10, Kernel: 5.11.0-rc4-max-boost-inv-patch (x86_64) 20210121, Desktop: GNOME Shell 3.38.1, Display Server: X Server 1.20.9, Compiler: GCC 10.2.0, File-System: ext4, Screen Resolution: 1024x768
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0x5003003
Python Notes: Python 3.8.6
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
Testing initiated at 28 March 2021 05:11 by user pts.