Intel Xeon Gold 6226R testing with a Supermicro X11SPL-F v1.02 (3.1 BIOS) and ASPEED on Ubuntu 20.10 via the Phoronix Test Suite.
Run 1:
  Kernel Notes: Transparent Huge Pages: madvise
  Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor Notes: Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0x5003003
  Python Notes: Python 3.8.6
  Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
Runs 2 and 3:
  Processor: Intel Xeon Gold 6226R @ 3.90GHz (16 Cores / 32 Threads), Motherboard: Supermicro X11SPL-F v1.02 (3.1 BIOS), Chipset: Intel Sky Lake-E DMI3 Registers, Memory: 188GB, Disk: 280GB INTEL SSDPED1D280GA, Graphics: ASPEED, Network: 2 x Intel I210
  OS: Ubuntu 20.10, Kernel: 5.11.0-rc4-max-boost-inv-patch (x86_64) 20210121, Desktop: GNOME Shell 3.38.1, Display Server: X Server 1.20.9, Compiler: GCC 10.2.0, File-System: ext4, Screen Resolution: 1024x768
Botan

Botan is a BSD-licensed, cross-platform, open-source C++ crypto library ("cryptography toolkit") that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.
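For a sense of the work being timed, the following is a minimal sketch, assuming the Botan 2 C++ API, of measuring raw AES-256 block-encryption throughput in MiB/s. It is not the botan speed harness the test profile actually runs, and the 1 MiB buffer, dummy key, and 3-second timing window are arbitrary choices for illustration.

```cpp
// Minimal throughput sketch against the Botan 2 block cipher API.
// Assumptions: botan-2 headers/library installed; buffer size, key, and
// timing window are hypothetical choices, not the "botan speed" defaults.
#include <botan/block_cipher.h>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    auto cipher = Botan::BlockCipher::create_or_throw("AES-256");
    std::vector<uint8_t> key(32, 0x42);          // fixed dummy 256-bit key
    cipher->set_key(key.data(), key.size());

    std::vector<uint8_t> buf(1 << 20, 0xAB);     // 1 MiB working buffer
    const size_t blocks = buf.size() / cipher->block_size();

    auto start = std::chrono::steady_clock::now();
    size_t iters = 0;
    while (std::chrono::steady_clock::now() - start < std::chrono::seconds(3)) {
        cipher->encrypt_n(buf.data(), buf.data(), blocks);  // in-place raw block pass
        ++iters;
    }
    double secs = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
    double mib_per_s = iters * (buf.size() / (1024.0 * 1024.0)) / secs;
    std::cout << "AES-256 ~" << mib_per_s << " MiB/s\n";
}
```

A standalone file like this would be built with g++ and linked against -lbotan-2, consistent with the compiler options recorded for the results below.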
Botan 2.17.3 (MiB/s, more is better):
  Test: KASUMI: Run 3: 83.85, Run 2: 83.62, Run 1: 83.96 (SE +/- 0.31, N = 3)
  Test: KASUMI - Decrypt: Run 3: 81.31, Run 2: 81.25, Run 1: 81.21 (SE +/- 0.35, N = 3)
  Test: AES-256: Run 3: 3633.67, Run 2: 3645.65, Run 1: 3647.61 (SE +/- 0.22, N = 3)
  Test: AES-256 - Decrypt: Run 3: 3651.74, Run 2: 3661.52, Run 1: 3659.31 (SE +/- 0.85, N = 3)
  Test: Twofish: Run 3: 332.45, Run 2: 333.14, Run 1: 332.67 (SE +/- 0.60, N = 3)
  Test: Twofish - Decrypt: Run 3: 332.24, Run 2: 332.57, Run 1: 332.02 (SE +/- 0.24, N = 3)
  Test: Blowfish: Run 3: 408.69, Run 2: 408.11, Run 1: 407.17 (SE +/- 0.24, N = 3)
  Test: Blowfish - Decrypt: Run 3: 404.02, Run 2: 404.45, Run 1: 403.44 (SE +/- 0.15, N = 3)
  Test: CAST-256: Run 3: 129.01, Run 2: 128.92, Run 1: 128.32 (SE +/- 0.06, N = 3)
  Test: CAST-256 - Decrypt: Run 3: 129.02, Run 2: 129.41, Run 1: 128.82 (SE +/- 0.02, N = 3)
  Test: ChaCha20Poly1305: Run 3: 663.90, Run 2: 662.07, Run 1: 670.06 (SE +/- 1.77, N = 3)
  Test: ChaCha20Poly1305 - Decrypt: Run 3: 658.13, Run 2: 657.53, Run 1: 662.73 (SE +/- 0.48, N = 3)
All Botan tests compiled with (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
Basis Universal

Basis Universal 1.13 (Seconds, fewer is better):
  Settings: UASTC Level 0: Run 3: 9.011, Run 2: 9.035, Run 1: 9.041 (SE +/- 0.006, N = 3)
  Settings: UASTC Level 2: Run 3: 25.48, Run 2: 25.47, Run 1: 25.42 (SE +/- 0.04, N = 3)
  Settings: UASTC Level 3: Run 3: 45.21, Run 2: 45.30, Run 1: 45.21 (SE +/- 0.05, N = 3)
All Basis Universal tests compiled with (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile makes use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
Ngspice 34 (Seconds, fewer is better):
  Circuit: C2670: Run 3: 202.75, Run 2: 205.12, Run 1: 200.50 (SE +/- 1.40, N = 3)
  Circuit: C7552: Run 3: 173.74, Run 2: 171.17, Run 1: 171.36 (SE +/- 1.27, N = 3)
Both Ngspice tests compiled with (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 2.4 (Seconds, fewer is better):
  Preset: Medium: Run 3: 6.7708, Run 2: 6.7135, Run 1: 6.7155 (SE +/- 0.0030, N = 3)
Compiled with (CXX) g++ options: -O3 -flto -pthread
JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.
JPEG XL 0.3.3 (MP/s, more is better):
  Input: PNG - Encode Speed: 5: Run 3: 41.09, Run 2: 40.79, Run 1: 40.73 (SE +/- 0.18, N = 3)
  Input: PNG - Encode Speed: 7: Run 3: 7.06, Run 2: 7.12, Run 1: 7.16 (SE +/- 0.03, N = 3)
  Input: PNG - Encode Speed: 8: Run 3: 0.63, Run 2: 0.64, Run 1: 0.63 (SE +/- 0.00, N = 3)
  Input: JPEG - Encode Speed: 5: Run 3: 42.76, Run 2: 42.36, Run 1: 43.46 (SE +/- 0.23, N = 3)
  Input: JPEG - Encode Speed: 7: Run 3: 41.91, Run 2: 42.04, Run 1: 41.56 (SE +/- 0.43, N = 3)
  Input: JPEG - Encode Speed: 8: Run 3: 20.69, Run 2: 20.30, Run 1: 20.75 (SE +/- 0.19, N = 3)
All JPEG XL encode tests compiled with (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie -ldl
JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is suited for testing JPEG XL decode performance to a PNG output file; the pts/jpexl test profile covers encode performance. Learn more via the OpenBenchmarking.org test page.
JPEG XL Decoding 0.3.3 (MP/s, more is better):
  CPU Threads: 1: Run 3: 31.68, Run 2: 32.20, Run 1: 32.08 (SE +/- 0.20, N = 3)
Gcrypt Library

Libgcrypt is a general-purpose cryptographic library developed as part of the GnuPG project. This is a benchmark of libgcrypt's integrated benchmark, measuring the time to run the benchmark command with the cipher/MAC/hash repetition count set to 50, as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.
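To illustrate the kind of cipher work the bundled benchmark loops over, here is a minimal sketch using libgcrypt's public C API to run one AES-256-CBC encryption pass. This is not the libgcrypt benchmark tool invoked by the test profile, and the 1 MiB buffer, all-zero key, and IV are placeholder values.

```cpp
// Minimal libgcrypt AES-256-CBC encryption sketch using the public C API.
// It only shows the cipher path exercised by the integrated benchmark;
// key, IV, and buffer contents are arbitrary placeholders.
#include <gcrypt.h>
#include <cstdio>
#include <vector>

int main() {
    gcry_check_version(nullptr);                       // initialize the library

    gcry_cipher_hd_t hd;
    gcry_cipher_open(&hd, GCRY_CIPHER_AES256, GCRY_CIPHER_MODE_CBC, 0);

    unsigned char key[32] = {0};                       // dummy 256-bit key
    unsigned char iv[16]  = {0};                       // dummy IV
    gcry_cipher_setkey(hd, key, sizeof key);
    gcry_cipher_setiv(hd, iv, sizeof iv);

    std::vector<unsigned char> buf(1 << 20, 0xAB);     // 1 MiB plaintext, encrypted in place
    gcry_error_t err = gcry_cipher_encrypt(hd, buf.data(), buf.size(), nullptr, 0);
    if (err)
        std::fprintf(stderr, "encrypt failed: %s\n", gcry_strerror(err));

    gcry_cipher_close(hd);
    return err ? 1 : 0;
}
```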
Gcrypt Library 1.9 (Seconds, fewer is better): Run 3: 237.49, Run 2: 235.26, Run 1: 236.85 (SE +/- 0.71, N = 3)
Compiled with (CC) gcc options: -O2 -fvisibility=hidden -lgpg-error
oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
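As a rough illustration of the API that benchdnn drives, the following sketch, assuming the oneDNN 2.x C++ API, creates a CPU engine and stream and executes a single ReLU (eltwise) primitive over a small f32 tensor. The tensor shape is an arbitrary choice, and this is not what the benchdnn harnesses below measure internally.

```cpp
// A minimal oneDNN 2.x C++ API sketch: CPU engine + stream + one eltwise
// (ReLU) primitive over an f32 tensor. Shape and data are illustrative only.
#include <dnnl.hpp>
#include <iostream>

int main() {
    dnnl::engine eng(dnnl::engine::kind::cpu, 0);
    dnnl::stream strm(eng);

    dnnl::memory::dims dims = {8, 64};                     // batch x channels
    dnnl::memory::desc md(dims, dnnl::memory::data_type::f32,
                          dnnl::memory::format_tag::nc);
    dnnl::memory mem(md, eng);

    // Fill the buffer directly (valid for a CPU engine).
    float *data = static_cast<float *>(mem.get_data_handle());
    for (int i = 0; i < 8 * 64; ++i) data[i] = (i % 2) ? 1.0f : -1.0f;

    // oneDNN 2.x style: op descriptor -> primitive descriptor -> primitive.
    dnnl::eltwise_forward::desc relu_d(dnnl::prop_kind::forward_inference,
                                       dnnl::algorithm::eltwise_relu, md,
                                       0.0f, 0.0f);
    dnnl::eltwise_forward::primitive_desc relu_pd(relu_d, eng);
    dnnl::eltwise_forward relu(relu_pd);

    relu.execute(strm, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});  // in-place
    strm.wait();

    std::cout << "first element after ReLU: " << data[0] << "\n";
}
```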
oneDNN 2.1.2 (ms, fewer is better; Engine: CPU):
  Harness: IP Shapes 1D - Data Type: f32: Run 3: 2.54776 (min 2.27), Run 2: 2.54274 (min 2.26), Run 1: 2.55148 (min 2.27) (SE +/- 0.00338, N = 3)
  Harness: IP Shapes 3D - Data Type: f32: Run 3: 3.40519 (min 3.12), Run 2: 3.41717 (min 3.11), Run 1: 3.42701 (min 3.11) (SE +/- 0.00459, N = 3)
  Harness: IP Shapes 1D - Data Type: u8s8f32: Run 3: 0.539335 (min 0.48), Run 2: 0.538777 (min 0.47), Run 1: 0.541364 (min 0.47) (SE +/- 0.000426, N = 3)
  Harness: IP Shapes 3D - Data Type: u8s8f32: Run 3: 1.39261 (min 1.21), Run 2: 1.40690 (min 1.21), Run 1: 1.39215 (min 1.22) (SE +/- 0.01012, N = 3)
  Harness: IP Shapes 1D - Data Type: bf16bf16bf16: Run 3: 5.85589 (min 5.47), Run 2: 5.88065 (min 5.44), Run 1: 5.88132 (min 5.43) (SE +/- 0.01210, N = 3)
  Harness: IP Shapes 3D - Data Type: bf16bf16bf16: Run 3: 2.88924 (min 2.57), Run 2: 2.89756 (min 2.55), Run 1: 2.83678 (min 2.55) (SE +/- 0.00361, N = 3)
  Harness: Convolution Batch Shapes Auto - Data Type: f32: Run 3: 4.68043 (min 4.27), Run 2: 4.68814 (min 4.24), Run 1: 4.70048 (min 4.25) (SE +/- 0.00914, N = 3)
  Harness: Deconvolution Batch shapes_1d - Data Type: f32: Run 3: 3.98082 (min 3.27), Run 2: 4.50935 (min 3.22), Run 1: 5.55575 (min 3.26) (SE +/- 0.09201, N = 15)
  Harness: Deconvolution Batch shapes_3d - Data Type: f32: Run 3: 3.33580 (min 3.17), Run 2: 3.40838 (min 3.17), Run 1: 3.47238 (min 3.17) (SE +/- 0.03666, N = 3)
  Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32: Run 3: 4.62231 (min 4.17), Run 2: 4.57869 (min 4.13), Run 1: 4.61416 (min 4.16) (SE +/- 0.01741, N = 3)
  Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32: Run 3: 0.579300 (min 0.52), Run 2: 0.581814 (min 0.52), Run 1: 0.581051 (min 0.52) (SE +/- 0.001705, N = 3)
  Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32: Run 3: 0.903897 (min 0.83), Run 2: 0.930175 (min 0.84), Run 1: 0.906022 (min 0.84) (SE +/- 0.013165, N = 3)
  Harness: Recurrent Neural Network Training - Data Type: f32: Run 3: 1779.42 (min 1751.79), Run 2: 1774.74 (min 1741.55), Run 1: 1779.67 (min 1752.37) (SE +/- 5.17, N = 3)
  Harness: Recurrent Neural Network Inference - Data Type: f32: Run 3: 991.16 (min 968.71), Run 2: 1003.28 (min 978.75), Run 1: 1008.88 (min 982.03) (SE +/- 0.67, N = 3)
  Harness: Recurrent Neural Network Training - Data Type: u8s8f32: Run 3: 1779.43 (min 1751.55), Run 2: 1785.72 (min 1752.23), Run 1: 1771.99 (min 1740.69) (SE +/- 1.23, N = 3)
  Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16: Run 3: 9.83093 (min 9.11), Run 2: 9.84755 (min 9.08), Run 1: 9.83488 (min 9.08) (SE +/- 0.00857, N = 3)
  Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16: Run 3: 12.52 (min 11.45), Run 2: 12.42 (min 11.45), Run 1: 12.36 (min 11.45) (SE +/- 0.07, N = 3)
  Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16: Run 3: 12.67 (min 12.37), Run 2: 12.69 (min 12.41), Run 1: 12.75 (min 12.41) (SE +/- 0.02, N = 3)
  Harness: Recurrent Neural Network Inference - Data Type: u8s8f32: Run 3: 995.41 (min 971.26), Run 2: 999.49 (min 966.55), Run 1: 990.54 (min 959.7) (SE +/- 4.21, N = 3)
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32: Run 3: 1.09422 (min 0.95), Run 2: 1.09014 (min 0.94), Run 1: 1.09663 (min 0.95) (SE +/- 0.00360, N = 3)
  Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16: Run 3: 1786.55 (min 1755.21), Run 2: 1784.29 (min 1739.16), Run 1: 1764.31 (min 1737.4) (SE +/- 4.93, N = 3)
  Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16: Run 3: 1011.78 (min 985.37), Run 2: 1005.65 (min 980.08), Run 1: 993.82 (min 968.8) (SE +/- 0.52, N = 3)
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32: Run 3: 0.597077 (min 0.52), Run 2: 0.592321 (min 0.51), Run 1: 0.597683 (min 0.52) (SE +/- 0.002509, N = 3)
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16: Run 3: 2.27207 (min 2.01), Run 2: 2.26126 (min 1.97), Run 1: 2.27034 (min 2.01) (SE +/- 0.00606, N = 3)
All oneDNN tests compiled with (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Pennant

Pennant 1.0.1 (Hydro Cycle Time in Seconds, fewer is better):
  Test: leblancbig: Run 3: 27.38, Run 2: 27.37, Run 1: 27.25 (SE +/- 0.01, N = 3)
Compiled with (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi
Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11 (Seconds, fewer is better):
  Input: input.i3d 129 Cells Per Direction: Run 3: 14.12, Run 2: 13.83, Run 1: 14.08 (SE +/- 0.20, N = 3)
  Input: input.i3d 193 Cells Per Direction: Run 3: 52.58, Run 2: 52.57, Run 1: 52.50 (SE +/- 0.28, N = 3)
Both Xcompact3d tests compiled with (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
Stockfish 13 (Nodes Per Second, more is better):
  Total Time: Run 3: 41221711, Run 2: 40248549, Run 1: 38702741 (SE +/- 343002.27, N = 3)
Compiled with (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver
Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image, FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
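For reference, the compression-level parameter being varied maps directly onto the one-shot libzstd API. Below is a minimal sketch of a level-3 compress/decompress round trip; the 1 MiB dummy buffer stands in for the FreeBSD disk image used by the actual test, and this is not the pts test script itself.

```cpp
// Minimal one-shot libzstd round trip at a fixed compression level.
// Input data and level choice are illustrative placeholders.
#include <zstd.h>
#include <cstdio>
#include <vector>

int main() {
    std::vector<char> input(1 << 20, 'x');                  // 1 MiB of dummy data
    const int level = 3;                                     // e.g. "Compression Level: 3"

    std::vector<char> compressed(ZSTD_compressBound(input.size()));
    size_t csize = ZSTD_compress(compressed.data(), compressed.size(),
                                 input.data(), input.size(), level);
    if (ZSTD_isError(csize)) { std::fprintf(stderr, "%s\n", ZSTD_getErrorName(csize)); return 1; }

    std::vector<char> restored(input.size());
    size_t dsize = ZSTD_decompress(restored.data(), restored.size(),
                                   compressed.data(), csize);
    if (ZSTD_isError(dsize)) { std::fprintf(stderr, "%s\n", ZSTD_getErrorName(dsize)); return 1; }

    std::printf("level %d: %zu -> %zu bytes, round trip %s\n",
                level, input.size(), csize, restored == input ? "ok" : "mismatch");
}
```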
Zstd Compression 1.4.9 (MB/s, more is better):
  Compression Level: 3 - Compression Speed: Run 3: 2883.0, Run 2: 2935.0, Run 1: 2913.2 (SE +/- 17.00, N = 3)
  Compression Level: 3, Long Mode - Decompression Speed: Run 3: 3318.7, Run 2: 3291.7, Run 1: 3303.5 (SE +/- 15.06, N = 3)
  Compression Level: 8, Long Mode - Decompression Speed: Run 3: 3258.6, Run 2: 3254.0, Run 1: 3269.1 (SE +/- 16.76, N = 3)
  Compression Level: 19, Long Mode - Decompression Speed: Run 3: 2721.4, Run 2: 2677.6, Run 1: 2729.2 (SE +/- 16.34, N = 3)
All Zstd tests compiled with (CC) gcc options: -O3 -pthread -lz -llzma
AOM AV1

AOM AV1 3.0 (Frames Per Second, more is better):
  Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K: Run 3: 2.36, Run 2: 2.37, Run 1: 2.36 (SE +/- 0.01, N = 3)
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K: Run 3: 7.81, Run 2: 7.78, Run 1: 7.52 (SE +/- 0.01, N = 3)
  Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K: Run 3: 4.05, Run 2: 4.13, Run 1: 4.19 (SE +/- 0.05, N = 3)
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K: Run 3: 16.49, Run 2: 16.74, Run 1: 16.93 (SE +/- 0.17, N = 3)
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K: Run 3: 22.95, Run 2: 23.02, Run 1: 21.88 (SE +/- 0.09, N = 3)
  Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p: Run 3: 0.34, Run 2: 0.34, Run 1: 0.34 (SE +/- 0.00, N = 3)
  Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p: Run 3: 3.36, Run 2: 3.36, Run 1: 3.36 (SE +/- 0.00, N = 3)
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p: Run 3: 10.94, Run 2: 11.09, Run 1: 11.09 (SE +/- 0.01, N = 3)
  Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p: Run 3: 8.49, Run 2: 8.46, Run 1: 8.47 (SE +/- 0.01, N = 3)
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p: Run 3: 38.94, Run 2: 38.14, Run 1: 38.54 (SE +/- 0.41, N = 3)
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p: Run 3: 46.17, Run 2: 48.12, Run 1: 48.85 (SE +/- 0.69, N = 3)
All AOM AV1 tests compiled with (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
SVT-VP9 0.3 (Frames Per Second, more is better):
  Tuning: VMAF Optimized - Input: Bosphorus 1080p: Run 3: 266.29, Run 2: 264.14, Run 1: 267.03 (SE +/- 1.43, N = 3)
  Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p: Run 3: 273.50, Run 2: 269.85, Run 1: 267.34 (SE +/- 2.98, N = 3)
  Tuning: Visual Quality Optimized - Input: Bosphorus 1080p: Run 3: 201.78, Run 2: 204.13, Run 1: 201.96 (SE +/- 0.48, N = 3)
All SVT-VP9 tests compiled with (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
dav1d 0.8.2 (FPS, more is better):
  Video Input: Summer Nature 4K: Run 3: 187.90 (min 172.38 / max 222.92), Run 2: 172.64 (min 139.18 / max 203.2), Run 1: 174.26 (min 157.96 / max 208.32) (SE +/- 0.57, N = 3)
  Video Input: Summer Nature 1080p: Run 3: 317.53 (min 291.94 / max 371.15), Run 2: 271.02 (min 239.91 / max 295.67), Run 1: 269.99 (min 246.57 / max 294.05) (SE +/- 0.83, N = 3)
Both dav1d tests compiled with (CC) gcc options: -pthread
SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-HEVC 1.5.0 (Frames Per Second, more is better):
  Tuning: 1 - Input: Bosphorus 1080p: Run 3: 10.43, Run 2: 10.38, Run 1: 10.41 (SE +/- 0.03, N = 3)
  Tuning: 7 - Input: Bosphorus 1080p: Run 3: 153.85, Run 2: 152.04, Run 1: 152.36 (SE +/- 0.36, N = 3)
  Tuning: 10 - Input: Bosphorus 1080p: Run 3: 312.50, Run 2: 311.16, Run 1: 307.85 (SE +/- 1.08, N = 3)
All SVT-HEVC tests compiled with (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
ViennaCL

ViennaCL 1.7.1 (more is better):
  Test: CPU BLAS - sAXPY (GB/s): Run 3: 127, Run 2: 125, Run 1: 125 (SE +/- 0.58, N = 3)
  Test: CPU BLAS - dCOPY (GB/s): Run 3: 72.3, Run 2: 71.7, Run 1: 71.7 (SE +/- 0.15, N = 3)
  Test: CPU BLAS - dGEMV-N (GB/s): Run 3: 99.3, Run 2: 99.8, Run 1: 99.6 (SE +/- 0.59, N = 3)
  Test: CPU BLAS - dGEMM-NN (GFLOPs/s): Run 3: 57.1, Run 2: 55.7, Run 1: 57.3 (SE +/- 1.50, N = 3)
  Test: CPU BLAS - dGEMM-NT (GFLOPs/s): Run 3: 50.9, Run 2: 55.5, Run 1: 56.2 (SE +/- 0.57, N = 3)
  Test: CPU BLAS - dGEMM-TN (GFLOPs/s): Run 3: 54.8, Run 2: 57.7, Run 1: 58.2 (SE +/- 1.21, N = 3)
  Test: CPU BLAS - dGEMM-TT (GFLOPs/s): Run 3: 54.7, Run 2: 57.9, Run 1: 59.2 (SE +/- 1.65, N = 3)
All ViennaCL tests compiled with (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
GNU Radio

GNU Radio 3.8.1.0 (MiB/s, more is better):
  Test: Signal Source (Cosine): Run 3: 2193.4, Run 2: 2152.7, Run 1: 2257.7 (SE +/- 54.15, N = 3)
Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
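To show the shape of the workload, here is a minimal sketch, assuming liquid-dsp's C API, that pushes a 256-sample buffer through a 57-tap complex FIR filter, mirroring the buffer length / filter length parameters below. The boxcar taps and cosine input are placeholder data, and this is not the multi-threaded benchmark binary the test profile runs.

```cpp
// Minimal liquid-dsp FIR filter sketch: 256 complex samples through a
// 57-tap firfilt_crcf object. Taps and input signal are dummy values.
#include <liquid/liquid.h>
#include <cmath>
#include <complex>
#include <cstdio>

int main() {
    const unsigned int h_len   = 57;               // filter length, as in the test
    const unsigned int buf_len = 256;              // buffer length, as in the test

    float h[h_len];
    for (unsigned int i = 0; i < h_len; ++i) h[i] = 1.0f / h_len;  // crude boxcar taps

    firfilt_crcf q = firfilt_crcf_create(h, h_len);

    std::complex<float> y(0.0f, 0.0f);
    for (unsigned int i = 0; i < buf_len; ++i) {
        std::complex<float> x(std::cos(0.1f * i), std::sin(0.1f * i));
        firfilt_crcf_push(q, x);                   // insert sample into the delay line
        firfilt_crcf_execute(q, &y);               // compute one filtered output
    }
    std::printf("last output: %f + %fi\n", y.real(), y.imag());

    firfilt_crcf_destroy(q);
}
```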
Liquid-DSP 2021.01.31 (samples/s, more is better; Buffer Length: 256 - Filter Length: 57):
  Threads: 1: Run 3: 49287000, Run 2: 48978333, Run 1: 49219000 (SE +/- 214667.44, N = 3)
  Threads: 2: Run 3: 89791000, Run 2: 90398333, Run 1: 88934000 (SE +/- 69848.25, N = 3)
  Threads: 4: Run 3: 178570000, Run 2: 178210000, Run 1: 177020000 (SE +/- 507181.76, N = 3)
  Threads: 8: Run 3: 344060000, Run 2: 342623333, Run 1: 343710000 (SE +/- 416706.66, N = 3)
  Threads: 16: Run 3: 636370000, Run 2: 638273333, Run 1: 638950000 (SE +/- 451897.24, N = 3)
  Threads: 32: Run 3: 728240000, Run 2: 728556667, Run 1: 728810000 (SE +/- 131698.31, N = 3)
All Liquid-DSP tests compiled with (CC) gcc options: -O3 -pthread -lm -lc -lliquid
LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.
LuaRadio 0.9.1 (MiB/s, more is better):
  Test: Five Back to Back FIR Filters: Run 3: 851.8, Run 2: 853.9, Run 1: 857.6 (SE +/- 2.65, N = 3)
srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.
srsLTE 20.10.1 (more is better):
  Test: OFDM_Test (Samples / Second): Run 3: 114000000, Run 2: 113866667, Run 1: 112800000 (SE +/- 348010.22, N = 3)
  Test: PHY_DL_Test (eNb Mb/s): Run 3: 190.2, Run 2: 190.4, Run 1: 190.6 (SE +/- 0.29, N = 3)
  Test: PHY_DL_Test (UE Mb/s): Run 3: 77.7, Run 2: 77.8, Run 1: 78.3 (SE +/- 0.10, N = 3)
All srsLTE tests compiled with (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f
simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
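For context, the sketch below shows minimal use of the simdjson DOM API from the 0.x series: a reusable parser and field access over a parsed document. The benchmark itself parses specific JSON corpora (Kostya, LargeRandom, PartialTweets, DistinctUserID); the inline document here is only an illustration.

```cpp
// Minimal simdjson DOM-API usage sketch (0.x-era API). The inline JSON
// document is a placeholder; the benchmark uses fixed test corpora.
#include "simdjson.h"
#include <cstdint>
#include <iostream>

using namespace simdjson;

int main() {
    dom::parser parser;
    padded_string json = R"({"user": {"id": 42, "name": "ada"}})"_padded;

    dom::element doc;
    auto error = parser.parse(json).get(doc);    // reusable parser, single document
    if (error) { std::cerr << error << std::endl; return 1; }

    int64_t id = doc["user"]["id"];              // field access on the parsed DOM
    std::cout << "user id: " << id << "\n";
}
```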
simdjson 0.8.2 (GB/s, more is better):
  Throughput Test: Kostya: Run 3: 2.37, Run 2: 2.32, Run 1: 2.34 (SE +/- 0.01, N = 3)
  Throughput Test: LargeRandom: Run 3: 0.83, Run 2: 0.83, Run 1: 0.83 (SE +/- 0.00, N = 3)
  Throughput Test: PartialTweets: Run 3: 3.38, Run 2: 3.38, Run 1: 3.38 (SE +/- 0.00, N = 3)
  Throughput Test: DistinctUserID: Run 3: 4.04, Run 2: 4.03, Run 1: 4.03 (SE +/- 0.00, N = 3)
All simdjson tests compiled with (CXX) g++ options: -O3 -pthread
Run 1: Testing initiated at 27 March 2021 18:28 by user pts.
Run 2: Testing initiated at 27 March 2021 20:44 by user pts.
Run 3: Testing initiated at 28 March 2021 05:11 by user pts.

The kernel, compiler, processor, Python, and security notes recorded for each run are identical to those listed at the top of this report, as are the processor, motherboard, OS, and software details repeated for run 3.