Intel Core i9-11900K testing with an ASUS ROG MAXIMUS XIII HERO (0707 BIOS) and AMD Radeon VII 16GB on Fedora 34 via the Phoronix Test Suite.
-O3 -march=native:
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
Disk Notes: NONE / compress=zstd:1,relatime,rw,seclabel,space_cache,ssd,subvol=/home,subvolid=256 / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x3c - Thermald 2.4.4
Python Notes: Python 3.9.5
Security Notes: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
-O1:
Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads), Motherboard: ASUS ROG MAXIMUS XIII HERO (0707 BIOS), Chipset: Intel Tiger Lake-H, Memory: 32GB, Disk: 2000GB Corsair Force MP600 + 257GB Flash Drive, Graphics: AMD Radeon VII 16GB (1801/1000MHz), Audio: Intel Tiger Lake-H HD Audio, Monitor: ASUS MG28U, Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Fedora 34, Kernel: 5.12.9-300.fc34.x86_64 (x86_64), Desktop: GNOME Shell 40.1, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 21.1.1 (LLVM 12.0.0), Compiler: GCC 11.1.1 20210531, File-System: btrfs, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS=-O1 CFLAGS=-O1
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
Disk Notes: NONE / compress=zstd:1,relatime,rw,seclabel,space_cache,ssd,subvol=/home,subvolid=256 / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x3c - Thermald 2.4.4
Python Notes: Python 3.9.5
Security Notes: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Compiler Optimization Levels - OpenBenchmarking.org - Phoronix Test Suite
System overview table: hardware, software, and test notes for the -O3 -march=native and -O1 configurations, as detailed above.
-O3 -march=native vs. -O1 Comparison (Phoronix Test Suite): per-test percentage advantage of -O3 -march=native over -O1, ranging from roughly 2% (Mobile Neural Network mobilenet-v1-1.0) up to 172.3% (C-Ray, 4K / 16 rays per pixel total time). The largest gains appear in C-Ray, NCNN (e.g. mnasnet +42.8%, mobilenet-v2 +30.5%), LAME MP3 encoding (+40.2%), GraphicsMagick, Crypto++, Opus encoding, and Liquid-DSP; the individual results follow below.
Compiler Optimization Levels postmark: Disk Transaction Performance cryptopp: All Algorithms cryptopp: Keyed Algorithms cryptopp: Unkeyed Algorithms cryptopp: Integer + Elliptic Curve Public Key Algorithms clomp: Static OMP Speedup mrbayes: Primate Phylogeny Analysis hmmer: Pfam Database Search qe: AUSURF112 lammps: 20k Atoms lammps: Rhodopsin Protein gmpbench: Total Time chia-vdf: Square Plain C++ chia-vdf: Square Assembly Optimized compress-zstd: 3 - Compression Speed compress-zstd: 3 - Decompression Speed compress-zstd: 8 - Compression Speed compress-zstd: 8 - Decompression Speed compress-zstd: 19 - Compression Speed compress-zstd: 19 - Decompression Speed compress-zstd: 3, Long Mode - Compression Speed compress-zstd: 3, Long Mode - Decompression Speed compress-zstd: 8, Long Mode - Compression Speed compress-zstd: 8, Long Mode - Decompression Speed compress-zstd: 19, Long Mode - Compression Speed compress-zstd: 19, Long Mode - Decompression Speed botan: KASUMI botan: KASUMI - Decrypt botan: AES-256 botan: AES-256 - Decrypt botan: Twofish botan: Twofish - Decrypt botan: Blowfish botan: Blowfish - Decrypt botan: CAST-256 botan: CAST-256 - Decrypt botan: ChaCha20Poly1305 botan: ChaCha20Poly1305 - Decrypt graphics-magick: Swirl graphics-magick: Rotate graphics-magick: Sharpen graphics-magick: Enhanced graphics-magick: Resizing graphics-magick: Noise-Gaussian graphics-magick: HWB Color Space dav1d: Summer Nature 4K svt-hevc: 1 - Bosphorus 1080p svt-hevc: 7 - Bosphorus 1080p svt-hevc: 10 - Bosphorus 1080p svt-vp9: VMAF Optimized - Bosphorus 1080p svt-vp9: PSNR/SSIM Optimized - Bosphorus 1080p svt-vp9: Visual Quality Optimized - Bosphorus 1080p x265: Bosphorus 4K x265: Bosphorus 1080p mt-dgemm: Sustained Floating-Point Rate coremark: CoreMark Size 666 - Iterations Per Second stockfish: Total Time pjsip: INVITE pjsip: OPTIONS, Stateful pjsip: OPTIONS, Stateless c-ray: Total Time - 4K, 16 Rays Per Pixel smallpt: Global Illumination Renderer; 128 Samples onednn: IP Shapes 1D - f32 - CPU onednn: IP Shapes 3D - f32 - CPU onednn: Convolution Batch Shapes Auto - f32 - CPU onednn: Deconvolution Batch shapes_1d - f32 - CPU onednn: Deconvolution Batch shapes_3d - f32 - CPU onednn: Recurrent Neural Network Training - f32 - CPU onednn: Recurrent Neural Network Inference - f32 - CPU onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU aobench: 2048 x 2048 - Total Time encode-flac: WAV To FLAC encode-mp3: WAV To MP3 encode-opus: WAV To Opus Encode espeak: Text-To-Speech Synthesis liquid-dsp: 1 - 256 - 57 liquid-dsp: 2 - 256 - 57 liquid-dsp: 4 - 256 - 57 liquid-dsp: 8 - 256 - 57 liquid-dsp: 16 - 256 - 57 tjbench: Decompression Throughput astcenc: Medium astcenc: Thorough astcenc: Exhaustive basis: ETC1S basis: UASTC Level 0 basis: UASTC Level 2 basis: UASTC Level 3 sqlite-speedtest: Timed Time - Size 1,000 redis: GET redis: SET caffe: AlexNet - CPU - 100 caffe: GoogleNet - CPU - 100 mnn: SqueezeNetV1.0 mnn: resnet-v2-50 mnn: MobileNetV2_224 mnn: mobilenet-v1-1.0 mnn: inception-v3 ncnn: CPU - mobilenet ncnn: CPU-v2-v2 - mobilenet-v2 ncnn: CPU-v3-v3 - mobilenet-v3 ncnn: CPU - shufflenet-v2 ncnn: CPU - mnasnet ncnn: CPU - efficientnet-b0 ncnn: CPU - blazeface ncnn: CPU - googlenet ncnn: CPU - vgg16 ncnn: CPU - resnet18 ncnn: CPU - alexnet ncnn: CPU - resnet50 ncnn: CPU - yolov4-tiny ncnn: CPU - squeezenet_ssd ncnn: CPU - regnety_400m tnn: CPU - MobileNet v2 tnn: CPU - SqueezeNet v1.1 sysbench: CPU encode-wavpack: WAV To WavPack kripke: -O3 -march=native -O1 9496 2346.359074 924.212911 491.454981 
7194.857104 4.8 83.430 99.484 2609.02 8.737 8.513 6171.8 208400 250633 2731.5 4997.8 192.6 5189.9 35.4 4506.5 1451.0 5346.0 285.9 5542.9 32.8 4540.6 115.816 112.027 8401.852 8412.961 464.472 451.660 552.463 553.519 168.756 168.851 1012.732 1010.787 689 1094 195 270 1222 310 1285 195.94 9.48 140.40 279.12 198.73 204.96 166.43 16.02 67.85 3.604641 434724.849744 29443112 5060 9375 254610 47.335 8.401 4.03781 11.2002 14.2754 4.98281 4.28984 3165.60 1876.42 3.52485 21.556 5.937 5.473 5.595 21.765 99844333 188003333 363760000 687846667 722756667 271.676664 4.2153 9.3601 51.4853 20.808 6.106 29.138 54.586 46.087 4049394.67 2956462.00 36558 83625 3.748 19.224 1.916 1.883 22.513 11.76 3.21 2.49 3.26 2.22 4.24 1.15 10.09 54.36 11.08 9.64 18.23 20.21 15.29 8.57 230.113 227.455 34770.14 11.098 33544357 9259 2114.624613 751.481521 472.947089 6862.786620 5.1 88.533 103.742 2525.86 8.345 8.184 209233 247933 2568.0 4847.5 189.2 5075.8 35.4 4406.4 1542.8 5215.3 281.5 5385.7 32.9 4506.0 108.276 106.478 8879.330 8885.129 430.951 427.255 533.956 532.560 149.439 149.807 1019.913 1004.647 592 1078 162 218 1021 306 1207 185.95 9.20 137.23 271.99 191.41 198.18 160.73 15.72 67.85 3.922224 366951.484290 29448017 4993 9333 247106 128.907 9.133 4.04828 11.0289 14.1700 4.97288 4.28224 3133.28 1854.40 3.52499 24.605 6.590 7.675 6.828 24.001 88411000 162046667 316710000 595816667 672296667 260.256611 4.3606 9.7734 53.2528 20.845 6.114 29.108 54.557 49.011 3982525.83 2962660.83 36622 84729 3.848 19.507 1.982 1.921 22.942 15.02 4.19 3.19 3.45 3.17 5.24 1.24 11.40 54.91 11.47 9.62 22.29 21.26 16.18 9.73 243.162 235.963 34882.14 11.132 33790753 OpenBenchmarking.org
PostMark This is a test of NetApp's PostMark benchmark designed to simulate small-file testing similar to the tasks endured by web and mail servers. This test profile will set PostMark to perform 25,000 transactions with 500 files simultaneously with the file sizes ranging between 5 and 512 kilobytes. Learn more via the OpenBenchmarking.org test page.
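As a rough illustration of the small-file transaction pattern this workload exercises, the following C sketch creates, writes, reads back, and deletes a file in a loop. It is not PostMark's own code; the file path, size, and loop count are placeholders, whereas the real run performs 25,000 transactions over 500 files of 5 to 512 kilobytes.

    /* Illustrative small-file "transaction": create, write, read back, delete. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void one_transaction(const char *path, size_t size)
    {
        char *buf = malloc(size);
        if (buf == NULL)
            return;
        memset(buf, 'x', size);

        FILE *f = fopen(path, "wb");            /* create + write */
        if (f != NULL) {
            (void)fwrite(buf, 1, size, f);
            fclose(f);
        }
        f = fopen(path, "rb");                  /* read back */
        if (f != NULL) {
            (void)fread(buf, 1, size, f);
            fclose(f);
        }
        remove(path);                           /* delete */
        free(buf);
    }

    int main(void)
    {
        for (int i = 0; i < 1000; i++)          /* placeholder count; PostMark runs 25,000 */
            one_transaction("pm_testfile.tmp", 5 * 1024);
        return 0;
    }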
OpenBenchmarking.org TPS, More Is Better PostMark 1.51 Disk Transaction Performance -O3 -march=native -O1 2K 4K 6K 8K 10K SE +/- 118.67, N = 3 9496 9259 -march=native -O1 1. (CC) gcc options: -O3
OpenBenchmarking.org MiB/second, More Is Better Crypto++ 8.2 Test: Keyed Algorithms -O3 -march=native -O1 200 400 600 800 1000 SE +/- 0.64, N = 3 SE +/- 0.51, N = 3 924.21 751.48 -O3 -march=native -O1 1. (CXX) g++ options: -fPIC -pthread -pipe
OpenBenchmarking.org MiB/second, More Is Better Crypto++ 8.2 Test: Unkeyed Algorithms -O3 -march=native -O1 110 220 330 440 550 SE +/- 0.05, N = 3 SE +/- 0.06, N = 3 491.45 472.95 -O3 -march=native -O1 1. (CXX) g++ options: -fPIC -pthread -pipe
OpenBenchmarking.org MiB/second, More Is Better Crypto++ 8.2 Test: Integer + Elliptic Curve Public Key Algorithms -O3 -march=native -O1 1500 3000 4500 6000 7500 SE +/- 1.75, N = 3 SE +/- 4.50, N = 3 7194.86 6862.79 -O3 -march=native -O1 1. (CXX) g++ options: -fPIC -pthread -pipe
CLOMP CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
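For context, a minimal C/OpenMP sketch of the static-schedule parallel loop pattern whose speed-up CLOMP quantifies; this is illustrative only and not CLOMP's own kernel.

    /* Time a trivially parallel loop with OpenMP static scheduling. */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        enum { N = 1 << 20 };
        static double a[N];

        double t0 = omp_get_wtime();
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < N; i++)
            a[i] = i * 0.5;                     /* independent per-iteration work */
        double t1 = omp_get_wtime();

        printf("parallel loop: %f s with up to %d threads\n",
               t1 - t0, omp_get_max_threads());
        return 0;
    }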
OpenBenchmarking.org Speedup, More Is Better CLOMP 1.2 Static OMP Speedup -O1 -O3 -march=native 1.1475 2.295 3.4425 4.59 5.7375 SE +/- 0.06, N = 3 SE +/- 0.07, N = 3 5.1 4.8 -O1 -march=native 1. (CC) gcc options: -fopenmp -O3 -lm
Timed MrBayes Analysis This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Timed MrBayes Analysis 3.2.7 Primate Phylogeny Analysis -O3 -march=native -O1 20 40 60 80 100 SE +/- 0.09, N = 3 SE +/- 0.17, N = 3 83.43 88.53 -march=native -O1 1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mmpx -mabm -O3 -std=c99 -pedantic -lm
Quantum ESPRESSO Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Quantum ESPRESSO 6.7 Input: AUSURF112 -O1 -O3 -march=native 600 1200 1800 2400 3000 SE +/- 27.81, N = 5 SE +/- 5.73, N = 3 2525.86 2609.02 1. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
Chia Blockchain VDF Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance of the Chia VDF (Verifiable Delay Function, i.e. Proof of Time) using the Chia VDF benchmark. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org IPS, More Is Better Chia Blockchain VDF 1.0.1 Test: Square Plain C++ -O1 -O3 -march=native 40K 80K 120K 160K 200K SE +/- 120.19, N = 3 SE +/- 57.74, N = 3 209233 208400 1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread
OpenBenchmarking.org IPS, More Is Better Chia Blockchain VDF 1.0.1 Test: Square Assembly Optimized -O3 -march=native -O1 50K 100K 150K 200K 250K SE +/- 1105.04, N = 3 SE +/- 1471.21, N = 3 250633 247933 1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
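For reference, a minimal sketch of a level-3 compression call against the public libzstd API with long-distance matching enabled, roughly mirroring this profile's "3, Long Mode" configuration; the buffer handling and parameter choices are illustrative assumptions, not the benchmark's internal code.

    #include <zstd.h>
    #include <stdlib.h>

    /* Compress srcSize bytes at level 3 with long-distance matching ("long mode"). */
    size_t compress_level3_long(const void *src, size_t srcSize, void **dst)
    {
        size_t cap = ZSTD_compressBound(srcSize);
        *dst = malloc(cap);
        ZSTD_CCtx *cctx = ZSTD_createCCtx();
        if (*dst == NULL || cctx == NULL) {
            free(*dst);
            ZSTD_freeCCtx(cctx);
            return 0;
        }
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 3);
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_enableLongDistanceMatching, 1);
        size_t n = ZSTD_compress2(cctx, *dst, cap, src, srcSize);
        ZSTD_freeCCtx(cctx);
        return ZSTD_isError(n) ? 0 : n;         /* compressed size, or 0 on error */
    }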
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 3 - Compression Speed -O3 -march=native -O1 600 1200 1800 2400 3000 SE +/- 14.92, N = 3 SE +/- 8.18, N = 3 2731.5 2568.0 -O3 -march=native -O1 1. (CC) gcc options: -pthread -lz
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 3 - Decompression Speed -O3 -march=native -O1 1100 2200 3300 4400 5500 SE +/- 19.31, N = 3 SE +/- 8.75, N = 3 4997.8 4847.5 -O3 -march=native -O1 1. (CC) gcc options: -pthread -lz
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 8 - Compression Speed -O3 -march=native -O1 40 80 120 160 200 SE +/- 0.90, N = 3 SE +/- 0.57, N = 3 192.6 189.2 -O3 -march=native -O1 1. (CC) gcc options: -pthread -lz
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 8 - Decompression Speed -O3 -march=native -O1 1100 2200 3300 4400 5500 SE +/- 15.26, N = 3 SE +/- 13.17, N = 3 5189.9 5075.8 -O3 -march=native -O1 1. (CC) gcc options: -pthread -lz
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 19 - Compression Speed -O1 -O3 -march=native 8 16 24 32 40 SE +/- 0.43, N = 4 SE +/- 0.48, N = 3 35.4 35.4 -O1 -O3 -march=native 1. (CC) gcc options: -pthread -lz
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 19 - Decompression Speed -O3 -march=native -O1 1000 2000 3000 4000 5000 SE +/- 18.10, N = 3 SE +/- 6.02, N = 4 4506.5 4406.4 -O3 -march=native -O1 1. (CC) gcc options: -pthread -lz
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 3, Long Mode - Compression Speed -O1 -O3 -march=native 300 600 900 1200 1500 SE +/- 12.97, N = 3 SE +/- 22.75, N = 15 1542.8 1451.0 -O1 -O3 -march=native 1. (CC) gcc options: -pthread -lz
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 3, Long Mode - Decompression Speed -O3 -march=native -O1 1100 2200 3300 4400 5500 SE +/- 2.50, N = 15 SE +/- 8.30, N = 3 5346.0 5215.3 -O3 -march=native -O1 1. (CC) gcc options: -pthread -lz
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 8, Long Mode - Compression Speed -O3 -march=native -O1 60 120 180 240 300 SE +/- 2.25, N = 15 SE +/- 2.78, N = 3 285.9 281.5 -O3 -march=native -O1 1. (CC) gcc options: -pthread -lz
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 8, Long Mode - Decompression Speed -O3 -march=native -O1 1200 2400 3600 4800 6000 SE +/- 6.10, N = 15 SE +/- 9.52, N = 3 5542.9 5385.7 -O3 -march=native -O1 1. (CC) gcc options: -pthread -lz
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 19, Long Mode - Compression Speed -O1 -O3 -march=native 8 16 24 32 40 SE +/- 0.15, N = 3 SE +/- 0.19, N = 3 32.9 32.8 -O1 -O3 -march=native 1. (CC) gcc options: -pthread -lz
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 19, Long Mode - Decompression Speed -O3 -march=native -O1 1000 2000 3000 4000 5000 SE +/- 15.31, N = 3 SE +/- 3.19, N = 3 4540.6 4506.0 -O3 -march=native -O1 1. (CC) gcc options: -pthread -lz
Botan Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: KASUMI -O3 -march=native -O1 30 60 90 120 150 SE +/- 0.01, N = 3 SE +/- 0.03, N = 3 115.82 108.28 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: KASUMI - Decrypt -O3 -march=native -O1 30 60 90 120 150 SE +/- 0.05, N = 3 SE +/- 0.06, N = 3 112.03 106.48 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: AES-256 -O1 -O3 -march=native 2K 4K 6K 8K 10K SE +/- 0.64, N = 3 SE +/- 5.18, N = 3 8879.33 8401.85 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: AES-256 - Decrypt -O1 -O3 -march=native 2K 4K 6K 8K 10K SE +/- 2.06, N = 3 SE +/- 5.34, N = 3 8885.13 8412.96 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: Twofish -O3 -march=native -O1 100 200 300 400 500 SE +/- 0.31, N = 3 SE +/- 0.19, N = 3 464.47 430.95 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: Twofish - Decrypt -O3 -march=native -O1 100 200 300 400 500 SE +/- 0.62, N = 3 SE +/- 0.13, N = 3 451.66 427.26 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: Blowfish -O3 -march=native -O1 120 240 360 480 600 SE +/- 0.20, N = 3 SE +/- 0.93, N = 3 552.46 533.96 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: Blowfish - Decrypt -O3 -march=native -O1 120 240 360 480 600 SE +/- 0.26, N = 3 SE +/- 1.04, N = 3 553.52 532.56 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: CAST-256 -O3 -march=native -O1 40 80 120 160 200 SE +/- 0.06, N = 3 SE +/- 1.37, N = 15 168.76 149.44 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: CAST-256 - Decrypt -O3 -march=native -O1 40 80 120 160 200 SE +/- 0.01, N = 3 SE +/- 1.14, N = 15 168.85 149.81 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: ChaCha20Poly1305 -O1 -O3 -march=native 200 400 600 800 1000 SE +/- 1.88, N = 3 SE +/- 0.46, N = 3 1019.91 1012.73 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: ChaCha20Poly1305 - Decrypt -O3 -march=native -O1 200 400 600 800 1000 SE +/- 0.23, N = 3 SE +/- 1.73, N = 3 1010.79 1004.65 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
GraphicsMagick This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: Swirl -O3 -march=native -O1 150 300 450 600 750 SE +/- 2.67, N = 3 SE +/- 1.00, N = 3 689 592 -O3 -march=native -O1 1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: Rotate -O3 -march=native -O1 200 400 600 800 1000 SE +/- 2.03, N = 3 SE +/- 1.20, N = 3 1094 1078 -O3 -march=native -O1 1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: Sharpen -O3 -march=native -O1 40 80 120 160 200 SE +/- 0.58, N = 3 SE +/- 0.58, N = 3 195 162 -O3 -march=native -O1 1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: Enhanced -O3 -march=native -O1 60 120 180 240 300 SE +/- 0.33, N = 3 270 218 -O3 -march=native -O1 1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: Resizing -O3 -march=native -O1 300 600 900 1200 1500 SE +/- 2.33, N = 3 SE +/- 1.00, N = 3 1222 1021 -O3 -march=native -O1 1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: Noise-Gaussian -O3 -march=native -O1 70 140 210 280 350 SE +/- 0.88, N = 3 310 306 -O3 -march=native -O1 1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: HWB Color Space -O3 -march=native -O1 300 600 900 1200 1500 SE +/- 1.20, N = 3 SE +/- 1.33, N = 3 1285 1207 -O3 -march=native -O1 1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
dav1d Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better dav1d 0.9.0 Video Input: Summer Nature 4K -O3 -march=native -O1 40 80 120 160 200 SE +/- 0.19, N = 3 SE +/- 0.05, N = 3 195.94 185.95 -O3 -march=native - MIN: 181.35 / MAX: 208.71 -O1 - MIN: 169.98 / MAX: 195.75 1. (CC) gcc options: -pthread -lm
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 1 - Input: Bosphorus 1080p -O3 -march=native -O1 3 6 9 12 15 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 9.48 9.20 -march=native -O1 1. (CC) gcc options: -O3 -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 7 - Input: Bosphorus 1080p -O3 -march=native -O1 30 60 90 120 150 SE +/- 0.11, N = 3 SE +/- 0.28, N = 3 140.40 137.23 -march=native -O1 1. (CC) gcc options: -O3 -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 10 - Input: Bosphorus 1080p -O3 -march=native -O1 60 120 180 240 300 SE +/- 0.60, N = 3 SE +/- 0.19, N = 3 279.12 271.99 -march=native -O1 1. (CC) gcc options: -O3 -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: VMAF Optimized - Input: Bosphorus 1080p -O3 -march=native -O1 40 80 120 160 200 SE +/- 1.49, N = 10 SE +/- 1.54, N = 9 198.73 191.41 -march=native -O1 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p -O3 -march=native -O1 40 80 120 160 200 SE +/- 0.17, N = 3 SE +/- 0.07, N = 3 204.96 198.18 -march=native -O1 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: Visual Quality Optimized - Input: Bosphorus 1080p -O3 -march=native -O1 40 80 120 160 200 SE +/- 0.27, N = 3 SE +/- 0.29, N = 3 166.43 160.73 -march=native -O1 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm
x265 This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K options for measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better x265 3.4 Video Input: Bosphorus 4K -O3 -march=native -O1 4 8 12 16 20 SE +/- 0.12, N = 3 SE +/- 0.17, N = 4 16.02 15.72 -O3 -march=native -O1 1. (CXX) g++ options: -O2 -rdynamic -lpthread -lrt -ldl
OpenBenchmarking.org Frames Per Second, More Is Better x265 3.4 Video Input: Bosphorus 1080p -O1 -O3 -march=native 15 30 45 60 75 SE +/- 0.60, N = 3 SE +/- 0.32, N = 3 67.85 67.85 -O1 -O3 -march=native 1. (CXX) g++ options: -O2 -rdynamic -lpthread -lrt -ldl
Stockfish This is a test of Stockfish, an advanced open-source C++11 chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Nodes Per Second, More Is Better Stockfish 13 Total Time -O1 -O3 -march=native 6M 12M 18M 24M 30M SE +/- 371064.83, N = 3 SE +/- 193823.90, N = 3 29448017 29443112 -O1 -march=native 1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -fprofile-use -fno-peel-loops -fno-tracer -flto=jobserver
PJSIP PJSIP is a free and open-source multimedia communication library written in C that implements standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality in a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile makes use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Responses Per Second, More Is Better PJSIP 2.11 Method: INVITE -O3 -march=native -O1 1100 2200 3300 4400 5500 SE +/- 15.24, N = 3 SE +/- 45.51, N = 3 5060 4993 -O3 -march=native -O1 1. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread
OpenBenchmarking.org Responses Per Second, More Is Better PJSIP 2.11 Method: OPTIONS, Stateful -O3 -march=native -O1 2K 4K 6K 8K 10K SE +/- 7.69, N = 3 SE +/- 4.41, N = 3 9375 9333 -O3 -march=native -O1 1. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread
OpenBenchmarking.org Responses Per Second, More Is Better PJSIP 2.11 Method: OPTIONS, Stateless -O3 -march=native -O1 50K 100K 150K 200K 250K SE +/- 711.03, N = 3 SE +/- 520.47, N = 3 254610 247106 -O3 -march=native -O1 1. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread
C-Ray This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better C-Ray 1.1 Total Time - 4K, 16 Rays Per Pixel -O3 -march=native -O1 30 60 90 120 150 SE +/- 0.15, N = 3 SE +/- 0.06, N = 3 47.34 128.91 -march=native -O1 1. (CC) gcc options: -lm -lpthread -O3
Smallpt Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Smallpt 1.0 Global Illumination Renderer; 128 Samples -O3 -march=native -O1 3 6 9 12 15 SE +/- 0.009, N = 3 SE +/- 0.002, N = 3 8.401 9.133 -march=native -O1 1. (CXX) g++ options: -fopenmp -O3
oneDNN This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU -O3 -march=native -O1 0.9109 1.8218 2.7327 3.6436 4.5545 SE +/- 0.00473, N = 3 SE +/- 0.00076, N = 3 4.03781 4.04828 MIN: 3.92 -O1 - MIN: 3.91 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU -O1 -O3 -march=native 3 6 9 12 15 SE +/- 0.01, N = 3 SE +/- 0.00, N = 3 11.03 11.20 -O1 - MIN: 10.93 MIN: 11.11 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU -O1 -O3 -march=native 4 8 12 16 20 SE +/- 0.02, N = 3 SE +/- 0.01, N = 3 14.17 14.28 -O1 - MIN: 14.04 MIN: 14.18 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU -O1 -O3 -march=native 1.1211 2.2422 3.3633 4.4844 5.6055 SE +/- 0.01654, N = 3 SE +/- 0.01117, N = 3 4.97288 4.98281 -O1 - MIN: 3.81 MIN: 3.81 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU -O1 -O3 -march=native 0.9652 1.9304 2.8956 3.8608 4.826 SE +/- 0.00335, N = 3 SE +/- 0.00621, N = 3 4.28224 4.28984 -O1 - MIN: 4.17 MIN: 4.17 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU -O1 -O3 -march=native 700 1400 2100 2800 3500 SE +/- 2.80, N = 3 SE +/- 1.32, N = 3 3133.28 3165.60 -O1 - MIN: 3120.48 MIN: 3154.25 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU -O1 -O3 -march=native 400 800 1200 1600 2000 SE +/- 4.14, N = 3 SE +/- 1.46, N = 3 1854.40 1876.42 -O1 - MIN: 1837.76 MIN: 1865.18 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU -O3 -march=native -O1 0.7931 1.5862 2.3793 3.1724 3.9655 SE +/- 0.00042, N = 3 SE +/- 0.00163, N = 3 3.52485 3.52499 MIN: 3.46 -O1 - MIN: 3.45 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
LAME MP3 Encoding LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.
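For reference, a minimal sketch of the libmp3lame encode loop that underlies a WAV-to-MP3 conversion; the sample rate, channel count, and quality setting shown are assumptions for illustration, not the test profile's exact invocation.

    #include <lame/lame.h>

    /* Encode nsamples of interleaved stereo PCM to MP3 in one pass. */
    int encode_pcm_to_mp3(short *pcm, int nsamples, unsigned char *mp3buf, int mp3size)
    {
        lame_t gf = lame_init();
        lame_set_in_samplerate(gf, 44100);      /* assumed input rate */
        lame_set_num_channels(gf, 2);
        lame_set_quality(gf, 2);                /* assumed quality setting */
        if (lame_init_params(gf) < 0) {
            lame_close(gf);
            return -1;
        }
        int n = lame_encode_buffer_interleaved(gf, pcm, nsamples, mp3buf, mp3size);
        if (n >= 0)
            n += lame_encode_flush(gf, mp3buf + n, mp3size - n);
        lame_close(gf);
        return n;                               /* bytes of MP3 data, or negative error */
    }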
OpenBenchmarking.org Seconds, Fewer Is Better LAME MP3 Encoding 3.100 WAV To MP3 -O3 -march=native -O1 2 4 6 8 10 SE +/- 0.008, N = 3 SE +/- 0.092, N = 4 5.473 7.675 -march=native -O1 1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lm
Opus Codec Encoding Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
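The test profile drives the Opus-Tools encoder; as a sketch of the underlying libopus encoder API it exercises, the following encodes one 20 ms stereo frame, with the bitrate and application mode being illustrative assumptions.

    #include <opus/opus.h>

    /* Encode one 20 ms stereo frame (960 samples per channel at 48 kHz). */
    int encode_frame(const opus_int16 *pcm, unsigned char *out, int out_max)
    {
        int err = 0;
        OpusEncoder *enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
        if (err != OPUS_OK)
            return -1;
        opus_encoder_ctl(enc, OPUS_SET_BITRATE(96000));   /* assumed bitrate */
        int n = opus_encode(enc, pcm, 960, out, out_max);
        opus_encoder_destroy(enc);
        return n;                               /* bytes written, or negative error code */
    }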
OpenBenchmarking.org Seconds, Fewer Is Better Opus Codec Encoding 1.3.1 WAV To Opus Encode -O3 -march=native -O1 2 4 6 8 10 SE +/- 0.010, N = 5 SE +/- 0.004, N = 5 5.595 6.828 -O3 -march=native -O1 1. (CXX) g++ options: -fvisibility=hidden -logg -lm
eSpeak-NG Speech Engine This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better eSpeak-NG Speech Engine 20200907 Text-To-Speech Synthesis -O3 -march=native -O1 6 12 18 24 30 SE +/- 0.06, N = 4 SE +/- 0.07, N = 4 21.77 24.00 -O3 -march=native -O1 1. (CC) gcc options: -std=c99 -lpthread -lm
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
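For context, a small C sketch of the liquid-dsp FIR filtering API that this profile's filter-length-57, 256-sample-buffer configuration stresses; the Kaiser-designed taps and constant input used here are placeholders rather than the benchmark's exact setup.

    #include <liquid/liquid.h>
    #include <complex.h>

    /* Run one 256-sample buffer through a 57-tap FIR filter. */
    void filter_one_buffer(void)
    {
        unsigned int h_len = 57, buf_len = 256;
        float h[57];
        liquid_firdes_kaiser(h_len, 0.25f, 60.0f, 0.0f, h);  /* example tap design */

        firfilt_crcf q = firfilt_crcf_create(h, h_len);
        float complex x = 1.0f, y;
        for (unsigned int i = 0; i < buf_len; i++) {
            firfilt_crcf_push(q, x);            /* push one input sample */
            firfilt_crcf_execute(q, &y);        /* compute one output sample */
        }
        firfilt_crcf_destroy(q);
    }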
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 2021.01.31 Threads: 1 - Buffer Length: 256 - Filter Length: 57 -O3 -march=native -O1 20M 40M 60M 80M 100M SE +/- 14836.14, N = 3 SE +/- 6806.86, N = 3 99844333 88411000 -march=native -O1 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 2021.01.31 Threads: 2 - Buffer Length: 256 - Filter Length: 57 -O3 -march=native -O1 40M 80M 120M 160M 200M SE +/- 66416.20, N = 3 SE +/- 601728.99, N = 3 188003333 162046667 -march=native -O1 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 2021.01.31 Threads: 4 - Buffer Length: 256 - Filter Length: 57 -O3 -march=native -O1 80M 160M 240M 320M 400M SE +/- 1410968.93, N = 3 SE +/- 132035.35, N = 3 363760000 316710000 -march=native -O1 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 2021.01.31 Threads: 8 - Buffer Length: 256 - Filter Length: 57 -O3 -march=native -O1 150M 300M 450M 600M 750M SE +/- 689597.31, N = 3 SE +/- 736168.76, N = 3 687846667 595816667 -march=native -O1 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 2021.01.31 Threads: 16 - Buffer Length: 256 - Filter Length: 57 -O3 -march=native -O1 150M 300M 450M 600M 750M SE +/- 134824.99, N = 3 SE +/- 328295.26, N = 3 722756667 672296667 -march=native -O1 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better ASTC Encoder 3.0 Preset: Medium -O3 -march=native -O1 0.9811 1.9622 2.9433 3.9244 4.9055 SE +/- 0.0026, N = 3 SE +/- 0.0112, N = 3 4.2153 4.3606 -O3 -march=native -O1 1. (CXX) g++ options: -O2 -flto -pthread
OpenBenchmarking.org Seconds, Fewer Is Better ASTC Encoder 3.0 Preset: Thorough -O3 -march=native -O1 3 6 9 12 15 SE +/- 0.0151, N = 3 SE +/- 0.0228, N = 3 9.3601 9.7734 -O3 -march=native -O1 1. (CXX) g++ options: -O2 -flto -pthread
OpenBenchmarking.org Seconds, Fewer Is Better ASTC Encoder 3.0 Preset: Exhaustive -O3 -march=native -O1 12 24 36 48 60 SE +/- 0.04, N = 3 SE +/- 0.02, N = 3 51.49 53.25 -O3 -march=native -O1 1. (CXX) g++ options: -O2 -flto -pthread
Basis Universal Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Basis Universal 1.13 Settings: ETC1S -O3 -march=native -O1 5 10 15 20 25 SE +/- 0.02, N = 3 SE +/- 0.03, N = 3 20.81 20.85 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O2 -rdynamic -lm -lpthread
OpenBenchmarking.org Seconds, Fewer Is Better Basis Universal 1.13 Settings: UASTC Level 0 -O3 -march=native -O1 2 4 6 8 10 SE +/- 0.002, N = 3 SE +/- 0.005, N = 3 6.106 6.114 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O2 -rdynamic -lm -lpthread
OpenBenchmarking.org Seconds, Fewer Is Better Basis Universal 1.13 Settings: UASTC Level 2 -O1 -O3 -march=native 7 14 21 28 35 SE +/- 0.08, N = 3 SE +/- 0.08, N = 3 29.11 29.14 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O2 -rdynamic -lm -lpthread
OpenBenchmarking.org Seconds, Fewer Is Better Basis Universal 1.13 Settings: UASTC Level 3 -O1 -O3 -march=native 12 24 36 48 60 SE +/- 0.02, N = 3 SE +/- 0.00, N = 3 54.56 54.59 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O2 -rdynamic -lm -lpthread
Redis Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
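As a sketch of the GET/SET operations being measured, the following C example issues one SET and one GET through the hiredis client library; the host, port, and key names are illustrative, and this is client-side illustration rather than the benchmark's own load-generation harness.

    #include <hiredis/hiredis.h>
    #include <stdio.h>

    int main(void)
    {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err)
            return 1;

        redisReply *r = redisCommand(c, "SET %s %s", "bench:key", "value");
        freeReplyObject(r);

        r = redisCommand(c, "GET %s", "bench:key");
        if (r != NULL && r->type == REDIS_REPLY_STRING)
            printf("GET -> %s\n", r->str);
        freeReplyObject(r);

        redisFree(c);
        return 0;
    }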
OpenBenchmarking.org Requests Per Second, More Is Better Redis 6.0.9 Test: GET -O3 -march=native -O1 900K 1800K 2700K 3600K 4500K SE +/- 18099.88, N = 3 SE +/- 33158.80, N = 3 4049394.67 3982525.83 -march=native -O1 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 6.0.9 Test: SET -O1 -O3 -march=native 600K 1200K 1800K 2400K 3000K SE +/- 19439.73, N = 3 SE +/- 33577.98, N = 3 2962660.83 2956462.00 -O1 -march=native 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Caffe This is a benchmark of the Caffe deep learning framework that currently supports the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Milli-Seconds, Fewer Is Better Caffe 2020-02-13 Model: AlexNet - Acceleration: CPU - Iterations: 100 -O3 -march=native -O1 8K 16K 24K 32K 40K SE +/- 51.83, N = 3 SE +/- 14.01, N = 3 36558 36622 -O3 -march=native -O1 1. (CXX) g++ options: -fPIC -O2 -rdynamic -lboost_system -lboost_thread -lboost_filesystem -lboost_chrono -lboost_date_time -lboost_atomic -lglog -lgflags -lprotobuf -lpthread -lhdf5_cpp -lhdf5 -lhdf5_hl_cpp -lhdf5_hl -llmdb -lopenblas
OpenBenchmarking.org Milli-Seconds, Fewer Is Better Caffe 2020-02-13 Model: GoogleNet - Acceleration: CPU - Iterations: 100 -O3 -march=native -O1 20K 40K 60K 80K 100K SE +/- 43.97, N = 3 SE +/- 10.17, N = 3 83625 84729 -O3 -march=native -O1 1. (CXX) g++ options: -fPIC -O2 -rdynamic -lboost_system -lboost_thread -lboost_filesystem -lboost_chrono -lboost_date_time -lboost_atomic -lglog -lgflags -lprotobuf -lpthread -lhdf5_cpp -lhdf5 -lhdf5_hl_cpp -lhdf5_hl -llmdb -lopenblas
Mobile Neural Network MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 1.1.3 Model: SqueezeNetV1.0 -O3 -march=native -O1 0.8658 1.7316 2.5974 3.4632 4.329 SE +/- 0.024, N = 3 SE +/- 0.019, N = 3 3.748 3.848 -march=native - MIN: 3.64 / MAX: 10.5 -O1 - MIN: 3.75 / MAX: 8.08 1. (CXX) g++ options: -O3 -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -O2 -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 1.1.3 Model: resnet-v2-50 -O3 -march=native -O1 5 10 15 20 25 SE +/- 0.02, N = 3 SE +/- 0.02, N = 3 19.22 19.51 -march=native - MIN: 19.06 / MAX: 24.92 -O1 - MIN: 19.33 / MAX: 23.75 1. (CXX) g++ options: -O3 -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -O2 -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 1.1.3 Model: MobileNetV2_224 -O3 -march=native -O1 0.446 0.892 1.338 1.784 2.23 SE +/- 0.008, N = 3 SE +/- 0.011, N = 3 1.916 1.982 -march=native - MIN: 1.87 / MAX: 6.22 -O1 - MIN: 1.93 / MAX: 7.73 1. (CXX) g++ options: -O3 -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -O2 -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 1.1.3 Model: mobilenet-v1-1.0 -O3 -march=native -O1 0.4322 0.8644 1.2966 1.7288 2.161 SE +/- 0.001, N = 3 SE +/- 0.004, N = 3 1.883 1.921 -march=native - MIN: 1.85 / MAX: 7.81 -O1 - MIN: 1.89 / MAX: 9.19 1. (CXX) g++ options: -O3 -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -O2 -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 1.1.3 Model: inception-v3 -O3 -march=native -O1 5 10 15 20 25 SE +/- 0.02, N = 3 SE +/- 0.01, N = 3 22.51 22.94 -march=native - MIN: 22.19 / MAX: 27.64 -O1 - MIN: 22.65 / MAX: 29.53 1. (CXX) g++ options: -O3 -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -O2 -rdynamic -pthread -ldl
NCNN NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: mobilenet -O3 -march=native -O1 4 8 12 16 20 SE +/- 0.06, N = 3 SE +/- 0.00, N = 3 11.76 15.02 -O3 -march=native - MIN: 11.54 / MAX: 15.41 -O1 - MIN: 14.88 / MAX: 18.66 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU-v2-v2 - Model: mobilenet-v2 -O3 -march=native -O1 0.9428 1.8856 2.8284 3.7712 4.714 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 3.21 4.19 -O3 -march=native - MIN: 3.08 / MAX: 4.11 -O1 - MIN: 4.06 / MAX: 7.81 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU-v3-v3 - Model: mobilenet-v3 -O3 -march=native -O1 0.7178 1.4356 2.1534 2.8712 3.589 SE +/- 0.00, N = 3 SE +/- 0.01, N = 3 2.49 3.19 -O3 -march=native - MIN: 2.44 / MAX: 6.14 -O1 - MIN: 3.16 / MAX: 4.05 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: shufflenet-v2 -O3 -march=native -O1 0.7763 1.5526 2.3289 3.1052 3.8815 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 3.26 3.45 -O3 -march=native - MIN: 3.18 / MAX: 6.94 -O1 - MIN: 3.39 / MAX: 7.07 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: mnasnet -O3 -march=native -O1 0.7133 1.4266 2.1399 2.8532 3.5665 SE +/- 0.02, N = 3 SE +/- 0.01, N = 3 2.22 3.17 -O3 -march=native - MIN: 2.17 / MAX: 2.35 -O1 - MIN: 3.14 / MAX: 6.8 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: efficientnet-b0 -O3 -march=native -O1 1.179 2.358 3.537 4.716 5.895 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 4.24 5.24 -O3 -march=native - MIN: 4.19 / MAX: 7.9 -O1 - MIN: 5.17 / MAX: 8.84 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: blazeface -O3 -march=native -O1 0.279 0.558 0.837 1.116 1.395 SE +/- 0.03, N = 3 SE +/- 0.01, N = 3 1.15 1.24 -O3 -march=native - MIN: 1.08 / MAX: 2 -O1 - MIN: 1.21 / MAX: 5.59 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: googlenet -O3 -march=native -O1 3 6 9 12 15 SE +/- 0.17, N = 3 SE +/- 0.02, N = 3 10.09 11.40 -O3 -march=native - MIN: 9.67 / MAX: 13.94 -O1 - MIN: 11.29 / MAX: 14.99 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: vgg16 -O3 -march=native -O1 12 24 36 48 60 SE +/- 0.11, N = 3 SE +/- 0.09, N = 3 54.36 54.91 -O3 -march=native - MIN: 53.85 / MAX: 59.24 -O1 - MIN: 54.36 / MAX: 58.94 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: resnet18 -O3 -march=native -O1 3 6 9 12 15 SE +/- 0.14, N = 3 SE +/- 0.01, N = 3 11.08 11.47 -O3 -march=native - MIN: 10.69 / MAX: 16.91 -O1 - MIN: 11.34 / MAX: 15.37 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: alexnet -O1 -O3 -march=native 3 6 9 12 15 SE +/- 0.01, N = 3 SE +/- 0.02, N = 3 9.62 9.64 -O1 - MIN: 9.5 / MAX: 13.21 -O3 -march=native - MIN: 9.53 / MAX: 13.24 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: resnet50 -O3 -march=native -O1 5 10 15 20 25 SE +/- 0.15, N = 3 SE +/- 0.03, N = 3 18.23 22.29 -O3 -march=native - MIN: 17.79 / MAX: 22.11 -O1 - MIN: 22.02 / MAX: 27 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: yolov4-tiny -O3 -march=native -O1 5 10 15 20 25 SE +/- 0.03, N = 3 SE +/- 0.05, N = 3 20.21 21.26 -O3 -march=native - MIN: 20.03 / MAX: 23.86 -O1 - MIN: 20.97 / MAX: 27.08 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: squeezenet_ssd -O3 -march=native -O1 4 8 12 16 20 SE +/- 0.01, N = 3 SE +/- 0.03, N = 3 15.29 16.18 -O3 -march=native - MIN: 15.14 / MAX: 19 -O1 - MIN: 16.02 / MAX: 19.89 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: regnety_400m -O3 -march=native -O1 3 6 9 12 15 SE +/- 0.02, N = 3 SE +/- 0.05, N = 3 8.57 9.73 -O3 -march=native - MIN: 8.47 / MAX: 12.35 -O1 - MIN: 9.55 / MAX: 14.41 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better TNN 0.2.3 Target: CPU - Model: MobileNet v2 -O3 -march=native -O1 50 100 150 200 250 SE +/- 0.06, N = 3 SE +/- 0.20, N = 3 230.11 243.16 -O3 -march=native - MIN: 229.52 / MAX: 232.81 -O1 - MIN: 241.63 / MAX: 246.21 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O2 -rdynamic -ldl
OpenBenchmarking.org ms, Fewer Is Better TNN 0.2.3 Target: CPU - Model: SqueezeNet v1.1 -O3 -march=native -O1 50 100 150 200 250 SE +/- 0.04, N = 3 SE +/- 0.15, N = 3 227.46 235.96 -O3 -march=native - MIN: 226.88 / MAX: 228.23 -O1 - MIN: 234.76 / MAX: 237.84 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O2 -rdynamic -ldl
Sysbench This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Events Per Second, More Is Better Sysbench 1.0.20 Test: CPU -O1 -O3 -march=native 7K 14K 21K 28K 35K SE +/- 6.87, N = 3 SE +/- 2.38, N = 3 34882.14 34770.14 -O1 -O3 -march=native 1. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
Kripke Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Throughput FoM, More Is Better Kripke 1.2.4 -O1 -O3 -march=native 7M 14M 21M 28M 35M SE +/- 73388.54, N = 3 SE +/- 88809.55, N = 3 33790753 33544357 -O1 -O3 -march=native 1. (CXX) g++ options: -O2 -fopenmp
-O3 -march=native:
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
Disk Notes: NONE / compress=zstd:1,relatime,rw,seclabel,space_cache,ssd,subvol=/home,subvolid=256 / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x3c - Thermald 2.4.4
Python Notes: Python 3.9.5
Security Notes: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 9 June 2021 17:11 by user phoronix.
-O1:
Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads), Motherboard: ASUS ROG MAXIMUS XIII HERO (0707 BIOS), Chipset: Intel Tiger Lake-H, Memory: 32GB, Disk: 2000GB Corsair Force MP600 + 257GB Flash Drive, Graphics: AMD Radeon VII 16GB (1801/1000MHz), Audio: Intel Tiger Lake-H HD Audio, Monitor: ASUS MG28U, Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Fedora 34, Kernel: 5.12.9-300.fc34.x86_64 (x86_64), Desktop: GNOME Shell 40.1, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 21.1.1 (LLVM 12.0.0), Compiler: GCC 11.1.1 20210531, File-System: btrfs, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS=-O1 CFLAGS=-O1
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
Disk Notes: NONE / compress=zstd:1,relatime,rw,seclabel,space_cache,ssd,subvol=/home,subvolid=256 / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x3c - Thermald 2.4.4
Python Notes: Python 3.9.5
Security Notes: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 10 June 2021 11:06 by user phoronix.