Intel Core i9-11900K testing with an ASUS ROG MAXIMUS XIII HERO (0707 BIOS) and AMD Radeon VII 16GB on Fedora 34 via the Phoronix Test Suite.
GCC 11.1: -O3 -march=native
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x3c - Thermald 2.4.1
Security Notes: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
GCC 11.1: -O3 -march=native -flto
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native -flto" CFLAGS="-O3 -march=native -flto"
Compiler Notes, Processor Notes, and Security Notes: identical to the -O3 -march=native configuration above.
GCC 11.1: -O2
Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads), Motherboard: ASUS ROG MAXIMUS XIII HERO (0707 BIOS), Chipset: Intel Tiger Lake-H, Memory: 32GB, Disk: 500GB Western Digital WDS500G3X0C-00SJG0 + 15GB Ultra USB 3.0, Graphics: AMD Radeon VII 16GB (1801/1000MHz), Audio: Intel Tiger Lake-H HD Audio, Monitor: ASUS MG28U, Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Fedora 34, Kernel: 5.11.20-300.fc34.x86_64 (x86_64), Desktop: GNOME Shell 40.1, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 21.0.3 (LLVM 12.0.0), Compiler: GCC 11.1.1 20210428, File-System: btrfs, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS=-O2 CFLAGS=-O2
Compiler Notes, Processor Notes, and Security Notes: identical to the configurations above.
Quantum ESPRESSO Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.
Quantum ESPRESSO 6.7, Input: AUSURF112 (Seconds, Fewer Is Better):
  -O3 -march=native -flto: 2540.19 (SE +/- 24.60, N = 3)
  -O3 -march=native: 2576.97 (SE +/- 21.65, N = 3)
  -O2: 2538.25 (SE +/- 18.09, N = 3)
1. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
GNU GMP GMPbench GMPbench is a test of the GNU Multiple Precision Arithmetic (GMP) Library. GMPbench is a single-threaded integer benchmark that leverages the GMP library to stress the CPU with widening integer multiplication. Learn more via the OpenBenchmarking.org test page.
GNU GMP GMPbench 6.2.1, Total Time (GMPbench Score, More Is Better):
  -O3 -march=native -flto: 6171.6
  -O3 -march=native: 6172.9
1. (CC) gcc options: -O3 -march=native -lm (the first run additionally used -flto)
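GMPbench stresses widening integer multiplication: the product of two n-bit operands is roughly 2n bits wide, which is exactly the limb-by-limb work GMP's mpn layer optimizes. A minimal sketch of that pattern, using Python's built-in arbitrary-precision integers as a stand-in for GMP's mpz type (function names here are illustrative, not GMP API):

```python
# Sketch of the "widening multiply" pattern GMPbench stresses:
# two n-bit operands yield a ~2n-bit product. Python big ints
# stand in for GMP's mpz type.
import random
import time

def widening_mul_demo(bits: int) -> tuple[int, int]:
    """Multiply two exactly-`bits`-wide integers; return (product width, bits)."""
    a = random.getrandbits(bits) | (1 << (bits - 1))  # force exact bit length
    b = random.getrandbits(bits) | (1 << (bits - 1))
    return (a * b).bit_length(), bits

def throughput(bits: int, iters: int = 1000) -> float:
    """Rough multiplications-per-second figure for one operand size."""
    a = random.getrandbits(bits) | (1 << (bits - 1))
    b = random.getrandbits(bits) | (1 << (bits - 1))
    t0 = time.perf_counter()
    for _ in range(iters):
        a * b
    return iters / (time.perf_counter() - t0)

if __name__ == "__main__":
    prod_bits, n = widening_mul_demo(4096)
    # The product of two n-bit operands is either 2n-1 or 2n bits wide.
    print(prod_bits in (2 * n - 1, 2 * n))
```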
NCNN NCNN is a high-performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. Learn more via the OpenBenchmarking.org test page.
NCNN 20201218, Target: CPU - Model: regnety_400m (ms, Fewer Is Better):
  -O3 -march=native -flto: 8.91 (SE +/- 0.06, N = 3; MIN: 8.72 / MAX: 12.48)
  -O3 -march=native: 8.62 (SE +/- 0.03, N = 3; MIN: 8.51 / MAX: 12.11)
  -O2: 9.61 (SE +/- 0.02, N = 12; MIN: 9.44 / MAX: 13.78)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better):
  -O3 -march=native -flto: 15.92 (SE +/- 0.28, N = 3; MIN: 15.55 / MAX: 21.06)
  -O3 -march=native: 15.53 (SE +/- 0.12, N = 3; MIN: 15.19 / MAX: 20.95)
  -O2: 16.15 (SE +/- 0.01, N = 15; MIN: 15.95 / MAX: 21.54)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better):
  -O3 -march=native -flto: 23.45 (SE +/- 0.08, N = 3; MIN: 23.14 / MAX: 26.98)
  -O3 -march=native: 20.23 (SE +/- 0.04, N = 3; MIN: 20.02 / MAX: 23.8)
  -O2: 21.07 (SE +/- 0.09, N = 15; MIN: 20.27 / MAX: 26.62)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU - Model: resnet50 (ms, Fewer Is Better):
  -O3 -march=native -flto: 18.43 (SE +/- 0.06, N = 3; MIN: 18.19 / MAX: 22.12)
  -O3 -march=native: 18.23 (SE +/- 0.16, N = 3; MIN: 17.76 / MAX: 22.08)
  -O2: 22.07 (SE +/- 0.08, N = 15; MIN: 21.33 / MAX: 27.94)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU - Model: alexnet (ms, Fewer Is Better):
  -O3 -march=native -flto: 9.70 (SE +/- 0.02, N = 3; MIN: 9.56 / MAX: 13.19)
  -O3 -march=native: 9.63 (SE +/- 0.01, N = 3; MIN: 9.56 / MAX: 13.14)
  -O2: 9.63 (SE +/- 0.01, N = 15; MIN: 9.47 / MAX: 14.51)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU - Model: resnet18 (ms, Fewer Is Better):
  -O3 -march=native -flto: 11.39 (SE +/- 0.02, N = 3; MIN: 11.27 / MAX: 15.15)
  -O3 -march=native: 11.08 (SE +/- 0.17, N = 3; MIN: 10.66 / MAX: 16.66)
  -O2: 11.30 (SE +/- 0.06, N = 14; MIN: 10.84 / MAX: 14.99)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU - Model: vgg16 (ms, Fewer Is Better):
  -O3 -march=native -flto: 54.13 (SE +/- 0.13, N = 3; MIN: 53.54 / MAX: 59.11)
  -O3 -march=native: 54.50 (SE +/- 0.14, N = 3; MIN: 53.96 / MAX: 58.57)
  -O2: 54.80 (SE +/- 0.05, N = 15; MIN: 54.15 / MAX: 64)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU - Model: googlenet (ms, Fewer Is Better):
  -O3 -march=native -flto: 10.27 (SE +/- 0.13, N = 3; MIN: 9.93 / MAX: 13.87)
  -O3 -march=native: 10.20 (SE +/- 0.21, N = 3; MIN: 9.72 / MAX: 13.93)
  -O2: 11.11 (SE +/- 0.08, N = 15; MIN: 10.75 / MAX: 16.77)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU - Model: blazeface (ms, Fewer Is Better):
  -O3 -march=native -flto: 1.69 (SE +/- 0.01, N = 3; MIN: 1.64 / MAX: 2.46)
  -O3 -march=native: 1.19 (SE +/- 0.06, N = 3; MIN: 1.09 / MAX: 2.02)
  -O2: 1.19 (SE +/- 0.01, N = 15; MIN: 1.14 / MAX: 5.67)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better):
  -O3 -march=native -flto: 4.32 (SE +/- 0.01, N = 3; MIN: 4.25 / MAX: 8.71)
  -O3 -march=native: 4.38 (SE +/- 0.08, N = 3; MIN: 4.18 / MAX: 7.9)
  -O2: 5.23 (SE +/- 0.02, N = 15; MIN: 5.12 / MAX: 8.96)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU - Model: mnasnet (ms, Fewer Is Better):
  -O3 -march=native -flto: 2.27 (SE +/- 0.01, N = 3; MIN: 2.21 / MAX: 5.8)
  -O3 -march=native: 2.30 (SE +/- 0.06, N = 3; MIN: 2.18 / MAX: 3.19)
  -O2: 3.11 (SE +/- 0.01, N = 14; MIN: 3.05 / MAX: 9.87)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better):
  -O3 -march=native -flto: 5.60 (SE +/- 0.06, N = 3; MIN: 5.41 / MAX: 9.21)
  -O3 -march=native: 3.24 (SE +/- 0.02, N = 3; MIN: 3.17 / MAX: 6.75)
  -O2: 3.48 (SE +/- 0.00, N = 15; MIN: 3.4 / MAX: 7.05)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better):
  -O3 -march=native -flto: 2.52 (SE +/- 0.01, N = 3; MIN: 2.47 / MAX: 6.06)
  -O3 -march=native: 2.55 (SE +/- 0.05, N = 3; MIN: 2.43 / MAX: 6.16)
  -O2: 3.18 (SE +/- 0.01, N = 15; MIN: 3.1 / MAX: 6.78)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better):
  -O3 -march=native -flto: 3.25 (SE +/- 0.01, N = 3; MIN: 3.14 / MAX: 6.68)
  -O3 -march=native: 3.24 (SE +/- 0.04, N = 3; MIN: 3.1 / MAX: 6.67)
  -O2: 4.20 (SE +/- 0.02, N = 15; MIN: 4.04 / MAX: 7.77)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: CPU - Model: mobilenet (ms, Fewer Is Better):
  -O3 -march=native -flto: 13.34 (SE +/- 0.01, N = 3; MIN: 13.01 / MAX: 16.82)
  -O3 -march=native: 11.83 (SE +/- 0.06, N = 3; MIN: 11.62 / MAX: 15.38)
  -O2: 15.15 (SE +/- 0.14, N = 15; MIN: 14.73 / MAX: 342.42)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
x265 This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.
x265 3.4, Video Input: Bosphorus 4K (Frames Per Second, More Is Better):
  -O3 -march=native -flto: 15.40 (SE +/- 0.15, N = 6)
  -O3 -march=native: 15.81 (SE +/- 0.13, N = 15)
  -O2: 15.64 (SE +/- 0.21, N = 3)
1. (CXX) g++ options: -O2 -rdynamic -lpthread -lrt -ldl
Timed HMMer Search This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.
Timed HMMer Search 3.3.2, Pfam Database Search (Seconds, Fewer Is Better):
  -O3 -march=native -flto: 99.97 (SE +/- 0.08, N = 3)
  -O3 -march=native: 100.74 (SE +/- 0.04, N = 3)
  -O2: 103.29 (SE +/- 0.08, N = 3)
1. (CC) gcc options: -pthread -lhmmer -leasel -lm -lmpi
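The core operation in a profile-HMM search like hmmsearch is scoring a sequence window against position-specific residue scores and keeping the best-scoring alignment. A heavily simplified, match-states-only sketch (the profile values and alphabet here are invented for illustration; real HMMER profiles also model insert/delete states and use calibrated E-value statistics):

```python
# Toy "profile search": each model position holds log-odds scores per
# residue; slide the profile along the sequence and keep the best window.
# This is NOT the HMMER algorithm, only the scoring idea behind it.

# Hypothetical 4-position profile over a toy alphabet (log-odds scores).
PROFILE = [
    {"A": 1.2, "C": -0.5, "G": -0.7, "T": -0.9},
    {"A": -0.8, "C": 1.1, "G": -0.4, "T": -0.6},
    {"A": -0.3, "C": -0.9, "G": 1.3, "T": -0.8},
    {"A": -0.7, "C": -0.5, "G": -0.6, "T": 1.0},
]

def score_window(window: str) -> float:
    """Sum per-position log-odds scores for one alignment of the window."""
    return sum(PROFILE[i][ch] for i, ch in enumerate(window))

def best_hit(seq: str) -> tuple[int, float]:
    """Slide the profile along the sequence; return (offset, best score)."""
    k = len(PROFILE)
    scores = [(i, score_window(seq[i:i + k])) for i in range(len(seq) - k + 1)]
    return max(scores, key=lambda t: t[1])

if __name__ == "__main__":
    off, sc = best_hit("TTACGTTT")
    print(off, round(sc, 1))  # the embedded ACGT motif scores highest
```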
ASTC Encoder ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 2.4, Preset: Exhaustive (Seconds, Fewer Is Better):
  -O3 -march=native -flto: 85.42 (SE +/- 0.01, N = 3)
  -O3 -march=native: 85.42 (SE +/- 0.00, N = 3)
  -O2: 91.38 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -flto -O2 -pthread
Sysbench This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
Sysbench 1.0.20, Test: CPU (Events Per Second, More Is Better):
  -O3 -march=native -flto: 34751.01 (SE +/- 1.11, N = 3)
  -O3 -march=native: 34776.08 (SE +/- 0.97, N = 3)
  -O2: 34799.70 (SE +/- 0.65, N = 3)
1. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
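Sysbench's CPU sub-test reports events per second, where each event verifies the primes up to a configurable limit by trial division. A minimal sketch of that workload (the limit mirrors sysbench's --cpu-max-prime option in spirit; the value used here is illustrative, not the sysbench default):

```python
# Sketch of the Sysbench CPU sub-test: one "event" counts the primes up
# to a limit by trial division, and the score is events per second.
import time

def one_event(max_prime: int = 2000) -> int:
    """Count primes up to max_prime by trial division (one event's work)."""
    count = 0
    for n in range(3, max_prime + 1):
        d = 2
        while d * d <= n:
            if n % d == 0:
                break
            d += 1
        else:
            count += 1
    return count + 1  # include 2

def events_per_second(duration: float = 0.2) -> float:
    """Run events back-to-back for `duration` seconds; return events/sec."""
    t0 = time.perf_counter()
    events = 0
    while time.perf_counter() - t0 < duration:
        one_event()
        events += 1
    return events / (time.perf_counter() - t0)
```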
Timed MrBayes Analysis This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
Timed MrBayes Analysis 3.2.7, Primate Phylogeny Analysis (Seconds, Fewer Is Better):
  -O3 -march=native -flto: 84.93 (SE +/- 0.32, N = 3)
  -O3 -march=native: 86.70 (SE +/- 0.06, N = 3)
  -O2: 87.30 (SE +/- 0.53, N = 3)
1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mmpx -mabm -O3 -std=c99 -pedantic -lm
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.1.2, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better):
  -O3 -march=native -flto: 3152.89 (SE +/- 3.52, N = 3; MIN: 3137.49)
  -O3 -march=native: 3173.47 (SE +/- 2.46, N = 3; MIN: 3161.04)
  -O2: 3124.56 (SE +/- 0.76, N = 3; MIN: 3112.25)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
oneDNN 2.1.2, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better):
  -O3 -march=native -flto: 3154.69 (SE +/- 3.24, N = 3; MIN: 3138.34)
  -O3 -march=native: 3171.46 (SE +/- 0.26, N = 3; MIN: 3160.11)
  -O2: 3123.95 (SE +/- 2.61, N = 3; MIN: 3109.77)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
oneDNN 2.1.2, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better):
  -O3 -march=native -flto: 3148.67 (SE +/- 0.32, N = 3; MIN: 3137.59)
  -O3 -march=native: 3172.19 (SE +/- 1.30, N = 3; MIN: 3159.8)
  -O2: 3123.64 (SE +/- 5.44, N = 3; MIN: 3105.42)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better):
  -O3 -march=native -flto: 1874.70 (SE +/- 1.23, N = 3; MIN: 1865.22)
  -O3 -march=native: 1890.59 (SE +/- 2.13, N = 3; MIN: 1879.82)
  -O2: 1845.74 (SE +/- 1.34, N = 3; MIN: 1834.84)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better):
  -O3 -march=native -flto: 1876.25 (SE +/- 1.27, N = 3; MIN: 1866.41)
  -O3 -march=native: 1887.61 (SE +/- 0.80, N = 3; MIN: 1877.87)
  -O2: 1842.14 (SE +/- 1.67, N = 3; MIN: 1831.93)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better):
  -O3 -march=native -flto: 1877.51 (SE +/- 1.29, N = 3; MIN: 1866.09)
  -O3 -march=native: 1891.71 (SE +/- 1.66, N = 3; MIN: 1880.74)
  -O2: 1841.63 (SE +/- 0.68, N = 3; MIN: 1831.74)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
dav1d Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
dav1d 0.8.2, Video Input: Chimera 1080p 10-bit (FPS, More Is Better):
  -O3 -march=native: 223.02 (SE +/- 0.03, N = 3; MIN: 153.51 / MAX: 490.73)
  -O2: 148.40 (SE +/- 0.09, N = 3; MIN: 95.23 / MAX: 345.29)
1. (CC) gcc options: -pthread
Crypto++ Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.
Crypto++ 8.2, Test: Unkeyed Algorithms (MiB/second, More Is Better):
  -O3 -march=native -flto: 488.63 (SE +/- 0.29, N = 3)
  -O3 -march=native: 489.76 (SE +/- 0.14, N = 3)
  -O2: 491.64 (SE +/- 0.06, N = 3)
1. (CXX) g++ options: -fPIC -pthread -pipe
C-Ray This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.
C-Ray 1.1, Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better):
  -O3 -march=native -flto: 47.61 (SE +/- 0.16, N = 3)
  -O3 -march=native: 47.35 (SE +/- 0.15, N = 3)
  -O2: 106.52 (SE +/- 0.05, N = 3)
1. (CC) gcc options: -lm -lpthread -O3
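The inner loop of a raytracer like C-Ray shoots several jittered rays through each pixel and averages the results for anti-aliasing. A minimal sketch of that per-pixel loop, with the geometry reduced to a single ray/sphere intersection test (all names, positions, and radii here are illustrative, not C-Ray's actual scene):

```python
# Sketch of a raytracer's per-pixel anti-aliasing loop: several jittered
# rays per pixel, each tested against scene geometry (one sphere here).
import math
import random

def hit_sphere(ox, oy, oz, dx, dy, dz, cx, cy, cz, r) -> bool:
    """Ray/sphere intersection via the quadratic discriminant."""
    lx, ly, lz = ox - cx, oy - cy, oz - cz
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - r * r
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False
    # Require an intersection in front of the ray origin (t > 0).
    return (-b + math.sqrt(disc)) / (2.0 * a) > 1e-9

def shade_pixel(px: float, py: float, rays: int = 8) -> float:
    """Average `rays` jittered samples through one pixel (C-Ray uses 8)."""
    hits = 0
    for _ in range(rays):
        jx = px + random.uniform(-0.5, 0.5)
        jy = py + random.uniform(-0.5, 0.5)
        # Rays start at the origin, aimed at a virtual image plane at z = -1.
        norm = math.sqrt(jx * jx + jy * jy + 1.0)
        if hit_sphere(0, 0, 0, jx / norm, jy / norm, -1.0 / norm,
                      0.0, 0.0, -5.0, 2.0):
            hits += 1
    return hits / rays
```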
PJSIP PJSIP is a free and open-source multimedia communication library written in C implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the signaling protocol (SIP) with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile makes use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.
PJSIP 2.11, Method: INVITE (Responses Per Second, More Is Better):
  -O3 -march=native -flto: 5058 (SE +/- 3.18, N = 3)
  -O3 -march=native: 4959 (SE +/- 41.25, N = 3)
  -O2: 5001 (SE +/- 32.83, N = 3)
1. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread
PJSIP 2.11, Method: OPTIONS, Stateful (Responses Per Second, More Is Better):
  -O3 -march=native -flto: 9395 (SE +/- 4.58, N = 3)
  -O3 -march=native: 9389 (SE +/- 6.96, N = 3)
  -O2: 9381 (SE +/- 1.67, N = 3)
1. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread
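The OPTIONS test above hammers the server with SIP OPTIONS requests. SIP is a plain-text protocol (RFC 3261), so a request can be built without any library; a sketch of one such message, with placeholder addresses and tags (this is not pjsip-perf's code, just the wire format it exercises):

```python
# Build a minimal SIP OPTIONS request per RFC 3261. Addresses, tags, and
# the helper name are placeholders; real stacks add more headers.
import uuid

def build_options_request(target: str, local_host: str, cseq: int = 1) -> str:
    call_id = uuid.uuid4().hex
    branch = "z9hG4bK" + uuid.uuid4().hex[:16]  # RFC 3261 magic-cookie prefix
    lines = [
        f"OPTIONS sip:{target} SIP/2.0",
        f"Via: SIP/2.0/UDP {local_host};branch={branch}",
        "Max-Forwards: 70",
        f"To: <sip:{target}>",
        f"From: <sip:bench@{local_host}>;tag={uuid.uuid4().hex[:8]}",
        f"Call-ID: {call_id}@{local_host}",
        f"CSeq: {cseq} OPTIONS",
        "Content-Length: 0",
        "",  # blank line terminates the header section
        "",
    ]
    return "\r\n".join(lines)

if __name__ == "__main__":
    msg = build_options_request("127.0.0.1:5060", "127.0.0.1:5070")
    print(msg.splitlines()[0])  # OPTIONS sip:127.0.0.1:5060 SIP/2.0
```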
GraphicsMagick This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
GraphicsMagick 1.3.33, Operation: Sharpen (Iterations Per Minute, More Is Better):
  -O3 -march=native -flto: 195 (SE +/- 0.67, N = 3)
  -O3 -march=native: 195 (SE +/- 0.88, N = 3)
  -O2: 164 (SE +/- 0.33, N = 3)
1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
GraphicsMagick 1.3.33, Operation: Enhanced (Iterations Per Minute, More Is Better):
  -O3 -march=native -flto: 269
  -O3 -march=native: 270
  -O2: 219
(SE +/- 0.33, N = 3, reported for two of the three runs)
1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
GraphicsMagick 1.3.33, Operation: Rotate (Iterations Per Minute, More Is Better):
  -O3 -march=native -flto: 1072
  -O3 -march=native: 1141
  -O2: 1066
(SE +/- 1.53 and 0.67, N = 3, reported for two of the three runs)
1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
GraphicsMagick 1.3.33, Operation: Resizing (Iterations Per Minute, More Is Better):
  -O3 -march=native -flto: 1229
  -O3 -march=native: 1198
  -O2: 1091
(SE +/- 1.20 and 6.89, N = 3, reported for two of the three runs)
1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Zstd Compression 1.5.0, Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better):
  -O3 -march=native -flto: 4579.8 (SE +/- 14.15, N = 3)
  -O3 -march=native: 4582.3 (SE +/- 11.11, N = 3)
  -O2: 4777.3 (SE +/- 4.91, N = 3)
1. (CC) gcc options: -pthread -lz
Zstd Compression 1.5.0, Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better):
  -O3 -march=native -flto: 32.8 (SE +/- 0.12, N = 3)
  -O3 -march=native: 33.0 (SE +/- 0.23, N = 3)
  -O2: 32.7 (SE +/- 0.22, N = 3)
1. (CC) gcc options: -pthread -lz
Zstd Compression 1.5.0, Compression Level: 19 - Decompression Speed (MB/s, More Is Better):
  -O3 -march=native -flto: 4503.1 (SE +/- 17.62, N = 3)
  -O3 -march=native: 4514.8 (SE +/- 8.15, N = 3)
  -O2: 4718.1 (SE +/- 5.61, N = 3)
1. (CC) gcc options: -pthread -lz
Zstd Compression 1.5.0, Compression Level: 19 - Compression Speed (MB/s, More Is Better):
  -O3 -march=native -flto: 34.8 (SE +/- 0.03, N = 3)
  -O3 -march=native: 35.4 (SE +/- 0.44, N = 3)
  -O2: 34.5 (SE +/- 0.15, N = 3)
1. (CC) gcc options: -pthread -lz
SQLite Speedtest This is a benchmark of SQLite's speedtest1 program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds, Fewer Is Better):
  -O3 -march=native -flto: 43.78 (SE +/- 0.13, N = 3)
  -O3 -march=native: 44.09 (SE +/- 0.30, N = 3)
  -O2: 43.62 (SE +/- 0.15, N = 3)
1. (CC) gcc options: -ldl -lz -lpthread
Zstd Compression (test described above)
Zstd Compression 1.5.0, Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better):
  -O3 -march=native -flto: 5477.9 (SE +/- 6.81, N = 4)
  -O3 -march=native: 5546.0 (SE +/- 15.18, N = 3)
  -O2: 5760.9 (SE +/- 5.74, N = 3)
1. (CC) gcc options: -pthread -lz
Zstd Compression 1.5.0, Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better):
  -O3 -march=native -flto: 281.1 (SE +/- 3.12, N = 4)
  -O3 -march=native: 285.3 (SE +/- 2.26, N = 3)
  -O2: 296.0 (SE +/- 1.68, N = 3)
1. (CC) gcc options: -pthread -lz
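The Zstd results above compare levels 8 and 19: higher levels trade compression speed for ratio. The same tradeoff can be sketched with Python's standard-library zlib (an analogy only; zstd is a different algorithm with its own level range and performance profile):

```python
# Illustrate the compression level tradeoff using stdlib zlib as a
# stand-in for zstd: higher level = more CPU time, smaller output.
import time
import zlib

def compress_stats(data: bytes, level: int) -> tuple[int, float]:
    """Return (compressed size in bytes, wall time in seconds)."""
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    return len(out), time.perf_counter() - t0

if __name__ == "__main__":
    sample = b"the quick brown fox jumps over the lazy dog " * 20000
    fast_size, fast_t = compress_stats(sample, 1)   # fast, larger output
    best_size, best_t = compress_stats(sample, 9)   # slow, smaller output
    print(best_size <= fast_size)
    # Round-trip check: decompression must recover the input exactly.
    assert zlib.decompress(zlib.compress(sample, 9)) == sample
```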
Stockfish This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
Stockfish 13, Total Time (Nodes Per Second, More Is Better):
  -O3 -march=native -flto: 29086394 (SE +/- 94171.94, N = 3)
  -O3 -march=native: 29932441 (SE +/- 279559.22, N = 3)
  -O2: 29094819 (SE +/- 96950.30, N = 3)
1. (CXX) g++ options: -lgcov -m64 -lpthread -O3 -flto -fno-exceptions -std=c++17 -pedantic -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -fprofile-use -fno-peel-loops -fno-tracer -flto=jobserver
eSpeak-NG Speech Engine This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis (Seconds, Fewer Is Better):
  -O3 -march=native -flto: 22.60 (SE +/- 0.05, N = 4)
  -O3 -march=native: 21.71 (SE +/- 0.06, N = 4)
  -O2: 21.33 (SE +/- 0.05, N = 4)
1. (CC) gcc options: -std=c99 -lpthread -lm
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better):
  -O3 -march=native -flto: 27.07 (SE +/- 0.04, N = 3)
  -O3 -march=native: 27.26 (SE +/- 0.05, N = 3)
  -O2: 27.84 (SE +/- 0.02, N = 3)
1. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg
oneDNN (test described above)
oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better):
  -O3 -march=native -flto: 17.42 (SE +/- 0.00, N = 3; MIN: 17.27)
  -O3 -march=native: 16.50 (SE +/- 0.01, N = 3; MIN: 16.39)
  -O2: 16.69 (SE +/- 0.17, N = 5; MIN: 16.38)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
libjpeg-turbo tjbench tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.
libjpeg-turbo tjbench 2.1.0, Test: Decompression Throughput (Megapixels/sec, More Is Better):
  -O3 -march=native -flto: 272.60 (SE +/- 0.41, N = 3)
  -O3 -march=native: 273.10 (SE +/- 0.20, N = 3)
  -O2: 261.03 (SE +/- 0.16, N = 3)
1. (CC) gcc options: -O3 -rdynamic
Coremark This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.
Coremark 1.0, CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better):
  -O3 -march=native -flto: 435901.44 (SE +/- 166.46, N = 3)
  -O3 -march=native: 432583.96 (SE +/- 1364.82, N = 3)
  -O2: 430127.50 (SE +/- 1236.61, N = 3)
1. (CC) gcc options: -O2 -lrt
AOBench AOBench is a lightweight ambient occlusion renderer, written in C. The test profile is using a size of 2048 x 2048. Learn more via the OpenBenchmarking.org test page.
AOBench, Size: 2048 x 2048 - Total Time (Seconds, Fewer Is Better):
  -O3 -march=native -flto: 21.58 (SE +/- 0.05, N = 3)
  -O3 -march=native: 21.54 (SE +/- 0.01, N = 3)
  -O2: 24.46 (SE +/- 0.03, N = 3)
1. (CC) gcc options: -lm -O3
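Ambient occlusion, the quantity AOBench computes per hit point, is the fraction of random rays over the hemisphere above a surface that escape without hitting an occluder. A minimal sketch with one occluding sphere above a flat point (the scene, normal, and sample counts here are illustrative, not AOBench's actual setup):

```python
# Sketch of the ambient-occlusion estimate AOBench performs per hit point:
# cast random hemisphere rays and use the fraction that hit nothing.
import math
import random

def sample_hemisphere(nx, ny, nz):
    """Rejection-sample a unit direction in the hemisphere around the normal."""
    while True:
        x, y, z = (random.uniform(-1, 1) for _ in range(3))
        if 0.0 < x * x + y * y + z * z <= 1.0:
            if x * nx + y * ny + z * nz < 0:      # flip into the hemisphere
                x, y, z = -x, -y, -z
            n = math.sqrt(x * x + y * y + z * z)
            return x / n, y / n, z / n

def ray_hits_sphere(ox, oy, oz, dx, dy, dz, cx, cy, cz, r):
    """Intersection test for a unit-length ray direction (so a == 1)."""
    lx, ly, lz = ox - cx, oy - cy, oz - cz
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - r * r
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    return (-b + math.sqrt(disc)) / 2.0 > 1e-6   # hit in front of origin

def ambient_occlusion(px, py, pz, samples=64):
    """Fraction of +Y-hemisphere rays that escape a sphere at (0, 1, 0)."""
    free = 0
    for _ in range(samples):
        dx, dy, dz = sample_hemisphere(0.0, 1.0, 0.0)
        if not ray_hits_sphere(px, py, pz, dx, dy, dz, 0.0, 1.0, 0.0, 0.5):
            free += 1
    return free / samples
```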
oneDNN (test described above)
oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better):
  -O3 -march=native -flto: 4.74611 (SE +/- 0.02041, N = 3; MIN: 3.7)
  -O3 -march=native: 4.86522 (SE +/- 0.02065, N = 3; MIN: 3.82)
  -O2: 4.87601 (SE +/- 0.02350, N = 3; MIN: 3.82)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better):
  -O3 -march=native -flto: 0.831699 (SE +/- 0.003541, N = 3; MIN: 0.81)
  -O3 -march=native: 0.829637 (SE +/- 0.003135, N = 3; MIN: 0.81)
  -O2: 0.829564 (SE +/- 0.003232, N = 3; MIN: 0.81)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
Liquid-DSP 2021.01.31, Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better):
  -O3 -march=native -flto: 722393333 (SE +/- 322714.18, N = 3)
  -O3 -march=native: 722893333 (SE +/- 209549.78, N = 3)
  -O2: 711343333 (SE +/- 189414.30, N = 3)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
Liquid-DSP 2021.01.31, Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better):
  -O3 -march=native -flto: 684356667 (SE +/- 2050604.25, N = 3)
  -O3 -march=native: 686530000 (SE +/- 2160717.47, N = 3)
  -O2: 635506667 (SE +/- 766753.62, N = 3)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
dav1d (test described above)
dav1d 0.8.2, Video Input: Summer Nature 4K (FPS, More Is Better):
  -O3 -march=native: 190.31 (SE +/- 0.09, N = 3; MIN: 174.59 / MAX: 201.24)
  -O2: 186.75 (SE +/- 0.05, N = 3; MIN: 170.98 / MAX: 196.55)
1. (CC) gcc options: -pthread
TNN TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better):
  -O3 -march=native -flto: 247.89 (SE +/- 0.12, N = 3; MIN: 247.03 / MAX: 249.92)
  -O3 -march=native: 230.02 (SE +/- 0.15, N = 3; MIN: 229.3 / MAX: 233.4)
  -O2: 243.42 (SE +/- 0.21, N = 3; MIN: 241.9 / MAX: 246.46)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O2 -rdynamic -ldl

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better):
  -O3 -march=native -flto: 242.55 (SE +/- 0.12, N = 3; MIN: 241.93 / MAX: 243.45)
  -O3 -march=native: 227.66 (SE +/- 0.17, N = 3; MIN: 226.71 / MAX: 229.36)
  -O2: 236.05 (SE +/- 0.09, N = 3; MIN: 234.65 / MAX: 236.77)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O2 -rdynamic -ldl
ASTC Encoder ASTC Encoder (astcenc) is the reference encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 2.4 - Preset: Thorough (seconds, fewer is better):
  -O3 -march=native -flto: 11.40 (SE +/- 0.01, N = 3)
  -O3 -march=native: 11.38 (SE +/- 0.02, N = 3)
  -O2: 12.09 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -flto -O2 -pthread
dav1d
dav1d 0.8.2 - Video Input: Chimera 1080p (FPS, more is better):
  -O3 -march=native: 763.05 (SE +/- 0.33, N = 3; MIN: 584.4 / MAX: 1127.78)
  -O2 -lm: 773.93 (SE +/- 1.36, N = 3; MIN: 589.24 / MAX: 1160.82)
  1. (CC) gcc options: -pthread
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The reported result is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 4.04481 (SE +/- 0.00379, N = 3; MIN: 3.91)
  -O3 -march=native: 4.06617 (SE +/- 0.00741, N = 3; MIN: 3.93)
  -O2: 4.04477 (SE +/- 0.00867, N = 3; MIN: 3.93)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 8.57248 (SE +/- 0.00390, N = 3; MIN: 8.44)
  -O3 -march=native: 8.57548 (SE +/- 0.00184, N = 3; MIN: 8.41)
  -O2: 8.57623 (SE +/- 0.00352, N = 3; MIN: 8.42)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 0.720482 (SE +/- 0.001704, N = 3; MIN: 0.67)
  -O3 -march=native: 0.722430 (SE +/- 0.002639, N = 3; MIN: 0.67)
  -O2: 0.717882 (SE +/- 0.001308, N = 3; MIN: 0.66)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
Redis Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
Redis 6.0.9 - Test: SET (requests per second, more is better):
  -O3 -march=native -flto: 2990164.92 (SE +/- 3890.24, N = 3)
  -O3 -march=native: 2980192.00 (SE +/- 15075.35, N = 3)
  -O2: 2936296.08 (SE +/- 20903.58, N = 3)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (encode time in seconds, fewer is better):
  -O3 -march=native -flto: 12.71 (SE +/- 0.02, N = 3)
  -O3 -march=native: 12.90 (SE +/- 0.01, N = 3)
  -O2: 13.76 (SE +/- 0.01, N = 3)
  1. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg
Redis
Redis 6.0.9 - Test: GET (requests per second, more is better):
  -O3 -march=native -flto: 4060369.08 (SE +/- 23615.46, N = 3)
  -O3 -march=native: 4036791.92 (SE +/- 16885.42, N = 3)
  -O2: 4051463.17 (SE +/- 8839.00, N = 3)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
oneDNN
oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 3.52381 (SE +/- 0.00147, N = 3; MIN: 3.46)
  -O3 -march=native: 3.52791 (SE +/- 0.00143, N = 3; MIN: 3.45)
  -O2: 3.53315 (SE +/- 0.00099, N = 3; MIN: 3.47)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 1.32311 (SE +/- 0.00175, N = 3; MIN: 1.26)
  -O3 -march=native: 1.32271 (SE +/- 0.00166, N = 3; MIN: 1.26)
  -O2: 1.32100 (SE +/- 0.00212, N = 3; MIN: 1.25)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 3.53500 (SE +/- 0.00020, N = 3; MIN: 3.44)
  -O3 -march=native: 3.53708 (SE +/- 0.00232, N = 3; MIN: 3.41)
  -O2: 3.54019 (SE +/- 0.00185, N = 3; MIN: 3.46)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (frames per second, more is better):
  -O3 -march=native -flto: 195.07 (SE +/- 1.49, N = 10)
  -O3 -march=native: 195.87 (SE +/- 1.48, N = 10)
  -O2: 191.83 (SE +/- 1.51, N = 10)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm
PJSIP PJSIP is a free and open-source multimedia communication library written in C that implements standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the signaling protocol (SIP) with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile makes use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.
PJSIP 2.11 - Method: OPTIONS, Stateless (responses per second, more is better):
  -O3 -march=native -flto: 239892 (SE +/- 101.47, N = 3)
  -O3 -march=native: 241439 (SE +/- 1015.58, N = 3)
  -O2: 239792 (SE +/- 504.43, N = 3)
  1. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread
Opus Codec Encoding Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
Opus Codec Encoding 1.3.1 - WAV To Opus Encode (seconds, fewer is better):
  -O3 -march=native -flto: 5.575 (SE +/- 0.033, N = 5)
  -O3 -march=native: 5.587 (SE +/- 0.007, N = 5)
  -O2: 6.467 (SE +/- 0.030, N = 5)
  1. (CXX) g++ options: -fvisibility=hidden -logg -lm
oneDNN
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 11.23 (SE +/- 0.01, N = 3; MIN: 11.14)
  -O3 -march=native: 11.24 (SE +/- 0.00, N = 3; MIN: 11.15)
  -O2: 10.75 (SE +/- 0.01, N = 3; MIN: 10.65)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 5.40080 (SE +/- 0.01659, N = 3; MIN: 4.78)
  -O3 -march=native: 5.28726 (SE +/- 0.02936, N = 3; MIN: 4.8)
  -O2: 5.01199 (SE +/- 0.03422, N = 3; MIN: 4.47)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 3.13941 (SE +/- 0.00623, N = 3; MIN: 3.07)
  -O3 -march=native: 3.17026 (SE +/- 0.00399, N = 3; MIN: 3.1)
  -O2: 3.13532 (SE +/- 0.00129, N = 3; MIN: 3.07)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
Smallpt Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
Smallpt 1.0 - Global Illumination Renderer; 128 Samples (seconds, fewer is better):
  -O3 -march=native -flto: 8.454 (SE +/- 0.020, N = 3)
  -O3 -march=native: 8.405 (SE +/- 0.012, N = 3)
  -O2: 8.771 (SE +/- 0.014, N = 3)
  1. (CXX) g++ options: -fopenmp -O3
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (frames per second, more is better):
  -O3 -march=native -flto: 141.83 (SE +/- 1.44, N = 5)
  -O3 -march=native: 139.13 (SE +/- 1.58, N = 4)
  -O2: 136.31 (SE +/- 1.53, N = 4)
  1. (CC) gcc options: -O3 -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
oneDNN
oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 14.25 (SE +/- 0.01, N = 3; MIN: 14.14)
  -O3 -march=native: 14.29 (SE +/- 0.01, N = 3; MIN: 14.18)
  -O2: 14.17 (SE +/- 0.01, N = 3; MIN: 14.04)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 12.52 (SE +/- 0.00, N = 3; MIN: 12.41)
  -O3 -march=native: 12.51 (SE +/- 0.01, N = 3; MIN: 12.43)
  -O2: 12.37 (SE +/- 0.00, N = 3; MIN: 12.28)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 16.19 (SE +/- 0.00, N = 3; MIN: 16.09)
  -O3 -march=native: 16.18 (SE +/- 0.00, N = 3; MIN: 16.09)
  -O2: 16.17 (SE +/- 0.00, N = 3; MIN: 16.09)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
LAME MP3 Encoding LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.
LAME MP3 Encoding 3.100 - WAV To MP3 (seconds, fewer is better):
  -O3 -march=native -flto: 5.376 (SE +/- 0.003, N = 3)
  -O3 -march=native: 5.479 (SE +/- 0.010, N = 3)
  -O2: 7.304 (SE +/- 0.048, N = 3)
  1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lm
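The LAME encode is one of the most flag-sensitive results in this comparison. For a time-based metric, the speedup is simply the baseline time divided by the optimized time; a short sketch using the means above (the helper name is ours):

```python
def speedup(baseline_s: float, optimized_s: float) -> float:
    """For a lower-is-better timing metric, speedup = baseline / optimized."""
    return baseline_s / optimized_s

# LAME WAV-to-MP3 encode times from the result above (seconds).
o2 = 7.304      # -O2
native = 5.479  # -O3 -march=native
flto = 5.376    # -O3 -march=native -flto

print(f"-O3 -march=native over -O2: {speedup(o2, native):.2f}x")        # 1.33x
print(f"-O3 -march=native -flto over -O2: {speedup(o2, flto):.2f}x")    # 1.36x
```

In other words, the tuned builds encode roughly a third faster than plain -O2 here, far outside the reported standard errors.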
WebP Image Encode
WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (encode time in seconds, fewer is better):
  -O3 -march=native -flto: 5.103 (SE +/- 0.008, N = 3)
  -O3 -march=native: 5.127 (SE +/- 0.014, N = 3)
  -O2: 5.360 (SE +/- 0.005, N = 3)
  1. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg
ASTC Encoder
ASTC Encoder 2.4 - Preset: Medium (seconds, fewer is better):
  -O3 -march=native -flto: 5.1705 (SE +/- 0.0065, N = 3)
  -O3 -march=native: 5.1820 (SE +/- 0.0013, N = 3)
  -O2: 5.2481 (SE +/- 0.0027, N = 3)
  1. (CXX) g++ options: -flto -O2 -pthread
dav1d
dav1d 0.8.2 - Video Input: Summer Nature 1080p (FPS, more is better):
  -O3 -march=native: 717.31 (SE +/- 1.03, N = 3; MIN: 641.13 / MAX: 782.17)
  -O2 -lm: 727.60 (SE +/- 2.55, N = 3; MIN: 643.78 / MAX: 798.32)
  1. (CC) gcc options: -pthread
SVT-VP9
SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (frames per second, more is better):
  -O3 -march=native -flto: 166.05 (SE +/- 0.31, N = 3)
  -O3 -march=native: 164.77 (SE +/- 0.01, N = 3)
  -O2: 160.65 (SE +/- 0.13, N = 3)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (frames per second, more is better):
  -O3 -march=native -flto: 201.10 (SE +/- 0.29, N = 3)
  -O3 -march=native: 201.70 (SE +/- 0.28, N = 3)
  -O2: 198.01 (SE +/- 0.06, N = 3)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm
oneDNN
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 17.13 (SE +/- 0.01, N = 3; MIN: 16.73)
  -O3 -march=native: 17.06 (SE +/- 0.00, N = 3; MIN: 16.72)
  -O2: 17.06 (SE +/- 0.00, N = 3; MIN: 16.67)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 4.25176 (SE +/- 0.00501, N = 3; MIN: 4.15)
  -O3 -march=native: 4.30798 (SE +/- 0.01947, N = 3; MIN: 4.19)
  -O2: 4.27077 (SE +/- 0.01250, N = 3; MIN: 4.16)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  -O3 -march=native -flto: 1.47524 (SE +/- 0.00575, N = 3; MIN: 1.37)
  -O3 -march=native: 1.45788 (SE +/- 0.00602, N = 3; MIN: 1.36)
  -O2: 1.46726 (SE +/- 0.01597, N = 3; MIN: 1.37)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl
SVT-HEVC
SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (frames per second, more is better):
  -O3 -march=native -flto: 278.59 (SE +/- 0.22, N = 3)
  -O3 -march=native: 278.72 (SE +/- 0.09, N = 3)
  -O2: 273.60 (SE +/- 0.52, N = 3)
  1. (CC) gcc options: -O3 -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
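When summarizing flag impact across many heterogeneous tests like these, the usual tool is a geometric mean of per-test performance ratios, since it averages multiplicative factors fairly. A minimal sketch; the ratios below are purely illustrative placeholders, not an official summary of this article:

```python
from math import prod

def geomean(ratios: list[float]) -> float:
    """Geometric mean: the standard way to average performance ratios."""
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical per-test ratios (optimized / -O2, expressed higher-is-better).
ratios = [1.33, 1.16, 1.05, 1.00, 0.98]  # illustrative values only
print(f"overall: {geomean(ratios):.3f}x")
```

An arithmetic mean would let one large outlier (such as the LAME result) dominate; the geometric mean weights each test's relative change equally.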
GCC 11.1: -O3 -march=native
Testing initiated at 16 May 2021 11:48 by user phoronix.
GCC 11.1: -O3 -march=native -flto
Testing initiated at 16 May 2021 19:01 by user phoronix.
GCC 11.1: -O2 Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads), Motherboard: ASUS ROG MAXIMUS XIII HERO (0707 BIOS), Chipset: Intel Tiger Lake-H, Memory: 32GB, Disk: 500GB Western Digital WDS500G3X0C-00SJG0 + 15GB Ultra USB 3.0, Graphics: AMD Radeon VII 16GB (1801/1000MHz), Audio: Intel Tiger Lake-H HD Audio, Monitor: ASUS MG28U, Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Fedora 34, Kernel: 5.11.20-300.fc34.x86_64 (x86_64), Desktop: GNOME Shell 40.1, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 21.0.3 (LLVM 12.0.0), Compiler: GCC 11.1.1 20210428, File-System: btrfs, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise. Environment Notes: CXXFLAGS=-O2 CFLAGS=-O2. Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver. Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x3c - Thermald 2.4.1. Security Notes: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 17 May 2021 04:40 by user phoronix.