Intel Core i9-11900K testing with an ASUS ROG MAXIMUS XIII HERO (0707 BIOS) motherboard and AMD Radeon VII 16GB graphics on Fedora 34 via the Phoronix Test Suite.
GCC 11.1: -O3 -march=native
  Kernel Notes: Transparent Huge Pages: madvise
  Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
  Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
  Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x3c - Thermald 2.4.1
  Security Notes: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
GCC 11.1: -O3 -march=native -flto
  Environment Notes: CXXFLAGS="-O3 -march=native -flto" CFLAGS="-O3 -march=native -flto"
  Kernel, Compiler, Processor, and Security Notes: identical to the -O3 -march=native configuration.
GCC 11.1: -O2
  Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads)
  Motherboard: ASUS ROG MAXIMUS XIII HERO (0707 BIOS)
  Chipset: Intel Tiger Lake-H
  Memory: 32GB
  Disk: 500GB Western Digital WDS500G3X0C-00SJG0 + 15GB Ultra USB 3.0
  Graphics: AMD Radeon VII 16GB (1801/1000MHz)
  Audio: Intel Tiger Lake-H HD Audio
  Monitor: ASUS MG28U
  Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS: Fedora 34
  Kernel: 5.11.20-300.fc34.x86_64 (x86_64)
  Desktop: GNOME Shell 40.1
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 21.0.3 (LLVM 12.0.0)
  Compiler: GCC 11.1.1 20210428
  File-System: btrfs
  Screen Resolution: 3840x2160
  Kernel Notes: Transparent Huge Pages: madvise
  Environment Notes: CXXFLAGS=-O2 CFLAGS=-O2
  Compiler, Processor, and Security Notes: identical to the configurations above.
C-Ray This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. This test is multi-threaded (16 threads per core), shoots 8 rays per pixel for anti-aliasing, and generates a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.
C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better)
  -O2:                      106.52  (SE +/- 0.05, N = 3)
  -O3 -march=native:         47.35  (SE +/- 0.15, N = 3)
  -O3 -march=native -flto:   47.61  (SE +/- 0.16, N = 3)
  (CC) gcc options: -lm -lpthread -O3
NCNN NCNN is a high-performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. Learn more via the OpenBenchmarking.org test page.
NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  -O2:                      3.48  (SE +/- 0.00, N = 15; MIN: 3.4 / MAX: 7.05)
  -O3 -march=native:        3.24  (SE +/- 0.02, N = 3; MIN: 3.17 / MAX: 6.75)
  -O3 -march=native -flto:  5.60  (SE +/- 0.06, N = 3; MIN: 5.41 / MAX: 9.21)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
dav1d Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
dav1d 0.8.2 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
  -O2:                148.40  (SE +/- 0.09, N = 3; MIN: 95.23 / MAX: 345.29)
  -O3 -march=native:  223.02  (SE +/- 0.03, N = 3; MIN: 153.51 / MAX: 490.73)
  (CC) gcc options: -pthread (the -O2 build also links -lm)
NCNN
NCNN 20201218 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  -O2:                      3.11  (SE +/- 0.01, N = 14; MIN: 3.05 / MAX: 9.87)
  -O3 -march=native:        2.30  (SE +/- 0.06, N = 3; MIN: 2.18 / MAX: 3.19)
  -O3 -march=native -flto:  2.27  (SE +/- 0.01, N = 3; MIN: 2.21 / MAX: 5.8)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
LAME MP3 Encoding LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.
LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, Fewer Is Better)
  -O2:                      7.304  (SE +/- 0.048, N = 3)
  -O3 -march=native:        5.479  (SE +/- 0.010, N = 3)
  -O3 -march=native -flto:  5.376  (SE +/- 0.003, N = 3)
  (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lm
NCNN
NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  -O2:                      4.20  (SE +/- 0.02, N = 15; MIN: 4.04 / MAX: 7.77)
  -O3 -march=native:        3.24  (SE +/- 0.04, N = 3; MIN: 3.1 / MAX: 6.67)
  -O3 -march=native -flto:  3.25  (SE +/- 0.01, N = 3; MIN: 3.14 / MAX: 6.68)
NCNN 20201218 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  -O2:                      15.15  (SE +/- 0.14, N = 15; MIN: 14.73 / MAX: 342.42)
  -O3 -march=native:        11.83  (SE +/- 0.06, N = 3; MIN: 11.62 / MAX: 15.38)
  -O3 -march=native -flto:  13.34  (SE +/- 0.01, N = 3; MIN: 13.01 / MAX: 16.82)
NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  -O2:                      3.18  (SE +/- 0.01, N = 15; MIN: 3.1 / MAX: 6.78)
  -O3 -march=native:        2.55  (SE +/- 0.05, N = 3; MIN: 2.43 / MAX: 6.16)
  -O3 -march=native -flto:  2.52  (SE +/- 0.01, N = 3; MIN: 2.47 / MAX: 6.06)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
GraphicsMagick This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute, More Is Better)
  -O2:                      219
  -O3 -march=native:        270
  -O3 -march=native -flto:  269
  (SE +/- 0.33, N = 3 is reported for two of the three configurations)
  (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
NCNN
NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  -O2:                      5.23  (SE +/- 0.02, N = 15; MIN: 5.12 / MAX: 8.96)
  -O3 -march=native:        4.38  (SE +/- 0.08, N = 3; MIN: 4.18 / MAX: 7.9)
  -O3 -march=native -flto:  4.32  (SE +/- 0.01, N = 3; MIN: 4.25 / MAX: 8.71)
NCNN 20201218 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  -O2:                      22.07  (SE +/- 0.08, N = 15; MIN: 21.33 / MAX: 27.94)
  -O3 -march=native:        18.23  (SE +/- 0.16, N = 3; MIN: 17.76 / MAX: 22.08)
  -O3 -march=native -flto:  18.43  (SE +/- 0.06, N = 3; MIN: 18.19 / MAX: 22.12)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
GraphicsMagick
GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, More Is Better)
  -O2:                      164  (SE +/- 0.33, N = 3)
  -O3 -march=native:        195  (SE +/- 0.88, N = 3)
  -O3 -march=native -flto:  195  (SE +/- 0.67, N = 3)
  (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
Opus Codec Encoding Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)
  -O2:                      6.467  (SE +/- 0.030, N = 5)
  -O3 -march=native:        5.587  (SE +/- 0.007, N = 5)
  -O3 -march=native -flto:  5.575  (SE +/- 0.033, N = 5)
  (CXX) g++ options: -fvisibility=hidden -logg -lm
NCNN
NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  -O2:                      21.07  (SE +/- 0.09, N = 15; MIN: 20.27 / MAX: 26.62)
  -O3 -march=native:        20.23  (SE +/- 0.04, N = 3; MIN: 20.02 / MAX: 23.8)
  -O3 -march=native -flto:  23.45  (SE +/- 0.08, N = 3; MIN: 23.14 / MAX: 26.98)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
AOBench AOBench is a lightweight ambient occlusion renderer, written in C. The test profile is using a size of 2048 x 2048. Learn more via the OpenBenchmarking.org test page.
AOBench - Size: 2048 x 2048 - Total Time (Seconds, Fewer Is Better)
  -O2:                      24.46  (SE +/- 0.03, N = 3)
  -O3 -march=native:        21.54  (SE +/- 0.01, N = 3)
  -O3 -march=native -flto:  21.58  (SE +/- 0.05, N = 3)
  (CC) gcc options: -lm -O3
GraphicsMagick
GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute, More Is Better)
  -O2:                      1091
  -O3 -march=native:        1198
  -O3 -march=native -flto:  1229
  (SE +/- 6.89, N = 3 and SE +/- 1.20, N = 3 are reported for two of the three configurations)
  (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
NCNN
NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
  -O2:                      9.61  (SE +/- 0.02, N = 12; MIN: 9.44 / MAX: 13.78)
  -O3 -march=native:        8.62  (SE +/- 0.03, N = 3; MIN: 8.51 / MAX: 12.11)
  -O3 -march=native -flto:  8.91  (SE +/- 0.06, N = 3; MIN: 8.72 / MAX: 12.48)
NCNN 20201218 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  -O2:                      11.11  (SE +/- 0.08, N = 15; MIN: 10.75 / MAX: 16.77)
  -O3 -march=native:        10.20  (SE +/- 0.21, N = 3; MIN: 9.72 / MAX: 13.93)
  -O3 -march=native -flto:  10.27  (SE +/- 0.13, N = 3; MIN: 9.93 / MAX: 13.87)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better)
  -O2:                      13.76  (SE +/- 0.01, N = 3)
  -O3 -march=native:        12.90  (SE +/- 0.01, N = 3)
  -O3 -march=native -flto:  12.71  (SE +/- 0.02, N = 3)
  (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  -O2:                      635506667  (SE +/- 766753.62, N = 3)
  -O3 -march=native:        686530000  (SE +/- 2160717.47, N = 3)
  -O3 -march=native -flto:  684356667  (SE +/- 2050604.25, N = 3)
  (CC) gcc options: -O3 -pthread -lm -lc -lliquid
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  -O2:                      243.42  (SE +/- 0.21, N = 3; MIN: 241.9 / MAX: 246.46)
  -O3 -march=native:        230.02  (SE +/- 0.15, N = 3; MIN: 229.3 / MAX: 233.4)
  -O3 -march=native -flto:  247.89  (SE +/- 0.12, N = 3; MIN: 247.03 / MAX: 249.92)
  (CXX) g++ options: -O2 -fopenmp -pthread -fvisibility=hidden -rdynamic -ldl
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  -O2:                      5.01199  (SE +/- 0.03422, N = 3; MIN: 4.47)
  -O3 -march=native:        5.28726  (SE +/- 0.02936, N = 3; MIN: 4.8)
  -O3 -march=native -flto:  5.40080  (SE +/- 0.01659, N = 3; MIN: 4.78)
  (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
GraphicsMagick
GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, More Is Better)
  -O2:                      1066
  -O3 -march=native:        1141
  -O3 -march=native -flto:  1072
  (SE +/- 0.67, N = 3 and SE +/- 1.53, N = 3 are reported for two of the three configurations)
  (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread
ASTC Encoder ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 2.4 - Preset: Exhaustive (Seconds, Fewer Is Better)
  -O2:                      91.38  (SE +/- 0.01, N = 3)
  -O3 -march=native:        85.42  (SE +/- 0.00, N = 3)
  -O3 -march=native -flto:  85.42  (SE +/- 0.01, N = 3)
  (CXX) g++ options: -O2 -flto -pthread
TNN
TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
  -O2:                      236.05  (SE +/- 0.09, N = 3; MIN: 234.65 / MAX: 236.77)
  -O3 -march=native:        227.66  (SE +/- 0.17, N = 3; MIN: 226.71 / MAX: 229.36)
  -O3 -march=native -flto:  242.55  (SE +/- 0.12, N = 3; MIN: 241.93 / MAX: 243.45)
  (CXX) g++ options: -O2 -fopenmp -pthread -fvisibility=hidden -rdynamic -ldl
ASTC Encoder
ASTC Encoder 2.4 - Preset: Thorough (Seconds, Fewer Is Better)
  -O2:                      12.09  (SE +/- 0.01, N = 3)
  -O3 -march=native:        11.38  (SE +/- 0.02, N = 3)
  -O3 -march=native -flto:  11.40  (SE +/- 0.01, N = 3)
  (CXX) g++ options: -O2 -flto -pthread
eSpeak-NG Speech Engine This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
  -O2:                      21.33  (SE +/- 0.05, N = 4)
  -O3 -march=native:        21.71  (SE +/- 0.06, N = 4)
  -O3 -march=native -flto:  22.60  (SE +/- 0.05, N = 4)
  (CC) gcc options: -std=c99 -lpthread -lm
oneDNN
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  -O2:                      16.69  (SE +/- 0.17, N = 5; MIN: 16.38)
  -O3 -march=native:        16.50  (SE +/- 0.01, N = 3; MIN: 16.39)
  -O3 -march=native -flto:  17.42  (SE +/- 0.00, N = 3; MIN: 17.27)
  (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better)
  -O2:                      296.0  (SE +/- 1.68, N = 3)
  -O3 -march=native:        285.3  (SE +/- 2.26, N = 3)
  -O3 -march=native -flto:  281.1  (SE +/- 3.12, N = 4)
Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better)
  -O2:                      5760.9  (SE +/- 5.74, N = 3)
  -O3 -march=native:        5546.0  (SE +/- 15.18, N = 3)
  -O3 -march=native -flto:  5477.9  (SE +/- 6.81, N = 4)
  (CC) gcc options: -pthread -lz
WebP Image Encode
WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
  -O2:                      5.360  (SE +/- 0.005, N = 3)
  -O3 -march=native:        5.127  (SE +/- 0.014, N = 3)
  -O3 -march=native -flto:  5.103  (SE +/- 0.008, N = 3)
  (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg
Zstd Compression
Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
  -O2:                      4718.1  (SE +/- 5.61, N = 3)
  -O3 -march=native:        4514.8  (SE +/- 8.15, N = 3)
  -O3 -march=native -flto:  4503.1  (SE +/- 17.62, N = 3)
  (CC) gcc options: -pthread -lz
libjpeg-turbo tjbench tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.
libjpeg-turbo tjbench 2.1.0 - Test: Decompression Throughput (Megapixels/sec, More Is Better)
  -O2:                      261.03  (SE +/- 0.16, N = 3)
  -O3 -march=native:        273.10  (SE +/- 0.20, N = 3)
  -O3 -march=native -flto:  272.60  (SE +/- 0.41, N = 3)
  (CC) gcc options: -O3 -rdynamic (the -march=native builds also link -lm)
oneDNN This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  -O2:                      10.75  (SE +/- 0.01, N = 3; MIN: 10.65)
  -O3 -march=native:        11.24  (SE +/- 0.00, N = 3; MIN: 11.15)
  -O3 -march=native -flto:  11.23  (SE +/- 0.01, N = 3; MIN: 11.14)
  (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Smallpt Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
Smallpt 1.0 - Global Illumination Renderer; 128 Samples (Seconds, Fewer Is Better)
  -O2:                      8.771  (SE +/- 0.014, N = 3)
  -O3 -march=native:        8.405  (SE +/- 0.012, N = 3)
  -O3 -march=native -flto:  8.454  (SE +/- 0.020, N = 3)
  (CXX) g++ options: -fopenmp -O3
Zstd Compression
Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
  -O2:                      4777.3  (SE +/- 4.91, N = 3)
  -O3 -march=native:        4582.3  (SE +/- 11.11, N = 3)
  -O3 -march=native -flto:  4579.8  (SE +/- 14.15, N = 3)
  (CC) gcc options: -pthread -lz
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  -O2:                      136.31  (SE +/- 1.53, N = 4)
  -O3 -march=native:        139.13  (SE +/- 1.58, N = 4)
  -O3 -march=native -flto:  141.83  (SE +/- 1.44, N = 5)
  (CC) gcc options: -O2 -fPIE -fPIC -O3 -pie -rdynamic -lpthread -lrt
NCNN
NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
  -O2:                      16.15  (SE +/- 0.01, N = 15; MIN: 15.95 / MAX: 21.54)
  -O3 -march=native:        15.53  (SE +/- 0.12, N = 3; MIN: 15.19 / MAX: 20.95)
  -O3 -march=native -flto:  15.92  (SE +/- 0.28, N = 3; MIN: 15.55 / MAX: 21.06)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  -O2:                      160.65  (SE +/- 0.13, N = 3)
  -O3 -march=native:        164.77  (SE +/- 0.01, N = 3)
  -O3 -march=native -flto:  166.05  (SE +/- 0.31, N = 3)
  (CC) gcc options: -O3 -fcommon -O2 -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
Timed HMMer Search This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.
Timed HMMer Search 3.3.2 - Pfam Database Search (Seconds, Fewer Is Better)
  -O2:                      103.29  (SE +/- 0.08, N = 3)
  -O3 -march=native:        100.74  (SE +/- 0.04, N = 3)
  -O3 -march=native -flto:   99.97  (SE +/- 0.08, N = 3)
  (CC) gcc options: -pthread -lhmmer -leasel -lm -lmpi
Stockfish This is a test of Stockfish, an advanced open-source C++ chess engine that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
Stockfish 13 - Total Time (Nodes Per Second, More Is Better)
  -O2:                      29094819  (SE +/- 96950.30, N = 3)
  -O3 -march=native:        29932441  (SE +/- 279559.22, N = 3)
  -O3 -march=native -flto:  29086394  (SE +/- 94171.94, N = 3)
  (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -fprofile-use -fno-peel-loops -fno-tracer -flto=jobserver
WebP Image Encode
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better)
  -O2:                      27.84  (SE +/- 0.02, N = 3)
  -O3 -march=native:        27.26  (SE +/- 0.05, N = 3)
  -O3 -march=native -flto:  27.07  (SE +/- 0.04, N = 3)
  (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg
NCNN
NCNN 20201218 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  -O2:                      11.30  (SE +/- 0.06, N = 14; MIN: 10.84 / MAX: 14.99)
  -O3 -march=native:        11.08  (SE +/- 0.17, N = 3; MIN: 10.66 / MAX: 16.66)
  -O3 -march=native -flto:  11.39  (SE +/- 0.02, N = 3; MIN: 11.27 / MAX: 15.15)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
Timed MrBayes Analysis This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, Fewer Is Better)
  -O2:                      87.30  (SE +/- 0.53, N = 3)
  -O3 -march=native:        86.70  (SE +/- 0.06, N = 3)
  -O3 -march=native -flto:  84.93  (SE +/- 0.32, N = 3)
  (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mmpx -mabm -O3 -std=c99 -pedantic -lm
oneDNN
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  -O2:                      4.87601  (SE +/- 0.02350, N = 3; MIN: 3.82)
  -O3 -march=native:        4.86522  (SE +/- 0.02065, N = 3; MIN: 3.82)
  -O3 -march=native -flto:  4.74611  (SE +/- 0.02041, N = 3; MIN: 3.7)
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  -O2:                      1841.63  (SE +/- 0.68, N = 3; MIN: 1831.74)
  -O3 -march=native:        1891.71  (SE +/- 1.66, N = 3; MIN: 1880.74)
  -O3 -march=native -flto:  1877.51  (SE +/- 1.29, N = 3; MIN: 1866.09)
  (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
x265 This is a simple test of x265, a CPU-based H.265/HEVC video encoder, with 1080p and 4K input options. Learn more via the OpenBenchmarking.org test page.
x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
  -O2:                      15.64  (SE +/- 0.21, N = 3)
  -O3 -march=native:        15.81  (SE +/- 0.13, N = 15)
  -O3 -march=native -flto:  15.40  (SE +/- 0.15, N = 6)
  (CXX) g++ options: -O2 -rdynamic -lpthread -lrt -ldl
Zstd Compression
Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
  -O2:                      34.5  (SE +/- 0.15, N = 3)
  -O3 -march=native:        35.4  (SE +/- 0.44, N = 3)
  -O3 -march=native -flto:  34.8  (SE +/- 0.03, N = 3)
  (CC) gcc options: -pthread -lz
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 400 800 1200 1600 2000 SE +/- 1.67, N = 3 SE +/- 0.80, N = 3 SE +/- 1.27, N = 3 1842.14 1887.61 1876.25 MIN: 1831.93 MIN: 1877.87 -flto - MIN: 1866.41 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 400 800 1200 1600 2000 SE +/- 1.34, N = 3 SE +/- 2.13, N = 3 SE +/- 1.23, N = 3 1845.74 1890.59 1874.70 MIN: 1834.84 MIN: 1879.82 -flto - MIN: 1865.22 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: VMAF Optimized - Input: Bosphorus 1080p -O2 -O3 -march=native -O3 -march=native -flto 40 80 120 160 200 SE +/- 1.51, N = 10 SE +/- 1.48, N = 10 SE +/- 1.49, N = 10 191.83 195.87 195.07 -march=native -march=native -flto 1. (CC) gcc options: -O3 -fcommon -O2 -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
PJSIP PJSIP is a free and open-source multimedia communication library written in C, implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality in a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile makes use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Responses Per Second, More Is Better PJSIP 2.11 Method: INVITE -O2 -O3 -march=native -O3 -march=native -flto 1100 2200 3300 4400 5500 SE +/- 32.83, N = 3 SE +/- 41.25, N = 3 SE +/- 3.18, N = 3 5001 4959 5058 -O2 -O3 -march=native -O3 -march=native -flto 1. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread
dav1d Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better dav1d 0.8.2 Video Input: Summer Nature 4K -O2 -O3 -march=native 40 80 120 160 200 SE +/- 0.05, N = 3 SE +/- 0.09, N = 3 186.75 190.31 -O2 -lm - MIN: 170.98 / MAX: 196.55 -O3 -march=native - MIN: 174.59 / MAX: 201.24 1. (CC) gcc options: -pthread
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 10 - Input: Bosphorus 1080p -O2 -O3 -march=native -O3 -march=native -flto 60 120 180 240 300 SE +/- 0.52, N = 3 SE +/- 0.09, N = 3 SE +/- 0.22, N = 3 273.60 278.72 278.59 -march=native -march=native -flto 1. (CC) gcc options: -O2 -fPIE -fPIC -O3 -pie -rdynamic -lpthread -lrt
SVT-VP9
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p -O2 -O3 -march=native -O3 -march=native -flto 40 80 120 160 200 SE +/- 0.06, N = 3 SE +/- 0.28, N = 3 SE +/- 0.29, N = 3 198.01 201.70 201.10 -march=native -march=native -flto 1. (CC) gcc options: -O3 -fcommon -O2 -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
Redis Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Requests Per Second, More Is Better Redis 6.0.9 Test: SET -O2 -O3 -march=native -O3 -march=native -flto 600K 1200K 1800K 2400K 3000K SE +/- 20903.58, N = 3 SE +/- 15075.35, N = 3 SE +/- 3890.24, N = 3 2936296.08 2980192.00 2990164.92 -O2 -march=native -march=native -flto 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
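The SET figures above are produced by flooding the server with pipelined commands. Every command crosses the wire as a RESP array of bulk strings; a minimal, illustrative encoder (the function name is made up for this sketch) looks like:

```python
def encode_resp(*parts: str) -> bytes:
    """Encode a Redis command as a RESP array of bulk strings."""
    chunks = [f"*{len(parts)}\r\n".encode()]
    for part in parts:
        raw = part.encode()
        chunks.append(b"$%d\r\n%s\r\n" % (len(raw), raw))
    return b"".join(chunks)

# A SET command as a benchmark client would frame it on the wire:
print(encode_resp("SET", "key", "value"))
# → b'*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n'
```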
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 2021.01.31 Threads: 16 - Buffer Length: 256 - Filter Length: 57 -O2 -O3 -march=native -O3 -march=native -flto 150M 300M 450M 600M 750M SE +/- 189414.30, N = 3 SE +/- 209549.78, N = 3 SE +/- 322714.18, N = 3 711343333 722893333 722393333 -O2 -march=native -march=native -flto 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 700 1400 2100 2800 3500 SE +/- 0.76, N = 3 SE +/- 2.46, N = 3 SE +/- 3.52, N = 3 3124.56 3173.47 3152.89 MIN: 3112.25 MIN: 3161.04 -flto - MIN: 3137.49 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 700 1400 2100 2800 3500 SE +/- 5.44, N = 3 SE +/- 1.30, N = 3 SE +/- 0.32, N = 3 3123.64 3172.19 3148.67 MIN: 3105.42 MIN: 3159.8 -flto - MIN: 3137.59 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Quantum ESPRESSO Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Quantum ESPRESSO 6.7 Input: AUSURF112 -O2 -O3 -march=native -O3 -march=native -flto 600 1200 1800 2400 3000 SE +/- 18.09, N = 3 SE +/- 21.65, N = 3 SE +/- 24.60, N = 3 2538.25 2576.97 2540.19 1. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 700 1400 2100 2800 3500 SE +/- 2.61, N = 3 SE +/- 0.26, N = 3 SE +/- 3.24, N = 3 3123.95 3171.46 3154.69 MIN: 3109.77 MIN: 3160.11 -flto - MIN: 3138.34 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
ASTC Encoder ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile times both compression and decompression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better ASTC Encoder 2.4 Preset: Medium -O2 -O3 -march=native -O3 -march=native -flto 1.1808 2.3616 3.5424 4.7232 5.904 SE +/- 0.0027, N = 3 SE +/- 0.0013, N = 3 SE +/- 0.0065, N = 3 5.2481 5.1820 5.1705 -O3 -march=native -O3 -march=native 1. (CXX) g++ options: -O2 -flto -pthread
dav1d
OpenBenchmarking.org FPS, More Is Better dav1d 0.8.2 Video Input: Summer Nature 1080p -O2 -O3 -march=native 160 320 480 640 800 SE +/- 2.55, N = 3 SE +/- 1.03, N = 3 727.60 717.31 -O2 -lm - MIN: 643.78 / MAX: 798.32 -O3 -march=native - MIN: 641.13 / MAX: 782.17 1. (CC) gcc options: -pthread
OpenBenchmarking.org FPS, More Is Better dav1d 0.8.2 Video Input: Chimera 1080p -O2 -O3 -march=native 170 340 510 680 850 SE +/- 1.36, N = 3 SE +/- 0.33, N = 3 773.93 763.05 -O2 -lm - MIN: 589.24 / MAX: 1160.82 -O3 -march=native - MIN: 584.4 / MAX: 1127.78 1. (CC) gcc options: -pthread
Coremark This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Iterations/Sec, More Is Better Coremark 1.0 CoreMark Size 666 - Iterations Per Second -O2 -O3 -march=native -O3 -march=native -flto 90K 180K 270K 360K 450K SE +/- 1236.61, N = 3 SE +/- 1364.82, N = 3 SE +/- 166.46, N = 3 430127.50 432583.96 435901.44 -O3 -march=native -O3 -march=native -flto 1. (CC) gcc options: -O2 -lrt
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 0.9693 1.9386 2.9079 3.8772 4.8465 SE +/- 0.01250, N = 3 SE +/- 0.01947, N = 3 SE +/- 0.00501, N = 3 4.27077 4.30798 4.25176 MIN: 4.16 MIN: 4.19 -flto - MIN: 4.15 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 3 6 9 12 15 SE +/- 0.00, N = 3 SE +/- 0.01, N = 3 SE +/- 0.00, N = 3 12.37 12.51 12.52 MIN: 12.28 MIN: 12.43 -flto - MIN: 12.41 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
NCNN NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: vgg16 -O2 -O3 -march=native -O3 -march=native -flto 12 24 36 48 60 SE +/- 0.05, N = 15 SE +/- 0.14, N = 3 SE +/- 0.13, N = 3 54.80 54.50 54.13 MIN: 54.15 / MAX: 64 -O3 -march=native - MIN: 53.96 / MAX: 58.57 -O3 -march=native -flto - MIN: 53.54 / MAX: 59.11 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 0.3319 0.6638 0.9957 1.3276 1.6595 SE +/- 0.01597, N = 3 SE +/- 0.00602, N = 3 SE +/- 0.00575, N = 3 1.46726 1.45788 1.47524 MIN: 1.37 MIN: 1.36 -flto - MIN: 1.37 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 0.7133 1.4266 2.1399 2.8532 3.5665 SE +/- 0.00129, N = 3 SE +/- 0.00399, N = 3 SE +/- 0.00623, N = 3 3.13532 3.17026 3.13941 MIN: 3.07 MIN: 3.1 -flto - MIN: 3.07 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
SQLite Speedtest This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better SQLite Speedtest 3.30 Timed Time - Size 1,000 -O2 -O3 -march=native -O3 -march=native -flto 10 20 30 40 50 SE +/- 0.15, N = 3 SE +/- 0.30, N = 3 SE +/- 0.13, N = 3 43.62 44.09 43.78 -O2 -O3 -march=native -O3 -march=native -flto 1. (CC) gcc options: -ldl -lz -lpthread
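speedtest1 itself is a C program; as a loose, stdlib-only analogue of the kind of batched-insert workload it times (the schema and row count here are illustrative, not speedtest1's actual workload):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER PRIMARY KEY, b TEXT)")

start = time.perf_counter()
with conn:  # a single transaction: batching is what keeps this fast
    conn.executemany(
        "INSERT INTO t (b) VALUES (?)",
        ((f"row-{i}",) for i in range(100_000)),
    )
elapsed = time.perf_counter() - start

rows = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(f"inserted {rows} rows in {elapsed:.2f}s")
```

Wrapping the inserts in one transaction matters: committing each statement individually would be orders of magnitude slower, and speedtest1 likewise batches its statements.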
Zstd Compression
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 19, Long Mode - Compression Speed -O2 -O3 -march=native -O3 -march=native -flto 8 16 24 32 40 SE +/- 0.22, N = 3 SE +/- 0.23, N = 3 SE +/- 0.12, N = 3 32.7 33.0 32.8 -O2 -O3 -march=native -O3 -march=native -flto 1. (CC) gcc options: -pthread -lz
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 4 8 12 16 20 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 14.17 14.29 14.25 MIN: 14.04 MIN: 14.18 -flto - MIN: 14.14 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
NCNN
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: alexnet -O2 -O3 -march=native -O3 -march=native -flto 3 6 9 12 15 SE +/- 0.01, N = 15 SE +/- 0.01, N = 3 SE +/- 0.02, N = 3 9.63 9.63 9.70 MIN: 9.47 / MAX: 14.51 -O3 -march=native - MIN: 9.56 / MAX: 13.14 -O3 -march=native -flto - MIN: 9.56 / MAX: 13.19 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
PJSIP
OpenBenchmarking.org Responses Per Second, More Is Better PJSIP 2.11 Method: OPTIONS, Stateless -O2 -O3 -march=native -O3 -march=native -flto 50K 100K 150K 200K 250K SE +/- 504.43, N = 3 SE +/- 1015.58, N = 3 SE +/- 101.47, N = 3 239792 241439 239892 -O2 -O3 -march=native -O3 -march=native -flto 1. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread
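The stateless OPTIONS test measures raw SIP message throughput. For illustration only, here is a minimal RFC 3261-style OPTIONS request of the general shape pjsip-perf generates (the function and all header values are placeholders for this sketch, not what pjsip-perf actually sends):

```python
def build_options_request(target: str, branch: str) -> str:
    """Assemble a minimal SIP OPTIONS request (RFC 3261 framing)."""
    return (
        f"OPTIONS sip:{target} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP client.example.com;branch={branch}\r\n"
        "Max-Forwards: 70\r\n"
        "From: <sip:bench@client.example.com>;tag=1\r\n"
        f"To: <sip:{target}>\r\n"
        "Call-ID: bench-1@client.example.com\r\n"
        "CSeq: 1 OPTIONS\r\n"
        "Content-Length: 0\r\n"
        "\r\n"  # empty line terminates the header section
    )

request = build_options_request("server.example.com", "z9hG4bK776asdhds")
print(request.splitlines()[0])  # → OPTIONS sip:server.example.com SIP/2.0
```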
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 0.1625 0.325 0.4875 0.65 0.8125 SE +/- 0.001308, N = 3 SE +/- 0.002639, N = 3 SE +/- 0.001704, N = 3 0.717882 0.722430 0.720482 MIN: 0.66 MIN: 0.67 -flto - MIN: 0.67 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Crypto++ Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MiB/second, More Is Better Crypto++ 8.2 Test: Unkeyed Algorithms -O2 -O3 -march=native -O3 -march=native -flto 110 220 330 440 550 SE +/- 0.06, N = 3 SE +/- 0.14, N = 3 SE +/- 0.29, N = 3 491.64 489.76 488.63 -O2 -O3 -march=native -O3 -march=native -flto 1. (CXX) g++ options: -fPIC -pthread -pipe
Redis
OpenBenchmarking.org Requests Per Second, More Is Better Redis 6.0.9 Test: GET -O2 -O3 -march=native -O3 -march=native -flto 900K 1800K 2700K 3600K 4500K SE +/- 8839.00, N = 3 SE +/- 16885.42, N = 3 SE +/- 23615.46, N = 3 4051463.17 4036791.92 4060369.08 -O2 -march=native -march=native -flto 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 0.9149 1.8298 2.7447 3.6596 4.5745 SE +/- 0.00867, N = 3 SE +/- 0.00741, N = 3 SE +/- 0.00379, N = 3 4.04477 4.06617 4.04481 MIN: 3.93 MIN: 3.93 -flto - MIN: 3.91 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 4 8 12 16 20 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.01, N = 3 17.06 17.06 17.13 MIN: 16.67 MIN: 16.72 -flto - MIN: 16.73 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 0.795 1.59 2.385 3.18 3.975 SE +/- 0.00099, N = 3 SE +/- 0.00143, N = 3 SE +/- 0.00147, N = 3 3.53315 3.52791 3.52381 MIN: 3.47 MIN: 3.45 -flto - MIN: 3.46 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 0.1871 0.3742 0.5613 0.7484 0.9355 SE +/- 0.003232, N = 3 SE +/- 0.003135, N = 3 SE +/- 0.003541, N = 3 0.829564 0.829637 0.831699 MIN: 0.81 MIN: 0.81 -flto - MIN: 0.81 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 0.2977 0.5954 0.8931 1.1908 1.4885 SE +/- 0.00212, N = 3 SE +/- 0.00166, N = 3 SE +/- 0.00175, N = 3 1.32100 1.32271 1.32311 MIN: 1.25 MIN: 1.26 -flto - MIN: 1.26 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
PJSIP
OpenBenchmarking.org Responses Per Second, More Is Better PJSIP 2.11 Method: OPTIONS, Stateful -O2 -O3 -march=native -O3 -march=native -flto 2K 4K 6K 8K 10K SE +/- 1.67, N = 3 SE +/- 6.96, N = 3 SE +/- 4.58, N = 3 9381 9389 9395 -O2 -O3 -march=native -O3 -march=native -flto 1. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 0.7965 1.593 2.3895 3.186 3.9825 SE +/- 0.00185, N = 3 SE +/- 0.00232, N = 3 SE +/- 0.00020, N = 3 3.54019 3.53708 3.53500 MIN: 3.46 MIN: 3.41 -flto - MIN: 3.44 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Sysbench This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Events Per Second, More Is Better Sysbench 1.0.20 Test: CPU -O2 -O3 -march=native -O3 -march=native -flto 7K 14K 21K 28K 35K SE +/- 0.65, N = 3 SE +/- 0.97, N = 3 SE +/- 1.11, N = 3 34799.70 34776.08 34751.01 -O3 -march=native -O3 -march=native -flto 1. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
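Sysbench's CPU test repeatedly verifies primality by trial division up to a configurable bound. A simplified single-event sketch (the parameter mirrors sysbench's --cpu-max-prime option; the real implementation differs in detail):

```python
def cpu_event(max_prime: int = 10_000) -> int:
    """One simplified 'event': count odd primes up to max_prime
    by trial division, roughly what sysbench's CPU test does."""
    count = 0
    for n in range(3, max_prime + 1):
        divisor = 2
        while divisor * divisor <= n:
            if n % divisor == 0:
                break
            divisor += 1
        else:  # no divisor found: n is prime
            count += 1
    return count

print(cpu_event(100))  # → 24 (the odd primes up to 100)
```

Events per second scale with how fast the core grinds through this integer loop, which helps explain why the three compiler configurations land within a fraction of a percent of each other here.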
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 4 8 12 16 20 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 16.17 16.18 16.19 MIN: 16.09 MIN: 16.09 -flto - MIN: 16.09 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU -O2 -O3 -march=native -O3 -march=native -flto 2 4 6 8 10 SE +/- 0.00352, N = 3 SE +/- 0.00184, N = 3 SE +/- 0.00390, N = 3 8.57623 8.57548 8.57248 MIN: 8.42 MIN: 8.41 -flto - MIN: 8.44 1. (CXX) g++ options: -O3 -march=native -O2 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
GNU GMP GMPbench GMPbench is a test of the GNU Multiple Precision Arithmetic (GMP) Library. GMPbench is a single-threaded integer benchmark that leverages the GMP library to stress the CPU with widening integer multiplication. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GMPbench Score, More Is Better GNU GMP GMPbench 6.2.1 Total Time -O3 -march=native -O3 -march=native -flto 1300 2600 3900 5200 6500 6172.9 6171.6 -flto 1. (CC) gcc options: -O3 -march=native -lm
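Widening multiplication means producing the full 2n-bit product of two n-bit operands. Python integers are arbitrary precision, so the limb-level step GMP performs in C and assembly can be sketched as a schoolbook multiply over 32-bit limbs (a simplification of GMP's actual mpn routines, shown here for illustration):

```python
MASK32 = (1 << 32) - 1

def widening_mul64(a: int, b: int) -> tuple[int, int]:
    """Return the (high, low) 64-bit halves of the 128-bit product
    of two 64-bit operands, via 32-bit-limb schoolbook multiplication."""
    a0, a1 = a & MASK32, a >> 32
    b0, b1 = b & MASK32, b >> 32
    lo = a0 * b0
    mid = a1 * b0 + a0 * b1 + (lo >> 32)  # Python ints never drop carries
    low = ((mid & MASK32) << 32) | (lo & MASK32)
    high = a1 * b1 + (mid >> 32)
    return high, low

high, low = widening_mul64(2**63, 3)
print(hex(high), hex(low))  # → 0x1 0x8000000000000000
```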
NCNN
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: CPU - Model: blazeface -O2 -O3 -march=native -O3 -march=native -flto 0.3803 0.7606 1.1409 1.5212 1.9015 SE +/- 0.01, N = 15 SE +/- 0.06, N = 3 SE +/- 0.01, N = 3 1.19 1.19 1.69 MIN: 1.14 / MAX: 5.67 -O3 -march=native - MIN: 1.09 / MAX: 2.02 -O3 -march=native -flto - MIN: 1.64 / MAX: 2.46 1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
GCC 11.1: -O3 -march=native
Testing initiated at 16 May 2021 11:48 by user phoronix.
GCC 11.1: -O3 -march=native -flto
Testing initiated at 16 May 2021 19:01 by user phoronix.
GCC 11.1: -O2 Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads), Motherboard: ASUS ROG MAXIMUS XIII HERO (0707 BIOS), Chipset: Intel Tiger Lake-H, Memory: 32GB, Disk: 500GB Western Digital WDS500G3X0C-00SJG0 + 15GB Ultra USB 3.0, Graphics: AMD Radeon VII 16GB (1801/1000MHz), Audio: Intel Tiger Lake-H HD Audio, Monitor: ASUS MG28U, Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Fedora 34, Kernel: 5.11.20-300.fc34.x86_64 (x86_64), Desktop: GNOME Shell 40.1, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 21.0.3 (LLVM 12.0.0), Compiler: GCC 11.1.1 20210428, File-System: btrfs, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madviseEnvironment Notes: CXXFLAGS=-O2 CFLAGS=-O2Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driverProcessor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x3c - Thermald 2.4.1Security Notes: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 17 May 2021 04:40 by user phoronix.