Intel Core i9 11900K Lake AVX-512

Intel Core i9-11900K Rocket Lake AVX-512 benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2104074-PTS-ROCKET5105
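As a minimal sketch of how one of these runs could be reproduced locally (assuming the Phoronix Test Suite is already installed), the per-run compiler flags from the system log below can be exported before invoking the comparison command, e.g. for the AVX-512 configuration:

    # Hypothetical reproduction of the AVX-512 run; flags taken from the system log below
    export CFLAGS="-O3 -march=native -mprefer-vector-width=512"
    export CXXFLAGS="-O3 -march=native -mprefer-vector-width=512"
    phoronix-test-suite benchmark 2104074-PTS-ROCKET5105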

Run Management

Result Identifier    Date             Test Duration
No AVX               April 03 2021    1 Hour, 42 Minutes
AVX                  April 02 2021    1 Hour, 9 Minutes
AVX2                 April 02 2021    1 Hour, 51 Minutes
AVX-512              April 02 2021    1 Hour, 50 Minutes


Intel Core i9 11900K Lake AVX-512 Benchmarks - OpenBenchmarking.org / Phoronix Test Suite

Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads)
Motherboard: ASUS ROG MAXIMUS XIII HERO (0703 BIOS)
Chipset: Intel Tiger Lake-H
Memory: 32GB
Disk: 1000GB Western Digital WD_BLACK SN850 1TB
Graphics: AMD Radeon RX 6800/6800 XT / 6900 16GB (2575/1000MHz)
Audio: Intel Tiger Lake-H HD Audio
Monitor: ASUS MG28U
Network: 2 x Intel I225-V + Intel Device 2725
OS: Ubuntu 21.04
Kernel: 5.12.0-051200rc3daily20210315-generic (x86_64) 20210314
Desktop: GNOME Shell 3.38.3
Display Server: X Server 1.20.10 + Wayland
OpenGL: 4.6 Mesa 21.1.0-devel (git-616720d 2021-03-16 hirsute-oibaf-ppa) (LLVM 12.0.0)
Compiler: GCC 10.2.1 20210320
File-System: ext4
Screen Resolution: 3840x2160

System Logs:
- Transparent Huge Pages: madvise
- No AVX: CFLAGS/CXXFLAGS="-O3 -march=native -mno-avx" DEBUGINFOD_URLS=
- AVX: CFLAGS/CXXFLAGS="-O3 -march=native -mno-avx2" DEBUGINFOD_URLS=
- AVX2: CFLAGS/CXXFLAGS="-O3 -march=native -mno-avx512f" DEBUGINFOD_URLS=
- AVX-512: CFLAGS/CXXFLAGS="-O3 -march=native -mprefer-vector-width=512" DEBUGINFOD_URLS=
- GCC configure: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-DjbZbO/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-DjbZbO/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- CPU Microcode: 0x3c
- Thermald 2.4.3
- Python 3.9.2
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
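The four result identifiers differ only in these CFLAGS/CXXFLAGS. As an illustrative sketch (not part of the original run), the effect of each flag set on the instruction-set extensions GCC actually targets can be inspected from its predefined macros:

    # Print which AVX-related macros GCC defines under each of the four flag sets
    for flags in "-mno-avx" "-mno-avx2" "-mno-avx512f" "-mprefer-vector-width=512"; do
        echo "== -O3 -march=native $flags =="
        gcc -O3 -march=native $flags -dM -E - < /dev/null | grep -E '__AVX(2|512F)?__' | sort
    done

On this CPU the No AVX build should define none of these macros, the AVX build only __AVX__, the AVX2 build __AVX__ and __AVX2__, while the AVX-512 build defines all three and additionally asks GCC to prefer 512-bit vectors when auto-vectorizing.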

[Result Overview chart: relative performance of the No AVX / AVX / AVX2 / AVX-512 runs (scale 100%-200%) across Caffe, QMCPACK, LeelaChessZero, RNNoise, Darmstadt Automotive Parallel Heterogeneous Suite, AOM AV1, SVT-HEVC, dav1d, and Crypto++.]

[Per Watt Result Overview chart: relative performance per watt (per-watt geometric mean) of the No AVX / AVX / AVX2 / AVX-512 runs (scale 100%-141%) across LeelaChessZero, SVT-HEVC, AOM AV1, Crypto++, dav1d, and Darmstadt Automotive Parallel Heterogeneous Suite.]

[Overview table listing every result for the No AVX, AVX, AVX2, and AVX-512 runs; the same numbers are broken out per test below.]

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 (Nodes Per Second, More Is Better)
  (CXX) g++ options: -flto -O3 -march=native -pthread
  Backend: BLAS    No AVX: 469 | AVX: 474 | AVX2: 479 | AVX-512: 482
  Backend: Eigen   No AVX: 610 | AVX: 795 | AVX2: 817 | AVX-512: 816

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2 (FPS, More Is Better)
  (CC) gcc options: -O3 -march=native -pthread
  Video Input: Summer Nature 1080p   No AVX: 706.74 | AVX: 707.94 | AVX2: 712.42 | AVX-512: 724.55
  Video Input: Summer Nature 4K      No AVX: 186.79 | AVX: 186.97 | AVX2: 190.81 | AVX-512: 190.58

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 (ms, Fewer Is Better; no results were recorded for the AVX run)
  (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
  Target: CPU - Model: mobilenet            No AVX: 16.26 | AVX2: 15.96 | AVX-512: 12.69
  Target: CPU-v2-v2 - Model: mobilenet-v2   No AVX: 4.70  | AVX2: 4.38  | AVX-512: 3.49
  Target: CPU-v3-v3 - Model: mobilenet-v3   No AVX: 3.72  | AVX2: 3.61  | AVX-512: 2.84
  Target: CPU - Model: shufflenet-v2        No AVX: 3.74  | AVX2: 3.74  | AVX-512: 3.47
  Target: CPU - Model: mnasnet              No AVX: 3.57  | AVX2: 3.50  | AVX-512: 2.56
  Target: CPU - Model: efficientnet-b0      No AVX: 5.77  | AVX2: 5.58  | AVX-512: 4.61
  Target: CPU - Model: blazeface            No AVX: 1.44  | AVX2: 1.36  | AVX-512: 1.26
  Target: CPU - Model: googlenet            No AVX: 13.19 | AVX2: 12.44 | AVX-512: 11.22
  Target: CPU - Model: resnet18             No AVX: 12.94 | AVX2: 12.86 | AVX-512: 11.90
  Target: CPU - Model: resnet50             No AVX: 24.80 | AVX2: 24.53 | AVX-512: 19.67
  Target: CPU - Model: yolov4-tiny          No AVX: 23.59 | AVX2: 23.34 | AVX-512: 21.96
  Target: CPU - Model: squeezenet_ssd       No AVX: 18.35 | AVX2: 17.59 | AVX-512: 17.00
  Target: CPU - Model: regnety_400m         No AVX: 10.99 | AVX2: 10.85 | AVX-512: 9.49

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and Googlenet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 (Milli-Seconds, Fewer Is Better)
  (CXX) g++ options: -O3 -march=native -fPIC -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
  Model: AlexNet - Acceleration: CPU - Iterations: 200   No AVX: 70098 | AVX: 63742 | AVX2: 62352 | AVX-512: 62725

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 (Frames Per Second, More Is Better)
  (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K   No AVX: 43.26 | AVX: 43.12 | AVX2: 42.95 | AVX-512: 44.02
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K   No AVX: 16.49 | AVX: 16.69 | AVX2: 16.55 | AVX-512: 16.89
  Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K   No AVX: 7.48 | AVX: 7.62 | AVX2: 7.58 | AVX-512: 7.92

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 (Frames Per Second, More Is Better)
  (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
  Tuning: 7 - Input: Bosphorus 1080p    No AVX: 136.21 | AVX: 137.53 | AVX2: 138.88 | AVX-512: 139.72
  Tuning: 10 - Input: Bosphorus 1080p   No AVX: 266.66 | AVX: 268.38 | AVX2: 272.74 | AVX-512: 272.03

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, providing OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of vehicle autonomous-driving workloads. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite (Test Cases Per Minute, More Is Better)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp
  Backend: OpenMP - Kernel: Euclidean Cluster   No AVX: 1387.81 | AVX: 1406.18 | AVX2: 1452.06 | AVX-512: 1486.22

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.15.5 (kH/s, More Is Better; only the AVX2 and AVX-512 runs were recorded)
  (CXX) g++ options: -O3 -march=native -lcurl -lz -lpthread -lssl -lcrypto -lgmp
  Algorithm: x25x        AVX2: 333.16 | AVX-512: 363.61
  Algorithm: Garlicoin   AVX2: 2491.10 | AVX-512: 5395.07
  Algorithm: Deepcoin    AVX2: 9756.46 | AVX-512: 11983.00 (SE +/- 743.82, N = 15)

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2 (MiB/second, More Is Better)
  (CXX) g++ options: -O3 -march=native -fPIC -pthread -pipe
  Test: Integer + Elliptic Curve Public Key Algorithms   No AVX: 6307.13 | AVX: 6353.93 | AVX2: 6324.10 | AVX-512: 6432.51

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better)
  (CC) gcc options: -O3 -march=native -pedantic -fvisibility=hidden
  No AVX: 19.94 | AVX: 19.08 | AVX2: 19.05 | AVX-512: 18.49

Timed MrBayes Analysis

This test performs a bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, Fewer Is Better; no result was recorded for the No AVX run)
  (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mmpx -mabm -O3 -std=c99 -pedantic -march=native -lm
  AVX: 63.51 | AVX2: 59.28 | AVX-512: 57.07

QMCPACK

QMCPACK is a modern, high-performance, open-source Quantum Monte Carlo (QMC) simulation code; this benchmark uses MPI and the simple-H2O example. QMCPACK is a production-level many-body ab initio QMC code for computing the electronic structure of atoms, molecules, and solids, and is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.10 - Input: simple-H2O (Total Execution Time - Seconds, Fewer Is Better)
  (CXX) g++ options: -O3 -march=native -fopenmp -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -fomit-frame-pointer -ffast-math -pthread -lm -ldl
  No AVX: 24.91 | AVX: 21.58 | AVX2: 22.36 | AVX-512: 20.83

CPU Power Consumption Monitor

Phoronix Test Suite System Monitoring (Watts)
  No AVX:   Min: 11.68 / Avg: 128.15 / Max: 266.6
  AVX:      Min: 11.75 / Avg: 124.13 / Max: 265.19
  AVX2:     Min: 6.12 / Avg: 107.53 / Max: 264.45
  AVX-512:  Min: 11.62 / Avg: 125.93 / Max: 292.4

CPU Temperature Monitor

Phoronix Test Suite System Monitoring (Celsius)
  No AVX:   Min: 27 / Avg: 62.85 / Max: 93
  AVX:      Min: 29 / Avg: 62.37 / Max: 94
  AVX2:     Min: 28 / Avg: 57.79 / Max: 93
  AVX-512:  Min: 29 / Avg: 62.23 / Max: 95