AMD EPYC 7F72 2P Linux 5.11

2 x AMD EPYC 7F72 24-Core testing looking at CPU frequency invariance on Linux 5.11 with and without the proposed patch. CPU power consumption was monitored via the AMD_Energy interface at 1-second polling.
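
The power-monitoring side of this comparison can be approximated with a small polling loop over the amd_energy hwmon counters. The sketch below is not the Phoronix Test Suite's own sensor code, and the assumption that the driver registers under the hwmon name "amd_energy" with cumulative energy*_input counters in microjoules should be verified on the target kernel.

    import glob
    import os
    import time

    def find_amd_energy_inputs():
        """Locate energy*_input files exposed by the amd_energy hwmon driver (assumed layout)."""
        inputs = []
        for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
            try:
                with open(os.path.join(hwmon, "name")) as f:
                    name = f.read().strip()
            except OSError:
                continue
            if name == "amd_energy":
                inputs.extend(glob.glob(os.path.join(hwmon, "energy*_input")))
        return inputs

    def read_total_energy_uj(paths):
        """Sum the cumulative energy counters (assumed to be reported in microjoules)."""
        return sum(int(open(p).read()) for p in paths)

    if __name__ == "__main__":
        paths = find_amd_energy_inputs()
        if not paths:
            raise SystemExit("amd_energy hwmon interface not found")
        prev = read_total_energy_uj(paths)
        while True:
            time.sleep(1)  # 1-second polling, as described above
            cur = read_total_energy_uj(paths)
            print(f"{(cur - prev) / 1e6:.1f} W")  # microjoules per second -> watts
            prev = cur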

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101248-HA-AMDEPYC7F52

The tests in this result file fall within the following categories:

AV1 3 Tests
Bioinformatics 3 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests 4 Tests
C++ Boost Tests 5 Tests
Chess Test Suite 4 Tests
Timed Code Compilation 5 Tests
C/C++ Compiler Tests 20 Tests
Compression Tests 2 Tests
CPU Massive 38 Tests
Creator Workloads 21 Tests
Cryptography 4 Tests
Database Test Suite 4 Tests
Encoding 5 Tests
Finance 2 Tests
Fortran Tests 7 Tests
Game Development 5 Tests
HPC - High Performance Computing 28 Tests
LAPACK (Linear Algebra Pack) Tests 2 Tests
Machine Learning 8 Tests
Molecular Dynamics 6 Tests
MPI Benchmarks 8 Tests
Multi-Core 35 Tests
NVIDIA GPU Compute 8 Tests
Intel oneAPI 4 Tests
OpenMPI Tests 15 Tests
Programmer / Developer System Benchmarks 9 Tests
Python 2 Tests
Quantum Mechanics 2 Tests
Raytracing 4 Tests
Renderers 9 Tests
Scientific Computing 15 Tests
Server 6 Tests
Server CPU Tests 22 Tests
Single-Threaded 6 Tests
Texture Compression 2 Tests
Video Encoding 5 Tests
Common Workstation Benchmarks 6 Tests

Test Runs:
  Linux 5.10 - January 21 2021 - Test Duration: 16 Hours, 7 Minutes
  Linux 5.11 Git - January 22 2021 - Test Duration: 15 Hours
  Linux 5.11 Patched - January 23 2021 - Test Duration: 15 Hours, 14 Minutes



AMD EPYC 7F72 2P Linux 5.11 - System Configuration (Linux 5.10 / Linux 5.11 Git / Linux 5.11 Patched):
  Processor: 2 x AMD EPYC 7F72 24-Core @ 3.20GHz (48 Cores / 96 Threads)
  Motherboard: Supermicro H11DSi-NT v2.00 (2.1 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 16 x 8192 MB DDR4-3200MT/s HMA81GR7CJR8N-XN
  Disk: 1000GB Western Digital WD_BLACK SN850 1TB
  Graphics: ASPEED
  Monitor: VE228
  Network: 2 x Intel 10G X550T
  OS: Ubuntu 20.10
  Kernel: 5.10.9-051009-generic (x86_64) [Linux 5.10]; 5.11.0-051100rc4daily20210122-generic (x86_64) 20210121 [Linux 5.11 Git]; 5.11.0-rc4-max-boost-inv-patch (x86_64) 20210121 [Linux 5.11 Patched]
  Desktop: GNOME Shell 3.38.1
  Display Server: X Server 1.20.9
  Display Driver: modesetting 1.20.9
  Compiler: GCC 10.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

OpenBenchmarking.org Notes:
  Kernel Details: Transparent Huge Pages: madvise
  Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Disk Details: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
  Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301034
  Java Details: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.10)
  Python Details: Python 3.8.6
  Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite, normalized relative performance, 100% - 150% scale) comparing Linux 5.10, Linux 5.11 Git, and Linux 5.11 Patched across FinanceBench, TTSIOD 3D Renderer, LAMMPS Molecular Dynamics Simulator, BlogBench, Timed GDB GNU Debugger Compilation, IOR, FFTW, KeyDB, LULESH, TensorFlow Lite, AI Benchmark Alpha, rav1e, DaCapo Benchmark, John The Ripper, Cpuminer-Opt, x265, NAS Parallel Benchmarks, FFTE, TNN, OSPray, oneDNN, Rodinia, Redis, ONNX Runtime, PlaidML, Timed Godot Game Engine Compilation, BRL-CAD, SVT-VP9, Quantum ESPRESSO, LeelaChessZero, and SVT-AV1.

AMD EPYC 7F72 2P Linux 5.11 - results summary table for Linux 5.10, Linux 5.11 Git, and Linux 5.11 Patched (ttsiod-renderer, lammps, blogbench, tensorflow-lite, build-gdb, ior, fftw, keydb, ai-benchmark, dacapobench, rav1e, lulesh, x265, onednn, john-the-ripper, cpuminer-opt, financebench, tnn, npb, redis, ffte, ospray, rodinia, svt-vp9, onnx, plaidml, qe, build-godot, brl-cad, lczero, svt-av1); the individual results are detailed in the sections below.

TTSIOD 3D Renderer

A portable GPL 3D software renderer that supports OpenMP and Intel Threading Building Blocks with many different rendering modes. This version does not use OpenGL but is entirely CPU/software based. Learn more via the OpenBenchmarking.org test page.

TTSIOD 3D Renderer 2.3b - Phong Rendering With Soft-Shadow Mapping (FPS, more is better):
  Linux 5.10: 726.87 (SE +/- 10.33, N = 3; Min: 706.76 / Max: 741.07)
  Linux 5.11 Git: 627.21 (SE +/- 9.04, N = 15; Min: 572.05 / Max: 682.22)
  Linux 5.11 Patched: 655.23 (SE +/- 3.22, N = 3; Min: 651.25 / Max: 661.59)
  1. (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -msse -mrecip -mfpmath=sse -msse2 -mssse3 -lSDL -fopenmp -fwhole-program -lstdc++

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, more is better):
  Linux 5.10: 23.57 (SE +/- 0.23, N = 15; Min: 21.87 / Max: 24.66)
  Linux 5.11 Git: 21.13 (SE +/- 0.23, N = 15; Min: 20.01 / Max: 23.09)
  Linux 5.11 Patched: 23.79 (SE +/- 0.17, N = 12; Min: 23 / Max: 25.06)
  1. (CXX) g++ options: -O3 -pthread -lm

BlogBench

BlogBench is designed to replicate the load of a real-world busy file server by stressing the file-system with multiple threads of random reads, writes, and rewrites. The behavior mimics that of a blog by creating blogs with content and pictures, modifying blog posts, adding comments to these blogs, and then reading the content of the blogs. All of the blogs generated are created locally with fake content and pictures. Learn more via the OpenBenchmarking.org test page.
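
As a rough illustration of that access pattern (not BlogBench's own C implementation), a minimal multi-threaded random read/rewrite loop might look like the following, with the scratch directory, file count, and thread count chosen arbitrarily:

    import os
    import random
    import threading

    ROOT = "/tmp/blogbench-sketch"   # arbitrary scratch directory
    NUM_FILES, NUM_THREADS, OPS = 64, 8, 1000

    os.makedirs(ROOT, exist_ok=True)

    def worker(seed):
        rng = random.Random(seed)
        for _ in range(OPS):
            path = os.path.join(ROOT, f"post_{rng.randrange(NUM_FILES)}.dat")
            if rng.random() < 0.5:
                # rewrite a "blog post" with fake content
                with open(path, "wb") as f:
                    f.write(os.urandom(4096))
            else:
                # read it back if it already exists
                try:
                    with open(path, "rb") as f:
                        f.read()
                except FileNotFoundError:
                    pass

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()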

BlogBench 1.1 - Test: Read (Final Score, more is better):
  Linux 5.10: 981673 (SE +/- 4087.26, N = 3; Min: 973502 / Max: 985966)
  Linux 5.11 Git: 1084405 (SE +/- 10984.18, N = 9; Min: 1006413 / Max: 1116895)
  Linux 5.11 Patched: 1103118 (SE +/- 1738.41, N = 3; Min: 1099845 / Max: 1105770)
  1. (CC) gcc options: -O2 -pthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
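
Since the reported metric is average inference time, the sketch below shows how such a measurement is commonly taken with the TensorFlow Lite Python interpreter; the model path, float32 input type, and run count are placeholders rather than the test profile's actual harness.

    import time
    import numpy as np
    import tensorflow as tf

    # Placeholder model path; the test profile ships its own Inception/SqueezeNet models.
    interpreter = tf.lite.Interpreter(model_path="inception_v4.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Assumes a float32 model; quantized models would need a different input dtype.
    data = np.random.random_sample(tuple(inp["shape"])).astype(np.float32)

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], data)
        interpreter.invoke()
        _ = interpreter.get_tensor(out["index"])
    elapsed = time.perf_counter() - start
    print(f"average inference time: {elapsed / runs * 1e6:.0f} microseconds")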

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, fewer is better):
  Linux 5.10: 835208 (SE +/- 4174.26, N = 3; Min: 828914 / Max: 843105)
  Linux 5.11 Git: 894640 (SE +/- 2435.29, N = 3; Min: 889793 / Max: 897478)
  Linux 5.11 Patched: 810750 (SE +/- 1163.43, N = 3; Min: 808813 / Max: 812835)

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, fewer is better):
  Linux 5.10: 102.50 (SE +/- 0.64, N = 3; Min: 101.41 / Max: 103.64)
  Linux 5.11 Git: 97.64 (SE +/- 0.40, N = 3; Min: 97.05 / Max: 98.41)
  Linux 5.11 Patched: 92.92 (SE +/- 0.43, N = 3; Min: 92.05 / Max: 93.37)

IOR

IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.

IOR 3.3.0 - Block Size: 2MB - Disk Target: Default Test Directory (MB/s, more is better):
  Linux 5.10: 517.13 (SE +/- 1.76, N = 3; Min: 514.21 / Max: 520.29; reported MIN: 453.79 / MAX: 894.75)
  Linux 5.11 Git: 505.19 (SE +/- 1.77, N = 3; Min: 501.85 / Max: 507.87; reported MIN: 457.62 / MAX: 951.11)
  Linux 5.11 Patched: 475.25 (SE +/- 2.06, N = 3; Min: 473.03 / Max: 479.37; reported MIN: 400.96 / MAX: 971.55)
  1. (CC) gcc options: -O2 -lm -pthread -lmpi

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Float + SSE - Size: 2D FFT Size 4096 (Mflops, more is better):
  Linux 5.10: 17440 (SE +/- 280.66, N = 6; Min: 16663 / Max: 18715)
  Linux 5.11 Git: 18468 (SE +/- 24.98, N = 3; Min: 18432 / Max: 18516)
  Linux 5.11 Patched: 17015 (SE +/- 213.45, N = 3; Min: 16653 / Max: 17392)
  1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (Ops/sec, more is better):
  Linux 5.10: 280163.95 (SE +/- 3843.85, N = 3; Min: 272534.74 / Max: 284798.31)
  Linux 5.11 Git: 302893.56 (SE +/- 4239.68, N = 15; Min: 275189.6 / Max: 338020.92)
  Linux 5.11 Patched: 294214.37 (SE +/- 3012.50, N = 15; Min: 273233.06 / Max: 314258.83)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
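
For reference, the library is normally driven from Python roughly as sketched below, following the ai-benchmark package's documented entry point; whether any additional arguments are needed for CPU-only runs is left as an assumption.

    # pip install ai-benchmark (requires a working TensorFlow install)
    from ai_benchmark import AIBenchmark

    benchmark = AIBenchmark()
    results = benchmark.run()  # runs the inference and training workloads and prints
                               # the Device Inference / Training / AI Scores reported here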

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, more is better):
  Linux 5.10: 1593
  Linux 5.11 Git: 1697
  Linux 5.11 Patched: 1720

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, fewer is better):
  Linux 5.10: 6032 (SE +/- 65.72, N = 4; Min: 5851 / Max: 6166)
  Linux 5.11 Git: 5954 (SE +/- 50.83, N = 20; Min: 5457 / Max: 6300)
  Linux 5.11 Patched: 5591 (SE +/- 66.39, N = 20; Min: 5113 / Max: 6277)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 10 (Frames Per Second, more is better):
  Linux 5.10: 2.837 (SE +/- 0.024, N = 3; Min: 2.8 / Max: 2.88)
  Linux 5.11 Git: 2.902 (SE +/- 0.016, N = 3; Min: 2.87 / Max: 2.93)
  Linux 5.11 Patched: 3.054 (SE +/- 0.008, N = 3; Min: 3.04 / Max: 3.07)

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, more is better):
  Linux 5.10: 18424.98 (SE +/- 149.06, N = 5; Min: 18060.77 / Max: 18845.74)
  Linux 5.11 Git: 19576.12 (SE +/- 67.78, N = 5; Min: 19339.97 / Max: 19704.93)
  Linux 5.11 Patched: 19771.22 (SE +/- 171.84, N = 5; Min: 19188.04 / Max: 20140.78)
  1. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, fewer is better):
  Linux 5.10: 61294.2 (SE +/- 72.63, N = 3; Min: 61166.4 / Max: 61417.9)
  Linux 5.11 Git: 65193.0 (SE +/- 690.93, N = 3; Min: 63856.6 / Max: 66165.7)
  Linux 5.11 Patched: 62195.4 (SE +/- 412.91, N = 15; Min: 59468.9 / Max: 65210.3)

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, more is better):
  Linux 5.10: 2621
  Linux 5.11 Git: 2756
  Linux 5.11 Patched: 2787

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 1 (Frames Per Second, more is better):
  Linux 5.10: 0.351 (SE +/- 0.003, N = 3; Min: 0.35 / Max: 0.36)
  Linux 5.11 Git: 0.368 (SE +/- 0.002, N = 3; Min: 0.36 / Max: 0.37)
  Linux 5.11 Patched: 0.372 (SE +/- 0.001, N = 3; Min: 0.37 / Max: 0.37)

x265

This is a simple test of the x265 H.265/HEVC encoder run on the CPU with 1080p and 4K video inputs to measure encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better):
  Linux 5.10: 19.26 (SE +/- 0.13, N = 3; Min: 19.07 / Max: 19.52)
  Linux 5.11 Git: 18.63 (SE +/- 0.10, N = 3; Min: 18.46 / Max: 18.82)
  Linux 5.11 Patched: 19.74 (SE +/- 0.14, N = 3; Min: 19.52 / Max: 19.99)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Linux 5.10: 0.872441 (SE +/- 0.003739, N = 7; Min: 0.86 / Max: 0.89; reported MIN: 0.79)
  Linux 5.11 Git: 0.914198 (SE +/- 0.006064, N = 7; Min: 0.9 / Max: 0.95; reported MIN: 0.78)
  Linux 5.11 Patched: 0.863782 (SE +/- 0.001510, N = 7; Min: 0.86 / Max: 0.87; reported MIN: 0.79)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Linux 5.10: 0.519015 (SE +/- 0.004888, N = 4; Min: 0.51 / Max: 0.53; reported MIN: 0.43)
  Linux 5.11 Git: 0.547674 (SE +/- 0.005010, N = 4; Min: 0.54 / Max: 0.56; reported MIN: 0.43)
  Linux 5.11 Patched: 0.521968 (SE +/- 0.004601, N = 4; Min: 0.51 / Max: 0.54; reported MIN: 0.43)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: MD5 (Real C/S, more is better):
  Linux 5.10: 4780667 (SE +/- 8171.77, N = 3; Min: 4772000 / Max: 4797000)
  Linux 5.11 Git: 4550333 (SE +/- 49184.46, N = 3; Min: 4455000 / Max: 4619000)
  Linux 5.11 Patched: 4612308 (SE +/- 54344.04, N = 13; Min: 4069000 / Max: 4771000)
  1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.15.5 - Algorithm: LBC, LBRY Credits (kH/s, more is better):
  Linux 5.10: 135602 (SE +/- 1088.59, N = 15; Min: 128470 / Max: 143480)
  Linux 5.11 Git: 132477 (SE +/- 1036.73, N = 3; Min: 130710 / Max: 134300)
  Linux 5.11 Patched: 139037 (SE +/- 1380.06, N = 3; Min: 136670 / Max: 141450)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 5 (Frames Per Second, more is better):
  Linux 5.10: 1.018 (SE +/- 0.006, N = 3; Min: 1.01 / Max: 1.03)
  Linux 5.11 Git: 1.045 (SE +/- 0.001, N = 3; Min: 1.04 / Max: 1.05)
  Linux 5.11 Patched: 1.068 (SE +/- 0.001, N = 3; Min: 1.07 / Max: 1.07)

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), Bonds (fixed-rate bond with flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
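
For context on the Black-Scholes-Merton test case, the textbook analytic European call price can be written as below; this is only the underlying formula in Python, not FinanceBench's OpenMP C++ kernel.

    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, T, r, sigma):
        """Analytic Black-Scholes-Merton price of a European call option."""
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    # Example with arbitrary spot, strike, maturity, rate, and volatility:
    print(bs_call(S=100.0, K=105.0, T=1.0, r=0.01, sigma=0.2))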

FinanceBench 2016-07-25 - Benchmark: Repo OpenMP (ms, fewer is better):
  Linux 5.10: 41287.10 (SE +/- 215.27, N = 3; Min: 40925.3 / Max: 41670.13)
  Linux 5.11 Git: 40124.37 (SE +/- 319.03, N = 3; Min: 39582.09 / Max: 40686.69)
  Linux 5.11 Patched: 39406.76 (SE +/- 393.10, N = 3; Min: 38632.56 / Max: 39912.35)
  1. (CXX) g++ options: -O3 -march=native -fopenmp

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better):
  Linux 5.10: 291.43 (SE +/- 0.66, N = 3; Min: 290.11 / Max: 292.09; reported MIN: 283.33 / MAX: 459.62)
  Linux 5.11 Git: 303.45 (SE +/- 3.80, N = 3; Min: 299.58 / Max: 311.05; reported MIN: 284.51 / MAX: 461.21)
  Linux 5.11 Patched: 289.76 (SE +/- 2.83, N = 3; Min: 285.31 / Max: 295; reported MIN: 283.65 / MAX: 458.79)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, more is better):
  Linux 5.10: 152840.60 (SE +/- 547.25, N = 4; Min: 151377.98 / Max: 153973.8)
  Linux 5.11 Git: 147443.86 (SE +/- 1780.52, N = 15; Min: 130785.22 / Max: 153153.3)
  Linux 5.11 Patched: 154376.76 (SE +/- 509.59, N = 4; Min: 153161.9 / Max: 155556.23)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.0.3

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
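
For reference, the SET and SADD commands being exercised look like the following from the redis-py client; the test profile drives the server with a dedicated load generator, so this single-client loop without pipelining is only an illustration of the operations, with the default host and port assumed.

    import time
    import redis  # pip install redis

    r = redis.Redis(host="localhost", port=6379)

    ops = 10000
    start = time.perf_counter()
    for i in range(ops):
        r.set(f"key:{i}", "value")          # SET
        r.sadd("myset", f"member:{i}")      # SADD
    elapsed = time.perf_counter() - start
    print(f"{2 * ops / elapsed:.0f} requests per second (single client, no pipelining)")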

Redis 6.0.9 - Test: SADD (Requests Per Second, more is better):
  Linux 5.10: 1563597.66 (SE +/- 16159.15, N = 3; Min: 1532616.12 / Max: 1587054.75)
  Linux 5.11 Git: 1539146.21 (SE +/- 16361.41, N = 3; Min: 1521665.25 / Max: 1571842.88)
  Linux 5.11 Patched: 1611164.34 (SE +/- 15585.71, N = 4; Min: 1584073.5 / Max: 1649898.12)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
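
To make the MFLOPS figure concrete, the sketch below times an N=256 3D complex transform with NumPy's FFT (a stand-in, not FFTE's Fortran routines) and applies the conventional 5*N*log2(N) operation-count estimate:

    import time
    import numpy as np

    n = 256
    # Random complex input of shape 256 x 256 x 256
    x = (np.random.rand(n, n, n) + 1j * np.random.rand(n, n, n)).astype(np.complex128)

    start = time.perf_counter()
    y = np.fft.fftn(x)          # 3D complex FFT (NumPy, not FFTE)
    elapsed = time.perf_counter() - start

    total_points = n ** 3
    flops = 5 * total_points * np.log2(total_points)   # conventional FFT flop estimate
    print(f"{flops / elapsed / 1e6:.0f} MFLOPS (single transform, single-threaded NumPy)")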

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, more is better):
  Linux 5.10: 182254.44 (SE +/- 1647.45, N = 15; Min: 170039.07 / Max: 195909.97)
  Linux 5.11 Git: 174206.13 (SE +/- 1640.30, N = 15; Min: 163154.93 / Max: 181280.54)
  Linux 5.11 Patched: 178738.12 (SE +/- 1760.31, N = 15; Min: 165715.79 / Max: 190995.74)
  1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Linux 5.10: 0.888428 (SE +/- 0.002999, N = 5; Min: 0.88 / Max: 0.9; reported MIN: 0.77)
  Linux 5.11 Git: 0.881348 (SE +/- 0.005127, N = 5; Min: 0.87 / Max: 0.9; reported MIN: 0.71)
  Linux 5.11 Patched: 0.849248 (SE +/- 0.004000, N = 5; Min: 0.84 / Max: 0.86; reported MIN: 0.73)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 6 (Frames Per Second, more is better):
  Linux 5.10: 1.346 (SE +/- 0.005, N = 3; Min: 1.34 / Max: 1.35)
  Linux 5.11 Git: 1.370 (SE +/- 0.002, N = 3; Min: 1.37 / Max: 1.38)
  Linux 5.11 Patched: 1.408 (SE +/- 0.003, N = 3; Min: 1.4 / Max: 1.41)

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: San Miguel - Renderer: SciVis (FPS, more is better):
  Linux 5.10: 52.63 (SE +/- 0.00, N = 3; Min: 52.63 / Max: 52.63; reported MIN: 24.39 / MAX: 58.82)
  Linux 5.11 Git: 52.63 (SE +/- 0.00, N = 3; Min: 52.63 / Max: 52.63; reported MIN: 27.03 / MAX: 58.82)
  Linux 5.11 Patched: 54.97 (SE +/- 0.58, N = 5; Min: 52.63 / Max: 55.56; reported MIN: 31.25 / MAX: 58.82)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, fewer is better):
  Linux 5.10: 54.97 (SE +/- 0.36, N = 15; Min: 52.96 / Max: 57.3)
  Linux 5.11 Git: 53.86 (SE +/- 0.35, N = 3; Min: 53.31 / Max: 54.49)
  Linux 5.11 Patched: 52.68 (SE +/- 0.69, N = 3; Min: 51.76 / Max: 54.03)
  1. (CXX) g++ options: -O2 -lOpenCL

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Linux 5.10: 1.56970 (SE +/- 0.00943, N = 4; Min: 1.55 / Max: 1.6; reported MIN: 1.31)
  Linux 5.11 Git: 1.62088 (SE +/- 0.01518, N = 4; Min: 1.58 / Max: 1.66; reported MIN: 1.31)
  Linux 5.11 Patched: 1.55447 (SE +/- 0.01340, N = 4; Min: 1.53 / Max: 1.59; reported MIN: 1.29)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Linux 5.10: 365.96 (SE +/- 2.11, N = 10; Min: 358.42 / Max: 376.41)
  Linux 5.11 Git: 381.08 (SE +/- 2.00, N = 10; Min: 364.74 / Max: 385.85)
  Linux 5.11 Patched: 371.48 (SE +/- 1.70, N = 9; Min: 363.2 / Max: 378.55)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
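
A CPU inference pass with the ONNX Runtime Python API looks roughly like the sketch below; the model file name and the 1x416x416x3 input shape are assumptions about the ONNX Zoo yolov4 model rather than values taken from the test profile.

    import time
    import numpy as np
    import onnxruntime as ort

    # Placeholder model path; the test profile fetches yolov4 from the ONNX Zoo.
    sess = ort.InferenceSession("yolov4.onnx")
    inp = sess.get_inputs()[0]
    data = np.random.random_sample([1, 416, 416, 3]).astype(np.float32)  # assumed input layout

    runs = 30
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {inp.name: data})
    elapsed = time.perf_counter() - start
    print(f"{runs / elapsed * 60:.1f} inferences per minute")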

ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute, more is better):
  Linux 5.10: 182 (SE +/- 1.89, N = 12; Min: 169 / Max: 190.5)
  Linux 5.11 Git: 175 (SE +/- 1.60, N = 12; Min: 166 / Max: 184.5)
  Linux 5.11 Patched: 181 (SE +/- 1.86, N = 3; Min: 177.5 / Max: 183.5)
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, fewer is better):
  Linux 5.10: 755450 (SE +/- 1447.51, N = 3; Min: 752568 / Max: 757126)
  Linux 5.11 Git: 765726 (SE +/- 4257.59, N = 3; Min: 757283 / Max: 770904)
  Linux 5.11 Patched: 736285 (SE +/- 5824.36, N = 9; Min: 718390 / Max: 779525)

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG19 - Device: CPU (FPS, more is better):
  Linux 5.10: 21.63 (SE +/- 0.23, N = 15; Min: 19.46 / Max: 22.87)
  Linux 5.11 Git: 22.09 (SE +/- 0.20, N = 15; Min: 20.88 / Max: 23.28)
  Linux 5.11 Patched: 22.49 (SE +/- 0.16, N = 15; Min: 21.2 / Max: 23.35)

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.7 - Input: AUSURF112 (Seconds, fewer is better):
  Linux 5.10: 1197.60 (SE +/- 17.89, N = 9; Min: 1142.24 / Max: 1286.76)
  Linux 5.11 Git: 1217.49 (SE +/- 11.28, N = 3; Min: 1194.93 / Max: 1228.78)
  Linux 5.11 Patched: 1171.03 (SE +/- 12.21, N = 4; Min: 1148.76 / Max: 1205.51)
  1. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, fewer is better):
  Linux 5.10: 61.44 (SE +/- 0.28, N = 3; Min: 61.01 / Max: 61.96)
  Linux 5.11 Git: 60.86 (SE +/- 0.10, N = 3; Min: 60.66 / Max: 61)
  Linux 5.11 Patched: 59.18 (SE +/- 0.17, N = 3; Min: 58.84 / Max: 59.4)

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, more is better):
  Linux 5.10: 1028
  Linux 5.11 Git: 1059
  Linux 5.11 Patched: 1067

x265

This is a simple test of the x265 H.265/HEVC encoder run on the CPU with 1080p and 4K video inputs to measure encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, more is better):
  Linux 5.10: 48.48 (SE +/- 0.26, N = 4; Min: 47.69 / Max: 48.76)
  Linux 5.11 Git: 47.66 (SE +/- 0.42, N = 7; Min: 45.27 / Max: 48.53)
  Linux 5.11 Patched: 49.45 (SE +/- 0.52, N = 4; Min: 48.01 / Max: 50.31)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric (more is better):
  Linux 5.10: 615930
  Linux 5.11 Git: 638971
  Linux 5.11 Patched: 636521
  1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, fewer is better):
  Linux 5.10: 4951 (SE +/- 47.90, N = 6; Min: 4835 / Max: 5159)
  Linux 5.11 Git: 4897 (SE +/- 28.66, N = 18; Min: 4762 / Max: 5223)
  Linux 5.11 Patched: 4778 (SE +/- 43.93, N = 6; Min: 4629 / Max: 4946)

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET (Requests Per Second, more is better):
  Linux 5.10: 1429370.71 (SE +/- 13292.58, N = 7; Min: 1375524.62 / Max: 1469761.38)
  Linux 5.11 Git: 1380890.22 (SE +/- 10410.66, N = 15; Min: 1271941 / Max: 1431871.62)
  Linux 5.11 Patched: 1427348.10 (SE +/- 13176.39, N = 15; Min: 1355752.75 / Max: 1519526)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second, more is better):
  Linux 5.10: 4379 (SE +/- 32.10, N = 3; Min: 4329 / Max: 4439)
  Linux 5.11 Git: 4284 (SE +/- 49.20, N = 4; Min: 4171 / Max: 4398)
  Linux 5.11 Patched: 4433 (SE +/- 36.23, N = 3; Min: 4385 / Max: 4504)
  1. (CXX) g++ options: -flto -pthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p (Frames Per Second, more is better):
  Linux 5.10: 0.094 (SE +/- 0.001, N = 3; Min: 0.09 / Max: 0.1)
  Linux 5.11 Git: 0.092 (SE +/- 0.001, N = 3; Min: 0.09 / Max: 0.09)
  Linux 5.11 Patched: 0.091 (SE +/- 0.001, N = 12; Min: 0.09 / Max: 0.09)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Linux 5.10: 2.32899 (SE +/- 0.02538, N = 3; Min: 2.3 / Max: 2.38; reported MIN: 1.93)
  Linux 5.11 Git: 2.40549 (SE +/- 0.03372, N = 3; Min: 2.35 / Max: 2.46; reported MIN: 1.92)
  Linux 5.11 Patched: 2.33290 (SE +/- 0.01587, N = 3; Min: 2.31 / Max: 2.36; reported MIN: 2)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Linux 5.10: 357.42 (SE +/- 1.89, N = 10; Min: 352.94 / Max: 373.13)
  Linux 5.11 Git: 369.01 (SE +/- 1.11, N = 10; Min: 364.08 / Max: 375.94)
  Linux 5.11 Patched: 364.81 (SE +/- 0.91, N = 10; Min: 361.66 / Max: 369.91)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm