5950X ASUS ROG CROSSHAIR VIII HERO WiFi BIOS

AMD Ryzen 9 5950X 16-Core testing with an ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3202 BIOS) and AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 8GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101203-HA-5950XASUS43
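For example, on a Debian/Ubuntu system the comparison could be set up roughly as follows (a minimal sketch; only the result ID above comes from this page, the install step is standard Phoronix Test Suite packaging):

  # Install the Phoronix Test Suite from the Ubuntu archive (a .deb is also offered at phoronix-test-suite.com)
  sudo apt install phoronix-test-suite

  # Fetch this result file and run the same test selection locally for a side-by-side comparison
  phoronix-test-suite benchmark 2101203-HA-5950XASUS43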

Result Runs
  3003 - ASUS 3003 BIOS - tested January 18 2021 - test run duration: 13 Hours, 6 Minutes
  3202 - ASUS 3202 BIOS - tested January 19 2021 - test run duration: 14 Hours, 21 Minutes


System Details (runs 3003 and 3202 are identical aside from the motherboard BIOS)

  Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
  Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) - 3003 BIOS for run 3003, 3202 BIOS for run 3202
  Chipset: AMD Starship/Matisse
  Memory: 32GB
  Disk: 2000GB Corsair Force MP600 + 2000GB
  Graphics: AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 8GB (2100/875MHz)
  Audio: AMD Navi 10 HDMI Audio
  Monitor: ASUS MG28U
  Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
  OS: Ubuntu 20.10
  Kernel: 5.11.0-051100rc2daily20210108-generic (x86_64) 20210107
  Desktop: GNOME Shell 3.38.1
  Display Server: X Server 1.20.9
  Display Driver: amdgpu 19.1.0
  OpenGL: 4.6 Mesa 21.0.0-devel (git-f01bca8 2021-01-08 groovy-oibaf-ppa) (LLVM 11.0.1)
  Vulkan: 1.2.164
  Compiler: GCC 10.2.0
  File-System: ext4
  Screen Resolution: 3840x2160

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0xa201009
Graphics Details: GLAMOR
Python Details: Python 3.8.6
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling; srbds: Not affected; tsx_async_abort: Not affected
Disk Details: 3202: NONE / errors=remount-ro,relatime,rw / Block Size: 4096

3003 vs. 3202 Comparison (overview chart): the largest deltas between the two BIOS runs were roughly 12.2% (Xonotic, 3840 x 2160 - Ultimate), 10.9% (IOR 2MB), 9.6% (IOR 8MB), 7.1% (IOR 256MB), 6.8% (OpenFOAM Motorbike 30M), 6.6% (IOR 4MB), and 6% (ONNX Runtime bertsquad-10 - OpenMP CPU). The remaining highlighted results (HPC Challenge Rand Ring Bandwidth, EP-STREAM Triad and G-Ffte, oneDNN, IOR 512MB, ONNX Runtime super-resolution-10 and yolov4, Etcpak ETC2 and DXT1, Dolfyn, SQLite Speedtest, Crafty, rav1e, LZ4 compression, Build2, Mobile Neural Network, Timed Eigen Compilation, NCNN, and WebP Image Encode) differed by roughly 2 to 5%.

Condensed results table: the individual results for both runs are charted test-by-test below. Results that appear only in this overview table and are not charted individually (3003 vs. 3202 in each case): WebP Image Encode, Quality 100: 1.741 vs. 1.729 seconds; yquake2, OpenGL 3.x - 3840 x 2160: 979.3 vs. 986.5 FPS; HPC Challenge - Max Ping Pong Bandwidth: 34205.275 vs. 34166.839 MB/s, Rand Ring Bandwidth: 1.92562 vs. 2.02474 GB/s, Rand Ring Latency: 0.48933 vs. 0.49204 usecs, G-Random Access: 0.04998 vs. 0.04973 GUP/s, EP-STREAM Triad: 1.42468 vs. 1.47956 GB/s, G-Ptrans: 2.45326 vs. 2.47841 GB/s, EP-DGEMM: 16.85917 vs. 16.57023 GFLOPS, G-Ffte: 6.23452 vs. 6.55050 GFLOPS.

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0, Test / Class: G-HPL (GFLOPS; more is better) - 3003: 53.17 (SE +/- 0.08, N = 3); 3202: 53.17 (SE +/- 0.05, N = 3). Built against OpenBLAS + Open MPI 4.0.3.

RELION

RELION - REgularised LIkelihood OptimisatioN - is a stand-alone computer program for Maximum A Posteriori refinement of (multiple) 3D reconstructions or 2D class averages in cryo-electron microscopy (cryo-EM). It is developed in the research group of Sjors Scheres at the MRC Laboratory of Molecular Biology. Learn more via the OpenBenchmarking.org test page.

RELION 3.1.1, Test: Basic - Device: CPU (Seconds; fewer is better) - 3003: 1892.61 (SE +/- 3.84, N = 3); 3202: 1875.48 (SE +/- 6.24, N = 3).

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8, Input: Motorbike 60M (Seconds; fewer is better) - 3003: 1391.39 (SE +/- 0.57, N = 3); 3202: 1380.20 (SE +/- 0.41, N = 3).

IOR

IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.

IOR 3.3.0, Block Size: 512MB - Disk Target: Default Test Directory (MB/s; more is better) - 3003: 1748.21 (SE +/- 11.67, N = 3; min 534.9 / max 2360.08); 3202: 1682.82 (SE +/- 22.61, N = 9; min 251.69 / max 2253.72).

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.7, Input: AUSURF112 (Seconds; fewer is better) - 3003: 1199.39 (SE +/- 0.92, N = 3); 3202: 1221.19 (SE +/- 3.58, N = 3).

IOR

IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.

IOR 3.3.0, Block Size: 256MB - Disk Target: Default Test Directory (MB/s; more is better) - 3003: 1339.91 (SE +/- 19.35, N = 9; min 282.98 / max 2236.64); 3202: 1251.65 (SE +/- 14.35, N = 9; min 354.68 / max 2107.13).

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: 20k Atoms (ns/day; more is better) - 3003: 13.54 (SE +/- 0.08, N = 3); 3202: 13.39 (SE +/- 0.03, N = 3).

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute; more is better) - 3003: 7056 (SE +/- 202.78, N = 12); 3202: 7324 (SE +/- 175.44, N = 12).

ONNX Runtime 1.6, Model: bertsquad-10 - Device: OpenMP CPU (Inferences Per Minute; more is better) - 3003: 705 (SE +/- 1.32, N = 3); 3202: 665 (SE +/- 11.02, N = 12).

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. Learn more via the OpenBenchmarking.org test page.

CP2K Molecular Dynamics 8.1, Fayalite-FIST Data (Seconds; fewer is better) - 3003: 787.50; 3202: 801.64.

BRL-CAD

BRL-CAD 7.28.0 is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8, VGR Performance Metric (more is better) - 3003: 265419; 3202: 262742.

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score; more is better) - 3003: 514.13 (SE +/- 0.48, N = 3); 3202: 514.08 (SE +/- 4.15, N = 3).

Tesseract

Tesseract is a fork of Cube 2 Sauerbraten with numerous graphics and game-play improvements. Tesseract has been in development since 2012 while its first release happened in May of 2014. Learn more via the OpenBenchmarking.org test page.

Tesseract 2014-05-12, Resolution: 3840 x 2160 (Frames Per Second; more is better) - 3003: 356.17 (SE +/- 3.53, N = 15); 3202: 353.57 (SE +/- 4.21, N = 15).

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1, Video Input: Chimera 1080p 10-bit (FPS; more is better) - 3003: 96.35 (SE +/- 0.06, N = 3; min 61.49 / max 217.11); 3202: 96.66 (SE +/- 0.05, N = 3; min 61.56 / max 221.22).

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3, Water Benchmark (Ns Per Day; more is better) - 3003: 1.275 (SE +/- 0.001, N = 3); 3202: 1.258 (SE +/- 0.001, N = 3).

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

CloverLeaf, Lagrangian-Eulerian Hydrodynamics (Seconds; fewer is better) - 3003: 136.80 (SE +/- 0.03, N = 3); 3202: 134.74 (SE +/- 0.24, N = 3).

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development began in March of 2010 on the Xonotic game. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.2, Resolution: 3840 x 2160 - Effects Quality: Ultimate (Frames Per Second; more is better) - 3003: 257.46 (SE +/- 4.83, N = 15; min 55 / max 623); 3202: 288.97 (SE +/- 3.79, N = 3; min 60 / max 571).

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
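As a rough sketch of what is being timed (assuming SQLite's speedtest1 utility built from the SQLite source tree; the database path is a placeholder):

  # Run the speedtest1 workload with the enlarged problem size used by this profile
  ./speedtest1 --size 1000 /tmp/speedtest.db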

SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds; fewer is better) - 3003: 41.63 (SE +/- 0.34, N = 3); 3202: 42.86 (SE +/- 0.24, N = 15).

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
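The level-9 rounds below are roughly equivalent to the following lz4 CLI invocations (a sketch; the ISO filename is a placeholder):

  # Compression speed at level 9, keeping the input file
  lz4 -9 -k ubuntu.iso ubuntu.iso.lz4
  # Decompression speed
  lz4 -d -k ubuntu.iso.lz4 restored.iso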

LZ4 Compression 1.9.3, Compression Level: 9 - Decompression Speed (MB/s; more is better) - 3003: 13093.1 (SE +/- 11.50, N = 12); 3202: 12949.0 (SE +/- 11.91, N = 3).

LZ4 Compression 1.9.3, Compression Level: 9 - Compression Speed (MB/s; more is better) - 3003: 68.57 (SE +/- 0.80, N = 12); 3202: 66.95 (SE +/- 0.20, N = 3).

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: fcn-resnet101-11 - Device: OpenMP CPU (Inferences Per Minute; more is better) - 3003: 99; 3202: 98 (one run reported SE +/- 0.33, N = 3).

ONNX Runtime 1.6, Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute; more is better) - 3003: 438 (SE +/- 1.17, N = 3); 3202: 428 (SE +/- 3.35, N = 3).

ONNX Runtime 1.6, Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute; more is better) - 3003: 16013 (SE +/- 20.67, N = 3); 3202: 15863 (SE +/- 68.50, N = 3).

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8, Input: Motorbike 30M (Seconds; fewer is better) - 3003: 104.48 (SE +/- 0.09, N = 3); 3202: 97.87 (SE +/- 0.15, N = 3).

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Exhaustive (Seconds; fewer is better) - 3003: 99.00 (SE +/- 0.10, N = 3); 3202: 100.90 (SE +/- 0.15, N = 3).

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better) - 3003: 1818.11 (SE +/- 20.56, N = 3; min 1764.89); 3202: 1795.37 (SE +/- 19.66, N = 4; min 1755.37).

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1, Pfam Database Search (Seconds; fewer is better) - 3003: 82.70 (SE +/- 0.15, N = 3); 3202: 82.93 (SE +/- 0.07, N = 3).

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta, Resolution: 3840 x 2160 (Frames Per Second; more is better) - 3003: 429.6 (SE +/- 0.72, N = 3); 3202: 430.5 (SE +/- 0.67, N = 3).

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13, Time To Compile (Seconds; fewer is better) - 3003: 80.00 (SE +/- 0.10, N = 3); 3202: 81.88 (SE +/- 0.12, N = 3).

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better) - 3003: 2769.09 (SE +/- 9.46, N = 3; min 2739.1); 3202: 2761.96 (SE +/- 4.41, N = 3; min 2740.57).

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better) - 3003: 2752.88 (SE +/- 11.23, N = 3; min 2722.06); 3202: 2763.90 (SE +/- 9.00, N = 3; min 2736.15).

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms; fewer is better) - 3003: 2753.77 (SE +/- 13.63, N = 3; min 2727.8); 3202: 2749.23 (SE +/- 2.89, N = 3; min 2735.56).

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3, Time To Compile (Seconds; fewer is better) - 3003: 79.20 (SE +/- 0.15, N = 3); 3202: 80.07 (SE +/- 0.31, N = 3).

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4, Speed: 10 (Frames Per Second; more is better) - 3003: 3.362 (SE +/- 0.037, N = 3); 3202: 3.446 (SE +/- 0.037, N = 15).

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better) - 3003: 1783.66 (SE +/- 8.25, N = 3; min 1765.38); 3202: 1789.23 (SE +/- 17.01, N = 3; min 1761.64).

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms; fewer is better) - 3003: 1809.76 (SE +/- 15.33, N = 3; min 1773.21); 3202: 1823.19 (SE +/- 9.60, N = 3; min 1798.09).

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: inception-v3 (ms; fewer is better) - 3003: 31.01 (SE +/- 0.18, N = 3; min 29.92 / max 38.55); 3202: 30.33 (SE +/- 0.26, N = 3; min 29.28 / max 56.48).

Mobile Neural Network 1.1.1, Model: mobilenet-v1-1.0 (ms; fewer is better) - 3003: 2.501 (SE +/- 0.094, N = 3; min 2.32 / max 4.53); 3202: 2.481 (SE +/- 0.027, N = 3; min 2.42 / max 2.71).

Mobile Neural Network 1.1.1, Model: MobileNetV2_224 (ms; fewer is better) - 3003: 3.402 (SE +/- 0.047, N = 3; min 3.23 / max 5.81); 3202: 3.325 (SE +/- 0.039, N = 3; min 3.16 / max 4.02).

Mobile Neural Network 1.1.1, Model: resnet-v2-50 (ms; fewer is better) - 3003: 24.04 (SE +/- 0.25, N = 3; min 21.95 / max 33.08); 3202: 23.63 (SE +/- 0.19, N = 3; min 22.31 / max 33.12).

Mobile Neural Network 1.1.1, Model: SqueezeNetV1.0 (ms; fewer is better) - 3003: 5.232 (SE +/- 0.062, N = 3; min 5.02 / max 8.4); 3202: 5.153 (SE +/- 0.022, N = 3; min 5.02 / max 14.32).

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
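Roughly what the level-19 round below exercises with the zstd CLI (a sketch; the filename is a placeholder, and -T0 to use all threads is an assumption about the test configuration):

  zstd -19 -T0 -k ubuntu.iso -o ubuntu.iso.zst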

Zstd Compression 1.4.5, Compression Level: 19 (MB/s; more is better) - 3003: 43.4 (SE +/- 0.03, N = 3); 3202: 43.3 (SE +/- 0.06, N = 3).

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4, Acceleration: CPU - Scene: Supercar (M samples/s; more is better) - 3003: 8.803 (SE +/- 0.008, N = 3); 3202: 8.669 (SE +/- 0.011, N = 3).

IndigoBench 4.4, Acceleration: CPU - Scene: Bedroom (M samples/s; more is better) - 3003: 4.175 (SE +/- 0.006, N = 3); 3202: 4.132 (SE +/- 0.017, N = 3).

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9, Time To Compile (Seconds; fewer is better) - 3003: 60.05 (SE +/- 0.21, N = 3); 3202: 61.25 (SE +/- 0.01, N = 3).

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: regnety_400m (ms; fewer is better) - 3003: 17.67 (SE +/- 0.04, N = 3; min 17.47 / max 18.02); 3202: 17.95 (SE +/- 0.13, N = 3; min 17.66 / max 19.3).

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms; fewer is better) - 3003: 14.64 (SE +/- 0.03, N = 3; min 14.31 / max 15.31); 3202: 14.60 (SE +/- 0.04, N = 3; min 14.21 / max 15.07).

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms; fewer is better) - 3003: 21.14 (SE +/- 0.11, N = 3; min 20.72 / max 29.47); 3202: 21.29 (SE +/- 0.08, N = 3; min 20.98 / max 21.78).

NCNN 20201218, Target: CPU - Model: resnet50 (ms; fewer is better) - 3003: 25.05 (SE +/- 0.13, N = 3; min 24.65 / max 26.23); 3202: 25.00 (SE +/- 0.17, N = 3; min 24.58 / max 26.31).

NCNN 20201218, Target: CPU - Model: alexnet (ms; fewer is better) - 3003: 11.04 (SE +/- 0.01, N = 3; min 10.95 / max 11.89); 3202: 11.26 (SE +/- 0.24, N = 3; min 10.91 / max 19.04).

NCNN 20201218, Target: CPU - Model: resnet18 (ms; fewer is better) - 3003: 14.51 (SE +/- 0.01, N = 3; min 14.39 / max 15.06); 3202: 14.57 (SE +/- 0.01, N = 3; min 14.45 / max 23.09).

NCNN 20201218, Target: CPU - Model: vgg16 (ms; fewer is better) - 3003: 60.60 (SE +/- 0.13, N = 3; min 59.43 / max 62.32); 3202: 59.90 (SE +/- 0.09, N = 3; min 58.71 / max 61.76).

NCNN 20201218, Target: CPU - Model: googlenet (ms; fewer is better) - 3003: 12.99 (SE +/- 0.00, N = 3; min 12.62 / max 13.57); 3202: 13.00 (SE +/- 0.02, N = 3; min 12.64 / max 20.99).

NCNN 20201218, Target: CPU - Model: blazeface (ms; fewer is better) - 3003: 1.80 (SE +/- 0.00, N = 3; min 1.78 / max 2.28); 3202: 1.82 (SE +/- 0.00, N = 3; min 1.79 / max 2.65).

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms; fewer is better) - 3003: 5.29 (SE +/- 0.00, N = 3; min 5.24 / max 5.79); 3202: 5.37 (SE +/- 0.01, N = 3; min 5.31 / max 7.18).

NCNN 20201218, Target: CPU - Model: mnasnet (ms; fewer is better) - 3003: 3.90 (SE +/- 0.00, N = 3; min 3.78 / max 4.83); 3202: 3.96 (SE +/- 0.01, N = 3; min 3.84 / max 5.26).

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms; fewer is better) - 3003: 4.38 (SE +/- 0.01, N = 3; min 4.33 / max 4.89); 3202: 4.42 (SE +/- 0.01, N = 3; min 4.35 / max 5.28).

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better) - 3003: 4.07 (SE +/- 0.01, N = 3; min 4.03 / max 5.2); 3202: 4.15 (SE +/- 0.00, N = 3; min 4.11 / max 5.03).

NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better) - 3003: 4.38 (SE +/- 0.01, N = 3; min 4.24 / max 7.32); 3202: 4.46 (SE +/- 0.01, N = 3; min 4.31 / max 5.3).

NCNN 20201218, Target: CPU - Model: mobilenet (ms; fewer is better) - 3003: 12.04 (SE +/- 0.15, N = 3; min 11.72 / max 14.17); 3202: 11.99 (SE +/- 0.01, N = 3; min 11.79 / max 12.19).

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns; fewer is better) - 3003: 1.07519 (SE +/- 0.00258, N = 3); 3202: 1.08736 (SE +/- 0.00500, N = 3).

ET: Legacy

ETLegacy is an open-source engine evolution of Wolfenstein: Enemy Territory, a World War II era first person shooter that was released for free by Splash Damage using the id Tech 3 engine. Learn more via the OpenBenchmarking.org test page.

ET: Legacy 2.75, Renderer: Renderer2 - Resolution: 3840 x 2160 (Frames Per Second; more is better) - 3003: 224.3; 3202: no result reported.

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 3 - Decompression Speed (MB/s; more is better) - 3003: 13085.1 (SE +/- 7.36, N = 3); 3202: 12946.4 (SE +/- 2.16, N = 3).

LZ4 Compression 1.9.3, Compression Level: 3 - Compression Speed (MB/s; more is better) - 3003: 68.88 (SE +/- 0.92, N = 3); 3202: 69.10 (SE +/- 0.20, N = 3).

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6, Acceleration: CPU (Seconds; fewer is better) - 3003: 70.48 (SE +/- 0.10, N = 3); 3202: 70.53 (SE +/- 0.04, N = 3).

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM; more is better) - 3003: 72385553 (SE +/- 1023469.37, N = 3); 3202: 72580800 (SE +/- 319563.42, N = 3).

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
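In spirit the timed step is little more than the following (a sketch of a default-configuration build; the kernel source tree is assumed to be the current directory):

  # Generate the default configuration and time a parallel build on all cores
  make defconfig
  time make -j$(nproc)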

Timed Linux Kernel Compilation 5.4, Time To Compile (Seconds; fewer is better) - 3003: 45.79 (SE +/- 0.43, N = 3); 3202: 46.41 (SE +/- 0.49, N = 3).

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit; more is better) - 3003: 207261167 (SE +/- 2159740.57, N = 3); 3202: 210165200 (SE +/- 569155.55, N = 3).

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4, Speed: 5 (Frames Per Second; more is better) - 3003: 1.497 (SE +/- 0.005, N = 3); 3202: 1.505 (SE +/- 0.007, N = 3).

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.10, Input: simple-H2O (Total Execution Time - Seconds; fewer is better) - 3003: 22.31 (SE +/- 0.21, N = 7); 3202: 22.37 (SE +/- 0.14, N = 3).

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 1 - Decompression Speed (MB/s; more is better) - 3003: 13410.2 (SE +/- 19.80, N = 3); 3202: 13319.6 (SE +/- 25.73, N = 3).

LZ4 Compression 1.9.3, Compression Level: 1 - Compression Speed (MB/s; more is better) - 3003: 11911.06 (SE +/- 49.00, N = 3); 3202: 11854.52 (SE +/- 59.07, N = 3).

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5, Compression Level: 3 (MB/s; more is better) - 3003: 4723.9 (SE +/- 8.02, N = 3); 3202: 4738.4 (SE +/- 12.71, N = 3).

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4, Speed: 6 (Frames Per Second; more is better) - 3003: 1.958 (SE +/- 0.005, N = 3); 3202: 1.965 (SE +/- 0.016, N = 3).

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109, Test: VoiceMark_100 (Voices; more is better) - 3003: 958.66 (SE +/- 3.00, N = 3); 3202: 957.95 (SE +/- 4.96, N = 3).

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.
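A comparable manual encode might look like this (a sketch; the Y4M sample filename is a placeholder for whatever 4K source is used):

  x265 --input bosphorus_3840x2160.y4m --output out.hevc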

x265 3.4, Video Input: Bosphorus 4K (Frames Per Second; more is better) - 3003: 24.57 (SE +/- 0.35, N = 3); 3202: 24.18 (SE +/- 0.26, N = 4).

IOR

IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.

IOR 3.3.0, Block Size: 8MB - Disk Target: Default Test Directory (MB/s; more is better) - 3003: 1601.21 (SE +/- 6.28, N = 3; min 1005.82 / max 2534.37); 3202: 1461.18 (SE +/- 28.69, N = 13; min 491.21 / max 2711.8).

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
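The measured operation is roughly the following espeak-ng invocation (a sketch; the text filename is a placeholder):

  # Read a text file and write the synthesized speech to a WAV file
  espeak-ng -f outline_of_science.txt -w speech.wav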

eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis (Seconds; fewer is better) - 3003: 21.56 (SE +/- 0.08, N = 4); 3202: 21.69 (SE +/- 0.08, N = 4).

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better) - 3003: 0.477138 (SE +/- 0.002170, N = 3; min 0.44); 3202: 0.484368 (SE +/- 0.003401, N = 15; min 0.43).

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
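The "Quality 100, Lossless, Highest Compression" setting charted below corresponds roughly to the following cwebp flags (a sketch; the JPEG filename is a placeholder):

  cwebp -q 100 -lossless -m 6 sample_6000x4000.jpg -o sample.webp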

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds; fewer is better) - 3003: 27.23 (SE +/- 0.05, N = 3); 3202: 27.52 (SE +/- 0.04, N = 3).

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0, CoreMark Size 666 - Iterations Per Second (Iterations/Sec; more is better) - 3003: 829763.13 (SE +/- 1722.61, N = 3); 3202: 815726.12 (SE +/- 552.83, N = 3).

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1, PHP Benchmark Suite (Score; more is better) - 3003: 831939 (SE +/- 8175.01, N = 3); 3202: 834019 (SE +/- 7522.35, N = 3).

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20, Post-Processing Benchmark (Mpix/sec; more is better) - 3003: 52.99 (SE +/- 0.08, N = 3); 3202: 53.09 (SE +/- 0.24, N = 3).

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better) - 3003: 2.95002 (SE +/- 0.00671, N = 3; min 2.81); 3202: 2.99402 (SE +/- 0.00911, N = 3; min 2.83).

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms; fewer is better) - 3003: 2.38690 (SE +/- 0.00251, N = 3; min 2.28); 3202: 2.43213 (SE +/- 0.00987, N = 3; min 2.3).

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC2 (Mpx/s; more is better) - 3003: 236.51 (SE +/- 0.59, N = 3); 3202: 245.00 (SE +/- 1.66, N = 3).

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1, Video Input: Chimera 1080p (FPS; more is better) - 3003: 590.32 (SE +/- 0.69, N = 3; min 447.67 / max 749.27); 3202: 592.56 (SE +/- 0.77, N = 3; min 447.8 / max 754.79).

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.
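A comparable manual encode would be something like the following (a sketch; -hh selects WavPack's very high quality mode, and the filenames are placeholders):

  wavpack -hh sample.wav -o sample.wv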

WavPack Audio Encoding 5.3, WAV To WavPack (Seconds; fewer is better) - 3003: 10.95 (SE +/- 0.06, N = 5); 3202: 11.11 (SE +/- 0.03, N = 5).

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2, Elapsed Time (Nodes Per Second; more is better) - 3003: 11736507 (SE +/- 108514.09, N = 3); 3202: 11427830 (SE +/- 40276.62, N = 3).

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1, Video Input: Summer Nature 4K (FPS; more is better) - 3003: 224.40 (SE +/- 0.49, N = 3; min 172.58 / max 234.67); 3202: 228.62 (SE +/- 0.36, N = 3; min 172.54 / max 238.82).

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6, WAV To APE (Seconds; fewer is better) - 3003: 9.805 (SE +/- 0.045, N = 5); 3202: 9.851 (SE +/- 0.041, N = 5).

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Thorough (Seconds; fewer is better) - 3003: 12.47 (SE +/- 0.03, N = 3); 3202: 12.67 (SE +/- 0.02, N = 3).

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: MobileNet v2 (ms; fewer is better) - 3003: 218.29 (SE +/- 0.32, N = 3; min 208.52 / max 289.05); 3202: 220.33 (SE +/- 0.67, N = 3; min 216.95 / max 261.2).

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds; fewer is better) - 3003: 15.18 (SE +/- 0.07, N = 3); 3202: 15.23 (SE +/- 0.18, N = 3).

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms; fewer is better) - 3003: 3.98022 (SE +/- 0.01644, N = 3; min 3.72); 3202: 3.97992 (SE +/- 0.00985, N = 3; min 3.76).

oneDNN 2.0, Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better) - 3003: 0.816147 (SE +/- 0.001756, N = 3; min 0.74); 3202: 0.832251 (SE +/- 0.001536, N = 3; min 0.75).

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: SqueezeNet v1.1 (ms; fewer is better) - 3003: 211.46 (SE +/- 0.43, N = 3; min 210.71 / max 212.37); 3202: 212.62 (SE +/- 0.34, N = 3; min 211.89 / max 213.24).

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC1 + Dithering (Mpx/s; more is better) - 3003: 349.70 (SE +/- 0.36, N = 3); 3202: 355.44 (SE +/- 3.87, N = 3).

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time - Seconds; fewer is better) - 3003: 13.04 (SE +/- 0.01, N = 3); 3202: 12.87 (SE +/- 0.05, N = 3).

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code of modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos that are bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527, Computational Fluid Dynamics (Seconds; fewer is better) - 3003: 12.90 (SE +/- 0.10, N = 3); 3202: 13.29 (SE +/- 0.02, N = 3).

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC1 (Mpx/s; more is better) - 3003: 382.93 (SE +/- 1.98, N = 3); 3202: 387.14 (SE +/- 1.60, N = 3).

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 1080p (Frames Per Second; more is better) - 3003: 47.87 (SE +/- 0.08, N = 3); 3202: 47.62 (SE +/- 0.22, N = 3).

IOR

IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.

IOR 3.3.0, Block Size: 4MB - Disk Target: Default Test Directory (MB/s; more is better) - 3003: 1580.25 (SE +/- 5.96, N = 3; min 1161.4 / max 2244.12); 3202: 1482.95 (SE +/- 11.11, N = 11; min 955.9 / max 2484.92).

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms; fewer is better) - 3003: 0.625862 (SE +/- 0.000749, N = 3; min 0.6); 3202: 0.636735 (SE +/- 0.000300, N = 3; min 0.6).

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better) - 3003: 1.45970 (SE +/- 0.00059, N = 3; min 1.39); 3202: 1.48786 (SE +/- 0.00214, N = 3; min 1.39).

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s; more is better) - 3003: 5041.86 (SE +/- 59.80, N = 3); 3202: 5066.74 (SE +/- 66.75, N = 3).

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
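The measured step is roughly the following opus-tools invocation (a sketch; the WAV filename is a placeholder):

  opusenc sample.wav sample.opus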

Opus Codec Encoding 1.3.1, WAV To Opus Encode (Seconds; fewer is better) - 3003: 6.126 (SE +/- 0.032, N = 5); 3202: 6.150 (SE +/- 0.042, N = 5).

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: Rhodopsin Protein (ns/day; more is better) - 3003: 13.10 (SE +/- 0.15, N = 15); 3202: 13.11 (SE +/- 0.13, N = 15).

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms; fewer is better) - 3003: 9.48452 (SE +/- 0.01431, N = 3; min 9.38); 3202: 9.51868 (SE +/- 0.00745, N = 3; min 9.44).

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1, Video Input: Summer Nature 1080p (FPS; more is better) - 3003: 534.95 (SE +/- 4.29, N = 3; min 432.57 / max 611.74); 3202: 535.52 (SE +/- 1.59, N = 3; min 453.14 / max 589.94).

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  3003: 19.13 (SE +/- 0.02, N = 3, MIN: 18.77)
  3202: 19.26 (SE +/- 0.04, N = 3, MIN: 18.8)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  3003: 17.23 (SE +/- 0.01, N = 3, MIN: 16.81)
  3202: 17.29 (SE +/- 0.02, N = 3, MIN: 16.66)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
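For reference, a sketch of timing cwebp from Python in the spirit of the "Quality 100, Highest Compression" configuration; the input file is hypothetical, and -q/-m are cwebp's standard quality and compression-method options rather than the test profile's verbatim arguments.

    import subprocess
    import time

    # Hypothetical 6000x4000 JPEG input; cwebp ships with libwebp.
    jpeg_input = "sample.jpg"
    webp_output = "sample.webp"

    start = time.perf_counter()
    # -q 100 selects quality 100, -m 6 the slowest/strongest compression method
    subprocess.run(["cwebp", "-q", "100", "-m", "6", jpeg_input, "-o", webp_output],
                   check=True)
    print(f"Encode Time: {time.perf_counter() - start:.3f} seconds")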

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, fewer is better)
  3003: 5.465 (SE +/- 0.059, N = 3)
  3202: 5.360 (SE +/- 0.062, N = 3)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Medium (Seconds, fewer is better)
  3003: 5.35 (SE +/- 0.04, N = 3)
  3202: 5.43 (SE +/- 0.02, N = 3)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

ASTC Encoder 2.0 - Preset: Fast (Seconds, fewer is better)
  3003: 4.11 (SE +/- 0.01, N = 3)
  3202: 4.10 (SE +/- 0.03, N = 3)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," with a focus on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.
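The Mpx/s unit is simply megapixels of texture data compressed per second of wall-clock time; a short worked example with hypothetical numbers:

    # Hypothetical workload: an 8192x8192 texture compressed to DXT1 in 43 ms
    width, height = 8192, 8192
    elapsed_seconds = 0.043

    megapixels = width * height / 1_000_000
    print(f"{megapixels / elapsed_seconds:.2f} Mpx/s")   # ~1560 Mpx/s for these numbers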

Etcpak 0.7 - Configuration: DXT1 (Mpx/s, more is better)
  3003: 1532.41 (SE +/- 2.32, N = 3)
  3202: 1562.89 (SE +/- 21.94, N = 3)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  3003: 1.53431 (SE +/- 0.00153, N = 3, MIN: 1.41)
  3202: 1.59818 (SE +/- 0.00175, N = 3, MIN: 1.48)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  3003: 3.50445 (SE +/- 0.00613, N = 3, MIN: 3.38)
  3202: 3.53182 (SE +/- 0.00421, N = 3, MIN: 3.41)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

IOR

IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.

IOR 3.3.0 - Block Size: 2MB - Disk Target: Default Test Directory (MB/s, more is better)
  3003: 1540.02 (SE +/- 2.40, N = 3, MIN: 1034.78 / MAX: 2149.91)
  3202: 1388.54 (SE +/- 14.36, N = 3, MIN: 890.97 / MAX: 2113.75)
  1. (CC) gcc options: -O2 -lm -pthread -lmpi

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better)
  3003: 1.741 (SE +/- 0.001, N = 3)
  3202: 1.729 (SE +/- 0.004, N = 3)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with a focus on offline and co-op gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: OpenGL 3.x - Resolution: 3840 x 2160 (Frames Per Second, more is better)
  3003: 979.3 (SE +/- 1.03, N = 3)
  3202: 986.5 (SE +/- 1.35, N = 3)
  1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files, though they can be modified. Learn more via the OpenBenchmarking.org test page.
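Of the kernels reported below, EP-STREAM Triad is the classic STREAM bandwidth test, a[i] = b[i] + scalar * c[i], run independently on each MPI rank. A minimal NumPy sketch of the kernel and its GB/s accounting; the array length is an arbitrary assumption rather than HPCC's actual problem size.

    import time
    import numpy as np

    n = 50_000_000               # arbitrary array length (assumption)
    scalar = 3.0
    b = np.random.rand(n)
    c = np.random.rand(n)

    start = time.perf_counter()
    a = b + scalar * c           # the STREAM Triad kernel
    elapsed = time.perf_counter() - start

    # STREAM accounting counts three 8-byte accesses per element (read b, read c, write a);
    # NumPy's temporary for scalar * c adds extra traffic, so treat this as illustrative.
    bytes_moved = 3 * 8 * n
    print(f"{bytes_moved / elapsed / 1e9:.3f} GB/s")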

HPC Challenge 1.5.0 - Test / Class: Max Ping Pong Bandwidth (MB/s, more is better)
  3003: 34205.28 (SE +/- 146.47, N = 3)
  3202: 34166.84 (SE +/- 130.59, N = 3)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. OpenBLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: Random Ring Bandwidth (GB/s, more is better)
  3003: 1.92562 (SE +/- 0.02644, N = 3)
  3202: 2.02474 (SE +/- 0.02919, N = 3)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. OpenBLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: Random Ring Latency (usecs, fewer is better)
  3003: 0.48933 (SE +/- 0.00404, N = 3)
  3202: 0.49204 (SE +/- 0.00234, N = 3)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. OpenBLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: G-Random Access (GUP/s, more is better)
  3003: 0.04998 (SE +/- 0.00046, N = 3)
  3202: 0.04973 (SE +/- 0.00017, N = 3)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. OpenBLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: EP-STREAM Triad (GB/s, more is better)
  3003: 1.42468 (SE +/- 0.00070, N = 3)
  3202: 1.47956 (SE +/- 0.00089, N = 3)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. OpenBLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: G-Ptrans (GB/s, more is better)
  3003: 2.45326 (SE +/- 0.00427, N = 3)
  3202: 2.47841 (SE +/- 0.01009, N = 3)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. OpenBLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: EP-DGEMM (GFLOPS, more is better)
  3003: 16.86 (SE +/- 0.11, N = 3)
  3202: 16.57 (SE +/- 0.17, N = 3)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. OpenBLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: G-Ffte (GFLOPS, more is better)
  3003: 6.23452 (SE +/- 0.10692, N = 3)
  3202: 6.55050 (SE +/- 0.02613, N = 3)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. OpenBLAS + Open MPI 4.0.3

129 Results Shown

HPC Challenge
RELION
OpenFOAM
IOR
Quantum ESPRESSO
IOR
LAMMPS Molecular Dynamics Simulator
ONNX Runtime:
  super-resolution-10 - OpenMP CPU
  bertsquad-10 - OpenMP CPU
CP2K Molecular Dynamics
BRL-CAD
Numpy Benchmark
Tesseract
dav1d
GROMACS
CloverLeaf
Xonotic
SQLite Speedtest
LZ4 Compression:
  9 - Decompression Speed
  9 - Compression Speed
ONNX Runtime:
  fcn-resnet101-11 - OpenMP CPU
  yolov4 - OpenMP CPU
  shufflenet-v2-10 - OpenMP CPU
OpenFOAM
ASTC Encoder
oneDNN
Timed HMMer Search
Warsow
Build2
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
Timed Godot Game Engine Compilation
rav1e
oneDNN:
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  resnet-v2-50
  SqueezeNetV1.0
Zstd Compression
IndigoBench:
  CPU - Supercar
  CPU - Bedroom
Timed Eigen Compilation
NCNN:
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
NAMD
ET: Legacy
LZ4 Compression:
  3 - Decompression Speed
  3 - Compression Speed
DeepSpeech
Kripke
Timed Linux Kernel Compilation
Algebraic Multi-Grid Benchmark
rav1e
QMCPACK
LZ4 Compression:
  1 - Decompression Speed
  1 - Compression Speed
Zstd Compression
rav1e
Google SynthMark
x265
IOR
eSpeak-NG Speech Engine
oneDNN
WebP Image Encode
Coremark
PHPBench
LibRaw
oneDNN:
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
Etcpak
dav1d
WavPack Audio Encoding
Crafty
dav1d
Monkey Audio Encoding
ASTC Encoder
TNN
RNNoise
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
TNN
Etcpak
WebP Image Encode
Dolfyn
Etcpak
x265
IOR
oneDNN:
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
LULESH
Opus Codec Encoding
LAMMPS Molecular Dynamics Simulator
oneDNN
dav1d
oneDNN:
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
WebP Image Encode
ASTC Encoder:
  Medium
  Fast
Etcpak
oneDNN:
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
IOR
WebP Image Encode
yquake2
HPC Challenge:
  Max Ping Pong Bandwidth
  Rand Ring Bandwidth
  Rand Ring Latency
  G-Rand Access
  EP-STREAM Triad
  G-Ptrans
  EP-DGEMM
  G-Ffte