5950X ASUS ROG CROSSHAIR VIII HERO WiFi BIOS

AMD Ryzen 9 5950X 16-Core testing with an ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3202 BIOS) and AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 8GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101203-HA-5950XASUS43
Result Identifier - Date - Test Run Duration
  3003 - January 18 2021 - 13 Hours, 6 Minutes
  3202 - January 19 2021 - 14 Hours, 21 Minutes

5950X ASUS ROG CROSSHAIR VIII HERO WiFi BIOS - System Details (runs 3003 and 3202)

Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) - 3003 BIOS for run 3003, 3202 BIOS for run 3202
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 2000GB Corsair Force MP600 + 2000GB
Graphics: AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 8GB (2100/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 20.10
Kernel: 5.11.0-051100rc2daily20210108-generic (x86_64) 20210107
Desktop: GNOME Shell 3.38.1
Display Server: X Server 1.20.9
Display Driver: amdgpu 19.1.0
OpenGL: 4.6 Mesa 21.0.0-devel (git-f01bca8 2021-01-08 groovy-oibaf-ppa) (LLVM 11.0.1)
Vulkan: 1.2.164
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 3840x2160

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201009
Graphics Details: GLAMOR
Python Details: Python 3.8.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Disk Details: 3202: NONE / errors=remount-ro,relatime,rw / Block Size: 4096

[Overview chart: 3003 vs. 3202 comparison across the most notable results, with deltas of up to roughly 12.2%. The largest swings include the IOR block sizes (2MB: 10.9%, 8MB: 9.6%, 256MB: 7.1%, 4MB: 6.6%) and ONNX bertsquad-10 (6%); the remaining notable results differ by about 2-5%.]

[Condensed results table for runs 3003 and 3202, covering: IOR, OpenFOAM, ONNX Runtime, HPC Challenge, oneDNN, Etcpak, Dolfyn, SQLite Speedtest, Crafty, rav1e, LZ4 and Zstd compression, Build2, Mobile Neural Network, Timed Eigen / Linux Kernel / Godot compilation, NCNN, WebP, ASTC Encoder, dav1d, Quantum ESPRESSO, CP2K, Coremark, x265, IndigoBench, CloverLeaf, WavPack, AMG, GROMACS, NAMD, LAMMPS, BRL-CAD, TNN, RELION, Tesseract, yquake2, ET: Legacy, Warsow, Xonotic, eSpeak, LULESH, Monkey Audio and Opus encoding, RNNoise, HMMER, Kripke, QMCPACK, PHPBench, LibRaw, DeepSpeech, SynthMark, and NumPy. Detailed per-test results follow below.]

IOR

IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.
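
As a rough illustration of the access pattern IOR exercises (and not IOR itself), the sketch below has every MPI rank write its own 2MB block of one shared file at a rank-dependent offset, using the standard MPI-IO calls; the file name and payload are invented for the example. Build with mpicxx and run under mpirun.

    // Minimal MPI-IO sketch of IOR-style parallel writes: each rank owns a
    // non-overlapping 2MB region of a shared file (matching the 2MB block-size
    // run above). "ior-sketch.dat" is an arbitrary example file name.
    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const MPI_Offset block = 2 * 1024 * 1024;   // 2MB per rank
        std::vector<char> buf(block, 'x');          // payload to write

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "ior-sketch.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        // Rank-dependent offset keeps the writes non-overlapping.
        MPI_File_write_at(fh, rank * block, buf.data(), (int)block,
                          MPI_BYTE, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }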

IOR 3.3.0 - Block Size: 2MB - Disk Target: Default Test Directory (MB/s, more is better):
  3003: 1540.02 (SE +/- 2.40, N = 3; min/avg/max: 1537.03 / 1540.02 / 1544.76; MIN: 1034.78 / MAX: 2149.91)
  3202: 1388.54 (SE +/- 14.36, N = 3; min/avg/max: 1359.84 / 1388.54 / 1403.96; MIN: 890.97 / MAX: 2113.75)
  1. (CC) gcc options: -O2 -lm -pthread -lmpi

IOR 3.3.0 - Block Size: 256MB - Disk Target: Default Test Directory (MB/s, more is better):
  3003: 1339.91 (SE +/- 19.35, N = 9; min/avg/max: 1233.24 / 1339.91 / 1405.6; MIN: 282.98 / MAX: 2236.64)
  3202: 1251.65 (SE +/- 14.35, N = 9; min/avg/max: 1210.12 / 1251.65 / 1355.04; MIN: 354.68 / MAX: 2107.13)
  1. (CC) gcc options: -O2 -lm -pthread -lmpi

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 30M (Seconds, fewer is better):
  3202: 97.87 (SE +/- 0.15, N = 3; min/avg/max: 97.7 / 97.87 / 98.17)
  3003: 104.48 (SE +/- 0.09, N = 3; min/avg/max: 104.34 / 104.48 / 104.64)
  1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

IOR

IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.

IOR 3.3.0 - Block Size: 4MB - Disk Target: Default Test Directory (MB/s, more is better):
  3003: 1580.25 (SE +/- 5.96, N = 3; min/avg/max: 1571.53 / 1580.25 / 1591.64; MIN: 1161.4 / MAX: 2244.12)
  3202: 1482.95 (SE +/- 11.11, N = 11; min/avg/max: 1384.65 / 1482.95 / 1513.25; MIN: 955.9 / MAX: 2484.92)
  1. (CC) gcc options: -O2 -lm -pthread -lmpi

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: bertsquad-10 - Device: OpenMP CPU (Inferences Per Minute, more is better):
  3003: 705 (SE +/- 1.32, N = 3; min/avg/max: 702.5 / 705 / 707)
  3202: 665 (SE +/- 11.02, N = 12; min/avg/max: 605.5 / 664.63 / 704.5)
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.
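
For a feel of what one of the HPCC components measures, here is a minimal sketch of the STREAM Triad kernel underlying the EP-STREAM Triad result reported below: a[i] = b[i] + q * c[i] over large arrays, with bandwidth computed from bytes moved. The array size and timing are illustrative, not the HPCC configuration; compile with -fopenmp to thread the loop.

    // STREAM Triad sketch: 2 loads + 1 store per element, 24 bytes of traffic
    // per double. Not the HPCC harness, just the kernel it times.
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const size_t n = 20'000'000;                 // large enough to defeat caches
        std::vector<double> a(n), b(n, 1.0), c(n, 2.0);
        const double q = 3.0;

        auto t0 = std::chrono::steady_clock::now();
        #pragma omp parallel for
        for (long i = 0; i < (long)n; ++i)
            a[i] = b[i] + q * c[i];                  // the triad
        auto t1 = std::chrono::steady_clock::now();

        double sec = std::chrono::duration<double>(t1 - t0).count();
        double gbs = 3.0 * n * sizeof(double) / sec / 1e9;
        std::printf("Triad: %.2f GB/s\n", gbs);
        return 0;
    }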

HPC Challenge 1.5.0 - Test / Class: Random Ring Bandwidth (GB/s, more is better):
  3202: 2.02474 (SE +/- 0.02919, N = 3; min/avg/max: 1.98 / 2.02 / 2.08)
  3003: 1.92562 (SE +/- 0.02644, N = 3; min/avg/max: 1.89 / 1.93 / 1.98)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. OpenBLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: G-Ffte (GFLOPS, more is better):
  3202: 6.55050 (SE +/- 0.02613, N = 3; min/avg/max: 6.5 / 6.55 / 6.59)
  3003: 6.23452 (SE +/- 0.10692, N = 3; min/avg/max: 6.1 / 6.23 / 6.45)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. OpenBLAS + Open MPI 4.0.3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
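
A hedged sketch of driving oneDNN directly rather than through benchdnn follows: a single f32 matmul primitive on the CPU engine, written against the v2.x C++ API (matmul::desc was removed in later 3.x releases). The matrix sizes are arbitrary illustration values, not a benchdnn problem shape.

    // Minimal oneDNN (v2.x API) matmul on the CPU engine: C = A x B in f32.
    #include <dnnl.hpp>
    #include <vector>

    int main() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);
        stream strm(eng);

        const memory::dim M = 64, K = 128, N = 32;   // illustrative sizes
        memory::desc a_md({M, K}, memory::data_type::f32, memory::format_tag::ab);
        memory::desc b_md({K, N}, memory::data_type::f32, memory::format_tag::ab);
        memory::desc c_md({M, N}, memory::data_type::f32, memory::format_tag::ab);

        std::vector<float> a(M * K, 1.0f), b(K * N, 0.5f), c(M * N, 0.0f);
        memory a_mem(a_md, eng, a.data());
        memory b_mem(b_md, eng, b.data());
        memory c_mem(c_md, eng, c.data());

        matmul::desc md(a_md, b_md, c_md);           // describe the operation
        matmul::primitive_desc pd(md, eng);          // let oneDNN pick an impl
        matmul(pd).execute(strm, {{DNNL_ARG_SRC, a_mem},
                                  {DNNL_ARG_WEIGHTS, b_mem},
                                  {DNNL_ARG_DST, c_mem}});
        strm.wait();                                 // results now in c
        return 0;
    }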

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  3003: 1.53431 (SE +/- 0.00153, N = 3; min/avg/max: 1.53 / 1.53 / 1.54; MIN: 1.41)
  3202: 1.59818 (SE +/- 0.00175, N = 3; min/avg/max: 1.59 / 1.6 / 1.6; MIN: 1.48)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

IOR

IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.

IOR 3.3.0 - Block Size: 512MB - Disk Target: Default Test Directory (MB/s, more is better):
  3003: 1748.21 (SE +/- 11.67, N = 3; min/avg/max: 1725.18 / 1748.21 / 1763; MIN: 534.9 / MAX: 2360.08)
  3202: 1682.82 (SE +/- 22.61, N = 9; min/avg/max: 1598.74 / 1682.82 / 1826.85; MIN: 251.69 / MAX: 2253.72)
  1. (CC) gcc options: -O2 -lm -pthread -lmpi

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0 - Test / Class: EP-STREAM Triad (GB/s, more is better):
  3202: 1.47956 (SE +/- 0.00089, N = 3; min/avg/max: 1.48 / 1.48 / 1.48)
  3003: 1.42468 (SE +/- 0.00070, N = 3; min/avg/max: 1.42 / 1.42 / 1.43)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. OpenBLAS + Open MPI 4.0.3

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC2 (Mpx/s, more is better):
  3202: 245.00 (SE +/- 1.66, N = 3; min/avg/max: 241.7 / 245 / 246.89)
  3003: 236.51 (SE +/- 0.59, N = 3; min/avg/max: 235.9 / 236.51 / 237.69)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code using modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, fewer is better):
  3003: 12.90 (SE +/- 0.10, N = 3; min/avg/max: 12.72 / 12.9 / 13.07)
  3202: 13.29 (SE +/- 0.02, N = 3; min/avg/max: 13.27 / 13.29 / 13.33)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better):
  3003: 41.63 (SE +/- 0.34, N = 3; min/avg/max: 41 / 41.63 / 42.16)
  3202: 42.86 (SE +/- 0.24, N = 15; min/avg/max: 41.69 / 42.86 / 46.04)
  1. (CC) gcc options: -O2 -ldl -lz -lpthread

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, more is better):
  3003: 11736507 (SE +/- 108514.09, N = 3; min/avg/max: 11606893 / 11736507.33 / 11952066)
  3202: 11427830 (SE +/- 40276.62, N = 3; min/avg/max: 11381705 / 11427830 / 11508085)
  1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 10 (Frames Per Second, more is better):
  3202: 3.446 (SE +/- 0.037, N = 15; min/avg/max: 3.27 / 3.45 / 3.73)
  3003: 3.362 (SE +/- 0.037, N = 3; min/avg/max: 3.29 / 3.36 / 3.41)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
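
As a hedged sketch of what this test exercises, the snippet below round-trips a buffer through the lz4/lz4hc C API (usable from C++ as-is): LZ4 HC compression at level 9, matching the "Compression Level: 9" runs, then LZ4_decompress_safe. The synthetic input buffer stands in for the Ubuntu ISO the test actually uses; link with -llz4.

    // LZ4 HC level-9 compress + safe decompress of a synthetic 1MB buffer.
    #include <lz4.h>
    #include <lz4hc.h>
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        std::string input(1 << 20, 'A');                  // 1MB, highly compressible
        int bound = LZ4_compressBound((int)input.size()); // worst-case output size
        std::vector<char> compressed(bound);

        int csize = LZ4_compress_HC(input.data(), compressed.data(),
                                    (int)input.size(), bound, 9);   // level 9
        std::vector<char> restored(input.size());
        int dsize = LZ4_decompress_safe(compressed.data(), restored.data(),
                                        csize, (int)restored.size());

        std::printf("in=%zu compressed=%d restored=%d\n", input.size(), csize, dsize);
        return 0;
    }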

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, more is better):
  3003: 68.57 (SE +/- 0.80, N = 12; min/avg/max: 61.77 / 68.57 / 72.7)
  3202: 66.95 (SE +/- 0.20, N = 3; min/avg/max: 66.56 / 66.95 / 67.22)
  1. (CC) gcc options: -O3

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, fewer is better):
  3003: 80.00 (SE +/- 0.10, N = 3; min/avg/max: 79.89 / 80 / 80.19)
  3202: 81.88 (SE +/- 0.12, N = 3; min/avg/max: 81.68 / 81.88 / 82.1)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute, more is better):
  3003: 438 (SE +/- 1.17, N = 3; min/avg/max: 435.5 / 437.67 / 439.5)
  3202: 428 (SE +/- 3.35, N = 3; min/avg/max: 422.5 / 427.83 / 434)
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1 - Model: MobileNetV2_224 (ms, fewer is better):
  3202: 3.325 (SE +/- 0.039, N = 3; min/avg/max: 3.28 / 3.33 / 3.4; MIN: 3.16 / MAX: 4.02)
  3003: 3.402 (SE +/- 0.047, N = 3; min/avg/max: 3.34 / 3.4 / 3.49; MIN: 3.23 / MAX: 5.81)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.1 - Model: inception-v3 (ms, fewer is better):
  3202: 30.33 (SE +/- 0.26, N = 3; min/avg/max: 30.02 / 30.33 / 30.85; MIN: 29.28 / MAX: 56.48)
  3003: 31.01 (SE +/- 0.18, N = 3; min/avg/max: 30.68 / 31 / 31.31; MIN: 29.92 / MAX: 38.55)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, fewer is better):
  3003: 60.05 (SE +/- 0.21, N = 3; min/avg/max: 59.83 / 60.05 / 60.48)
  3202: 61.25 (SE +/- 0.01, N = 3; min/avg/max: 61.24 / 61.25 / 61.27)

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
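
A hedged sketch of CPU inference through the ncnn C++ API, the path these NCNN results exercise, is below. The file names "model.param"/"model.bin" and the blob names "data"/"prob" are placeholders that depend on the converted model; the pixel buffer is synthetic.

    // ncnn CPU inference sketch: load a converted model, feed a fake image,
    // extract one output blob. Names are model-specific placeholders.
    #include "net.h"          // ncnn
    #include <vector>

    int main() {
        ncnn::Net net;
        net.opt.num_threads = 32;                 // e.g. one per hardware thread
        if (net.load_param("model.param") || net.load_model("model.bin"))
            return 1;                             // model files not found

        // Fake a 224x224 BGR input (inference normally starts from pixels).
        std::vector<unsigned char> pixels(224 * 224 * 3, 128);
        ncnn::Mat in = ncnn::Mat::from_pixels(pixels.data(), ncnn::Mat::PIXEL_BGR,
                                              224, 224);

        ncnn::Extractor ex = net.create_extractor();
        ex.input("data", in);                     // input blob name is model-specific
        ncnn::Mat out;
        ex.extract("prob", out);                  // runs the network on the CPU
        return 0;
    }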

NCNN 20201218 - Target: CPU - Model: alexnet (ms, fewer is better):
  3003: 11.04 (SE +/- 0.01, N = 3; min/avg/max: 11.03 / 11.04 / 11.05; MIN: 10.95 / MAX: 11.89)
  3202: 11.26 (SE +/- 0.24, N = 3; min/avg/max: 11.01 / 11.26 / 11.73; MIN: 10.91 / MAX: 19.04)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: DXT1 (Mpx/s, more is better):
  3202: 1562.89 (SE +/- 21.94, N = 3; min/avg/max: 1519.08 / 1562.89 / 1587.09)
  3003: 1532.41 (SE +/- 2.32, N = 3; min/avg/max: 1528.37 / 1532.41 / 1536.39)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  3003: 0.816147 (SE +/- 0.001756, N = 3; min/avg/max: 0.81 / 0.82 / 0.82; MIN: 0.74)
  3202: 0.832251 (SE +/- 0.001536, N = 3; min/avg/max: 0.83 / 0.83 / 0.84; MIN: 0.75)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better):
  3003: 4.07 (SE +/- 0.01, N = 3; min/avg/max: 4.06 / 4.07 / 4.08; MIN: 4.03 / MAX: 5.2)
  3202: 4.15 (SE +/- 0.00, N = 3; min/avg/max: 4.15 / 4.15 / 4.16; MIN: 4.11 / MAX: 5.03)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
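
For reference, a hedged sketch of the libwebp encode API that sits behind cwebp: a quality-100 RGB encode of an in-memory buffer. The small synthetic image stands in for the 6000x4000 JPEG sample the test actually feeds in; link with -lwebp.

    // Quality-100 WebP encode of a synthetic RGB buffer via libwebp.
    #include <webp/encode.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const int w = 640, h = 480;
        std::vector<uint8_t> rgb(w * h * 3, 200);     // flat gray image

        uint8_t* out = nullptr;
        size_t n = WebPEncodeRGB(rgb.data(), w, h, w * 3,
                                 /*quality_factor=*/100.0f, &out);
        if (n == 0) return 1;                         // encode failed

        std::printf("webp output: %zu bytes\n", n);
        WebPFree(out);                                // release encoder output
        return 0;
    }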

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, fewer is better):
  3202: 5.360 (SE +/- 0.062, N = 3; min/avg/max: 5.24 / 5.36 / 5.45)
  3003: 5.465 (SE +/- 0.059, N = 3; min/avg/max: 5.39 / 5.47 / 5.58)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  3003: 1.45970 (SE +/- 0.00059, N = 3; min/avg/max: 1.46 / 1.46 / 1.46; MIN: 1.39)
  3202: 1.48786 (SE +/- 0.00214, N = 3; min/avg/max: 1.48 / 1.49 / 1.49; MIN: 1.39)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, fewer is better):
  3003: 99.00 (SE +/- 0.10, N = 3; min/avg/max: 98.81 / 99 / 99.15)
  3202: 100.90 (SE +/- 0.15, N = 3; min/avg/max: 100.61 / 100.9 / 101.12)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  3003: 2.38690 (SE +/- 0.00251, N = 3; min/avg/max: 2.38 / 2.39 / 2.39; MIN: 2.28)
  3202: 2.43213 (SE +/- 0.00987, N = 3; min/avg/max: 2.41 / 2.43 / 2.45; MIN: 2.3)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 4K (FPS, more is better):
  3202: 228.62 (SE +/- 0.36, N = 3; min/avg/max: 228.2 / 228.62 / 229.33; MIN: 172.54 / MAX: 238.82)
  3003: 224.40 (SE +/- 0.49, N = 3; min/avg/max: 223.42 / 224.4 / 224.91; MIN: 172.58 / MAX: 234.67)
  1. (CC) gcc options: -pthread

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better):
  3003: 4.38 (SE +/- 0.01, N = 3; min/avg/max: 4.37 / 4.38 / 4.39; MIN: 4.24 / MAX: 7.32)
  3202: 4.46 (SE +/- 0.01, N = 3; min/avg/max: 4.44 / 4.46 / 4.47; MIN: 4.31 / MAX: 5.3)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.7 - Input: AUSURF112 (Seconds, fewer is better):
  3003: 1199.39 (SE +/- 0.92, N = 3; min/avg/max: 1198.13 / 1199.39 / 1201.17)
  3202: 1221.19 (SE +/- 3.58, N = 3; min/avg/max: 1216.7 / 1221.19 / 1228.26)
  1. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. Learn more via the OpenBenchmarking.org test page.

CP2K Molecular Dynamics 8.1 - Fayalite-FIST Data (Seconds, fewer is better):
  3003: 787.50
  3202: 801.64

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0 - Test / Class: EP-DGEMM (GFLOPS, more is better):
  3003: 16.86 (SE +/- 0.11, N = 3; min/avg/max: 16.68 / 16.86 / 17.06)
  3202: 16.57 (SE +/- 0.17, N = 3; min/avg/max: 16.23 / 16.57 / 16.8)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. OpenBLAS + Open MPI 4.0.3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better):
  3003: 0.625862 (SE +/- 0.000749, N = 3; min/avg/max: 0.62 / 0.63 / 0.63; MIN: 0.6)
  3202: 0.636735 (SE +/- 0.000300, N = 3; min/avg/max: 0.64 / 0.64 / 0.64; MIN: 0.6)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better):
  3003: 829763.13 (SE +/- 1722.61, N = 3; min/avg/max: 826659.78 / 829763.13 / 832610.58)
  3202: 815726.12 (SE +/- 552.83, N = 3; min/avg/max: 814629.39 / 815726.12 / 816395.95)
  1. (CC) gcc options: -O2 -lrt" -lrt

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1 - Model: resnet-v2-50 (ms, fewer is better):
  3202: 23.63 (SE +/- 0.19, N = 3; min/avg/max: 23.37 / 23.63 / 24.01; MIN: 22.31 / MAX: 33.12)
  3003: 24.04 (SE +/- 0.25, N = 3; min/avg/max: 23.7 / 24.04 / 24.53; MIN: 21.95 / MAX: 33.08)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC1 + Dithering (Mpx/s, more is better):
  3202: 355.44 (SE +/- 3.87, N = 3; min/avg/max: 351.54 / 355.44 / 363.19)
  3003: 349.70 (SE +/- 0.36, N = 3; min/avg/max: 349.29 / 349.7 / 350.41)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K options for measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better):
  3003: 24.57 (SE +/- 0.35, N = 3; min/avg/max: 23.89 / 24.57 / 25.03)
  3202: 24.18 (SE +/- 0.26, N = 4; min/avg/max: 23.49 / 24.18 / 24.72)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Thorough (Seconds, fewer is better):
  3003: 12.47 (SE +/- 0.03, N = 3; min/avg/max: 12.4 / 12.47 / 12.51)
  3202: 12.67 (SE +/- 0.02, N = 3; min/avg/max: 12.65 / 12.67 / 12.7)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, fewer is better):
  3003: 17.67 (SE +/- 0.04, N = 3; min/avg/max: 17.63 / 17.67 / 17.75; MIN: 17.47 / MAX: 18.02)
  3202: 17.95 (SE +/- 0.13, N = 3; min/avg/max: 17.78 / 17.95 / 18.21; MIN: 17.66 / MAX: 19.3)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, more is better):
  3003: 8.803 (SE +/- 0.008, N = 3; min/avg/max: 8.79 / 8.8 / 8.82)
  3202: 8.669 (SE +/- 0.011, N = 3; min/avg/max: 8.65 / 8.67 / 8.68)

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, fewer is better):
  3003: 3.90 (SE +/- 0.00, N = 3; min/avg/max: 3.89 / 3.9 / 3.9; MIN: 3.78 / MAX: 4.83)
  3202: 3.96 (SE +/- 0.01, N = 3; min/avg/max: 3.95 / 3.96 / 3.98; MIN: 3.84 / MAX: 5.26)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1 - Model: SqueezeNetV1.0 (ms, fewer is better):
  3202: 5.153 (SE +/- 0.022, N = 3; min/avg/max: 5.11 / 5.15 / 5.18; MIN: 5.02 / MAX: 14.32)
  3003: 5.232 (SE +/- 0.062, N = 3; min/avg/max: 5.14 / 5.23 / 5.35; MIN: 5.02 / MAX: 8.4)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

CloverLeaf - Lagrangian-Eulerian Hydrodynamics (Seconds, fewer is better):
  3202: 134.74 (SE +/- 0.24, N = 3; min/avg/max: 134.4 / 134.74 / 135.19)
  3003: 136.80 (SE +/- 0.03, N = 3; min/avg/max: 136.76 / 136.8 / 136.86)
  1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  3003: 0.477138 (SE +/- 0.002170, N = 3; min/avg/max: 0.47 / 0.48 / 0.48; MIN: 0.44)
  3202: 0.484368 (SE +/- 0.003401, N = 15; min/avg/max: 0.45 / 0.48 / 0.5; MIN: 0.43)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better):
  3003: 5.29 (SE +/- 0.00, N = 3; min/avg/max: 5.28 / 5.29 / 5.29; MIN: 5.24 / MAX: 5.79)
  3202: 5.37 (SE +/- 0.01, N = 3; min/avg/max: 5.35 / 5.37 / 5.38; MIN: 5.31 / MAX: 7.18)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, fewer is better):
  3003: 10.95 (SE +/- 0.06, N = 5; min/avg/max: 10.73 / 10.95 / 11.09)
  3202: 11.11 (SE +/- 0.03, N = 5; min/avg/max: 11.01 / 11.11 / 11.15)
  1. (CXX) g++ options: -rdynamic

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Medium (Seconds, fewer is better):
  3003: 5.35 (SE +/- 0.04, N = 3; min/avg/max: 5.27 / 5.35 / 5.4)
  3202: 5.43 (SE +/- 0.02, N = 3; min/avg/max: 5.4 / 5.43 / 5.46)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  3003: 2.95002 (SE +/- 0.00671, N = 3; min/avg/max: 2.94 / 2.95 / 2.96; MIN: 2.81)
  3202: 2.99402 (SE +/- 0.00911, N = 3; min/avg/max: 2.98 / 2.99 / 3.01; MIN: 2.83)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, more is better):
  3202: 210165200 (SE +/- 569155.55, N = 3; min/avg/max: 209034900 / 210165200 / 210847100)
  3003: 207261167 (SE +/- 2159740.57, N = 3; min/avg/max: 202942900 / 207261166.67 / 209509000)
  1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, fewer is better):
  3003: 45.79 (SE +/- 0.43, N = 3; min/avg/max: 45.02 / 45.79 / 46.52)
  3202: 46.41 (SE +/- 0.49, N = 3; min/avg/max: 45.65 / 46.41 / 47.32)

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, more is better):
  3003: 1.275 (SE +/- 0.001, N = 3; min/avg/max: 1.27 / 1.28 / 1.28)
  3202: 1.258 (SE +/- 0.001, N = 3; min/avg/max: 1.26 / 1.26 / 1.26)
  1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, fewer is better):
  3202: 12.87 (SE +/- 0.05, N = 3; min/avg/max: 12.81 / 12.87 / 12.98)
  3003: 13.04 (SE +/- 0.01, N = 3; min/avg/max: 13.02 / 13.04 / 13.05)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  3202: 1795.37 (SE +/- 19.66, N = 4; min/avg/max: 1769.91 / 1795.37 / 1853.74; MIN: 1755.37)
  3003: 1818.11 (SE +/- 20.56, N = 3; min/avg/max: 1777 / 1818.11 / 1839.07; MIN: 1764.89)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, fewer is better):
  3202: 59.90 (SE +/- 0.09, N = 3; min/avg/max: 59.72 / 59.9 / 60.04; MIN: 58.71 / MAX: 61.76)
  3003: 60.60 (SE +/- 0.13, N = 3; min/avg/max: 60.35 / 60.6 / 60.73; MIN: 59.43 / MAX: 62.32)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better):
  3003: 1.07519 (SE +/- 0.00258, N = 3; min/avg/max: 1.07 / 1.08 / 1.08)
  3202: 1.08736 (SE +/- 0.00500, N = 3; min/avg/max: 1.08 / 1.09 / 1.1)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better):
  3003: 13093.1 (SE +/- 11.50, N = 12; min/avg/max: 13007.2 / 13093.05 / 13137.9)
  3202: 12949.0 (SE +/- 11.91, N = 3; min/avg/max: 12925.3 / 12949 / 12962.9)
  1. (CC) gcc options: -O3

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms (ns/day, more is better):
  3003: 13.54 (SE +/- 0.08, N = 3; min/avg/max: 13.39 / 13.54 / 13.64)
  3202: 13.39 (SE +/- 0.03, N = 3; min/avg/max: 13.36 / 13.39 / 13.45)
  1. (CXX) g++ options: -O3 -pthread -lm

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: blazeface (ms, fewer is better):
  3003: 1.80 (SE +/- 0.00, N = 3; min/avg/max: 1.8 / 1.8 / 1.81; MIN: 1.78 / MAX: 2.28)
  3202: 1.82 (SE +/- 0.00, N = 3; min/avg/max: 1.81 / 1.82 / 1.82; MIN: 1.79 / MAX: 2.65)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, fewer is better):
  3003: 79.20 (SE +/- 0.15, N = 3; min/avg/max: 78.96 / 79.2 / 79.47)
  3202: 80.07 (SE +/- 0.31, N = 3; min/avg/max: 79.63 / 80.07 / 80.66)

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC1 (Mpx/s, more is better):
  3202: 387.14 (SE +/- 1.60, N = 3; min/avg/max: 385.28 / 387.14 / 390.33)
  3003: 382.93 (SE +/- 1.98, N = 3; min/avg/max: 380.54 / 382.93 / 386.86)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better):
  3003: 27.23 (SE +/- 0.05, N = 3; min/avg/max: 27.14 / 27.23 / 27.29)
  3202: 27.52 (SE +/- 0.04, N = 3; min/avg/max: 27.44 / 27.52 / 27.6)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, more is better):
  3003: 13085.1  (SE +/- 7.36, N = 3)  Min: 13073.6 / Avg: 13085.1 / Max: 13098.8
  3202: 12946.4  (SE +/- 2.16, N = 3)  Min: 12942.4 / Avg: 12946.4 / Max: 12949.8
  1. (CC) gcc options: -O3

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, more is better):
  3003: 4.175  (SE +/- 0.006, N = 3)  Min: 4.17 / Avg: 4.18 / Max: 4.19
  3202: 4.132  (SE +/- 0.017, N = 3)  Min: 4.1 / Avg: 4.13 / Max: 4.15

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.
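
As a sketch of local reproduction, assuming the profile name pts/hpcc, the install and run phases can be split so that the MPI and BLAS dependencies noted in the result footnotes are in place before benchmarking:

  phoronix-test-suite install pts/hpcc
  phoronix-test-suite run pts/hpcc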

HPC Challenge 1.5.0 - Test / Class: G-Ptrans (GB/s, more is better):
  3202: 2.47841  (SE +/- 0.01009, N = 3)  Min: 2.46 / Avg: 2.48 / Max: 2.49
  3003: 2.45326  (SE +/- 0.00427, N = 3)  Min: 2.45 / Avg: 2.45 / Max: 2.46
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops  2. OpenBLAS + Open MPI 4.0.3

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
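
Assuming the profile name is pts/onnx, an individual model/device combination such as the ones below can be selected at the interactive prompts of:

  phoronix-test-suite benchmark pts/onnx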

ONNX Runtime 1.6 - Model: fcn-resnet101-11 - Device: OpenMP CPU (Inferences Per Minute, more is better):
  3003: 99  (SE +/- 0.33, N = 3)  Min: 98.5 / Avg: 99.17 / Max: 99.5
  3202: 98
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric (more is better):
  3003: 265419
  3202: 262742
  1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute, more is better):
  3003: 16013  (SE +/- 20.67, N = 3)  Min: 15989 / Avg: 16012.83 / Max: 16054
  3202: 15863  (SE +/- 68.50, N = 3)  Min: 15732 / Avg: 15863.17 / Max: 15963
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better):
  3003: 218.29  (SE +/- 0.32, N = 3)  Min: 217.67 / Avg: 218.29 / Max: 218.71  (MIN: 208.52 / MAX: 289.05)
  3202: 220.33  (SE +/- 0.67, N = 3)  Min: 219.02 / Avg: 220.33 / Max: 221.17  (MIN: 216.95 / MAX: 261.2)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

RELION

RELION - REgularised LIkelihood OptimisatioN - is a stand-alone computer program for Maximum A Posteriori refinement of (multiple) 3D reconstructions or 2D class averages in cryo-electron microscopy (cryo-EM). It is developed in the research group of Sjors Scheres at the MRC Laboratory of Molecular Biology. Learn more via the OpenBenchmarking.org test page.

RELION 3.1.1 - Test: Basic - Device: CPU (Seconds, fewer is better):
  3202: 1875.48  (SE +/- 6.24, N = 3)  Min: 1866.17 / Avg: 1875.48 / Max: 1887.34
  3003: 1892.61  (SE +/- 3.84, N = 3)  Min: 1885.55 / Avg: 1892.61 / Max: 1898.76
  1. (CXX) g++ options: -fopenmp -std=c++0x -O3 -rdynamic -ldl -ltiff -lfftw3f -lfftw3 -lpng -pthread -lmpi_cxx -lmpi

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better):
  3003: 4.38  (SE +/- 0.01, N = 3)  Min: 4.37 / Avg: 4.38 / Max: 4.39  (MIN: 4.33 / MAX: 4.89)
  3202: 4.42  (SE +/- 0.01, N = 3)  Min: 4.39 / Avg: 4.42 / Max: 4.43  (MIN: 4.35 / MAX: 5.28)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 60M (Seconds, fewer is better):
  3202: 1380.20  (SE +/- 0.41, N = 3)  Min: 1379.76 / Avg: 1380.2 / Max: 1381.02
  3003: 1391.39  (SE +/- 0.57, N = 3)  Min: 1390.26 / Avg: 1391.39 / Max: 1391.99
  1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
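
Assuming the profile name is pts/onednn, the harness and data type behind each result below (e.g. f32, u8s8f32, bf16bf16bf16) can be chosen at the prompts of:

  phoronix-test-suite benchmark pts/onednn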

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  3003: 3.50445  (SE +/- 0.00613, N = 3)  Min: 3.5 / Avg: 3.5 / Max: 3.52  (MIN: 3.38)
  3202: 3.53182  (SE +/- 0.00421, N = 3)  Min: 3.53 / Avg: 3.53 / Max: 3.54  (MIN: 3.41)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better):
  3003: 1809.76  (SE +/- 15.33, N = 3)  Min: 1779.56 / Avg: 1809.76 / Max: 1829.44  (MIN: 1773.21)
  3202: 1823.19  (SE +/- 9.60, N = 3)  Min: 1808.25 / Avg: 1823.19 / Max: 1841.11  (MIN: 1798.09)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Tesseract

Tesseract is a fork of Cube 2 Sauerbraten with numerous graphics and game-play improvements. Tesseract has been in development since 2012 while its first release happened in May of 2014. Learn more via the OpenBenchmarking.org test page.

Tesseract 2014-05-12 - Resolution: 3840 x 2160 (Frames Per Second, more is better):
  3003: 356.17  (SE +/- 3.53, N = 15)  Min: 338.14 / Avg: 356.17 / Max: 383
  3202: 353.57  (SE +/- 4.21, N = 15)  Min: 337.35 / Avg: 353.57 / Max: 393.41

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: OpenGL 3.x - Resolution: 3840 x 2160 (Frames Per Second, more is better):
  3202: 986.5  (SE +/- 1.35, N = 3)  Min: 984.4 / Avg: 986.47 / Max: 989
  3003: 979.3  (SE +/- 1.03, N = 3)  Min: 978.3 / Avg: 979.33 / Max: 981.4
  1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, fewer is better):
  3003: 21.14  (SE +/- 0.11, N = 3)  Min: 21.03 / Avg: 21.14 / Max: 21.35  (MIN: 20.72 / MAX: 29.47)
  3202: 21.29  (SE +/- 0.08, N = 3)  Min: 21.2 / Avg: 21.29 / Max: 21.45  (MIN: 20.98 / MAX: 21.78)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better):
  3202: 1.729  (SE +/- 0.004, N = 3)  Min: 1.73 / Avg: 1.73 / Max: 1.74
  3003: 1.741  (SE +/- 0.001, N = 3)  Min: 1.74 / Avg: 1.74 / Max: 1.74
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s, more is better):
  3003: 13410.2  (SE +/- 19.80, N = 3)  Min: 13389.9 / Avg: 13410.2 / Max: 13449.8
  3202: 13319.6  (SE +/- 25.73, N = 3)  Min: 13277 / Avg: 13319.63 / Max: 13365.9
  1. (CC) gcc options: -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  3003: 19.13  (SE +/- 0.02, N = 3)  Min: 19.1 / Avg: 19.13 / Max: 19.18  (MIN: 18.77)
  3202: 19.26  (SE +/- 0.04, N = 3)  Min: 19.22 / Avg: 19.26 / Max: 19.33  (MIN: 18.8)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, fewer is better):
  3003: 21.56  (SE +/- 0.08, N = 4)  Min: 21.34 / Avg: 21.56 / Max: 21.72
  3202: 21.69  (SE +/- 0.08, N = 4)  Min: 21.46 / Avg: 21.69 / Max: 21.83
  1. (CC) gcc options: -O2 -std=c99

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0 - Test / Class: Random Ring Latency (usecs, fewer is better):
  3003: 0.48933  (SE +/- 0.00404, N = 3)  Min: 0.48 / Avg: 0.49 / Max: 0.5
  3202: 0.49204  (SE +/- 0.00234, N = 3)  Min: 0.49 / Avg: 0.49 / Max: 0.5
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops  2. OpenBLAS + Open MPI 4.0.3

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better):
  3003: 211.46  (SE +/- 0.43, N = 3)  Min: 210.78 / Avg: 211.46 / Max: 212.26  (MIN: 210.71 / MAX: 212.37)
  3202: 212.62  (SE +/- 0.34, N = 3)  Min: 211.97 / Avg: 212.62 / Max: 213.14  (MIN: 211.89 / MAX: 213.24)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 5 (Frames Per Second, more is better):
  3202: 1.505  (SE +/- 0.007, N = 3)  Min: 1.5 / Avg: 1.51 / Max: 1.52
  3003: 1.497  (SE +/- 0.005, N = 3)  Min: 1.49 / Avg: 1.5 / Max: 1.51

x265

This is a simple test of the x265 encoder run on the CPU, measuring H.265 video encode performance with 1080p and 4K input options. Learn more via the OpenBenchmarking.org test page.
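
Assuming the profile name is pts/x265, the Bosphorus 1080p encode below can be reproduced with:

  phoronix-test-suite benchmark pts/x265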

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, more is better):
  3003: 47.87  (SE +/- 0.08, N = 3)  Min: 47.77 / Avg: 47.87 / Max: 48.03
  3202: 47.62  (SE +/- 0.22, N = 3)  Min: 47.35 / Avg: 47.62 / Max: 48.05
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0 - Test / Class: G-Random Access (GUP/s, more is better):
  3003: 0.04998  (SE +/- 0.00046, N = 3)  Min: 0.05 / Avg: 0.05 / Max: 0.05
  3202: 0.04973  (SE +/- 0.00017, N = 3)  Min: 0.05 / Avg: 0.05 / Max: 0.05
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops  2. OpenBLAS + Open MPI 4.0.3

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, more is better):
  3202: 5066.74  (SE +/- 66.75, N = 3)  Min: 4936.18 / Avg: 5066.74 / Max: 5156.13
  3003: 5041.86  (SE +/- 59.80, N = 3)  Min: 4922.33 / Avg: 5041.86 / Max: 5105.34
  1. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, more is better):
  3003: 11911.06  (SE +/- 49.00, N = 3)  Min: 11843.76 / Avg: 11911.06 / Max: 12006.4
  3202: 11854.52  (SE +/- 59.07, N = 3)  Min: 11780.68 / Avg: 11854.52 / Max: 11971.3
  1. (CC) gcc options: -O3

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.
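
Assuming the profile name is pts/encode-ape, this encode test can be run standalone with:

  phoronix-test-suite benchmark pts/encode-ape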

Monkey Audio Encoding 3.99.6 - WAV To APE (Seconds, fewer is better):
  3003: 9.805  (SE +/- 0.045, N = 5)  Min: 9.66 / Avg: 9.8 / Max: 9.94
  3202: 9.851  (SE +/- 0.041, N = 5)  Min: 9.74 / Avg: 9.85 / Max: 9.99
  1. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mobilenet (ms, fewer is better):
  3202: 11.99  (SE +/- 0.01, N = 3)  Min: 11.96 / Avg: 11.99 / Max: 12  (MIN: 11.79 / MAX: 12.19)
  3003: 12.04  (SE +/- 0.15, N = 3)  Min: 11.86 / Avg: 12.04 / Max: 12.33  (MIN: 11.72 / MAX: 14.17)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, fewer is better):
  3003: 14.51  (SE +/- 0.01, N = 3)  Min: 14.5 / Avg: 14.51 / Max: 14.53  (MIN: 14.39 / MAX: 15.06)
  3202: 14.57  (SE +/- 0.01, N = 3)  Min: 14.55 / Avg: 14.57 / Max: 14.59  (MIN: 14.45 / MAX: 23.09)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  3003: 2752.88  (SE +/- 11.23, N = 3)  Min: 2734.1 / Avg: 2752.88 / Max: 2772.94  (MIN: 2722.06)
  3202: 2763.90  (SE +/- 9.00, N = 3)  Min: 2746.29 / Avg: 2763.9 / Max: 2775.92  (MIN: 2736.15)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, fewer is better):
  3003: 6.126  (SE +/- 0.032, N = 5)  Min: 6.04 / Avg: 6.13 / Max: 6.22
  3202: 6.150  (SE +/- 0.042, N = 5)  Min: 6.07 / Avg: 6.15 / Max: 6.3
  1. (CXX) g++ options: -fvisibility=hidden -logg -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Chimera 1080p (FPS, more is better):
  3202: 592.56  (SE +/- 0.77, N = 3)  Min: 591.05 / Avg: 592.56 / Max: 593.6  (MIN: 447.8 / MAX: 754.79)
  3003: 590.32  (SE +/- 0.69, N = 3)  Min: 589.02 / Avg: 590.32 / Max: 591.37  (MIN: 447.67 / MAX: 749.27)
  1. (CC) gcc options: -pthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  3003: 9.48452  (SE +/- 0.01431, N = 3)  Min: 9.47 / Avg: 9.48 / Max: 9.51  (MIN: 9.38)
  3202: 9.51868  (SE +/- 0.00745, N = 3)  Min: 9.51 / Avg: 9.52 / Max: 9.53  (MIN: 9.44)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 6 (Frames Per Second, more is better):
  3202: 1.965  (SE +/- 0.016, N = 3)  Min: 1.93 / Avg: 1.97 / Max: 1.99
  3003: 1.958  (SE +/- 0.005, N = 3)  Min: 1.95 / Avg: 1.96 / Max: 1.97

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, fewer is better):
  3003: 15.18  (SE +/- 0.07, N = 3)  Min: 15.11 / Avg: 15.18 / Max: 15.31
  3202: 15.23  (SE +/- 0.18, N = 3)  Min: 14.89 / Avg: 15.23 / Max: 15.52
  1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better):
  3003: 17.23  (SE +/- 0.01, N = 3)  Min: 17.22 / Avg: 17.23 / Max: 17.25  (MIN: 16.81)
  3202: 17.29  (SE +/- 0.02, N = 3)  Min: 17.27 / Avg: 17.29 / Max: 17.32  (MIN: 16.66)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Chimera 1080p 10-bit (FPS, more is better):
  3202: 96.66  (SE +/- 0.05, N = 3)  Min: 96.56 / Avg: 96.66 / Max: 96.75  (MIN: 61.56 / MAX: 221.22)
  3003: 96.35  (SE +/- 0.06, N = 3)  Min: 96.23 / Avg: 96.35 / Max: 96.45  (MIN: 61.49 / MAX: 217.11)
  1. (CC) gcc options: -pthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, more is better):
  3202: 69.10  (SE +/- 0.20, N = 3)  Min: 68.87 / Avg: 69.1 / Max: 69.5
  3003: 68.88  (SE +/- 0.92, N = 3)  Min: 67.07 / Avg: 68.88 / Max: 70.1
  1. (CC) gcc options: -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  3003: 1783.66  (SE +/- 8.25, N = 3)  Min: 1770.06 / Avg: 1783.66 / Max: 1798.56  (MIN: 1765.38)
  3202: 1789.23  (SE +/- 17.01, N = 3)  Min: 1770.44 / Avg: 1789.23 / Max: 1823.19  (MIN: 1761.64)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, more is better):
  3202: 4738.4  (SE +/- 12.71, N = 3)  Min: 4713.9 / Avg: 4738.37 / Max: 4756.6
  3003: 4723.9  (SE +/- 8.02, N = 3)  Min: 4709 / Avg: 4723.9 / Max: 4736.5
  1. (CC) gcc options: -O3 -pthread -lz -llzma

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better):
  3202: 14.60  (SE +/- 0.04, N = 3)  Min: 14.54 / Avg: 14.6 / Max: 14.68  (MIN: 14.21 / MAX: 15.07)
  3003: 14.64  (SE +/- 0.03, N = 3)  Min: 14.58 / Avg: 14.64 / Max: 14.7  (MIN: 14.31 / MAX: 15.31)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.
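
Assuming the profile name is pts/hmmer, the profile's version and options can be inspected before running it:

  phoronix-test-suite info pts/hmmer
  phoronix-test-suite benchmark pts/hmmer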

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, fewer is better):
  3003: 82.70  (SE +/- 0.15, N = 3)  Min: 82.56 / Avg: 82.7 / Max: 83
  3202: 82.93  (SE +/- 0.07, N = 3)  Min: 82.79 / Avg: 82.93 / Max: 83
  1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms, and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, more is better):
  3202: 72580800  (SE +/- 319563.42, N = 3)  Min: 72227170 / Avg: 72580800 / Max: 73218670
  3003: 72385553  (SE +/- 1023469.37, N = 3)  Min: 70857890 / Avg: 72385553.33 / Max: 74329280
  1. (CXX) g++ options: -O3 -fopenmp

QMCPACK

QMCPACK is a modern, high-performance, open-source Quantum Monte Carlo (QMC) simulation code, used here for a benchmark of the H2O example code via MPI. It is a production-level many-body ab initio QMC code for computing the electronic structure of atoms, molecules, and solids, and is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.10 - Input: simple-H2O (Total Execution Time - Seconds, fewer is better):
  3003: 22.31  (SE +/- 0.21, N = 7)  Min: 21.84 / Avg: 22.31 / Max: 23.5
  3202: 22.37  (SE +/- 0.14, N = 3)  Min: 22.14 / Avg: 22.37 / Max: 22.63
  1. (CXX) g++ options: -fopenmp -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -march=native -O3 -fomit-frame-pointer -ffast-math -pthread -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  3202: 2761.96  (SE +/- 4.41, N = 3)  Min: 2753.15 / Avg: 2761.96 / Max: 2766.87  (MIN: 2740.57)
  3003: 2769.09  (SE +/- 9.46, N = 3)  Min: 2752.44 / Avg: 2769.09 / Max: 2785.19  (MIN: 2739.1)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better):
  3202: 834019  (SE +/- 7522.35, N = 3)  Min: 822242 / Avg: 834018.67 / Max: 848015
  3003: 831939  (SE +/- 8175.01, N = 3)  Min: 817795 / Avg: 831939 / Max: 846114

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Fast (Seconds, fewer is better):
  3202: 4.10  (SE +/- 0.03, N = 3)  Min: 4.05 / Avg: 4.1 / Max: 4.15
  3003: 4.11  (SE +/- 0.01, N = 3)  Min: 4.1 / Avg: 4.11 / Max: 4.13
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, more is better):
  3003: 43.4  (SE +/- 0.03, N = 3)  Min: 43.3 / Avg: 43.37 / Max: 43.4
  3202: 43.3  (SE +/- 0.06, N = 3)  Min: 43.2 / Avg: 43.3 / Max: 43.4
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta - Resolution: 3840 x 2160 (Frames Per Second, more is better):
  3202: 430.5  (SE +/- 0.67, N = 3)  Min: 429.8 / Avg: 430.47 / Max: 431.8
  3003: 429.6  (SE +/- 0.72, N = 3)  Min: 428.2 / Avg: 429.63 / Max: 430.5

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, fewer is better):
  3202: 25.00  (SE +/- 0.17, N = 3)  Min: 24.81 / Avg: 25 / Max: 25.34  (MIN: 24.58 / MAX: 26.31)
  3003: 25.05  (SE +/- 0.13, N = 3)  Min: 24.9 / Avg: 25.05 / Max: 25.3  (MIN: 24.65 / MAX: 26.23)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better):
  3202: 53.09  (SE +/- 0.24, N = 3)  Min: 52.82 / Avg: 53.09 / Max: 53.57
  3003: 52.99  (SE +/- 0.08, N = 3)  Min: 52.9 / Avg: 52.99 / Max: 53.15
  1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better):
  3202: 2749.23  (SE +/- 2.89, N = 3)  Min: 2744.28 / Avg: 2749.23 / Max: 2754.28  (MIN: 2735.56)
  3003: 2753.77  (SE +/- 13.63, N = 3)  Min: 2738.48 / Avg: 2753.77 / Max: 2780.95  (MIN: 2727.8)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0 - Test / Class: Max Ping Pong Bandwidth (MB/s, more is better):
  3003: 34205.28  (SE +/- 146.47, N = 3)  Min: 33918.14 / Avg: 34205.27 / Max: 34399.13
  3202: 34166.84  (SE +/- 130.59, N = 3)  Min: 33938 / Avg: 34166.84 / Max: 34390.26
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops  2. OpenBLAS + Open MPI 4.0.3

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 1080p (FPS, more is better):
  3202: 535.52  (SE +/- 1.59, N = 3)  Min: 532.44 / Avg: 535.52 / Max: 537.78  (MIN: 453.14 / MAX: 589.94)
  3003: 534.95  (SE +/- 4.29, N = 3)  Min: 529.54 / Avg: 534.95 / Max: 543.42  (MIN: 432.57 / MAX: 611.74)
  1. (CC) gcc options: -pthread

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: googlenet (ms, fewer is better):
  3003: 12.99  (SE +/- 0.00, N = 3)  Min: 12.99 / Avg: 12.99 / Max: 13  (MIN: 12.62 / MAX: 13.57)
  3202: 13.00  (SE +/- 0.02, N = 3)  Min: 12.97 / Avg: 13 / Max: 13.04  (MIN: 12.64 / MAX: 20.99)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, fewer is better):
  3003: 70.48  (SE +/- 0.10, N = 3)  Min: 70.29 / Avg: 70.48 / Max: 70.63
  3202: 70.53  (SE +/- 0.04, N = 3)  Min: 70.47 / Avg: 70.53 / Max: 70.62

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109 - Test: VoiceMark_100 (Voices, more is better):
  3003: 958.66  (SE +/- 3.00, N = 3)  Min: 952.71 / Avg: 958.66 / Max: 962.33
  3202: 957.95  (SE +/- 4.96, N = 3)  Min: 948.56 / Avg: 957.95 / Max: 965.43
  1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, more is better):
  3202: 13.11  (SE +/- 0.13, N = 15)  Min: 12.12 / Avg: 13.11 / Max: 13.62
  3003: 13.10  (SE +/- 0.15, N = 15)  Min: 11.99 / Avg: 13.1 / Max: 13.71
  1. (CXX) g++ options: -O3 -pthread -lm

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.
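
Assuming the profile name is pts/numpy, this score can be reproduced with:

  phoronix-test-suite benchmark pts/numpy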

Numpy Benchmark (Score, more is better):
  3003: 514.13  (SE +/- 0.48, N = 3)  Min: 513.39 / Avg: 514.13 / Max: 515.02
  3202: 514.08  (SE +/- 4.15, N = 3)  Min: 509.63 / Avg: 514.08 / Max: 522.38

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  3202: 3.97992  (SE +/- 0.00985, N = 3)  Min: 3.96 / Avg: 3.98 / Max: 3.99  (MIN: 3.76)
  3003: 3.98022  (SE +/- 0.01644, N = 3)  Min: 3.96 / Avg: 3.98 / Max: 4.01  (MIN: 3.72)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0 - Test / Class: G-HPL (GFLOPS, more is better):
  3202: 53.17  (SE +/- 0.05, N = 3)  Min: 53.07 / Avg: 53.17 / Max: 53.25
  3003: 53.17  (SE +/- 0.08, N = 3)  Min: 53.07 / Avg: 53.17 / Max: 53.33
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops  2. OpenBLAS + Open MPI 4.0.3

ET: Legacy

ETLegacy is an open-source engine evolution of Wolfenstein: Enemy Territory, a World War II-era first-person shooter that was released for free by Splash Damage using the id Tech 3 engine. Learn more via the OpenBenchmarking.org test page.

ET: Legacy 2.75 - Renderer: Renderer2 - Resolution: 3840 x 2160 (Frames Per Second, more is better):
  3003: 224.3  (no result recorded for 3202)

IOR

IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.

IOR 3.3.0 - Block Size: 8MB - Disk Target: Default Test Directory (MB/s, more is better):
  3003: 1601.21  (SE +/- 6.28, N = 3)   Min: 1591.55 / Avg: 1601.21 / Max: 1612.99  (MIN: 1005.82 / MAX: 2534.37)
  3202: 1461.18  (SE +/- 28.69, N = 13) Min: 1223.03 / Avg: 1461.18 / Max: 1600.36  (MIN: 491.21 / MAX: 2711.8)
  1. (CC) gcc options: -O2 -lm -pthread -lmpi

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute, more is better):
  3202: 7324  (SE +/- 175.44, N = 12)  Min: 6194 / Avg: 7324.17 / Max: 7782.5
  3003: 7056  (SE +/- 202.78, N = 12)  Min: 5983.5 / Avg: 7056.21 / Max: 7781
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1 - Model: mobilenet-v1-1.0 (ms, fewer is better):
  3202: 2.481  (SE +/- 0.027, N = 3)  Min: 2.45 / Avg: 2.48 / Max: 2.54  (MIN: 2.42 / MAX: 2.71)
  3003: 2.501  (SE +/- 0.094, N = 3)  Min: 2.35 / Avg: 2.5 / Max: 2.67  (MIN: 2.32 / MAX: 4.53)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Xonotic

This is a benchmark of Xonotic, a fork of the DarkPlaces-based Nexuiz game; development of Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.2 - Resolution: 3840 x 2160 - Effects Quality: Ultimate (Frames Per Second, more is better):
  3202: 288.97  (SE +/- 3.79, N = 3)   Min: 281.44 / Avg: 288.97 / Max: 293.57  (MIN: 60 / MAX: 571)
  3003: 257.46  (SE +/- 4.83, N = 15)  Min: 233.72 / Avg: 257.46 / Max: 288.05  (MIN: 55 / MAX: 623)

129 Results Shown

IOR:
  2MB - Default Test Directory
  256MB - Default Test Directory
OpenFOAM
IOR
ONNX Runtime
HPC Challenge:
  Rand Ring Bandwidth
  G-Ffte
oneDNN
IOR
HPC Challenge
Etcpak
Dolfyn
SQLite Speedtest
Crafty
rav1e
LZ4 Compression
Build2
ONNX Runtime
Mobile Neural Network:
  MobileNetV2_224
  inception-v3
Timed Eigen Compilation
NCNN
Etcpak
oneDNN
NCNN
WebP Image Encode
oneDNN
ASTC Encoder
oneDNN
dav1d
NCNN
Quantum ESPRESSO
CP2K Molecular Dynamics
HPC Challenge
oneDNN
Coremark
Mobile Neural Network
Etcpak
x265
ASTC Encoder
NCNN
IndigoBench
NCNN
Mobile Neural Network
CloverLeaf
oneDNN
NCNN
WavPack Audio Encoding
ASTC Encoder
oneDNN
Algebraic Multi-Grid Benchmark
Timed Linux Kernel Compilation
GROMACS
WebP Image Encode
oneDNN
NCNN
NAMD
LZ4 Compression
LAMMPS Molecular Dynamics Simulator
NCNN
Timed Godot Game Engine Compilation
Etcpak
WebP Image Encode
LZ4 Compression
IndigoBench
HPC Challenge
ONNX Runtime
BRL-CAD
ONNX Runtime
TNN
RELION
NCNN
OpenFOAM
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
Tesseract
yquake2
NCNN
WebP Image Encode
LZ4 Compression
oneDNN
eSpeak-NG Speech Engine
HPC Challenge
TNN
rav1e
x265
HPC Challenge
LULESH
LZ4 Compression
Monkey Audio Encoding
NCNN:
  CPU - mobilenet
  CPU - resnet18
oneDNN
Opus Codec Encoding
dav1d
oneDNN
rav1e
RNNoise
oneDNN
dav1d
LZ4 Compression
oneDNN
Zstd Compression
NCNN
Timed HMMer Search
Kripke
QMCPACK
oneDNN
PHPBench
ASTC Encoder
Zstd Compression
Warsow
NCNN
LibRaw
oneDNN
HPC Challenge
dav1d
NCNN
DeepSpeech
Google SynthMark
LAMMPS Molecular Dynamics Simulator
Numpy Benchmark
oneDNN
HPC Challenge
ET: Legacy
IOR
ONNX Runtime
Mobile Neural Network
Xonotic