core-i7-8086k-2021

Intel Core i7-8086K testing with an ASUS PRIME Z370-A (1802 BIOS) and ASUS Intel UHD 630 3GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2102185-HA-COREI780895
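
From a terminal, that comparison looks like the following (requires the Phoronix Test Suite to be installed locally; the result identifier is the one published for this file):

  # Fetch result file 2102185-HA-COREI780895 and benchmark the local system against it
  phoronix-test-suite benchmark 2102185-HA-COREI780895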

Tests in this result file span the following categories:

Audio Encoding: 3 Tests
AV1: 2 Tests
C++ Boost Tests: 3 Tests
C/C++ Compiler Tests: 4 Tests
CPU Massive: 9 Tests
Creator Workloads: 14 Tests
Cryptography: 3 Tests
Encoding: 5 Tests
Fortran Tests: 3 Tests
Game Development: 2 Tests
HPC - High Performance Computing: 16 Tests
Imaging: 3 Tests
Machine Learning: 4 Tests
Molecular Dynamics: 7 Tests
MPI Benchmarks: 6 Tests
Multi-Core: 10 Tests
NVIDIA GPU Compute: 2 Tests
OpenMPI Tests: 11 Tests
Programmer / Developer System Benchmarks: 3 Tests
Python Tests: 3 Tests
Scientific Computing: 10 Tests
Server CPU Tests: 5 Tests
Single-Threaded: 3 Tests
Video Encoding: 2 Tests

Run Details

Run Identifier - Date Run - Test Duration
1 - April 16 2021 - 8 Hours, 13 Minutes
2 - April 16 2021 - 8 Hours, 34 Minutes
1a - April 16 2021 - 32 Minutes
3 - April 17 2021 - 8 Hours, 33 Minutes
4 - February 17 2021 - 8 Hours, 59 Minutes

core-i7-8086k-2021 - System Details (shared by run identifiers 1, 2, 1a, 3, 4)

Processor: Intel Core i7-8086K @ 5.00GHz (6 Cores / 12 Threads)
Motherboard: ASUS PRIME Z370-A (1802 BIOS)
Chipset: Intel 8th Gen Core
Memory: 8GB
Disk: 118GB INTEL SSDPEK1W120GA
Graphics: ASUS Intel UHD 630 3GB (1200MHz)
Audio: Realtek ALC1220
Monitor: G237HL
Network: Intel I219-V
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc8daily20201009-generic (x86_64) 20201008
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
OpenGL: 4.6 Mesa 20.0.8
Vulkan: 1.2.131
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xd6 - Thermald 1.9.1
Python Details: 1, 2, 3, 4: Python 2.7.18 + Python 3.8.5
Security Details: itlb_multihit: KVM: Mitigation of VMX unsupported + l1tf: Mitigation of PTE Inversion + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

core-i7-8086k-2021 - Results Overview: condensed table of per-run values for every test in this comparison (run identifiers 1, 2, 1a, 3, 4); the individual test results are presented below.

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8, Input: Motorbike 60M (Seconds, Fewer Is Better) - 4: 1219.28, 3: 1219.87, 2: 1219.63, 1: 1218.52

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126, Encode Settings: Quality 100, Lossless Compression (Seconds, Fewer Is Better) - 4: 1174.59, 3: 1173.72, 2: 1175.09, 1: 1173.14

WebP2 Image Encode 20210126, Encode Settings: Quality 95, Compression Effort 7 (Seconds, Fewer Is Better) - 4: 598.28, 3: 597.49, 2: 600.39, 1: 600.59

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. Learn more via the OpenBenchmarking.org test page.

CP2K Molecular Dynamics 8.1, Fayalite-FIST Data (Seconds, Fewer Is Better) - 4: 1044.59, 3: 1044.65, 2: 1045.37, 1: 1046.02

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126, Encode Settings: Quality 75, Compression Effort 7 (Seconds, Fewer Is Better) - 4: 325.62, 3: 325.75, 2: 324.89, 1: 325.17

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8, Input: Motorbike 30M (Seconds, Fewer Is Better) - 4: 274.67, 3: 274.38, 2: 273.32, 1: 273.49

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.1, Input: PNG - Encode Speed: 8 (MP/s, More Is Better) - 4: 0.87, 3: 0.87, 2: 0.87, 1: 0.86

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
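
As a rough sketch of how the underlying MPI benchmark is typically built and launched by hand (the kernel name, class, rank count, and binary path are illustrative; NPB 3.4 MPI builds name binaries by test and class):

  # Build the EP kernel at class D, then run it across six MPI ranks with Open MPI
  make ep CLASS=D
  mpirun -np 6 ./bin/ep.D.x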

NAS Parallel Benchmarks 3.4, Test / Class: EP.D (Total Mop/s, More Is Better) - 4: 928.52, 3: 945.80, 2: 949.69, 1: 945.81

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.
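
A minimal sketch of running a comparable GROMACS CPU workload by hand, assuming a prepared run input file (topol.tpr is a placeholder name, not the file the test profile ships):

  # Run the MD engine for a fixed number of steps using 12 OpenMP threads
  gmx mdrun -s topol.tpr -nsteps 1000 -ntomp 12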

GROMACS 2021, Input: water_GMX50_bare (Ns Per Day, More Is Better) - 4: 0.780, 3: 0.782, 2: 0.781, 1: 0.781

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.
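
For reference, a hand-run equivalent of this compile test might look like the following (the SCons options shown are illustrative for the Godot 3.2.x series, not necessarily what the test profile passes):

  # Time a Godot 3.2.x build for the X11 platform using all CPU cores
  time scons platform=x11 -j$(nproc)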

Timed Godot Game Engine Compilation 3.2.3, Time To Compile (Seconds, Fewer Is Better) - 4: 184.41, 3: 184.36, 2: 184.26, 1: 184.26

Gcrypt Library

Libgcrypt is a general-purpose cryptographic library developed as part of the GnuPG project. This is a benchmark of libgcrypt's integrated benchmark, measuring the time to run the benchmark command with a cipher/mac/hash repetition count of 50 as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.

Gcrypt Library 1.9 (Seconds, Fewer Is Better) - 4: 179.06, 3: 179.21, 2: 179.22, 1: 179.83

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

CloverLeaf, Lagrangian-Eulerian Hydrodynamics (Seconds, Fewer Is Better) - 4: 171.58, 3: 171.05, 2: 171.34, 1: 171.56

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
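
A rough equivalent invocation of the standalone dav1d decoder (the input file name is a placeholder; the test profile supplies its own sample clips):

  # Decode an AV1 bitstream as fast as possible, discarding the decoded frames
  dav1d -i chimera_1080p_10bit.ivf --muxer null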

dav1d 0.8.1, Video Input: Chimera 1080p 10-bit (FPS, More Is Better) - 4: 98.19, 3: 98.41, 2: 98.50, 1: 98.49

VKMark

VKMark is a collection of Vulkan tests/benchmarks. Learn more via the OpenBenchmarking.org test page.

VKMark 2020-05-21, Resolution: 1920 x 1080 (VKMark Score, More Is Better) - 4: 709, 3: 707, 2: 711, 1: 719

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: fcn-resnet101-11 - Device: OpenMP CPU (Inferences Per Minute, More Is Better) - 4: 62, 3: 62, 2: 62, 1: 62

ONNX Runtime 1.6, Model: bertsquad-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better) - 4: 560, 3: 562, 2: 562, 1: 564

ONNX Runtime 1.6, Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute, More Is Better) - 4: 351, 3: 351, 2: 351, 1: 352

ONNX Runtime 1.6, Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better) - 4: 16540, 3: 16483, 2: 16544, 1: 16469

ONNX Runtime 1.6, Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better) - 4: 4706, 3: 4672, 2: 4725, 1: 4696

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
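
In rough terms the workload corresponds to batch-mode simulation of an ISCAS 85 netlist; a sketch (the netlist and log file names are placeholders):

  # Run ngspice non-interactively on a benchmark circuit and write its log to a file
  ngspice -b -o c2670.log c2670.cir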

Ngspice 34, Circuit: C2670 (Seconds, Fewer Is Better) - 4: 107.34, 3: 107.17, 2: 108.33, 1: 108.88

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: inception-v3 (ms, Fewer Is Better) - 4: 43.45, 3: 43.31, 2: 43.14, 1: 43.38

Mobile Neural Network 1.1.1, Model: mobilenet-v1-1.0 (ms, Fewer Is Better) - 4: 3.850, 3: 3.860, 2: 3.865, 1: 3.833

Mobile Neural Network 1.1.1, Model: MobileNetV2_224 (ms, Fewer Is Better) - 4: 3.252, 3: 3.173, 2: 3.225, 1: 3.215

Mobile Neural Network 1.1.1, Model: resnet-v2-50 (ms, Fewer Is Better) - 4: 40.81, 3: 40.80, 2: 40.64, 1: 40.89

Mobile Neural Network 1.1.1, Model: SqueezeNetV1.0 (ms, Fewer Is Better) - 4: 5.527, 3: 5.521, 2: 5.522, 1: 5.508

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1, Test: sedovbig (Hydro Cycle Time - Seconds, Fewer Is Better) - 4: 105.78, 3: 105.82, 2: 105.47, 1: 105.40

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.1, Input: PNG - Encode Speed: 7 (MP/s, More Is Better) - 4: 9.65, 3: 9.65, 2: 9.68, 1: 9.72

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, More Is Better) - 4: 36141050, 3: 36175097, 2: 36317627, 1: 36331390

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C7552 (Seconds, Fewer Is Better) - 4: 88.28, 3: 89.73, 2: 90.16, 1: 90.13

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: LU.C (Total Mop/s, More Is Better) - 4: 24209.08, 3: 24210.11, 2: 24224.80, 1: 24235.45

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better) - 4: 4032.07, 3: 4029.87, 2: 4025.10, 1a: 4024.78

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better) - 4: 4030.49, 3: 4030.56, 2: 4023.67, 1a: 4025.37

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better) - 4: 4028.18, 3: 4028.19, 2: 4023.42, 1a: 4025.94

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta, Resolution: 1920 x 1080 (Frames Per Second, More Is Better) - 4: 85.5, 3: 85.4, 2: 85.4, 1: 84.6

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better) - 4: 2254.15, 3: 2256.42, 2: 2254.88, 1a: 2222.97, 1: 2223.46

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better) - 4: 2253.55, 3: 2250.34, 2: 2248.66, 1a: 2216.00, 1: 2230.38

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better) - 4: 2256.59, 3: 2253.87, 2: 2255.93, 1a: 2220.33, 1: 2220.83

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5, Mode: CPU (vsamples, More Is Better) - 4: 7447, 3: 7445, 2: 7462, 1: 7483

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1, Test: leblancbig (Hydro Cycle Time - Seconds, Fewer Is Better) - 4: 68.50, 3: 68.49, 2: 68.33, 1: 68.22

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18, Implementation: C++ Tasks (ms, Fewer Is Better) - 4: 62105, 3: 62009, 2: 62100, 1: 61976

toyBrot Fractal Generator 2020-11-18, Implementation: C++ Threads (ms, Fewer Is Better) - 4: 61880, 3: 61884, 2: 61878, 1: 61838

toyBrot Fractal Generator 2020-11-18, Implementation: OpenMP (ms, Fewer Is Better) - 4: 61743, 3: 61733, 2: 61728, 1: 61735

GnuPG

This test times how long it takes to encrypt a sample file using GnuPG. Learn more via the OpenBenchmarking.org test page.
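
A comparable manual workload, assuming a large sample file already on disk (file name and cipher choice are illustrative, not necessarily what the test profile uses):

  # Symmetrically encrypt a large sample file with AES-256 (prompts for a passphrase)
  gpg --symmetric --cipher-algo AES256 --output sample.bin.gpg sample.bin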

GnuPG 2.2.27, 2.7GB Sample File Encryption (Seconds, Fewer Is Better) - 4: 60.42, 3: 60.57, 2: 60.29, 1: 60.76

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. Learn more via the OpenBenchmarking.org test page.
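
Decoding a JPEG XL file to PNG by hand uses the djxl tool from libjxl; a minimal sketch (file names are placeholders):

  # Decode a .jxl image back to PNG
  djxl input.jxl output.png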

JPEG XL Decoding 0.3.1, CPU Threads: 1 (MP/s, More Is Better) - 4: 45.80, 3: 45.84, 2: 45.97, 1: 46.20

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some previous ASKAP benchmarks included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0, Test: tConvolve MT - Degridding (Million Grid Points Per Second, More Is Better) - 4: 2012.01, 3: 2017.95, 2: 2017.10, 1: 2018.80

ASKAP 1.0, Test: tConvolve MT - Gridding (Million Grid Points Per Second, More Is Better) - 4: 1141.91, 3: 1144.37, 2: 1144.51, 1: 1145.05

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding 0.3.1, CPU Threads: All (MP/s, More Is Better) - 4: 177.79, 3: 177.78, 2: 178.47, 1: 181.53

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.
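
The speed levels in these results map to rav1e's --speed option; a minimal sketch of an equivalent encode (the input and output file names are placeholders):

  # Encode a Y4M source to AV1 at speed level 5
  rav1e input.y4m --speed 5 --output output.ivf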

rav1e 0.4, Speed: 5 (Frames Per Second, More Is Better) - 4: 1.326, 3: 1.319, 2: 1.311, 1: 1.330

rav1e 0.4, Speed: 1 (Frames Per Second, More Is Better) - 4: 0.464, 3: 0.463, 2: 0.464, 1: 0.464

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
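
The figures below come from cryptsetup's built-in benchmark mode, which can also be run directly; a sketch of the usual invocations:

  # Report PBKDF and cipher throughput for the default set of algorithms
  cryptsetup benchmark
  # Benchmark one specific cipher and key size
  cryptsetup benchmark --cipher aes-xts-plain64 --key-size 256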

Cryptsetup, Twofish-XTS 512b Decryption (MiB/s, More Is Better) - 4: 506.8, 3: 506.2, 2: 506.5, 1: 506.8

Cryptsetup, Twofish-XTS 512b Encryption (MiB/s, More Is Better) - 4: 503.6, 3: 503.4, 2: 502.8, 1: 503.6

Cryptsetup, Serpent-XTS 512b Decryption (MiB/s, More Is Better) - 4: 923.9, 3: 925.4, 2: 924.5, 1: 924.7

Cryptsetup, Serpent-XTS 512b Encryption (MiB/s, More Is Better) - 4: 907.6, 3: 907.5, 2: 906.5, 1: 908.3

Cryptsetup, AES-XTS 512b Decryption (MiB/s, More Is Better) - 4: 2359.6, 3: 2358.1, 2: 2359.4, 1: 2356.3

Cryptsetup, AES-XTS 512b Encryption (MiB/s, More Is Better) - 4: 2358.0, 3: 2355.6, 2: 2355.8, 1: 2358.2

Cryptsetup, Twofish-XTS 256b Decryption (MiB/s, More Is Better) - 4: 507.2, 3: 507.2, 2: 506.6, 1: 507.2

Cryptsetup, Twofish-XTS 256b Encryption (MiB/s, More Is Better) - 4: 503.8, 3: 503.8, 2: 503.0, 1: 502.6

Cryptsetup, Serpent-XTS 256b Decryption (MiB/s, More Is Better) - 4: 924.7, 3: 925.9, 2: 925.2, 1: 925.4

Cryptsetup, Serpent-XTS 256b Encryption (MiB/s, More Is Better) - 4: 908.3, 3: 908.9, 2: 907.7, 1: 906.6

Cryptsetup, AES-XTS 256b Decryption (MiB/s, More Is Better) - 4: 2636.0, 3: 2633.5, 2: 2630.7, 1: 2634.2

Cryptsetup, AES-XTS 256b Encryption (MiB/s, More Is Better) - 4: 2620.2, 3: 2617.5, 2: 2621.1, 1: 2617.1

Cryptsetup, PBKDF2-whirlpool (Iterations Per Second, More Is Better) - 4: 855749, 3: 856679, 2: 856679, 1: 855749

OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-sha5124321400K800K1200K1600K2000KSE +/- 2147.17, N = 3SE +/- 1237.33, N = 31974719197472419734821974719
OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-sha5124321300K600K900K1200K1500KMin: 1971007 / Avg: 1974723.67 / Max: 1978445Min: 1971007 / Avg: 1973481.67 / Max: 1974719
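The Cryptsetup figures above exercise the kernel crypto backends; comparable numbers can be reproduced outside the test suite with cryptsetup's built-in micro-benchmark, which reports PBKDF2 iterations per second and XTS cipher throughput. A minimal sketch:

  # single run of the built-in cipher / PBKDF2 benchmark
  cryptsetup benchmark
  # or limit it to one cipher, e.g. AES-XTS with a 512-bit key
  cryptsetup benchmark --cipher aes-xts-plain64 --key-size 512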

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.
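As a point of reference outside the test harness, lzbench can be pointed at any input file directly; a minimal sketch covering the codecs and levels that appear in these results (the tarball path is illustrative, not the exact file this profile uses):

  lzbench -ezstd,1,8/brotli,0,2/xz,0/crush,0/libdeflate,1 linux-5.9.tar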

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: XZ 0 - Process: Decompression4321306090120150SE +/- 0.33, N = 31341331341331. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: XZ 0 - Process: Decompression4321306090120150Min: 133 / Avg: 133.67 / Max: 1341. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: XZ 0 - Process: Compression43211122334455504949491. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.
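The speed levels in these results map to rav1e's command-line speed preset, where higher values trade compression efficiency for encode speed; a minimal sketch (input and output file names are illustrative):

  rav1e input.y4m --speed 6 --output out_s6.ivf
  rav1e input.y4m --speed 10 --output out_s10.ivf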

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4Speed: 643210.3920.7841.1761.5681.96SE +/- 0.009, N = 3SE +/- 0.005, N = 3SE +/- 0.008, N = 3SE +/- 0.012, N = 31.7301.7401.7421.731
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4Speed: 64321246810Min: 1.71 / Avg: 1.73 / Max: 1.74Min: 1.73 / Avg: 1.74 / Max: 1.75Min: 1.73 / Avg: 1.74 / Max: 1.76Min: 1.71 / Avg: 1.73 / Max: 1.75

ParaView

This test runs benchmarks of ParaView, an open-source data analytics and visualization application. ParaView describes itself as "an open-source, multi-platform data analysis and visualization application. ParaView users can quickly build visualizations to analyze their data using qualitative and quantitative techniques." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiVoxels / Sec, More Is BetterParaView 5.9Test: Wavelet Volume - Resolution: 1920 x 1080432180160240320400SE +/- 0.12, N = 3SE +/- 0.59, N = 3SE +/- 0.38, N = 3SE +/- 0.19, N = 3377.95377.61378.07377.78
OpenBenchmarking.orgMiVoxels / Sec, More Is BetterParaView 5.9Test: Wavelet Volume - Resolution: 1920 x 1080432170140210280350Min: 377.74 / Avg: 377.95 / Max: 378.16Min: 376.56 / Avg: 377.61 / Max: 378.58Min: 377.31 / Avg: 378.07 / Max: 378.52Min: 377.43 / Avg: 377.78 / Max: 378.1

OpenBenchmarking.orgFrames / Sec, More Is BetterParaView 5.9Test: Wavelet Volume - Resolution: 1920 x 10804321612182430SE +/- 0.01, N = 3SE +/- 0.04, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 323.6223.6023.6323.61
OpenBenchmarking.orgFrames / Sec, More Is BetterParaView 5.9Test: Wavelet Volume - Resolution: 1920 x 10804321612182430Min: 23.61 / Avg: 23.62 / Max: 23.63Min: 23.53 / Avg: 23.6 / Max: 23.66Min: 23.58 / Avg: 23.63 / Max: 23.66Min: 23.59 / Avg: 23.61 / Max: 23.63

Google SynthMark

SynthMark is a cross-platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter, and computational throughput. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgVoices, More Is BetterGoogle SynthMark 20201109Test: VoiceMark_10043212004006008001000SE +/- 0.30, N = 3SE +/- 0.39, N = 3SE +/- 3.25, N = 3SE +/- 0.34, N = 3782.34782.58779.64783.251. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast
OpenBenchmarking.orgVoices, More Is BetterGoogle SynthMark 20201109Test: VoiceMark_1004321140280420560700Min: 781.84 / Avg: 782.34 / Max: 782.87Min: 782 / Avg: 782.58 / Max: 783.33Min: 773.14 / Avg: 779.64 / Max: 782.95Min: 782.72 / Avg: 783.25 / Max: 783.891. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Zstd 8 - Process: Decompression43215001000150020002500SE +/- 6.36, N = 3SE +/- 5.36, N = 3SE +/- 4.18, N = 322562247224222471. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Zstd 8 - Process: Decompression4321400800120016002000Min: 2245 / Avg: 2256.33 / Max: 2267Min: 2236 / Avg: 2246.67 / Max: 2253Min: 2239 / Avg: 2247.33 / Max: 22521. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Zstd 8 - Process: Compression432120406080100SE +/- 0.58, N = 31081071071081. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Zstd 8 - Process: Compression432120406080100Min: 107 / Avg: 108 / Max: 1091. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - Gridding432400800120016002000SE +/- 6.24, N = 3SE +/- 9.37, N = 2SE +/- 10.92, N = 31923.141920.021929.501. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - Gridding43230060090012001500Min: 1910.65 / Avg: 1923.14 / Max: 1929.38Min: 1910.65 / Avg: 1920.02 / Max: 1929.38Min: 1910.65 / Avg: 1929.5 / Max: 1948.481. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - Degridding4321400800120016002000SE +/- 11.14, N = 3SE +/- 0.00, N = 3SE +/- 6.50, N = 3SE +/- 0.00, N = 31948.611948.481954.981967.971. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - Degridding432130060090012001500Min: 1929.38 / Avg: 1948.61 / Max: 1967.97Min: 1948.48 / Avg: 1948.48 / Max: 1948.48Min: 1948.48 / Avg: 1954.98 / Max: 1967.97Min: 1967.97 / Avg: 1967.97 / Max: 1967.971. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
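The dav1d test profile simply times decode of bundled AV1 clips; a roughly equivalent manual invocation (the clip name is illustrative, and the null muxer is assumed to be available in this dav1d build) looks like:

  # decode to a null muxer so only decode speed is measured
  dav1d -i summer_nature_4k.ivf --muxer null -o /dev/null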

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 4K4321306090120150SE +/- 0.12, N = 3SE +/- 0.17, N = 3SE +/- 0.03, N = 3SE +/- 0.09, N = 3138.97139.08139.46139.30MIN: 131.06 / MAX: 156.71MIN: 130.77 / MAX: 156.83MIN: 131.42 / MAX: 157.06MIN: 131.22 / MAX: 157.11. (CC) gcc options: -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 4K4321306090120150Min: 138.74 / Avg: 138.97 / Max: 139.13Min: 138.79 / Avg: 139.08 / Max: 139.39Min: 139.41 / Avg: 139.46 / Max: 139.52Min: 139.19 / Avg: 139.3 / Max: 139.471. (CC) gcc options: -pthread

QMCPACK

QMCPACK is a modern, high-performance, open-source Quantum Monte Carlo (QMC) simulation code; this benchmark uses its MPI build to run the H2O example. QMCPACK is an open-source, production-level, many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
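A hand-run equivalent of this H2O case would launch QMCPACK under MPI with the example input; a sketch (the input file name is an assumption for illustration):

  # 12 ranks on this 6-core/12-thread CPU, one OpenMP thread per rank
  OMP_NUM_THREADS=1 mpirun -np 12 qmcpack simple-H2O.xml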

OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.10Input: simple-H2O4321612182430SE +/- 0.21, N = 3SE +/- 0.14, N = 3SE +/- 0.14, N = 3SE +/- 0.09, N = 325.9425.5225.8625.701. (CXX) g++ options: -fopenmp -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -march=native -O3 -fomit-frame-pointer -ffast-math -pthread -lm
OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.10Input: simple-H2O4321612182430Min: 25.59 / Avg: 25.94 / Max: 26.33Min: 25.25 / Avg: 25.52 / Max: 25.72Min: 25.68 / Avg: 25.86 / Max: 26.15Min: 25.56 / Avg: 25.7 / Max: 25.851. (CXX) g++ options: -fopenmp -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -march=native -O3 -fomit-frame-pointer -ffast-math -pthread -lm

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Crush 0 - Process: Decompression43211302603905206505865865865851. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Crush 0 - Process: Compression4321306090120150SE +/- 0.33, N = 3SE +/- 1.00, N = 31291281281271. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Crush 0 - Process: Compression432120406080100Min: 128 / Avg: 128.33 / Max: 129Min: 127 / Avg: 128 / Max: 1301. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
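The numbers below come from redis-benchmark driving a local redis-server; the same operations covered in this result file can be exercised manually, e.g.:

  redis-benchmark -t set,get,lpush,lpop,sadd -n 1000000 -q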

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADD4321600K1200K1800K2400K3000KSE +/- 28568.59, N = 13SE +/- 17037.49, N = 3SE +/- 12039.46, N = 3SE +/- 27951.55, N = 32607688.022634387.832641940.802635219.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADD4321500K1000K1500K2000K2500KMin: 2279462 / Avg: 2607688.02 / Max: 2671859Min: 2608972.75 / Avg: 2634387.83 / Max: 2666752Min: 2629511.5 / Avg: 2641940.83 / Max: 2666015.5Min: 2579413 / Avg: 2635219.75 / Max: 2665964.251. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk-management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.2143216001200180024003000SE +/- 5.47, N = 3SE +/- 10.28, N = 3SE +/- 9.53, N = 3SE +/- 8.50, N = 32901.82875.72901.02898.61. (CXX) g++ options: -O3 -march=native -rdynamic
OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.2143215001000150020002500Min: 2890.9 / Avg: 2901.8 / Max: 2908.1Min: 2859.7 / Avg: 2875.73 / Max: 2894.9Min: 2882.2 / Avg: 2901 / Max: 2913.1Min: 2889.9 / Avg: 2898.6 / Max: 2915.61. (CXX) g++ options: -O3 -march=native -rdynamic

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Brotli 2 - Process: Decompression43212004006008001000SE +/- 0.67, N = 38418408408401. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Brotli 2 - Process: Decompression4321150300450600750Min: 840 / Avg: 841.33 / Max: 8421. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Brotli 2 - Process: Compression432150100150200250SE +/- 0.58, N = 32242242232221. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Brotli 2 - Process: Compression43214080120160200Min: 222 / Avg: 223 / Max: 2241. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

ParaView

This test runs benchmarks of ParaView, an open-source data analytics and visualization application. ParaView describes itself as "an open-source, multi-platform data analysis and visualization application. ParaView users can quickly build visualizations to analyze their data using qualitative and quantitative techniques." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiPolys / Sec, More Is BetterParaView 5.9Test: Wavelet Contour - Resolution: 1920 x 1080432190180270360450SE +/- 0.91, N = 3SE +/- 0.27, N = 3SE +/- 0.59, N = 3SE +/- 1.00, N = 3396.66395.62396.79396.72
OpenBenchmarking.orgMiPolys / Sec, More Is BetterParaView 5.9Test: Wavelet Contour - Resolution: 1920 x 1080432170140210280350Min: 395.35 / Avg: 396.66 / Max: 398.42Min: 395.16 / Avg: 395.62 / Max: 396.11Min: 395.63 / Avg: 396.79 / Max: 397.52Min: 395.04 / Avg: 396.72 / Max: 398.51

OpenBenchmarking.orgFrames / Sec, More Is BetterParaView 5.9Test: Wavelet Contour - Resolution: 1920 x 10804321918273645SE +/- 0.09, N = 3SE +/- 0.03, N = 3SE +/- 0.06, N = 3SE +/- 0.10, N = 338.0637.9638.0838.07
OpenBenchmarking.orgFrames / Sec, More Is BetterParaView 5.9Test: Wavelet Contour - Resolution: 1920 x 10804321816243240Min: 37.94 / Avg: 38.06 / Max: 38.23Min: 37.92 / Avg: 37.96 / Max: 38.01Min: 37.96 / Avg: 38.08 / Max: 38.15Min: 37.91 / Avg: 38.07 / Max: 38.24

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4Speed: 1043210.87661.75322.62983.50644.383SE +/- 0.014, N = 3SE +/- 0.026, N = 3SE +/- 0.061, N = 3SE +/- 0.019, N = 33.8413.8963.7843.872
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4Speed: 104321246810Min: 3.81 / Avg: 3.84 / Max: 3.86Min: 3.84 / Avg: 3.9 / Max: 3.93Min: 3.66 / Avg: 3.78 / Max: 3.85Min: 3.85 / Avg: 3.87 / Max: 3.91

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.
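The "Encode Speed" settings in these results correspond to the cjxl reference encoder's speed/effort control; a minimal sketch of a comparable manual encode (the exact flag name for the speed setting depends on the cjxl version, so it is omitted here):

  # basic JPEG XL encode with the reference encoder
  cjxl input.png output.jxl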

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 543211326395265SE +/- 0.15, N = 3SE +/- 0.20, N = 3SE +/- 0.04, N = 3SE +/- 0.10, N = 358.4258.3858.4258.661. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 543211224364860Min: 58.24 / Avg: 58.42 / Max: 58.71Min: 57.99 / Avg: 58.38 / Max: 58.63Min: 58.35 / Avg: 58.42 / Max: 58.49Min: 58.5 / Avg: 58.66 / Max: 58.831. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Brotli 0 - Process: Decompression4321160320480640800SE +/- 0.67, N = 3SE +/- 2.03, N = 3SE +/- 1.00, N = 37297297267271. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Brotli 0 - Process: Decompression4321130260390520650Min: 728 / Avg: 728.67 / Max: 730Min: 722 / Avg: 725.67 / Max: 729Min: 725 / Avg: 727 / Max: 7281. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Brotli 0 - Process: Compression4321110220330440550SE +/- 1.00, N = 3SE +/- 1.33, N = 35275255265221. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Brotli 0 - Process: Compression432190180270360450Min: 524 / Avg: 525 / Max: 527Min: 519 / Avg: 521.67 / Max: 5231. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: ETC2432150100150200250SE +/- 0.00, N = 3SE +/- 0.42, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 3212.98212.57212.97213.101. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: ETC243214080120160200Min: 212.98 / Avg: 212.98 / Max: 212.99Min: 211.72 / Avg: 212.57 / Max: 213.02Min: 212.89 / Avg: 212.97 / Max: 213.01Min: 213.07 / Avg: 213.1 / Max: 213.131. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Zstd 1 - Process: Decompression4321400800120016002000SE +/- 1.45, N = 3SE +/- 12.33, N = 3SE +/- 0.67, N = 3SE +/- 3.18, N = 320602047205720551. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Zstd 1 - Process: Decompression4321400800120016002000Min: 2058 / Avg: 2060.33 / Max: 2063Min: 2022 / Avg: 2046.67 / Max: 2059Min: 2056 / Avg: 2057.33 / Max: 2058Min: 2049 / Avg: 2055.33 / Max: 20591. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Zstd 1 - Process: Compression4321130260390520650SE +/- 1.00, N = 3SE +/- 0.67, N = 35935885895921. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Zstd 1 - Process: Compression4321100200300400500Min: 587 / Avg: 588 / Max: 590Min: 591 / Avg: 592.33 / Max: 5931. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Chimera 1080p4321120240360480600SE +/- 0.66, N = 3SE +/- 0.26, N = 3SE +/- 1.05, N = 3SE +/- 0.61, N = 3527.21527.85526.97532.04MIN: 391.75 / MAX: 803.53MIN: 391.98 / MAX: 817.62MIN: 392.13 / MAX: 800.13MIN: 393.83 / MAX: 800.331. (CC) gcc options: -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Chimera 1080p432190180270360450Min: 525.9 / Avg: 527.21 / Max: 527.89Min: 527.48 / Avg: 527.85 / Max: 528.34Min: 525.18 / Avg: 526.97 / Max: 528.81Min: 530.98 / Avg: 532.04 / Max: 533.091. (CC) gcc options: -pthread

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Libdeflate 1 - Process: Decompression432130060090012001500SE +/- 0.33, N = 3SE +/- 0.58, N = 3SE +/- 0.33, N = 3SE +/- 0.67, N = 312951295129412951. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Libdeflate 1 - Process: Decompression43212004006008001000Min: 1295 / Avg: 1295.33 / Max: 1296Min: 1294 / Avg: 1295 / Max: 1296Min: 1294 / Avg: 1294.33 / Max: 1295Min: 1294 / Avg: 1294.67 / Max: 12961. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Libdeflate 1 - Process: Compression432160120180240300SE +/- 1.00, N = 3SE +/- 1.53, N = 3SE +/- 1.20, N = 3SE +/- 0.58, N = 32652652652661. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Libdeflate 1 - Process: Compression432150100150200250Min: 264 / Avg: 265 / Max: 267Min: 262 / Avg: 265 / Max: 267Min: 263 / Avg: 265.33 / Max: 267Min: 265 / Avg: 266 / Max: 2671. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
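The manual equivalent of this test is simply timing the extraction of the same archive:

  time tar -xJf firefox-84.0.source.tar.xz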

OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking Firefox 84.0Extracting: firefox-84.0.source.tar.xz432148121620SE +/- 0.02, N = 4SE +/- 0.06, N = 4SE +/- 0.06, N = 4SE +/- 0.05, N = 416.8716.8716.7816.79
OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking Firefox 84.0Extracting: firefox-84.0.source.tar.xz432148121620Min: 16.83 / Avg: 16.86 / Max: 16.93Min: 16.76 / Avg: 16.87 / Max: 17.03Min: 16.69 / Avg: 16.78 / Max: 16.95Min: 16.68 / Avg: 16.79 / Max: 16.91

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 543211428425670SE +/- 0.19, N = 3SE +/- 0.30, N = 3SE +/- 0.47, N = 3SE +/- 0.23, N = 361.5760.9961.0361.731. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 543211224364860Min: 61.36 / Avg: 61.57 / Max: 61.94Min: 60.39 / Avg: 60.99 / Max: 61.3Min: 60.09 / Avg: 61.03 / Max: 61.54Min: 61.29 / Avg: 61.73 / Max: 62.081. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.
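The "very high quality" setting corresponds roughly to WavPack's -hh mode; a comparable manual encode (the input name is illustrative) would be:

  # -hh = very high quality/compression; output is written alongside the input as .wv
  wavpack -hh sample.wav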

OpenBenchmarking.orgSeconds, Fewer Is BetterWavPack Audio Encoding 5.3WAV To WavPack43213691215SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.02, N = 513.0013.0013.0013.031. (CXX) g++ options: -rdynamic
OpenBenchmarking.orgSeconds, Fewer Is BetterWavPack Audio Encoding 5.3WAV To WavPack432148121620Min: 13 / Avg: 13 / Max: 13.01Min: 13 / Avg: 13 / Max: 13.01Min: 12.99 / Avg: 13 / Max: 13.01Min: 13 / Avg: 13.03 / Max: 13.11. (CXX) g++ options: -rdynamic

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU4321a246810SE +/- 0.06163, N = 3SE +/- 0.00909, N = 3SE +/- 0.06379, N = 3SE +/- 0.01649, N = 36.854946.794906.849736.78779MIN: 6.72MIN: 6.7MIN: 6.72MIN: 6.691. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU4321a3691215Min: 6.77 / Avg: 6.85 / Max: 6.98Min: 6.78 / Avg: 6.79 / Max: 6.81Min: 6.77 / Avg: 6.85 / Max: 6.98Min: 6.76 / Avg: 6.79 / Max: 6.821. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU4321a246810SE +/- 0.01283, N = 3SE +/- 0.00815, N = 3SE +/- 0.01672, N = 3SE +/- 0.02142, N = 36.348266.334046.416916.36065MIN: 6.29MIN: 6.28MIN: 6.31MIN: 6.271. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU4321a3691215Min: 6.33 / Avg: 6.35 / Max: 6.37Min: 6.32 / Avg: 6.33 / Max: 6.35Min: 6.39 / Avg: 6.42 / Max: 6.44Min: 6.32 / Avg: 6.36 / Max: 6.391. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: MobileNet v2432170140210280350SE +/- 1.01, N = 3SE +/- 0.54, N = 3SE +/- 0.93, N = 3SE +/- 0.27, N = 3299.93299.74301.24298.63MIN: 297.54 / MAX: 306.69MIN: 297.93 / MAX: 302.27MIN: 298.47 / MAX: 304.26MIN: 297.04 / MAX: 300.971. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: MobileNet v2432150100150200250Min: 298.52 / Avg: 299.93 / Max: 301.88Min: 298.98 / Avg: 299.74 / Max: 300.77Min: 300 / Avg: 301.24 / Max: 303.07Min: 298.29 / Avg: 298.63 / Max: 299.161. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations Per Second, More Is BetterASKAP 1.0Test: Hogbom Clean OpenMP43214080120160200SE +/- 0.31, N = 3SE +/- 0.21, N = 3SE +/- 0.24, N = 3SE +/- 0.32, N = 3187.15188.68189.28189.281. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgIterations Per Second, More Is BetterASKAP 1.0Test: Hogbom Clean OpenMP4321306090120150Min: 186.57 / Avg: 187.15 / Max: 187.62Min: 188.32 / Avg: 188.68 / Max: 189.04Min: 189.04 / Avg: 189.28 / Max: 189.75Min: 188.68 / Avg: 189.28 / Max: 189.751. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.1432160120180240300SE +/- 0.08, N = 3SE +/- 0.11, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3271.01271.02270.94270.62MIN: 270.36 / MAX: 271.8MIN: 270.32 / MAX: 271.67MIN: 270.24 / MAX: 271.7MIN: 270.11 / MAX: 271.331. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.1432150100150200250Min: 270.89 / Avg: 271.01 / Max: 271.17Min: 270.89 / Avg: 271.02 / Max: 271.24Min: 270.93 / Avg: 270.94 / Max: 270.96Min: 270.56 / Avg: 270.62 / Max: 270.651. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 100, Compression Effort 54321510152025SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 318.3518.3618.3318.371. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -ljpeg -lgif -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 100, Compression Effort 54321510152025Min: 18.33 / Avg: 18.35 / Max: 18.4Min: 18.35 / Avg: 18.36 / Max: 18.37Min: 18.32 / Avg: 18.33 / Max: 18.34Min: 18.34 / Avg: 18.37 / Max: 18.391. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -ljpeg -lgif -lpthread

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSH4321400K800K1200K1600K2000KSE +/- 19842.96, N = 3SE +/- 13361.81, N = 3SE +/- 9300.78, N = 3SE +/- 25186.84, N = 52051125.082072916.332059021.042023847.681. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSH4321400K800K1200K1600K2000KMin: 2012888.88 / Avg: 2051125.08 / Max: 2079447.75Min: 2048380.12 / Avg: 2072916.33 / Max: 2094354.75Min: 2040419.5 / Avg: 2059021.04 / Max: 2068345Min: 1923878.38 / Avg: 2023847.68 / Max: 2057711.881. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.
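For reference, Monkey's Audio encoding is driven by its mac command-line tool; a sketch of a comparable encode (the compression-level value shown is an assumption about the mac CLI's level flags, and file names are illustrative):

  # -c2000 = normal compression; higher values compress harder
  mac sample.wav sample.ape -c2000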

OpenBenchmarking.orgSeconds, Fewer Is BetterMonkey Audio Encoding 3.99.6WAV To APE43213691215SE +/- 0.016, N = 5SE +/- 0.015, N = 5SE +/- 0.016, N = 5SE +/- 0.014, N = 59.8259.8009.8149.8051. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt
OpenBenchmarking.orgSeconds, Fewer Is BetterMonkey Audio Encoding 3.99.6WAV To APE43213691215Min: 9.77 / Avg: 9.82 / Max: 9.86Min: 9.76 / Avg: 9.8 / Max: 9.85Min: 9.78 / Avg: 9.81 / Max: 9.86Min: 9.78 / Avg: 9.81 / Max: 9.851. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 743211428425670SE +/- 0.21, N = 3SE +/- 0.10, N = 3SE +/- 0.46, N = 3SE +/- 0.15, N = 361.1161.1061.3461.591. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 743211224364860Min: 60.77 / Avg: 61.11 / Max: 61.5Min: 60.9 / Avg: 61.1 / Max: 61.2Min: 60.45 / Avg: 61.34 / Max: 62.02Min: 61.35 / Avg: 61.59 / Max: 61.861. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU4321a1.04262.08523.12784.17045.213SE +/- 0.00329, N = 3SE +/- 0.00530, N = 3SE +/- 0.00886, N = 3SE +/- 0.01402, N = 34.602304.633684.501014.50754MIN: 4.51MIN: 4.54MIN: 4.37MIN: 4.391. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU4321a246810Min: 4.6 / Avg: 4.6 / Max: 4.61Min: 4.63 / Avg: 4.63 / Max: 4.64Min: 4.48 / Avg: 4.5 / Max: 4.51Min: 4.49 / Avg: 4.51 / Max: 4.531. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU4321a0.48210.96421.44631.92842.4105SE +/- 0.00130, N = 3SE +/- 0.00203, N = 3SE +/- 0.00277, N = 3SE +/- 0.00109, N = 32.141482.142202.141542.14275MIN: 2.13MIN: 2.12MIN: 2.12MIN: 2.121. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU4321a246810Min: 2.14 / Avg: 2.14 / Max: 2.14Min: 2.14 / Avg: 2.14 / Max: 2.15Min: 2.14 / Avg: 2.14 / Max: 2.15Min: 2.14 / Avg: 2.14 / Max: 2.141. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPOP4321700K1400K2100K2800K3500KSE +/- 8626.43, N = 3SE +/- 32269.04, N = 3SE +/- 11005.15, N = 3SE +/- 19183.06, N = 32109775.002070909.252085673.463323281.831. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPOP4321600K1200K1800K2400K3000KMin: 2098636 / Avg: 2109775 / Max: 2126754.5Min: 2007693.25 / Avg: 2070909.25 / Max: 2113772.25Min: 2064462.5 / Avg: 2085673.46 / Max: 2101369.25Min: 3285266.75 / Avg: 3323281.83 / Max: 3346773.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SET4321500K1000K1500K2000K2500KSE +/- 3546.38, N = 3SE +/- 12358.77, N = 3SE +/- 10631.93, N = 3SE +/- 13128.97, N = 32385714.252377147.502342227.422369135.501. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SET4321400K800K1200K1600K2000KMin: 2378694.5 / Avg: 2385714.25 / Max: 2390103.25Min: 2353510 / Avg: 2377147.5 / Max: 2395224.75Min: 2321262.75 / Avg: 2342227.42 / Max: 2355788Min: 2349706.75 / Avg: 2369135.5 / Max: 23941471. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve OpenMP - Degridding43215001000150020002500SE +/- 6.77, N = 3SE +/- 6.22, N = 3SE +/- 4.57, N = 5SE +/- 0.00, N = 32322.042225.022226.262218.801. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve OpenMP - Degridding4321400800120016002000Min: 2315.27 / Avg: 2322.04 / Max: 2335.58Min: 2218.8 / Avg: 2225.02 / Max: 2237.45Min: 2218.8 / Avg: 2226.26 / Max: 2237.45Min: 2218.8 / Avg: 2218.8 / Max: 2218.81. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve OpenMP - Gridding432130060090012001500SE +/- 14.50, N = 3SE +/- 8.41, N = 3SE +/- 14.95, N = 5SE +/- 9.83, N = 31322.781240.441235.661248.231. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve OpenMP - Gridding43212004006008001000Min: 1305.18 / Avg: 1322.78 / Max: 1351.55Min: 1226.99 / Avg: 1240.44 / Max: 1255.92Min: 1204.78 / Avg: 1235.66 / Max: 1292.5Min: 1238.4 / Avg: 1248.23 / Max: 1267.891. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: ETC1 + Dithering432180160240320400SE +/- 0.35, N = 3SE +/- 2.23, N = 3SE +/- 0.12, N = 3SE +/- 0.18, N = 3356.91354.93357.52357.331. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: ETC1 + Dithering432160120180240300Min: 356.23 / Avg: 356.91 / Max: 357.39Min: 350.48 / Avg: 354.93 / Max: 357.38Min: 357.4 / Avg: 357.52 / Max: 357.75Min: 357 / Avg: 357.33 / Max: 357.61. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GET4321700K1400K2100K2800K3500KSE +/- 18481.68, N = 3SE +/- 15356.90, N = 3SE +/- 44568.10, N = 3SE +/- 18933.24, N = 32919971.422927900.672908151.923138734.001. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GET4321500K1000K1500K2000K2500KMin: 2900241.25 / Avg: 2919971.42 / Max: 2956906Min: 2900297 / Avg: 2927900.67 / Max: 2953365.75Min: 2820874.5 / Avg: 2908151.92 / Max: 2967473Min: 3100884.5 / Avg: 3138734 / Max: 3158640.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.2432150M100M150M200M250MSE +/- 1517817.32, N = 3SE +/- 1277684.47, N = 3SE +/- 614809.90, N = 3SE +/- 323790.40, N = 32487054332489440672501188672502456331. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi
OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.2432140M80M120M160M200MMin: 245672300 / Avg: 248705433.33 / Max: 250328700Min: 246388700 / Avg: 248944066.67 / Max: 250224700Min: 248897700 / Avg: 250118866.67 / Max: 250854100Min: 249603600 / Avg: 250245633.33 / Max: 2506399001. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

Etcpak

Etcpack is the self-proclaimed "fastest ETC compressor on the planet" with focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: ETC1432180160240320400SE +/- 0.15, N = 3SE +/- 0.53, N = 3SE +/- 1.79, N = 3SE +/- 0.58, N = 3375.34374.18373.76374.781. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: ETC1432170140210280350Min: 375.05 / Avg: 375.34 / Max: 375.55Min: 373.56 / Avg: 374.18 / Max: 375.24Min: 370.19 / Avg: 373.76 / Max: 375.64Min: 373.63 / Avg: 374.78 / Max: 375.531. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
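A manual equivalent using the opusenc tool from Opus-Tools is simply (file names are illustrative):

  opusenc sample.wav sample.opus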

OpenBenchmarking.orgSeconds, Fewer Is BetterOpus Codec Encoding 1.3.1WAV To Opus Encode4321246810SE +/- 0.011, N = 5SE +/- 0.012, N = 5SE +/- 0.012, N = 5SE +/- 0.010, N = 57.6677.6827.6747.6651. (CXX) g++ options: -fvisibility=hidden -logg -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterOpus Codec Encoding 1.3.1WAV To Opus Encode43213691215Min: 7.65 / Avg: 7.67 / Max: 7.71Min: 7.66 / Avg: 7.68 / Max: 7.73Min: 7.66 / Avg: 7.67 / Max: 7.72Min: 7.65 / Avg: 7.66 / Max: 7.71. (CXX) g++ options: -fvisibility=hidden -logg -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU4321a0.87961.75922.63883.51844.398SE +/- 0.01176, N = 3SE +/- 0.00680, N = 3SE +/- 0.02046, N = 3SE +/- 0.00712, N = 33.893533.909503.616703.61817MIN: 3.82MIN: 3.84MIN: 3.52MIN: 3.551. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU4321a246810Min: 3.87 / Avg: 3.89 / Max: 3.91Min: 3.9 / Avg: 3.91 / Max: 3.92Min: 3.58 / Avg: 3.62 / Max: 3.65Min: 3.6 / Avg: 3.62 / Max: 3.631. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU4321a0.88611.77222.65833.54444.4305SE +/- 0.00145, N = 3SE +/- 0.00396, N = 3SE +/- 0.00431, N = 3SE +/- 0.00444, N = 33.938133.935293.930883.93205MIN: 3.91MIN: 3.9MIN: 3.89MIN: 3.891. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU4321a246810Min: 3.94 / Avg: 3.94 / Max: 3.94Min: 3.93 / Avg: 3.94 / Max: 3.94Min: 3.92 / Avg: 3.93 / Max: 3.94Min: 3.92 / Avg: 3.93 / Max: 3.941. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
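The EP.C result below is the Embarrassingly Parallel kernel at problem class C from the MPI build; a comparable manual run, assuming NPB's usual binary naming, would be:

  mpirun -np 12 ./bin/ep.C.x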

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.C43212004006008001000SE +/- 1.34, N = 3SE +/- 13.08, N = 3SE +/- 2.85, N = 3SE +/- 0.53, N = 3946.58927.39941.53942.531. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.C4321170340510680850Min: 943.92 / Avg: 946.58 / Max: 948.12Min: 901.97 / Avg: 927.39 / Max: 945.44Min: 937.51 / Avg: 941.53 / Max: 947.04Min: 941.67 / Avg: 942.53 / Max: 943.51. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU4321a0.4910.9821.4731.9642.455SE +/- 0.00374, N = 3SE +/- 0.00718, N = 3SE +/- 0.02847, N = 3SE +/- 0.00510, N = 32.160422.182442.104722.07131MIN: 2.12MIN: 2.13MIN: 2MIN: 2.011. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU4321a246810Min: 2.15 / Avg: 2.16 / Max: 2.17Min: 2.17 / Avg: 2.18 / Max: 2.2Min: 2.06 / Avg: 2.1 / Max: 2.16Min: 2.07 / Avg: 2.07 / Max: 2.081. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU4321a3691215SE +/- 0.01545, N = 3SE +/- 0.01907, N = 3SE +/- 0.01851, N = 3SE +/- 0.02438, N = 310.4013010.304708.272348.30643MIN: 10.25MIN: 10.11MIN: 7.96MIN: 8.051. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU4321a3691215Min: 10.37 / Avg: 10.4 / Max: 10.42Min: 10.27 / Avg: 10.3 / Max: 10.33Min: 8.24 / Avg: 8.27 / Max: 8.3Min: 8.26 / Avg: 8.31 / Max: 8.351. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 84321714212835SE +/- 0.17, N = 3SE +/- 0.12, N = 3SE +/- 0.14, N = 3SE +/- 0.08, N = 328.5528.5828.6828.801. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 84321612182430Min: 28.23 / Avg: 28.55 / Max: 28.79Min: 28.39 / Avg: 28.58 / Max: 28.79Min: 28.41 / Avg: 28.68 / Max: 28.85Min: 28.66 / Avg: 28.8 / Max: 28.951. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 1080p4321110220330440550SE +/- 0.31, N = 3SE +/- 1.00, N = 3SE +/- 0.94, N = 3SE +/- 0.56, N = 3482.13481.65484.00485.43MIN: 428.5 / MAX: 525.3MIN: 426.68 / MAX: 527.14MIN: 431.31 / MAX: 528.36MIN: 440.26 / MAX: 533.51. (CC) gcc options: -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 1080p432190180270360450Min: 481.51 / Avg: 482.13 / Max: 482.52Min: 479.69 / Avg: 481.65 / Max: 482.98Min: 482.26 / Avg: 484 / Max: 485.48Min: 484.32 / Avg: 485.43 / Max: 486.151. (CC) gcc options: -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU4321a48121620SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 317.8617.8816.7616.84MIN: 17.69MIN: 17.63MIN: 16.26MIN: 16.551. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU4321a510152025Min: 17.86 / Avg: 17.86 / Max: 17.87Min: 17.86 / Avg: 17.88 / Max: 17.91Min: 16.7 / Avg: 16.76 / Max: 16.8Min: 16.79 / Avg: 16.84 / Max: 16.891. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU4321a510152025SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 320.7720.7520.0720.08MIN: 20.69MIN: 20.67MIN: 19.96MIN: 19.981. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU4321a510152025Min: 20.75 / Avg: 20.77 / Max: 20.8Min: 20.73 / Avg: 20.75 / Max: 20.8Min: 20.05 / Avg: 20.07 / Max: 20.1Min: 20.05 / Avg: 20.08 / Max: 20.111. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Default43211.28842.57683.86525.15366.442SE +/- 0.009, N = 3SE +/- 0.027, N = 3SE +/- 0.022, N = 3SE +/- 0.017, N = 35.7095.7115.7195.7261. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -ljpeg -lgif -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Default4321246810Min: 5.69 / Avg: 5.71 / Max: 5.72Min: 5.67 / Avg: 5.71 / Max: 5.76Min: 5.69 / Avg: 5.72 / Max: 5.76Min: 5.69 / Avg: 5.73 / Max: 5.751. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -ljpeg -lgif -lpthread

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics proxy application. Learn more via the OpenBenchmarking.org test page.
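The result is reported in zones per second (z/s) from the MPI+OpenMP build; a comparable manual run (problem size and iteration count are illustrative, and the MPI rank count must be a cube) would look like:

  # 8 ranks (2x2x2 decomposition), 30^3 elements per domain, 100 iterations
  mpirun -np 8 ./lulesh2.0 -s 30 -i 100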

OpenBenchmarking.orgz/s, More Is BetterLULESH 2.0.3432130060090012001500SE +/- 1.89, N = 3SE +/- 0.47, N = 3SE +/- 0.70, N = 3SE +/- 1.22, N = 31572.561572.441570.801585.771. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi
OpenBenchmarking.orgz/s, More Is BetterLULESH 2.0.3432130060090012001500Min: 1568.85 / Avg: 1572.56 / Max: 1575.02Min: 1571.85 / Avg: 1572.44 / Max: 1573.38Min: 1569.46 / Avg: 1570.8 / Max: 1571.83Min: 1583.49 / Avg: 1585.77 / Max: 1587.661. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
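The Rhodopsin protein case is one of LAMMPS' standard bench inputs; a comparable manual run (the binary name varies by build, e.g. lmp or lmp_mpi, and the input path follows the stock LAMMPS bench directory) would be:

  mpirun -np 12 lmp -in bench/in.rhodo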

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin Protein43211.1362.2723.4084.5445.68SE +/- 0.025, N = 3SE +/- 0.021, N = 3SE +/- 0.005, N = 3SE +/- 0.013, N = 34.9965.0345.0495.0291. (CXX) g++ options: -O3 -pthread -lm
OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin Protein4321246810Min: 4.97 / Avg: 5 / Max: 5.05Min: 4.99 / Avg: 5.03 / Max: 5.06Min: 5.04 / Avg: 5.05 / Max: 5.06Min: 5.02 / Avg: 5.03 / Max: 5.061. (CXX) g++ options: -O3 -pthread -lm

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: DXT1432130060090012001500SE +/- 0.57, N = 3SE +/- 3.85, N = 3SE +/- 2.90, N = 3SE +/- 3.68, N = 31503.951495.411496.861499.711. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: DXT1432130060090012001500Min: 1503.01 / Avg: 1503.95 / Max: 1504.99Min: 1491.33 / Avg: 1495.41 / Max: 1503.1Min: 1493.47 / Avg: 1496.86 / Max: 1502.63Min: 1492.35 / Avg: 1499.71 / Max: 1503.671. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU4321a0.95911.91822.87733.83644.7955SE +/- 0.00315, N = 3SE +/- 0.00176, N = 3SE +/- 0.00463, N = 3SE +/- 0.00597, N = 34.253934.262664.239044.23692MIN: 4.22MIN: 4.23MIN: 4.2MIN: 4.21. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU4321a246810Min: 4.25 / Avg: 4.25 / Max: 4.26Min: 4.26 / Avg: 4.26 / Max: 4.27Min: 4.23 / Avg: 4.24 / Max: 4.25Min: 4.23 / Avg: 4.24 / Max: 4.251. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU4321a3691215SE +/- 0.01901, N = 3SE +/- 0.00861, N = 3SE +/- 0.02003, N = 3SE +/- 0.00635, N = 38.987568.973408.891128.93590MIN: 8.91MIN: 8.92MIN: 8.82MIN: 8.891. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU4321a3691215Min: 8.95 / Avg: 8.99 / Max: 9.01Min: 8.96 / Avg: 8.97 / Max: 8.99Min: 8.86 / Avg: 8.89 / Max: 8.93Min: 8.93 / Avg: 8.94 / Max: 8.951. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

131 Results Shown

OpenFOAM
WebP2 Image Encode:
  Quality 100, Lossless Compression
  Quality 95, Compression Effort 7
CP2K Molecular Dynamics
WebP2 Image Encode
OpenFOAM
JPEG XL
NAS Parallel Benchmarks
GROMACS
Timed Godot Game Engine Compilation
Gcrypt Library
CloverLeaf
dav1d
VKMark
ONNX Runtime:
  fcn-resnet101-11 - OpenMP CPU
  bertsquad-10 - OpenMP CPU
  yolov4 - OpenMP CPU
  shufflenet-v2-10 - OpenMP CPU
  super-resolution-10 - OpenMP CPU
Ngspice
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  resnet-v2-50
  SqueezeNetV1.0
Pennant
JPEG XL
Kripke
Ngspice
NAS Parallel Benchmarks
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
Warsow
oneDNN:
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
Chaos Group V-RAY
Pennant
toyBrot Fractal Generator:
  C++ Tasks
  C++ Threads
  OpenMP
GnuPG
JPEG XL Decoding
ASKAP:
  tConvolve MT - Degridding
  tConvolve MT - Gridding
JPEG XL Decoding
rav1e:
  5
  1
Cryptsetup:
  Twofish-XTS 512b Decryption
  Twofish-XTS 512b Encryption
  Serpent-XTS 512b Decryption
  Serpent-XTS 512b Encryption
  AES-XTS 512b Decryption
  AES-XTS 512b Encryption
  Twofish-XTS 256b Decryption
  Twofish-XTS 256b Encryption
  Serpent-XTS 256b Decryption
  Serpent-XTS 256b Encryption
  AES-XTS 256b Decryption
  AES-XTS 256b Encryption
  PBKDF2-whirlpool
  PBKDF2-sha512
lzbench:
  XZ 0 - Decompression
  XZ 0 - Compression
rav1e
ParaView:
  Wavelet Volume - 1920 x 1080:
    MiVoxels / Sec
    Frames / Sec
Google SynthMark
lzbench:
  Zstd 8 - Decompression
  Zstd 8 - Compression
ASKAP:
  tConvolve MPI - Gridding
  tConvolve MPI - Degridding
dav1d
QMCPACK
lzbench:
  Crush 0 - Decompression
  Crush 0 - Compression
Redis
QuantLib
lzbench:
  Brotli 2 - Decompression
  Brotli 2 - Compression
ParaView:
  Wavelet Contour - 1920 x 1080:
    MiPolys / Sec
    Frames / Sec
rav1e
JPEG XL
lzbench:
  Brotli 0 - Decompression
  Brotli 0 - Compression
Etcpak
lzbench:
  Zstd 1 - Decompression
  Zstd 1 - Compression
dav1d
lzbench:
  Libdeflate 1 - Decompression
  Libdeflate 1 - Compression
Unpacking Firefox
JPEG XL
WavPack Audio Encoding
oneDNN:
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
TNN
ASKAP
TNN
WebP2 Image Encode
Redis
Monkey Audio Encoding
JPEG XL
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
Redis:
  LPOP
  SET
ASKAP:
  tConvolve OpenMP - Degridding
  tConvolve OpenMP - Gridding
Etcpak
Redis
Algebraic Multi-Grid Benchmark
Etcpak
Opus Codec Encoding
oneDNN:
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
NAS Parallel Benchmarks
oneDNN:
  IP Shapes 3D - u8s8f32 - CPU
  IP Shapes 3D - f32 - CPU
JPEG XL
dav1d
oneDNN:
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
WebP2 Image Encode
LULESH
LAMMPS Molecular Dynamics Simulator
Etcpak
oneDNN:
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU