EPYC 7763 LLVM Clang Compiler Tests

AMD EPYC 7763 64-Core testing with a Supermicro H12SSL-i v1.01 (2.0 BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2104129-IB-EPYC7763L05
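For example, assuming the Phoronix Test Suite is already installed and honors the usual CC/CXX and CFLAGS/CXXFLAGS environment overrides (the system notes below confirm the CFLAGS/CXXFLAGS values used; selecting Clang via CC/CXX is an assumption about how these runs were set up), a comparable local run would look roughly like:

  # assumed: point the suite at Clang and match the "Clang 12.0" run's flags
  export CC=clang CXX=clang++
  export CFLAGS="-O3 -march=native" CXXFLAGS="-O3 -march=native"
  # run the same test selection and compare against this public result file
  phoronix-test-suite benchmark 2104129-IB-EPYC7763L05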

Tests in this result file fall within the following categories:
Audio Encoding 3 Tests
AV1 4 Tests
C++ Boost Tests 2 Tests
C/C++ Compiler Tests 16 Tests
CPU Massive 16 Tests
Creator Workloads 22 Tests
Cryptography 3 Tests
Encoding 10 Tests
Finance 2 Tests
Game Development 2 Tests
HPC - High Performance Computing 4 Tests
Imaging 6 Tests
Machine Learning 2 Tests
Multi-Core 14 Tests
NVIDIA GPU Compute 2 Tests
Raytracing 3 Tests
Renderers 3 Tests
Scientific Computing 2 Tests
Server 2 Tests
Server CPU Tests 8 Tests
Single-Threaded 4 Tests
Texture Compression 2 Tests
Video Encoding 7 Tests

Test Runs

  Result Identifier   Date            Test Duration
  Clang 12.0          April 10 2021   8 Hours, 55 Minutes
  Clang 11.0          April 11 2021   7 Hours, 36 Minutes
  Clang 12.0 LTO      April 12 2021   23 Minutes

EPYC 7763 LLVM Clang Compiler Tests - OpenBenchmarking.org - Phoronix Test Suite

  Processor: AMD EPYC 7763 64-Core @ 2.45GHz (64 Cores / 128 Threads)
  Motherboard: Supermicro H12SSL-i v1.01 (2.0 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 126GB
  Disk: 3841GB Micron_9300_MTFDHAL3T8TDP
  Graphics: ASPEED
  Network: 2 x Broadcom NetXtreme BCM5720 2-port PCIe
  OS: Ubuntu 20.04
  Kernel: 5.12.0-051200rc6daily20210408-generic (x86_64) 20210407
  Desktop: GNOME Shell 3.36.4
  Display Server: X Server 1.20.8
  Compilers: Clang 12.0.0-++20210409092622+fa0971b87fb2-1~exp1~20210409193326.73 + Clang 11.0.0-2~ubuntu20.04.1
  File-System: ext4
  Screen Resolution: 1024x768

EPYC 7763 LLVM Clang Compiler Tests Performance - System Logs:
  - Transparent Huge Pages: madvise
  - Clang 12.0: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
  - Clang 11.0: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
  - Clang 12.0 LTO: CXXFLAGS="-O3 -march=native -flto" CFLAGS="-O3 -march=native -flto"
  - Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled)
  - CPU Microcode: 0xa001119
  - Python 3.8.2
  - Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
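
As a point of reference, the three configurations differ only in the compiler release and in whether link-time optimization is enabled. A minimal sketch of what that means for a single translation unit (the file name example.cpp and the versioned clang++-11/clang++-12 binary names are illustrative, not taken from the result file):

  # Clang 11.0 and Clang 12.0 runs: identical flags, different compiler release
  clang++-11 -O3 -march=native example.cpp -o example-clang11
  clang++-12 -O3 -march=native example.cpp -o example-clang12
  # Clang 12.0 LTO run: -flto added, so optimization also happens across
  # translation units at link time via LLVM bitcode objects
  clang++-12 -O3 -march=native -flto example.cpp -o example-clang12-lto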

Result Overview (Phoronix Test Suite relative performance chart, Clang 12.0 vs. Clang 11.0 vs. Clang 12.0 LTO; scale 100% to 126%): Etcpak, toyBrot Fractal Generator, Timed MrBayes Analysis, LZ4 Compression, QuantLib.

EPYC 7763 LLVM Clang Compiler Tests - condensed per-test results index for Clang 12.0, Clang 11.0, and Clang 12.0 LTO (individual results are charted test by test below). OpenBenchmarking.org

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1, Test: CPU BLAS - dGEMM-NN (GFLOPs/s, more is better)
  Clang 12.0: 48.6 (SE +/- 0.05, N = 12; Min: 48.1 / Avg: 48.58 / Max: 48.8)
  Clang 11.0: 83.6 (SE +/- 0.06, N = 15; Min: 83.3 / Avg: 83.64 / Max: 83.9)
  1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL

ViennaCL 1.7.1, Test: CPU BLAS - dGEMM-TN (GFLOPs/s, more is better)
  Clang 12.0: 51.9 (SE +/- 0.09, N = 12; Min: 51.3 / Avg: 51.89 / Max: 52.2)
  Clang 11.0: 88.3 (SE +/- 0.02, N = 15; Min: 88.1 / Avg: 88.28 / Max: 88.4)
  1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2, Video Input: Chimera 1080p 10-bit (FPS, more is better)
  Clang 12.0: 308.32 (SE +/- 0.93, N = 3; Min: 306.49 / Avg: 308.32 / Max: 309.53; MIN: 220.53 / MAX: 490.51)
  Clang 11.0: 184.19 (SE +/- 0.48, N = 3; Min: 183.38 / Avg: 184.19 / Max: 185.04; -lm; MIN: 114.52 / MAX: 310.5)
  1. (CC) gcc options: -O3 -march=native -pthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", with a focus on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: DXT1 (Mpx/s, more is better)
  Clang 12.0: 2718.53 (SE +/- 2.64, N = 3; Min: 2714.63 / Avg: 2718.53 / Max: 2723.56)
  Clang 11.0: 1872.76 (SE +/- 1.69, N = 3; Min: 1869.74 / Avg: 1872.76 / Max: 1875.59)
  Clang 12.0 LTO: 2719.99 (SE +/- 6.09, N = 3; Min: 2708.19 / Avg: 2719.99 / Max: 2728.51)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Clang 12.0: 1.221320 (SE +/- 0.018279, N = 4; Min: 1.18 / Avg: 1.22 / Max: 1.25; MIN: 1.13)
  Clang 11.0: 0.841169 (SE +/- 0.000480, N = 3; Min: 0.84 / Avg: 0.84 / Max: 0.84; MIN: 0.82)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", with a focus on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC1 (Mpx/s, more is better)
  Clang 12.0: 284.64 (SE +/- 0.11, N = 3; Min: 284.42 / Avg: 284.64 / Max: 284.76)
  Clang 11.0: 205.07 (SE +/- 0.03, N = 3; Min: 205 / Avg: 205.07 / Max: 205.11)
  Clang 12.0 LTO: 284.76 (SE +/- 0.06, N = 3; Min: 284.64 / Avg: 284.76 / Max: 284.84)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Clang 12.0: 2.03606 (SE +/- 0.01922, N = 12; Min: 1.96 / Avg: 2.04 / Max: 2.24; MIN: 1.81)
  Clang 11.0: 1.60540 (SE +/- 0.00118, N = 3; Min: 1.6 / Avg: 1.61 / Max: 1.61; MIN: 1.55)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1, Test: CPU BLAS - dGEMM-NT (GFLOPs/s, more is better)
  Clang 12.0: 65.7 (SE +/- 0.56, N = 12; Min: 59.7 / Avg: 65.68 / Max: 66.7)
  Clang 11.0: 79.3 (SE +/- 0.03, N = 15; Min: 79.1 / Avg: 79.32 / Max: 79.4)
  1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", with a focus on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: ETC2 (Mpx/s, more is better)
  Clang 12.0: 202.09 (SE +/- 0.02, N = 3; Min: 202.05 / Avg: 202.08 / Max: 202.12)
  Clang 11.0: 168.82 (SE +/- 0.02, N = 3; Min: 168.8 / Avg: 168.82 / Max: 168.85)
  Clang 12.0 LTO: 202.10 (SE +/- 0.04, N = 3; Min: 202.02 / Avg: 202.1 / Max: 202.16)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Clang 12.0: 0.710124 (SE +/- 0.011383, N = 3; Min: 0.7 / Avg: 0.71 / Max: 0.73; MIN: 0.64)
  Clang 11.0: 0.594729 (SE +/- 0.008914, N = 3; Min: 0.58 / Avg: 0.59 / Max: 0.61; MIN: 0.53)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3, Test: Blowfish (MiB/s, more is better)
  Clang 12.0: 380.05 (SE +/- 0.05, N = 3; Min: 379.98 / Avg: 380.05 / Max: 380.14)
  Clang 11.0: 319.23 (SE +/- 1.73, N = 3; Min: 315.77 / Avg: 319.23 / Max: 321.03)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1, Test: CPU BLAS - dGEMM-TT (GFLOPs/s, more is better)
  Clang 12.0: 73.0 (SE +/- 0.07, N = 12; Min: 72.4 / Avg: 73.01 / Max: 73.3)
  Clang 11.0: 84.0 (SE +/- 0.02, N = 14; Min: 83.8 / Avg: 84.04 / Max: 84.1)
  1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C2670 (Seconds, fewer is better)
  Clang 12.0: 118.87 (SE +/- 0.53, N = 3; Min: 117.93 / Avg: 118.87 / Max: 119.78)
  Clang 11.0: 103.83 (SE +/- 0.06, N = 3; Min: 103.71 / Avg: 103.83 / Max: 103.9)
  1. (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18, Implementation: TBB (ms, fewer is better)
  Clang 12.0: 6780 (SE +/- 87.21, N = 3; Min: 6610 / Avg: 6780.33 / Max: 6898; -lm -lgcc -lgcc_s -lc)
  Clang 11.0: 6247 (SE +/- 67.11, N = 7; Min: 6089 / Avg: 6247.43 / Max: 6556; -lm -lgcc -lgcc_s -lc)
  Clang 12.0 LTO: 7085 (SE +/- 86.43, N = 3; Min: 6982 / Avg: 7085.33 / Max: 7257; -flto)
  1. (CXX) g++ options: -O3 -march=native -lpthread

toyBrot Fractal Generator 2020-11-18, Implementation: C++ Threads (ms, fewer is better)
  Clang 12.0: 7220 (SE +/- 30.90, N = 3; Min: 7171 / Avg: 7219.67 / Max: 7277; -lm -lgcc -lgcc_s -lc)
  Clang 11.0: 6395 (SE +/- 25.04, N = 3; Min: 6365 / Avg: 6395.33 / Max: 6445; -lm -lgcc -lgcc_s -lc)
  Clang 12.0 LTO: 7143 (SE +/- 15.06, N = 3; Min: 7121 / Avg: 7143.33 / Max: 7172; -flto)
  1. (CXX) g++ options: -O3 -march=native -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126, Encode Settings: Quality 100, Compression Effort 5 (Seconds, fewer is better)
  Clang 12.0: 6.690 (SE +/- 0.006, N = 3; Min: 6.68 / Avg: 6.69 / Max: 6.7)
  Clang 11.0: 7.366 (SE +/- 0.022, N = 3; Min: 7.34 / Avg: 7.37 / Max: 7.41)
  1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

SciMark

This test runs the ANSI C version of SciMark 2.0, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This test is made up of Fast Fourier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.

SciMark 2.0, Computational Test: Fast Fourier Transform (Mflops, more is better)
  Clang 12.0: 363.85 (SE +/- 0.46, N = 3; Min: 362.97 / Avg: 363.85 / Max: 364.55)
  Clang 11.0: 399.16 (SE +/- 0.67, N = 3; Min: 398.38 / Avg: 399.16 / Max: 400.5)
  1. (CC) gcc options: -O3 -march=native -lm

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18, Implementation: C++ Tasks (ms, fewer is better)
  Clang 12.0: 7437 (SE +/- 33.67, N = 3; Min: 7402 / Avg: 7436.67 / Max: 7504; -lm -lgcc -lgcc_s -lc)
  Clang 11.0: 6836 (SE +/- 7.31, N = 3; Min: 6826 / Avg: 6835.67 / Max: 6850; -lm -lgcc -lgcc_s -lc)
  Clang 12.0 LTO: 7367 (SE +/- 17.21, N = 3; Min: 7342 / Avg: 7367 / Max: 7400; -flto)
  1. (CXX) g++ options: -O3 -march=native -lpthread

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1, Test: CPU BLAS - dGEMV-T (GB/s, more is better)
  Clang 12.0: 626 (SE +/- 4.04, N = 12; Min: 586 / Avg: 626 / Max: 642)
  Clang 11.0: 677 (SE +/- 1.41, N = 14; Min: 669 / Avg: 676.79 / Max: 685)
  1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20, Post-Processing Benchmark (Mpix/sec, more is better)
  Clang 12.0: 41.78 (SE +/- 0.12, N = 3; Min: 41.59 / Avg: 41.78 / Max: 42.01)
  Clang 11.0: 38.71 (SE +/- 0.33, N = 3; Min: 38.07 / Avg: 38.71 / Max: 39.13)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -ljpeg -lz -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Clang 12.0: 3.28507 (SE +/- 0.01639, N = 3; Min: 3.25 / Avg: 3.29 / Max: 3.3; MIN: 3.15)
  Clang 11.0: 3.52787 (SE +/- 0.04735, N = 3; Min: 3.44 / Avg: 3.53 / Max: 3.61; MIN: 3.29)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6, Build: Float + SSE - Size: 1D FFT Size 32 (Mflops, more is better)
  Clang 12.0: 15649 (SE +/- 48.79, N = 3; Min: 15564 / Avg: 15648.67 / Max: 15733)
  Clang 11.0: 14590 (SE +/- 129.55, N = 3; Min: 14332 / Avg: 14589.67 / Max: 14742)
  1. (CC) gcc options: -pthread -O3 -march=native -lm

SciMark

This test runs the ANSI C version of SciMark 2.0, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This test is made up of Fast Fourier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.

SciMark 2.0, Computational Test: Sparse Matrix Multiply (Mflops, more is better)
  Clang 12.0: 4280.22 (SE +/- 10.41, N = 3; Min: 4266.04 / Avg: 4280.22 / Max: 4300.51)
  Clang 11.0: 4590.37 (SE +/- 3.87, N = 3; Min: 4583.08 / Avg: 4590.37 / Max: 4596.26)
  1. (CC) gcc options: -O3 -march=native -lm

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Rotate (Iterations Per Minute, more is better)
  Clang 12.0: 712 (SE +/- 2.60, N = 3; Min: 708 / Avg: 712.33 / Max: 717)
  Clang 11.0: 665 (SE +/- 1.33, N = 3; Min: 662 / Avg: 664.67 / Max: 666)
  1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18, Implementation: OpenMP (ms, fewer is better)
  Clang 12.0: 7507 (SE +/- 14.89, N = 3; Min: 7478 / Avg: 7506.67 / Max: 7528)
  Clang 11.0: 7029 (SE +/- 20.42, N = 3; Min: 6989 / Avg: 7029.33 / Max: 7055)
  1. (CXX) g++ options: -O3 -march=native -lpthread -lm -lgcc -lgcc_s -lc

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3, Test: Twofish - Decrypt (MiB/s, more is better)
  Clang 12.0: 321.19 (SE +/- 0.16, N = 3; Min: 320.87 / Avg: 321.19 / Max: 321.41)
  Clang 11.0: 302.41 (SE +/- 0.15, N = 3; Min: 302.12 / Avg: 302.41 / Max: 302.62)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Clang 12.0: 597.48 (SE +/- 3.02, N = 3; Min: 591.68 / Avg: 597.48 / Max: 601.87; MIN: 580.8)
  Clang 11.0: 563.25 (SE +/- 0.10, N = 3; Min: 563.05 / Avg: 563.25 / Max: 563.39; MIN: 551.31)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C7552 (Seconds, fewer is better)
  Clang 12.0: 95.96 (SE +/- 1.11, N = 6; Min: 91.73 / Avg: 95.96 / Max: 97.88)
  Clang 11.0: 90.53 (SE +/- 1.37, N = 3; Min: 87.79 / Avg: 90.53 / Max: 92.04)
  1. (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7, Primate Phylogeny Analysis (Seconds, fewer is better)
  Clang 12.0: 89.12 (SE +/- 0.98, N = 3; Min: 87.37 / Avg: 89.12 / Max: 90.78)
  Clang 11.0: 88.62 (SE +/- 0.98, N = 3; Min: 86.96 / Avg: 88.62 / Max: 90.35)
  Clang 12.0 LTO: 93.63 (SE +/- 1.09, N = 3; Min: 92.03 / Avg: 93.63 / Max: 95.71; -flto)
  1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -O3 -std=c99 -pedantic -march=native -lm

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3, Input: PNG - Encode Speed: 5 (MP/s, more is better)
  Clang 12.0: 74.27 (SE +/- 0.17, N = 3; Min: 73.95 / Avg: 74.27 / Max: 74.55)
  Clang 11.0: 78.41 (SE +/- 0.24, N = 3; Min: 77.95 / Avg: 78.41 / Max: 78.74)
  1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Clang 12.0: 593.97 (SE +/- 9.50, N = 3; Min: 581.26 / Avg: 593.97 / Max: 612.56; MIN: 570.44)
  Clang 11.0: 563.20 (SE +/- 0.83, N = 3; Min: 561.62 / Avg: 563.2 / Max: 564.42; MIN: 550.23)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3, Test: Twofish (MiB/s, more is better)
  Clang 12.0: 315.41 (SE +/- 0.13, N = 3; Min: 315.17 / Avg: 315.41 / Max: 315.6)
  Clang 11.0: 299.21 (SE +/- 0.09, N = 3; Min: 299.05 / Avg: 299.21 / Max: 299.37)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Botan 2.17.3, Test: AES-256 (MiB/s, more is better)
  Clang 12.0: 4659.34 (SE +/- 2.14, N = 3; Min: 4655.62 / Avg: 4659.34 / Max: 4663.03)
  Clang 11.0: 4901.13 (SE +/- 2.16, N = 3; Min: 4896.81 / Avg: 4901.13 / Max: 4903.54)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126, Encode Settings: Quality 100, Lossless Compression (Seconds, fewer is better)
  Clang 12.0: 374.04 (SE +/- 0.49, N = 3; Min: 373.2 / Avg: 374.04 / Max: 374.91)
  Clang 11.0: 392.85 (SE +/- 0.17, N = 3; Min: 392.57 / Avg: 392.85 / Max: 393.17)
  1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3, Test: KASUMI - Decrypt (MiB/s, more is better)
  Clang 12.0: 84.23 (SE +/- 0.06, N = 3; Min: 84.12 / Avg: 84.23 / Max: 84.29)
  Clang 11.0: 80.22 (SE +/- 0.04, N = 3; Min: 80.17 / Avg: 80.22 / Max: 80.31)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Clang 12.0: 590.18 (SE +/- 1.89, N = 3; Min: 586.49 / Avg: 590.18 / Max: 592.75; MIN: 575.41)
  Clang 11.0: 562.97 (SE +/- 0.25, N = 3; Min: 562.47 / Avg: 562.97 / Max: 563.31; MIN: 551.49)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2, Throughput Test: DistinctUserID (GB/s, more is better)
  Clang 12.0: 4.62 (SE +/- 0.00, N = 3; Min: 4.61 / Avg: 4.62 / Max: 4.62)
  Clang 11.0: 4.41 (SE +/- 0.00, N = 3; Min: 4.41 / Avg: 4.41 / Max: 4.42)
  1. (CXX) g++ options: -O3 -march=native -pthread

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6, Build: Stock - Size: 1D FFT Size 2048 (Mflops, more is better)
  Clang 12.0: 10467.0 (SE +/- 7.75, N = 3; Min: 10457 / Avg: 10466.67 / Max: 10482)
  Clang 11.0: 10004.2 (SE +/- 28.76, N = 3; Min: 9958 / Avg: 10004.23 / Max: 10057)
  1. (CC) gcc options: -pthread -O3 -march=native -lm

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3, Test: AES-256 - Decrypt (MiB/s, more is better)
  Clang 12.0: 4682.46 (SE +/- 4.78, N = 3; Min: 4675.07 / Avg: 4682.46 / Max: 4691.39)
  Clang 11.0: 4895.56 (SE +/- 1.35, N = 3; Min: 4893.3 / Avg: 4895.56 / Max: 4897.96)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6, Build: Stock - Size: 1D FFT Size 4096 (Mflops, more is better)
  Clang 12.0: 9862.0 (SE +/- 101.36, N = 3; Min: 9659.8 / Avg: 9862.03 / Max: 9975.4)
  Clang 11.0: 9438.6 (SE +/- 15.16, N = 3; Min: 9413.4 / Avg: 9438.63 / Max: 9465.8)
  1. (CC) gcc options: -pthread -O3 -march=native -lm

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3, Test: KASUMI (MiB/s, more is better)
  Clang 12.0: 82.64 (SE +/- 0.01, N = 3; Min: 82.63 / Avg: 82.64 / Max: 82.65)
  Clang 11.0: 79.15 (SE +/- 0.06, N = 3; Min: 79.08 / Avg: 79.15 / Max: 79.26)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2, Throughput Test: PartialTweets (GB/s, more is better)
  Clang 12.0: 4.60 (SE +/- 0.01, N = 3; Min: 4.59 / Avg: 4.6 / Max: 4.61)
  Clang 11.0: 4.41 (SE +/- 0.01, N = 3; Min: 4.39 / Avg: 4.41 / Max: 4.42)
  1. (CXX) g++ options: -O3 -march=native -pthread

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81, AI Chess Performance (Nodes Per Second, more is better)
  Clang 12.0: 1570966 (SE +/- 1798.40, N = 5; Min: 1569168 / Avg: 1570966.4 / Max: 1578160)
  Clang 11.0: 1638265 (SE +/- 2852.59, N = 5; Min: 1634356 / Avg: 1638264.6 / Max: 1649035)
  1. (CC) gcc options: -O3 -march=native

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3, Test: CAST-256 - Decrypt (MiB/s, more is better)
  Clang 12.0: 133.05 (SE +/- 0.01, N = 3; Min: 133.03 / Avg: 133.05 / Max: 133.07)
  Clang 11.0: 127.74 (SE +/- 0.01, N = 3; Min: 127.72 / Avg: 127.74 / Max: 127.75)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Swirl (Iterations Per Minute, more is better)
  Clang 12.0: 1993 (SE +/- 6.57, N = 3; Min: 1980 / Avg: 1992.67 / Max: 2002)
  Clang 11.0: 1915 (SE +/- 12.41, N = 3; Min: 1897 / Avg: 1915.33 / Max: 1939)
  1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

SciMark

This test runs the ANSI C version of SciMark 2.0, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This test is made up of Fast Fourier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.

SciMark 2.0, Computational Test: Composite (Mflops, more is better)
  Clang 12.0: 3190.62 (SE +/- 1.11, N = 3; Min: 3188.61 / Avg: 3190.62 / Max: 3192.43)
  Clang 11.0: 3319.34 (SE +/- 15.12, N = 3; Min: 3289.1 / Avg: 3319.34 / Max: 3335.12)
  1. (CC) gcc options: -O3 -march=native -lm

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, more is better)
  Clang 12.0: 56684 (SE +/- 702.52, N = 15; Min: 53398.67 / Avg: 56683.93 / Max: 61036.19)
  Clang 11.0: 54488 (SE +/- 883.12, N = 3; Min: 53558.74 / Avg: 54487.76 / Max: 56253.19)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute, more is better)
  Clang 12.0: 333 (SE +/- 4.15, N = 4; Min: 321.5 / Avg: 332.75 / Max: 341.5)
  Clang 11.0: 346 (SE +/- 1.42, N = 3; Min: 344 / Avg: 345.67 / Max: 348.5)
  1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -ffunction-sections -fdata-sections -ldl -lrt

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, fewer is better)
  Clang 12.0: 4.431 (SE +/- 0.054, N = 15; Min: 4.11 / Avg: 4.43 / Max: 4.69)
  Clang 11.0: 4.603 (SE +/- 0.074, N = 3; Min: 4.46 / Avg: 4.6 / Max: 4.68)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: fcn-resnet101-11 - Device: OpenMP CPU (Inferences Per Minute, more is better)
  Clang 12.0: 112 (SE +/- 0.50, N = 3; Min: 111.5 / Avg: 112 / Max: 113)
  Clang 11.0: 108 (SE +/- 0.29, N = 3; Min: 107.5 / Avg: 108 / Max: 108.5)
  1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -ffunction-sections -fdata-sections -ldl -lrt

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2, Throughput Test: LargeRandom (GB/s, more is better)
  Clang 12.0: 0.84 (SE +/- 0.00, N = 3; Min: 0.84 / Avg: 0.84 / Max: 0.84)
  Clang 11.0: 0.81 (SE +/- 0.00, N = 3; Min: 0.81 / Avg: 0.81 / Max: 0.81)
  1. (CXX) g++ options: -O3 -march=native -pthread

SciMark

This test runs the ANSI C version of SciMark 2.0, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This test is made up of Fast Fourier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.

SciMark 2.0, Computational Test: Dense LU Matrix Factorization (Mflops, more is better)
  Clang 12.0: 8848.40 (SE +/- 7.16, N = 3; Min: 8837.66 / Avg: 8848.4 / Max: 8861.98)
  Clang 11.0: 9146.88 (SE +/- 77.81, N = 3; Min: 8991.26 / Avg: 9146.88 / Max: 9225.1)
  1. (CC) gcc options: -O3 -march=native -lm

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3, Test: CAST-256 (MiB/s, more is better)
  Clang 12.0: 132.82 (SE +/- 0.02, N = 3; Min: 132.79 / Avg: 132.82 / Max: 132.85)
  Clang 11.0: 128.59 (SE +/- 0.02, N = 3; Min: 128.56 / Avg: 128.59 / Max: 128.61)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3, Input: JPEG - Encode Speed: 8 (MP/s, more is better)
  Clang 12.0: 28.13 (SE +/- 0.03, N = 3; Min: 28.1 / Avg: 28.13 / Max: 28.19)
  Clang 11.0: 27.24 (SE +/- 0.01, N = 3; Min: 27.21 / Avg: 27.24 / Max: 27.25)
  1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better):
    Clang 12.0: 25.22 (SE +/- 0.04, N = 3, Min: 25.16 / Avg: 25.22 / Max: 25.28)
    Clang 11.0: 26.03 (SE +/- 0.22, N = 3, Min: 25.81 / Avg: 26.03 / Max: 26.48)
    1. (CXX) g++ options: -O3 -fPIC -lm

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.
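
A minimal sketch of the FFTW double-precision API that such tests exercise (size and input data are placeholders):

    #include <cstdio>
    #include <fftw3.h>

    // Link with -lfftw3 (or -lfftw3f for the single-precision "Float + SSE" interface).
    int main() {
        const int n = 1024;
        fftw_complex* in  = static_cast<fftw_complex*>(fftw_malloc(sizeof(fftw_complex) * n));
        fftw_complex* out = static_cast<fftw_complex*>(fftw_malloc(sizeof(fftw_complex) * n));

        // Planning is where FFTW selects an algorithm; benchmarks typically reuse one plan.
        fftw_plan plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

        for (int i = 0; i < n; ++i) { in[i][0] = i % 7; in[i][1] = 0.0; }  // sample input
        fftw_execute(plan);
        std::printf("out[1] = (%f, %f)\n", out[1][0], out[1][1]);

        fftw_destroy_plan(plan);
        fftw_free(in);
        fftw_free(out);
        return 0;
    }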

FFTW 3.3.6 - Build: Stock - Size: 2D FFT Size 1024 (Mflops, More Is Better):
    Clang 12.0: 9088.3 (SE +/- 48.25, N = 3, Min: 8998.3 / Avg: 9088.27 / Max: 9163.5)
    Clang 11.0: 8809.6 (SE +/- 45.95, N = 3, Min: 8734.5 / Avg: 8809.57 / Max: 8893)
    1. (CC) gcc options: -pthread -O3 -march=native -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
    Clang 12.0: 88.78 (SE +/- 1.07, N = 3, Min: 87.54 / Avg: 88.78 / Max: 90.92)
    Clang 11.0: 86.09 (SE +/- 0.51, N = 3, Min: 85.15 / Avg: 86.09 / Max: 86.92)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, More Is Better):
    Clang 12.0: 11.47 (SE +/- 0.17, N = 3, Min: 11.18 / Avg: 11.47 / Max: 11.77)
    Clang 11.0: 11.82 (SE +/- 0.16, N = 4, Min: 11.36 / Avg: 11.82 / Max: 12.13)
    1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
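
A minimal sketch of the LZ4 block API round-trip (the in-memory string is a placeholder; the test itself compresses an Ubuntu ISO through the lz4 tooling):

    #include <cstdio>
    #include <cstring>
    #include <string>
    #include <vector>
    #include <lz4.h>

    int main() {
        std::string src(100000, 'a');                        // highly compressible sample data
        int max_dst = LZ4_compressBound(static_cast<int>(src.size()));
        std::vector<char> compressed(max_dst);

        int csize = LZ4_compress_default(src.data(), compressed.data(),
                                         static_cast<int>(src.size()), max_dst);
        std::vector<char> restored(src.size());
        int dsize = LZ4_decompress_safe(compressed.data(), restored.data(),
                                        csize, static_cast<int>(restored.size()));

        bool ok = dsize == static_cast<int>(src.size()) &&
                  std::memcmp(src.data(), restored.data(), src.size()) == 0;
        std::printf("compressed %zu -> %d bytes, round-trip ok: %d\n", src.size(), csize, ok);
        return 0;
    }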

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, More Is Better):
    Clang 12.0: 52.07 (SE +/- 0.80, N = 3, Min: 50.59 / Avg: 52.07 / Max: 53.36)
    Clang 11.0: 52.35 (SE +/- 0.33, N = 3, Min: 51.96 / Avg: 52.35 / Max: 53)
    Clang 12.0 LTO: 50.93 (SE +/- 0.02, N = 3, Min: 50.91 / Avg: 50.93 / Max: 50.96)
    1. (CC) gcc options: -O3

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Float + SSE - Size: 1D FFT Size 4096 (Mflops, More Is Better):
    Clang 12.0: 45428 (SE +/- 671.66, N = 15, Min: 40185 / Avg: 45428.07 / Max: 49157)
    Clang 11.0: 46676 (SE +/- 413.24, N = 15, Min: 44248 / Avg: 46676.4 / Max: 49417)
    1. (CC) gcc options: -pthread -O3 -march=native -lm

simdjson

This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2 - Throughput Test: Kostya (GB/s, More Is Better):
    Clang 12.0: 2.75 (SE +/- 0.01, N = 3, Min: 2.74 / Avg: 2.75 / Max: 2.77)
    Clang 11.0: 2.68 (SE +/- 0.00, N = 3, Min: 2.68 / Avg: 2.68 / Max: 2.69)
    1. (CXX) g++ options: -O3 -march=native -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
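
A minimal sketch of the oneDNN 2.x C++ API flow (engine, stream, primitive) using a single ReLU primitive; this is only illustrative of the library interface, while the benchmark drives much larger RNN, convolution and matmul problems through benchdnn:

    #include <cstdio>
    #include <cstring>
    #include <vector>
    #include "dnnl.hpp"

    int main() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);
        stream s(eng);

        // A tiny 2x8 f32 tensor; shapes and values are arbitrary examples.
        memory::desc md({2, 8}, memory::data_type::f32, memory::format_tag::nc);
        memory src(md, eng), dst(md, eng);

        std::vector<float> data(16, -1.5f);
        std::memcpy(src.get_data_handle(), data.data(), data.size() * sizeof(float));

        auto relu_d  = eltwise_forward::desc(prop_kind::forward_inference,
                                             algorithm::eltwise_relu, md, 0.f);
        auto relu_pd = eltwise_forward::primitive_desc(relu_d, eng);
        eltwise_forward(relu_pd).execute(s, {{DNNL_ARG_SRC, src}, {DNNL_ARG_DST, dst}});
        s.wait();

        std::printf("relu(-1.5) = %f\n", static_cast<float*>(dst.get_data_handle())[0]);
        return 0;
    }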

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better):
    Clang 12.0: 1305.10 (SE +/- 1.78, N = 3, MIN: 1294.76; run Min/Avg/Max: 1303.14 / 1305.1 / 1308.65)
    Clang 11.0: 1271.91 (SE +/- 9.75, N = 3, MIN: 1252.33; run Min/Avg/Max: 1261.78 / 1271.91 / 1291.41)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
    Clang 12.0: 103.17 (SE +/- 0.31, N = 3, Min: 102.59 / Avg: 103.17 / Max: 103.67)
    Clang 11.0: 100.55 (SE +/- 0.53, N = 3, Min: 99.52 / Avg: 100.55 / Max: 101.29)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only (TPS, More Is Better):
    Clang 12.0: 24310 (SE +/- 303.43, N = 3, Min: 23860.82 / Avg: 24309.92 / Max: 24887.93)
    Clang 11.0: 24943 (SE +/- 289.16, N = 3, Min: 24402.51 / Avg: 24942.89 / Max: 25391.5)
    1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better):
    Clang 12.0: 0.041 (SE +/- 0.001, N = 3, Min: 0.04 / Avg: 0.04 / Max: 0.04)
    Clang 11.0: 0.040 (SE +/- 0.001, N = 3, Min: 0.04 / Avg: 0.04 / Max: 0.04)
    1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 8 (MP/s, More Is Better):
    Clang 12.0: 0.82 (SE +/- 0.00, N = 3, Min: 0.82 / Avg: 0.82 / Max: 0.82)
    Clang 11.0: 0.80 (SE +/- 0.00, N = 3, Min: 0.8 / Avg: 0.8 / Max: 0.8)
    1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
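
A minimal sketch of libwebp's simple encoding API (a solid-color buffer stands in for the 6000x4000 pixel JPEG used by the actual test profile):

    #include <cstdint>
    #include <cstdio>
    #include <vector>
    #include <webp/encode.h>

    int main() {
        const int width = 256, height = 256, stride = width * 3;
        std::vector<uint8_t> rgb(static_cast<size_t>(stride) * height, 0x40);  // sample RGB data

        uint8_t* output = nullptr;
        size_t size = WebPEncodeLosslessRGB(rgb.data(), width, height, stride, &output);
        if (size == 0) { std::puts("encode failed"); return 1; }

        std::printf("lossless WebP: %zu bytes\n", size);
        WebPFree(output);
        return 0;
    }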

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better):
    Clang 12.0: 19.02 (SE +/- 0.02, N = 3, Min: 18.99 / Avg: 19.02 / Max: 19.06)
    Clang 11.0: 18.57 (SE +/- 0.13, N = 3, Min: 18.41 / Avg: 18.57 / Max: 18.83)
    1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg

Opus Codec Encoding

Opus is an open audio codec: a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
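
A minimal sketch of the libopus encoder API that opusenc builds on (one 20 ms frame of silence; parameters are arbitrary examples):

    #include <cstdio>
    #include <vector>
    #include <opus/opus.h>

    int main() {
        int err = 0;
        OpusEncoder* enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
        if (err != OPUS_OK) { std::puts("encoder create failed"); return 1; }

        const int frame_size = 960;                       // 20 ms at 48 kHz
        std::vector<opus_int16> pcm(frame_size * 2, 0);   // interleaved stereo silence
        std::vector<unsigned char> packet(4000);

        opus_int32 bytes = opus_encode(enc, pcm.data(), frame_size,
                                       packet.data(), static_cast<opus_int32>(packet.size()));
        std::printf("encoded frame: %d bytes\n", bytes);

        opus_encoder_destroy(enc);
        return 0;
    }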

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better):
    Clang 12.0: 7.567 (SE +/- 0.013, N = 5, Min: 7.55 / Avg: 7.57 / Max: 7.62)
    Clang 11.0: 7.392 (SE +/- 0.002, N = 5, Min: 7.39 / Avg: 7.39 / Max: 7.4)
    1. (CXX) g++ options: -O3 -march=native -logg -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better):
    Clang 12.0: 1307.49 (SE +/- 3.61, N = 3, MIN: 1293.38; run Min/Avg/Max: 1300.95 / 1307.49 / 1313.42)
    Clang 11.0: 1277.62 (SE +/- 7.11, N = 3, MIN: 1252.39; run Min/Avg/Max: 1265.25 / 1277.62 / 1289.89)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Float + SSE - Size: 1D FFT Size 2048 (Mflops, More Is Better):
    Clang 12.0: 51254 (SE +/- 439.50, N = 3, Min: 50603 / Avg: 51254 / Max: 52091)
    Clang 11.0: 50084 (SE +/- 582.34, N = 3, Min: 49197 / Avg: 50083.67 / Max: 51181)
    1. (CC) gcc options: -pthread -O3 -march=native -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better):
    Clang 12.0: 5.746 (SE +/- 0.013, N = 3, Min: 5.72 / Avg: 5.75 / Max: 5.76)
    Clang 11.0: 5.879 (SE +/- 0.011, N = 3, Min: 5.87 / Avg: 5.88 / Max: 5.9)
    1. (CXX) g++ options: -O3 -fPIC -lm

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Stock - Size: 1D FFT Size 1024 (Mflops, More Is Better):
    Clang 12.0: 10805 (SE +/- 27.10, N = 3, Min: 10774 / Avg: 10805 / Max: 10859)
    Clang 11.0: 10564 (SE +/- 35.53, N = 3, Min: 10508 / Avg: 10564.33 / Max: 10630)
    1. (CC) gcc options: -pthread -O3 -march=native -lm

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6 - Total Time (Seconds, Fewer Is Better):
    Clang 12.0: 16.05 (SE +/- 0.06, N = 3, Min: 15.95 / Avg: 16.05 / Max: 16.15)
    Clang 11.0: 16.41 (SE +/- 0.02, N = 3, Min: 16.38 / Avg: 16.41 / Max: 16.45)
    1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better):
    Clang 12.0: 38.11 (SE +/- 0.43, N = 3, Min: 37.26 / Avg: 38.11 / Max: 38.55)
    Clang 11.0: 37.28 (SE +/- 0.31, N = 3, Min: 36.76 / Avg: 37.28 / Max: 37.83)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better):
    Clang 12.0: 2.36797 (SE +/- 0.02100, N = 3, MIN: 2.01; run Min/Avg/Max: 2.33 / 2.37 / 2.41)
    Clang 11.0: 2.31859 (SE +/- 0.02389, N = 3, MIN: 1.92; run Min/Avg/Max: 2.29 / 2.32 / 2.37)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better):
    Clang 12.0: 1302.70 (SE +/- 3.92, N = 3, MIN: 1289.86; run Min/Avg/Max: 1296.9 / 1302.7 / 1310.16)
    Clang 11.0: 1276.04 (SE +/- 9.46, N = 3, MIN: 1249.65; run Min/Avg/Max: 1257.62 / 1276.04 / 1289.02)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 10 (Seconds, Fewer Is Better):
    Clang 12.0: 3.361 (SE +/- 0.014, N = 3, Min: 3.35 / Avg: 3.36 / Max: 3.39)
    Clang 11.0: 3.429 (SE +/- 0.010, N = 3, Min: 3.41 / Avg: 3.43 / Max: 3.44)
    1. (CXX) g++ options: -O3 -fPIC -lm

SecureMark

SecureMark is an objective, standardized benchmarking framework for measuring the efficiency of cryptographic processing solutions developed by EEMBC. SecureMark-TLS is benchmarking Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks, More Is Better):
    Clang 12.0: 265204 (SE +/- 1778.47, N = 3, Min: 261680.5 / Avg: 265203.86 / Max: 267387.75)
    Clang 11.0: 260119 (SE +/- 407.86, N = 3, Min: 259393.3 / Avg: 260118.78 / Max: 260804.48)
    1. (CC) gcc options: -pedantic -O3

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better):
    Clang 12.0: 38.45 (SE +/- 0.07, N = 3, Min: 38.32 / Avg: 38.45 / Max: 38.54)
    Clang 11.0: 37.73 (SE +/- 0.08, N = 3, Min: 37.64 / Avg: 37.73 / Max: 37.89)
    1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better):
    Clang 12.0: 2.199 (SE +/- 0.001, N = 3, Min: 2.2 / Avg: 2.2 / Max: 2.2)
    Clang 11.0: 2.240 (SE +/- 0.000, N = 3, Min: 2.24 / Avg: 2.24 / Max: 2.24)
    1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better):
    Clang 12.0: 1.17258 (SE +/- 0.00458, N = 3, MIN: 1.12; run Min/Avg/Max: 1.17 / 1.17 / 1.18)
    Clang 11.0: 1.15140 (SE +/- 0.00653, N = 3, MIN: 1.09; run Min/Avg/Max: 1.14 / 1.15 / 1.16)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute, More Is Better):
    Clang 12.0: 605 (SE +/- 0.67, N = 3, Min: 604 / Avg: 604.67 / Max: 606)
    Clang 11.0: 616 (SE +/- 0.88, N = 3, Min: 614 / Avg: 615.67 / Max: 617)
    1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better):
    Clang 12.0: 15.87 (SE +/- 0.02, N = 3, Min: 15.83 / Avg: 15.87 / Max: 15.9)
    Clang 11.0: 15.60 (SE +/- 0.01, N = 3, Min: 15.59 / Avg: 15.6 / Max: 15.62)
    1. (CC) gcc options: -lm -lpthread -O3 -march=native

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, More Is Better):
    Clang 12.0: 13926.5 (SE +/- 65.90, N = 3, Min: 13799.5 / Avg: 13926.5 / Max: 14020.5)
    Clang 11.0: 13927.9 (SE +/- 23.21, N = 3, Min: 13900 / Avg: 13927.93 / Max: 13974)
    Clang 12.0 LTO: 13698.7 (SE +/- 46.50, N = 3, Min: 13611.1 / Avg: 13698.67 / Max: 13769.6)
    1. (CC) gcc options: -O3

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better):
    Clang 12.0: 8.99 (SE +/- 0.10, N = 3, Min: 8.85 / Avg: 8.99 / Max: 9.19)
    Clang 11.0: 9.14 (SE +/- 0.03, N = 3, Min: 9.08 / Avg: 9.14 / Max: 9.2)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126 - Encode Settings: Quality 95, Compression Effort 7 (Seconds, Fewer Is Better):
    Clang 12.0: 207.01 (SE +/- 0.07, N = 3, Min: 206.92 / Avg: 207.01 / Max: 207.15)
    Clang 11.0: 203.63 (SE +/- 0.66, N = 3, Min: 202.59 / Avg: 203.63 / Max: 204.87)
    1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 5 (MP/s, More Is Better):
    Clang 12.0: 66.66 (SE +/- 0.14, N = 3, Min: 66.45 / Avg: 66.66 / Max: 66.93)
    Clang 11.0: 65.58 (SE +/- 0.20, N = 3, Min: 65.32 / Avg: 65.58 / Max: 65.98)
    1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better):
    Clang 12.0: 4.87 (SE +/- 0.04, N = 3, Min: 4.79 / Avg: 4.87 / Max: 4.94)
    Clang 11.0: 4.95 (SE +/- 0.07, N = 3, Min: 4.82 / Avg: 4.95 / Max: 5.02)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, Fewer Is Better):
    Clang 12.0: 7.854 (SE +/- 0.007, N = 5, Min: 7.83 / Avg: 7.85 / Max: 7.87)
    Clang 11.0: 7.979 (SE +/- 0.006, N = 5, Min: 7.96 / Avg: 7.98 / Max: 7.99)
    1. (CXX) g++ options: -O3 -march=native -logg -lm

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 7 (MP/s, More Is Better):
    Clang 12.0: 66.38 (SE +/- 0.16, N = 3, Min: 66.07 / Avg: 66.38 / Max: 66.63)
    Clang 11.0: 65.43 (SE +/- 0.08, N = 3, Min: 65.29 / Avg: 65.43 / Max: 65.58)
    1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better):
    Clang 12.0: 13911.5 (SE +/- 71.01, N = 3, Min: 13769.6 / Avg: 13911.53 / Max: 13986.7)
    Clang 11.0: 13840.3 (SE +/- 15.91, N = 3, Min: 13808.6 / Avg: 13840.3 / Max: 13858.5)
    Clang 12.0 LTO: 13715.0 (SE +/- 60.82, N = 3, Min: 13611.3 / Avg: 13715.03 / Max: 13821.9)
    1. (CC) gcc options: -O3

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
    Clang 12.0: 643.58 (SE +/- 3.01, N = 3, Min: 637.62 / Avg: 643.58 / Max: 647.25)
    Clang 11.0: 652.74 (SE +/- 5.55, N = 3, Min: 641.71 / Avg: 652.74 / Max: 659.34)
    1. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
    Clang 12.0: 7.10 (SE +/- 0.04, N = 3, Min: 7.04 / Avg: 7.1 / Max: 7.18)
    Clang 11.0: 7.20 (SE +/- 0.01, N = 3, Min: 7.18 / Avg: 7.2 / Max: 7.21)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Gcrypt Library

Libgcrypt is a general purpose cryptographic library developed as part of the GnuPG project. This test runs libgcrypt's integrated benchmark command with a cipher/mac/hash repetition count set to 50, as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.
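
A minimal sketch of a libgcrypt call (a single SHA-256 hash) just to show the library API; the benchmark itself runs libgcrypt's built-in bench command across many ciphers, MACs, and hashes:

    #include <cstdio>
    #include <cstring>
    #include <gcrypt.h>

    int main() {
        if (!gcry_check_version(nullptr)) { std::puts("libgcrypt init failed"); return 1; }
        gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);

        const char msg[] = "hello, libgcrypt";     // arbitrary example message
        unsigned char digest[32];                  // SHA-256 output size
        gcry_md_hash_buffer(GCRY_MD_SHA256, digest, msg, std::strlen(msg));

        for (unsigned char b : digest) std::printf("%02x", b);
        std::printf("\n");
        return 0;
    }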

Gcrypt Library 1.9 (Seconds, Fewer Is Better):
    Clang 12.0: 236.92 (SE +/- 0.44, N = 3, Min: 236.32 / Avg: 236.92 / Max: 237.78)
    Clang 11.0: 240.21 (SE +/- 0.28, N = 3, Min: 239.65 / Avg: 240.2 / Max: 240.54)
    1. (CC) gcc options: -O3 -march=native -fvisibility=hidden

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
    Clang 12.0: 487.43 (SE +/- 1.37, N = 3, Min: 485.57 / Avg: 487.43 / Max: 490.1)
    Clang 11.0: 481.05 (SE +/- 0.23, N = 3, Min: 480.74 / Avg: 481.05 / Max: 481.51)
    1. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
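
A minimal sketch of a liquid-dsp FIR filter roughly matching the test's 57-tap filter and 256-sample buffers (the Kaiser design parameters and input signal are placeholder assumptions, not the test profile's exact setup):

    #include <cstdio>
    #include <complex>
    #include <liquid/liquid.h>

    int main() {
        const unsigned int h_len = 57;   // filter length, as in the test profile
        // Kaiser-windowed low-pass design: cutoff 0.1, 60 dB stop-band, zero fractional delay.
        firfilt_crcf q = firfilt_crcf_create_kaiser(h_len, 0.1f, 60.0f, 0.0f);

        liquid_float_complex x, y;       // std::complex<float> when compiled as C++
        for (unsigned int i = 0; i < 256; ++i) {   // one 256-sample buffer
            x = (i % 16 == 0) ? 1.0f : 0.0f;       // sparse impulse train as example input
            firfilt_crcf_push(q, x);
            firfilt_crcf_execute(q, &y);
        }
        std::printf("last output: %f + %fi\n", y.real(), y.imag());

        firfilt_crcf_destroy(q);
        return 0;
    }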

Liquid-DSP 2021.01.31 - Threads: 128 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better):
    Clang 12.0: 3643766667 (SE +/- 883804.91, N = 3, Min: 3642600000 / Avg: 3643766666.67 / Max: 3645500000)
    Clang 11.0: 3596533333 (SE +/- 1559202.08, N = 3, Min: 3593800000 / Avg: 3596533333.33 / Max: 3599200000)
    1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better):
    Clang 12.0: 457 (SE +/- 1.00, N = 3, Min: 455 / Avg: 457 / Max: 458)
    Clang 11.0: 463
    1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
    Clang 12.0: 488.23 (SE +/- 0.73, N = 3, Min: 486.79 / Avg: 488.23 / Max: 489.18)
    Clang 11.0: 482.02 (SE +/- 1.76, N = 3, Min: 479.86 / Avg: 482.02 / Max: 485.5)
    1. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better):
    Clang 12.0: 30.32 (SE +/- 0.23, N = 3, Min: 30.04 / Avg: 30.32 / Max: 30.78)
    Clang 11.0: 29.94 (SE +/- 0.25, N = 3, Min: 29.64 / Avg: 29.94 / Max: 30.45)
    1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time (Seconds, Fewer Is Better):
    Clang 12.0: 9.296 (SE +/- 0.041, N = 3, Min: 9.22 / Avg: 9.3 / Max: 9.36)
    Clang 11.0: 9.408 (SE +/- 0.032, N = 3, Min: 9.36 / Avg: 9.41 / Max: 9.47)
    1. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lXpm -lSM -lICE -lX11 -lIlmImf -lImath -lHalf -lIex -lIexMath -lIlmThread -lpthread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better):
    Clang 12.0: 1.607 (SE +/- 0.004, N = 3, Min: 1.6 / Avg: 1.61 / Max: 1.61)
    Clang 11.0: 1.626 (SE +/- 0.011, N = 3, Min: 1.61 / Avg: 1.63 / Max: 1.64)
    1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Stock - Size: 2D FFT Size 4096 (Mflops, More Is Better):
    Clang 12.0: 6744.1 (SE +/- 35.20, N = 3, Min: 6681.5 / Avg: 6744.13 / Max: 6803.3)
    Clang 11.0: 6823.8 (SE +/- 60.67, N = 3, Min: 6721.1 / Avg: 6823.77 / Max: 6931.1)
    1. (CC) gcc options: -pthread -O3 -march=native -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 2 (Seconds, Fewer Is Better):
    Clang 12.0: 25.18 (SE +/- 0.06, N = 3, Min: 25.12 / Avg: 25.18 / Max: 25.29)
    Clang 11.0: 25.47 (SE +/- 0.06, N = 3, Min: 25.41 / Avg: 25.47 / Max: 25.58)
    1. (CXX) g++ options: -O3 -fPIC -lm

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 7 (MP/s, More Is Better):
    Clang 12.0: 12.15 (SE +/- 0.05, N = 3, Min: 12.08 / Avg: 12.15 / Max: 12.25)
    Clang 11.0: 12.01 (SE +/- 0.02, N = 3, Min: 11.97 / Avg: 12.01 / Max: 12.05)
    1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better):
    Clang 12.0: 55663000 (SE +/- 790005.27, N = 3, Min: 54083000 / Avg: 55663000 / Max: 56458000)
    Clang 11.0: 56307000 (SE +/- 40360.87, N = 3, Min: 56229000 / Avg: 56307000 / Max: 56364000)
    1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, More Is Better):
    Clang 12.0: 62319 (SE +/- 162.92, N = 3, Min: 62085.76 / Avg: 62318.78 / Max: 62632.53)
    Clang 11.0: 61616 (SE +/- 400.92, N = 3, Min: 60930.16 / Avg: 61616.36 / Max: 62318.71)
    1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Stock - Size: 2D FFT Size 2048 (Mflops, More Is Better):
    Clang 12.0: 7789.9 (SE +/- 65.76, N = 3, Min: 7681.5 / Avg: 7789.87 / Max: 7908.6)
    Clang 11.0: 7878.5 (SE +/- 27.38, N = 3, Min: 7840.6 / Avg: 7878.53 / Max: 7931.7)
    1. (CC) gcc options: -pthread -O3 -march=native -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, More Is Better):
    Clang 12.0: 48.50 (SE +/- 0.42, N = 3, Min: 47.66 / Avg: 48.5 / Max: 48.94)
    Clang 11.0: 49.01 (SE +/- 0.46, N = 3, Min: 48.1 / Avg: 49.01 / Max: 49.56)
    Clang 12.0 LTO: 48.47 (SE +/- 0.74, N = 3, Min: 47.71 / Avg: 48.47 / Max: 49.96)
    1. (CC) gcc options: -O3

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p (Frames Per Second, More Is Better):
    Clang 12.0: 0.183 (SE +/- 0.000, N = 3, Min: 0.18 / Avg: 0.18 / Max: 0.18)
    Clang 11.0: 0.181 (SE +/- 0.000, N = 3, Min: 0.18 / Avg: 0.18 / Max: 0.18)
    1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
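
A minimal sketch of loading a model through the ONNX Runtime C++ API ("model.onnx" and the thread count are placeholders; the test profile pulls models such as shufflenet-v2 and fcn-resnet101 from the ONNX Zoo):

    #include <cstdio>
    #include <onnxruntime_cxx_api.h>

    int main() {
        Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "example");
        Ort::SessionOptions options;
        options.SetIntraOpNumThreads(4);              // thread count is an arbitrary example

        Ort::Session session(env, "model.onnx", options);
        std::printf("model loaded with %zu input(s)\n", session.GetInputCount());
        return 0;
    }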

ONNX Runtime 1.6 - Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better):
    Clang 12.0: 9904 (SE +/- 88.25, N = 12, Min: 9495.5 / Avg: 9903.79 / Max: 10452.5)
    Clang 11.0: 9797 (SE +/- 102.76, N = 8, Min: 9469.5 / Avg: 9796.94 / Max: 10334.5)
    1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -ffunction-sections -fdata-sections -ldl -lrt

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better):
    Clang 12.0: 6.309 (SE +/- 0.004, N = 3, Min: 6.3 / Avg: 6.31 / Max: 6.32)
    Clang 11.0: 6.243 (SE +/- 0.018, N = 3, Min: 6.22 / Avg: 6.24 / Max: 6.28)
    1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better):
    Clang 12.0: 0.305 (SE +/- 0.000, N = 3, Min: 0.3 / Avg: 0.3 / Max: 0.31)
    Clang 11.0: 0.302 (SE +/- 0.002, N = 3, Min: 0.3 / Avg: 0.3 / Max: 0.31)
    1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write (TPS, More Is Better):
    Clang 12.0: 3281 (SE +/- 3.48, N = 3, Min: 3276.12 / Avg: 3280.86 / Max: 3287.64)
    Clang 11.0: 3312 (SE +/- 14.62, N = 3, Min: 3283.68 / Avg: 3312.16 / Max: 3332.15)
    1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better):
    Clang 12.0: 1.44425 (SE +/- 0.00123, N = 3, MIN: 1.34; run Min/Avg/Max: 1.44 / 1.44 / 1.45)
    Clang 11.0: 1.45757 (SE +/- 0.00568, N = 3, MIN: 1.35; run Min/Avg/Max: 1.45 / 1.46 / 1.47)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
    Clang 12.0: 26.85 (SE +/- 0.27, N = 3, Min: 26.44 / Avg: 26.85 / Max: 27.35)
    Clang 11.0: 26.61 (SE +/- 0.13, N = 3, Min: 26.37 / Avg: 26.61 / Max: 26.79)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better):
    Clang 12.0: 74.00 (SE +/- 0.49, N = 3, Min: 73.09 / Avg: 74 / Max: 74.77)
    Clang 11.0: 73.36 (SE +/- 0.49, N = 3, Min: 72.87 / Avg: 73.36 / Max: 74.33)
    1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better):
    Clang 12.0: 1564833333 (SE +/- 2255610.29, N = 3, Min: 1560600000 / Avg: 1564833333.33 / Max: 1568300000)
    Clang 11.0: 1578400000 (SE +/- 1331665.62, N = 3, Min: 1575800000 / Avg: 1578400000 / Max: 1580200000)
    1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better):
    Clang 12.0: 33.39 (SE +/- 0.48, N = 3, Min: 32.44 / Avg: 33.39 / Max: 33.95)
    Clang 11.0: 33.14 (SE +/- 0.22, N = 3, Min: 32.7 / Avg: 33.14 / Max: 33.36)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute, More Is Better):
    Clang 12.0: 1076 (SE +/- 1.86, N = 3, Min: 1072 / Avg: 1075.67 / Max: 1078)
    Clang 11.0: 1068
    1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, More Is Better):
    Clang 12.0: 2653.8 (SE +/- 1.92, N = 3, Min: 2651.6 / Avg: 2653.77 / Max: 2657.6)
    Clang 11.0: 2640.2 (SE +/- 1.01, N = 3, Min: 2638.4 / Avg: 2640.17 / Max: 2641.9)
    Clang 12.0 LTO: 2657.8 (SE +/- 1.62, N = 3, Min: 2654.9 / Avg: 2657.8 / Max: 2660.5)
    1. (CXX) g++ options: -O3 -march=native -rdynamic

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2 - Video Input: Chimera 1080p (FPS, More Is Better):
    Clang 12.0: 1198.22 (SE +/- 2.95, N = 3, MIN: 700.24 / MAX: 1494.16; run Min/Avg/Max: 1192.93 / 1198.22 / 1203.12)
    Clang 11.0: 1190.41 (SE +/- 6.69, N = 3, -lm, MIN: 685.16 / MAX: 1496.36; run Min/Avg/Max: 1178.82 / 1190.41 / 1201.98)
    1. (CC) gcc options: -O3 -march=native -pthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 64 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better):
    Clang 12.0: 3070633333 (SE +/- 6045475.81, N = 3, Min: 3058800000 / Avg: 3070633333.33 / Max: 3078700000)
    Clang 11.0: 3051366667 (SE +/- 2452436.43, N = 3, Min: 3046800000 / Avg: 3051366666.67 / Max: 3055200000)
    1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Float + SSE - Size: 2D FFT Size 2048 (Mflops, More Is Better):
    Clang 12.0: 31935 (SE +/- 77.17, N = 3, Min: 31817 / Avg: 31934.67 / Max: 32080)
    Clang 11.0: 31741 (SE +/- 146.10, N = 3, Min: 31454 / Avg: 31741.33 / Max: 31931)
    1. (CC) gcc options: -pthread -O3 -march=native -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
    Clang 12.0: 22.13 (SE +/- 0.05, N = 3, Min: 22.06 / Avg: 22.13 / Max: 22.23)
    Clang 11.0: 22.00 (SE +/- 0.15, N = 3, Min: 21.74 / Avg: 22 / Max: 22.26)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton Process with Analytic European Option engine, QMC (Sobol) Monte-Carlo method (Equity Option Example), Bonds Fixed-rate bond with flat forward curve, and Repo Securities repurchase agreement. FinanceBench was originally written by the Cavazos Lab at University of Delaware. Learn more via the OpenBenchmarking.org test page.
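
For reference, an illustrative closed-form Black-Scholes-Merton call price in C++, the kind of kernel FinanceBench parallelizes with OpenMP over many options (not FinanceBench's own code; inputs are example values):

    #include <cmath>
    #include <cstdio>

    // Standard normal CDF via the complementary error function.
    double norm_cdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

    // Black-Scholes-Merton price of a European call option.
    double bs_call(double S, double K, double r, double sigma, double T) {
        double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
        double d2 = d1 - sigma * std::sqrt(T);
        return S * norm_cdf(d1) - K * std::exp(-r * T) * norm_cdf(d2);
    }

    int main() {
        // Example inputs: spot 100, strike 100, 5% rate, 20% vol, 1 year to expiry.
        std::printf("call price = %f\n", bs_call(100.0, 100.0, 0.05, 0.20, 1.0));
        return 0;
    }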

FinanceBench 2016-07-25 - Benchmark: Bonds OpenMP (ms, Fewer Is Better):
    Clang 12.0: 51596.87 (SE +/- 10.95, N = 3, Min: 51580.59 / Avg: 51596.87 / Max: 51617.69)
    Clang 11.0: 51900.43 (SE +/- 4.51, N = 3, Min: 51893.82 / Avg: 51900.43 / Max: 51909.05)
    1. (CXX) g++ options: -O3 -march=native -fopenmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better):
    Clang 12.0: 0.313689 (SE +/- 0.000321, N = 3, MIN: 0.3; run Min/Avg/Max: 0.31 / 0.31 / 0.31)
    Clang 11.0: 0.315522 (SE +/- 0.000247, N = 3, MIN: 0.3; run Min/Avg/Max: 0.32 / 0.32 / 0.32)
    1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 8 - Input: 1080pClang 12.0Clang 11.0306090120150SE +/- 0.10, N = 3SE +/- 0.46, N = 3118.07117.391. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 8 - Input: 1080pClang 12.0Clang 11.020406080100Min: 117.91 / Avg: 118.07 / Max: 118.26Min: 116.49 / Avg: 117.39 / Max: 118.041. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.2Video Input: Summer Nature 1080pClang 12.0Clang 11.030060090012001500SE +/- 7.87, N = 3SE +/- 2.13, N = 31244.111251.25MIN: 549.81 / MAX: 1390.03-lm - MIN: 556.46 / MAX: 1394.061. (CC) gcc options: -O3 -march=native -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.2Video Input: Summer Nature 1080pClang 12.0Clang 11.02004006008001000Min: 1228.6 / Avg: 1244.11 / Max: 1254.19Min: 1247.87 / Avg: 1251.25 / Max: 1255.21. (CC) gcc options: -O3 -march=native -pthread

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.4Preset: MediumClang 12.0Clang 11.00.90131.80262.70393.60524.5065SE +/- 0.0116, N = 3SE +/- 0.0013, N = 34.00583.98371. (CXX) g++ options: -O3 -march=native -flto -pthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.4Preset: MediumClang 12.0Clang 11.0246810Min: 3.99 / Avg: 4.01 / Max: 4.03Min: 3.98 / Avg: 3.98 / Max: 3.991. (CXX) g++ options: -O3 -march=native -flto -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUClang 12.0Clang 11.00.11070.22140.33210.44280.5535SE +/- 0.002843, N = 3SE +/- 0.001652, N = 30.4919400.489278MIN: 0.47MIN: 0.461. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUClang 12.0Clang 11.0246810Min: 0.49 / Avg: 0.49 / Max: 0.5Min: 0.49 / Avg: 0.49 / Max: 0.491. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to drive the database workloads. Learn more via the OpenBenchmarking.org test page.
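
Each transaction in the read-only mode tested here boils down to a single indexed SELECT against pgbench_accounts. A minimal libpq sketch of that round trip follows; the connection string and the aid value are placeholders, and pgbench itself of course runs many concurrent clients rather than one.

    #include <libpq-fe.h>
    #include <cstdio>

    int main() {
        PGconn *conn = PQconnectdb("dbname=pgbench_test");   // placeholder connection string
        if (PQstatus(conn) != CONNECTION_OK) {
            std::fprintf(stderr, "%s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }
        // Essentially the statement pgbench's built-in read-only (-S) script issues per transaction.
        PGresult *res = PQexec(conn, "SELECT abalance FROM pgbench_accounts WHERE aid = 1;");
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
            std::printf("abalance = %s\n", PQgetvalue(res, 0, 0));
        PQclear(res);
        PQfinish(conn);
        return 0;
    }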

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read OnlyClang 12.0Clang 11.0200K400K600K800K1000KSE +/- 6289.60, N = 3SE +/- 13844.42, N = 3107120910655061. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read OnlyClang 12.0Clang 11.0200K400K600K800K1000KMin: 1058631.67 / Avg: 1071209 / Max: 1077685.31Min: 1037871.76 / Avg: 1065506.4 / Max: 1080823.271. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4KClang 12.0Clang 11.048121620SE +/- 0.11, N = 3SE +/- 0.11, N = 317.2217.131. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4KClang 12.0Clang 11.048121620Min: 17.08 / Avg: 17.22 / Max: 17.45Min: 16.91 / Avg: 17.13 / Max: 17.241. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMflops, More Is BetterFFTW 3.3.6Build: Float + SSE - Size: 2D FFT Size 4096Clang 12.0Clang 11.05K10K15K20K25KSE +/- 348.10, N = 9SE +/- 220.77, N = 322797229131. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.orgMflops, More Is BetterFFTW 3.3.6Build: Float + SSE - Size: 2D FFT Size 4096Clang 12.0Clang 11.04K8K12K16K20KMin: 21982 / Avg: 22796.78 / Max: 24994Min: 22605 / Avg: 22913 / Max: 233411. (CC) gcc options: -pthread -O3 -march=native -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 7 - Input: Bosphorus 1080pClang 12.0Clang 11.080160240320400SE +/- 1.56, N = 3SE +/- 3.43, N = 3345.30346.891. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 7 - Input: Bosphorus 1080pClang 12.0Clang 11.060120180240300Min: 342.27 / Avg: 345.3 / Max: 347.42Min: 340.14 / Avg: 346.89 / Max: 351.291. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to drive the database workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average LatencyClang 12.0Clang 11.00.05290.10580.15870.21160.2645SE +/- 0.001, N = 3SE +/- 0.003, N = 30.2340.2351. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average LatencyClang 12.0Clang 11.012345Min: 0.23 / Avg: 0.23 / Max: 0.24Min: 0.23 / Avg: 0.24 / Max: 0.241. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: Visual Quality Optimized - Input: Bosphorus 1080pClang 12.0Clang 11.080160240320400SE +/- 1.11, N = 3SE +/- 1.91, N = 3372.49373.991. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: Visual Quality Optimized - Input: Bosphorus 1080pClang 12.0Clang 11.070140210280350Min: 370.44 / Avg: 372.49 / Max: 374.27Min: 370.89 / Avg: 373.99 / Max: 377.471. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
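
A minimal use of libwebp's simple encoding API, loosely corresponding to what cwebp does at its default quality of 75, might look like the sketch below; the small solid-gray buffer is only a stand-in for the decoded 6000x4000 JPEG the test actually feeds in.

    #include <webp/encode.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        const int width = 64, height = 64;                  // placeholder image dimensions
        std::vector<uint8_t> rgb(width * height * 3, 128);  // dummy RGB pixels

        uint8_t *output = nullptr;
        size_t size = WebPEncodeRGB(rgb.data(), width, height, width * 3, 75.0f, &output);
        if (size == 0) { std::fprintf(stderr, "encode failed\n"); return 1; }

        std::printf("encoded %zu bytes\n", size);
        WebPFree(output);
        return 0;
    }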

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: DefaultClang 12.0Clang 11.00.30060.60120.90181.20241.503SE +/- 0.001, N = 3SE +/- 0.001, N = 31.3311.3361. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: DefaultClang 12.0Clang 11.0246810Min: 1.33 / Avg: 1.33 / Max: 1.33Min: 1.33 / Avg: 1.34 / Max: 1.341. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.2Video Input: Summer Nature 4KClang 12.0Clang 11.0120240360480600SE +/- 1.79, N = 3SE +/- 1.43, N = 3541.56543.43MIN: 252.01 / MAX: 587.53-lm - MIN: 256.75 / MAX: 593.991. (CC) gcc options: -O3 -march=native -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.2Video Input: Summer Nature 4KClang 12.0Clang 11.0100200300400500Min: 539.28 / Avg: 541.56 / Max: 545.09Min: 541.24 / Avg: 543.43 / Max: 546.121. (CC) gcc options: -O3 -march=native -pthread

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.
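
A minimal sketch of the AEAD interface exercised by the ChaCha20Poly1305 results here, assuming the Botan 2.x C++ API; the key, nonce, and payload are random or dummy values for illustration only.

    #include <botan/aead.h>
    #include <botan/auto_rng.h>
    #include <iostream>

    int main() {
        Botan::AutoSeeded_RNG rng;
        const auto key   = rng.random_vec(32);   // 256-bit key
        const auto nonce = rng.random_vec(12);   // 96-bit nonce

        auto enc = Botan::AEAD_Mode::create_or_throw("ChaCha20Poly1305", Botan::ENCRYPTION);
        enc->set_key(key);
        enc->start(nonce);

        Botan::secure_vector<uint8_t> msg(1024, 0x42);   // dummy payload
        enc->finish(msg);                                // ciphertext plus 16-byte tag, in place

        std::cout << "ciphertext bytes: " << msg.size() << "\n";
        return 0;
    }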

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: ChaCha20Poly1305 - DecryptClang 12.0Clang 11.02004006008001000SE +/- 4.64, N = 3SE +/- 0.16, N = 3843.40840.641. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: ChaCha20Poly1305 - DecryptClang 12.0Clang 11.0150300450600750Min: 837.08 / Avg: 843.4 / Max: 852.45Min: 840.37 / Avg: 840.64 / Max: 840.941. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondClang 12.0Clang 11.0400K800K1200K1600K2000KSE +/- 984.68, N = 3SE +/- 971.31, N = 31785466.281790837.011. (CC) gcc options: -O2 -O3 -march=native -lrt" -lrt
OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondClang 12.0Clang 11.0300K600K900K1200K1500KMin: 1783537.12 / Avg: 1785466.28 / Max: 1786773.69Min: 1789333.89 / Avg: 1790837.01 / Max: 1792654.321. (CC) gcc options: -O2 -O3 -march=native -lrt" -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUClang 12.0Clang 11.00.2430.4860.7290.9721.215SE +/- 0.00199, N = 3SE +/- 0.00127, N = 31.077011.08011MIN: 1.04MIN: 1.031. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUClang 12.0Clang 11.0246810Min: 1.07 / Avg: 1.08 / Max: 1.08Min: 1.08 / Avg: 1.08 / Max: 1.081. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 6Clang 12.0Clang 11.03691215SE +/- 0.014, N = 3SE +/- 0.022, N = 39.5109.5361. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 6Clang 12.0Clang 11.03691215Min: 9.49 / Avg: 9.51 / Max: 9.54Min: 9.5 / Avg: 9.54 / Max: 9.581. (CXX) g++ options: -O3 -fPIC -lm

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: ChaCha20Poly1305Clang 12.0Clang 11.02004006008001000SE +/- 4.85, N = 3SE +/- 0.62, N = 3850.50848.241. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: ChaCha20Poly1305Clang 12.0Clang 11.0150300450600750Min: 844.74 / Avg: 850.5 / Max: 860.13Min: 847 / Avg: 848.24 / Max: 848.991. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte Carlo method (equity option example), Bonds (fixed-rate bond with a flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterFinanceBench 2016-07-25Benchmark: Repo OpenMPClang 12.0Clang 11.07K14K21K28K35KSE +/- 64.93, N = 3SE +/- 0.81, N = 333246.8433178.501. (CXX) g++ options: -O3 -march=native -fopenmp
OpenBenchmarking.orgms, Fewer Is BetterFinanceBench 2016-07-25Benchmark: Repo OpenMPClang 12.0Clang 11.06K12K18K24K30KMin: 33177.27 / Avg: 33246.84 / Max: 33376.58Min: 33177.27 / Avg: 33178.5 / Max: 33180.021. (CXX) g++ options: -O3 -march=native -fopenmp

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 1080pClang 12.0Clang 11.0918273645SE +/- 0.17, N = 3SE +/- 0.09, N = 341.0941.011. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 1080pClang 12.0Clang 11.0918273645Min: 40.79 / Avg: 41.09 / Max: 41.37Min: 40.9 / Avg: 41.01 / Max: 41.21. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.4Preset: ExhaustiveClang 12.0Clang 11.0510152025SE +/- 0.01, N = 3SE +/- 0.01, N = 318.9919.031. (CXX) g++ options: -O3 -march=native -flto -pthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.4Preset: ExhaustiveClang 12.0Clang 11.0510152025Min: 18.98 / Avg: 18.99 / Max: 19Min: 19.02 / Avg: 19.03 / Max: 19.041. (CXX) g++ options: -O3 -march=native -flto -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.33Operation: SharpenClang 12.0Clang 11.01302603905206506146131. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMflops, More Is BetterFFTW 3.3.6Build: Float + SSE - Size: 2D FFT Size 1024Clang 12.0Clang 11.08K16K24K32K40KSE +/- 165.99, N = 3SE +/- 530.09, N = 436239361811. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.orgMflops, More Is BetterFFTW 3.3.6Build: Float + SSE - Size: 2D FFT Size 1024Clang 12.0Clang 11.06K12K18K24K30KMin: 36061 / Avg: 36239.33 / Max: 36571Min: 34793 / Avg: 36180.5 / Max: 371601. (CC) gcc options: -pthread -O3 -march=native -lm

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: DefaultClang 12.0Clang 11.00.61721.23441.85162.46883.086SE +/- 0.027, N = 3SE +/- 0.031, N = 32.7392.7431. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: DefaultClang 12.0Clang 11.0246810Min: 2.69 / Avg: 2.74 / Max: 2.78Min: 2.69 / Avg: 2.74 / Max: 2.81. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 75, Compression Effort 7Clang 12.0Clang 11.020406080100SE +/- 0.10, N = 3SE +/- 0.10, N = 3109.53109.641. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 75, Compression Effort 7Clang 12.0Clang 11.020406080100Min: 109.34 / Avg: 109.52 / Max: 109.68Min: 109.47 / Avg: 109.64 / Max: 109.821. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUClang 12.0Clang 11.00.17540.35080.52620.70160.877SE +/- 0.004246, N = 3SE +/- 0.001200, N = 30.7797760.779101MIN: 0.73MIN: 0.731. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUClang 12.0Clang 11.0246810Min: 0.77 / Avg: 0.78 / Max: 0.79Min: 0.78 / Avg: 0.78 / Max: 0.781. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.
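
The encoding loop the test times is built on LAME's C API, roughly as in this minimal sketch; a second of silent 44.1 kHz stereo PCM stands in for the WAV input, and the quality setting is an arbitrary choice for illustration.

    #include <lame/lame.h>
    #include <vector>
    #include <cstdio>

    int main() {
        const int sample_rate = 44100;
        std::vector<short> pcm(sample_rate * 2, 0);                  // 1 s of silent stereo PCM
        std::vector<unsigned char> mp3(sample_rate * 5 / 4 + 7200);  // worst-case output size per the LAME docs

        lame_t gf = lame_init();
        lame_set_in_samplerate(gf, sample_rate);
        lame_set_num_channels(gf, 2);
        lame_set_quality(gf, 2);                 // arbitrary "high quality" setting for the sketch
        if (lame_init_params(gf) < 0) return 1;

        int n = lame_encode_buffer_interleaved(gf, pcm.data(), sample_rate,
                                               mp3.data(), (int) mp3.size());
        n += lame_encode_flush(gf, mp3.data() + n, (int) mp3.size() - n);
        std::printf("encoded %d MP3 bytes\n", n);

        lame_close(gf);
        return 0;
    }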

OpenBenchmarking.orgSeconds, Fewer Is BetterLAME MP3 Encoding 3.100WAV To MP3Clang 12.0Clang 11.0246810SE +/- 0.003, N = 3SE +/- 0.021, N = 38.2568.2501. (CC) gcc options: -O3 -pipe -march=native -lncurses -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterLAME MP3 Encoding 3.100WAV To MP3Clang 12.0Clang 11.03691215Min: 8.25 / Avg: 8.26 / Max: 8.26Min: 8.23 / Avg: 8.25 / Max: 8.291. (CC) gcc options: -O3 -pipe -march=native -lncurses -lm

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMflops, More Is BetterFFTW 3.3.6Build: Stock - Size: 1D FFT Size 32Clang 12.0Clang 11.03K6K9K12K15KSE +/- 24.25, N = 3SE +/- 20.33, N = 313333133241. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.orgMflops, More Is BetterFFTW 3.3.6Build: Stock - Size: 1D FFT Size 32Clang 12.0Clang 11.02K4K6K8K10KMin: 13291 / Avg: 13333.33 / Max: 13375Min: 13299 / Avg: 13323.67 / Max: 133641. (CC) gcc options: -pthread -O3 -march=native -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUClang 12.0Clang 11.00.2420.4840.7260.9681.21SE +/- 0.00286, N = 3SE +/- 0.00395, N = 31.075071.07577MIN: 0.87MIN: 0.861. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUClang 12.0Clang 11.0246810Min: 1.07 / Avg: 1.08 / Max: 1.08Min: 1.07 / Avg: 1.08 / Max: 1.081. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: Blowfish - DecryptClang 12.0Clang 11.080160240320400SE +/- 0.04, N = 3SE +/- 2.03, N = 3351.28351.081. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: Blowfish - DecryptClang 12.0Clang 11.060120180240300Min: 351.22 / Avg: 351.28 / Max: 351.36Min: 347.01 / Avg: 351.08 / Max: 353.231. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

SciMark

This test runs the ANSI C version of SciMark 2.0, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This test is made up of Fast Fourier Transform, Jacobi Successive Over-Relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.
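
The Monte Carlo kernel measured below estimates pi by sampling random points in the unit square and counting how many land inside the quarter circle; the idea, in a simple sketch (not SciMark's exact source, which uses its own small PRNG), is:

    #include <random>
    #include <cstdio>

    int main() {
        std::mt19937 rng(12345);                              // fixed seed for reproducibility
        std::uniform_real_distribution<double> u(0.0, 1.0);

        const long samples = 10000000;
        long inside = 0;
        for (long i = 0; i < samples; ++i) {
            double x = u(rng), y = u(rng);
            if (x * x + y * y <= 1.0) ++inside;               // point falls inside the quarter circle
        }
        std::printf("pi ~= %f\n", 4.0 * inside / samples);
        return 0;
    }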

OpenBenchmarking.orgMflops, More Is BetterSciMark 2.0Computational Test: Monte CarloClang 12.0Clang 11.0150300450600750SE +/- 0.40, N = 3SE +/- 0.40, N = 3675.13674.861. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.orgMflops, More Is BetterSciMark 2.0Computational Test: Monte CarloClang 12.0Clang 11.0120240360480600Min: 674.49 / Avg: 675.13 / Max: 675.88Min: 674.22 / Avg: 674.86 / Max: 675.591. (CC) gcc options: -O3 -march=native -lm

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.4Preset: ThoroughClang 12.0Clang 11.0246810SE +/- 0.0028, N = 3SE +/- 0.0026, N = 36.76476.76741. (CXX) g++ options: -O3 -march=native -flto -pthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.4Preset: ThoroughClang 12.0Clang 11.03691215Min: 6.76 / Avg: 6.76 / Max: 6.77Min: 6.76 / Avg: 6.77 / Max: 6.771. (CXX) g++ options: -O3 -march=native -flto -pthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to drive the database workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read OnlyClang 12.0Clang 11.0200K400K600K800K1000KSE +/- 720.87, N = 3SE +/- 1740.88, N = 3106902210693671. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read OnlyClang 12.0Clang 11.0200K400K600K800K1000KMin: 1068260.38 / Avg: 1069022.45 / Max: 1070463.39Min: 1066077.45 / Avg: 1069366.74 / Max: 1072000.031. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 0Clang 12.0Clang 11.01122334455SE +/- 0.04, N = 3SE +/- 0.07, N = 347.8847.891. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 0Clang 12.0Clang 11.01020304050Min: 47.82 / Avg: 47.88 / Max: 47.97Min: 47.81 / Avg: 47.89 / Max: 48.041. (CXX) g++ options: -O3 -fPIC -lm

SciMark

This test runs the ANSI C version of SciMark 2.0, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This test is made up of Fast Fourier Transform, Jacobi Successive Over-Relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMflops, More Is BetterSciMark 2.0Computational Test: Jacobi Successive Over-RelaxationClang 12.0Clang 11.0400800120016002000SE +/- 0.08, N = 3SE +/- 0.12, N = 31785.501785.421. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.orgMflops, More Is BetterSciMark 2.0Computational Test: Jacobi Successive Over-RelaxationClang 12.0Clang 11.030060090012001500Min: 1785.38 / Avg: 1785.5 / Max: 1785.66Min: 1785.18 / Avg: 1785.42 / Max: 1785.561. (CC) gcc options: -O3 -march=native -lm

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to drive the database workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average LatencyClang 12.0Clang 11.00.02120.04240.06360.08480.106SE +/- 0.000, N = 3SE +/- 0.000, N = 30.0940.0941. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average LatencyClang 12.0Clang 11.012345Min: 0.09 / Avg: 0.09 / Max: 0.09Min: 0.09 / Avg: 0.09 / Max: 0.091. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080pClang 12.0Clang 11.00.11930.23860.35790.47720.5965SE +/- 0.00, N = 3SE +/- 0.00, N = 30.530.531. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080pClang 12.0Clang 11.0246810Min: 0.53 / Avg: 0.53 / Max: 0.53Min: 0.53 / Avg: 0.53 / Max: 0.531. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4KClang 12.0Clang 11.00.04730.09460.14190.18920.2365SE +/- 0.00, N = 3SE +/- 0.00, N = 30.210.211. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4KClang 12.0Clang 11.012345Min: 0.21 / Avg: 0.21 / Max: 0.21Min: 0.21 / Avg: 0.21 / Max: 0.211. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
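
Loading a model and running one inference through the C++ API looks roughly like the sketch below; the model path, tensor names, and input shape are placeholders, since the real shapes depend on the ONNX Zoo model (e.g. bertsquad-10 or super-resolution-10) being benchmarked.

    #include <onnxruntime_cxx_api.h>
    #include <vector>
    #include <cstdio>

    int main() {
        Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "example");
        Ort::SessionOptions opts;
        Ort::Session session(env, "model.onnx", opts);        // placeholder model path

        std::vector<int64_t> shape{1, 3, 224, 224};            // hypothetical input shape
        std::vector<float> data(1 * 3 * 224 * 224, 0.0f);
        Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
        Ort::Value input = Ort::Value::CreateTensor<float>(mem, data.data(), data.size(),
                                                           shape.data(), shape.size());

        const char *in_names[]  = {"input"};                   // placeholder tensor names
        const char *out_names[] = {"output"};
        auto outputs = session.Run(Ort::RunOptions{nullptr}, in_names, &input, 1, out_names, 1);
        std::printf("got %zu output tensor(s)\n", outputs.size());
        return 0;
    }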

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: super-resolution-10 - Device: OpenMP CPUClang 12.0Clang 11.010002000300040005000SE +/- 126.29, N = 12SE +/- 169.87, N = 9445645231. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -ffunction-sections -fdata-sections -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: super-resolution-10 - Device: OpenMP CPUClang 12.0Clang 11.08001600240032004000Min: 3951 / Avg: 4456.21 / Max: 5216Min: 3843 / Avg: 4523.33 / Max: 5208.51. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -ffunction-sections -fdata-sections -ldl -lrt

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: bertsquad-10 - Device: OpenMP CPUClang 12.0Clang 11.0110220330440550SE +/- 10.30, N = 12SE +/- 5.55, N = 34984711. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -ffunction-sections -fdata-sections -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: bertsquad-10 - Device: OpenMP CPUClang 12.0Clang 11.090180270360450Min: 460 / Avg: 497.67 / Max: 552Min: 462 / Avg: 470.67 / Max: 4811. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -ffunction-sections -fdata-sections -ldl -lrt

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.
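
The dDOT/dAXPY-style operations in the results below map onto ViennaCL's vector types roughly as in this sketch; the vector length is arbitrary and, in a CPU-only build like the one tested, the work is handled by the host (OpenMP) backend.

    #include <viennacl/vector.hpp>
    #include <viennacl/linalg/inner_prod.hpp>
    #include <vector>
    #include <cstdio>

    int main() {
        const std::size_t n = 1000000;                    // arbitrary vector length
        std::vector<double> hx(n, 1.0), hy(n, 2.0);

        viennacl::vector<double> x(n), y(n);
        viennacl::copy(hx.begin(), hx.end(), x.begin());  // host -> ViennaCL transfer
        viennacl::copy(hy.begin(), hy.end(), y.begin());

        y += 3.0 * x;                                     // dAXPY-style update
        double dot = viennacl::linalg::inner_prod(x, y);  // dDOT-style reduction
        std::printf("dot = %f\n", dot);
        return 0;
    }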

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMV-NClang 12.0Clang 11.01530456075SE +/- 2.22, N = 12SE +/- 3.65, N = 1569.151.21. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMV-NClang 12.0Clang 11.01326395265Min: 53.6 / Avg: 69.08 / Max: 79.9Min: 36.9 / Avg: 51.23 / Max: 97.91. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dDOTClang 12.0Clang 11.02004006008001000SE +/- 17.06, N = 12SE +/- 1.49, N = 158199331. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dDOTClang 12.0Clang 11.0160320480640800Min: 782 / Avg: 818.83 / Max: 1000Min: 921 / Avg: 933.33 / Max: 9431. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dAXPYClang 12.0Clang 11.02004006008001000SE +/- 20.06, N = 12SE +/- 1.59, N = 1587810431. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dAXPYClang 12.0Clang 11.02004006008001000Min: 830 / Avg: 877.67 / Max: 1090Min: 1030 / Avg: 1043.33 / Max: 10501. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dCOPYClang 12.0Clang 11.0400800120016002000SE +/- 15.32, N = 11SE +/- 8.32, N = 1560418771. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dCOPYClang 12.0Clang 11.030060090012001500Min: 563 / Avg: 603.82 / Max: 751Min: 1830 / Avg: 1876.67 / Max: 19401. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sDOTClang 12.0Clang 11.0100200300400500SE +/- 35.24, N = 12SE +/- 38.96, N = 154344621. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sDOTClang 12.0Clang 11.080160240320400Min: 250 / Avg: 433.83 / Max: 598Min: 214 / Avg: 461.93 / Max: 5971. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sAXPYClang 12.0Clang 11.090180270360450SE +/- 15.69, N = 12SE +/- 34.43, N = 153574121. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sAXPYClang 12.0Clang 11.070140210280350Min: 285 / Avg: 357 / Max: 482Min: 278 / Avg: 411.93 / Max: 6751. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sCOPYClang 12.0Clang 11.0110220330440550SE +/- 15.30, N = 12SE +/- 36.50, N = 154714951. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sCOPYClang 12.0Clang 11.090180270360450Min: 339 / Avg: 471.17 / Max: 531Min: 256 / Avg: 494.8 / Max: 8161. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.33Operation: ResizingClang 12.0Clang 11.05001000150020002500SE +/- 41.63, N = 12SE +/- 27.29, N = 3213620341. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.33Operation: ResizingClang 12.0Clang 11.0400800120016002000Min: 1981 / Avg: 2135.5 / Max: 2507Min: 1980 / Avg: 2034.33 / Max: 20661. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMflops, More Is BetterFFTW 3.3.6Build: Float + SSE - Size: 1D FFT Size 1024Clang 12.0Clang 11.011K22K33K44K55KSE +/- 952.64, N = 12SE +/- 585.78, N = 350350507401. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.orgMflops, More Is BetterFFTW 3.3.6Build: Float + SSE - Size: 1D FFT Size 1024Clang 12.0Clang 11.09K18K27K36K45KMin: 40245 / Avg: 50350.08 / Max: 52560Min: 49569 / Avg: 50740 / Max: 513571. (CC) gcc options: -pthread -O3 -march=native -lm

174 Results Shown

ViennaCL:
  CPU BLAS - dGEMM-NN
  CPU BLAS - dGEMM-TN
dav1d
Etcpak
oneDNN
Etcpak
oneDNN
ViennaCL
Etcpak
oneDNN
Botan
ViennaCL
Ngspice
toyBrot Fractal Generator:
  TBB
  C++ Threads
WebP2 Image Encode
SciMark
toyBrot Fractal Generator
ViennaCL
LibRaw
oneDNN
FFTW
SciMark
GraphicsMagick
toyBrot Fractal Generator
Botan
oneDNN
Ngspice
Timed MrBayes Analysis
JPEG XL
oneDNN
Botan:
  Twofish
  AES-256
WebP2 Image Encode
Botan
oneDNN
simdjson
FFTW
Botan
FFTW
Botan
simdjson
TSCP
Botan
GraphicsMagick
SciMark
PostgreSQL pgbench
ONNX Runtime
PostgreSQL pgbench
ONNX Runtime
simdjson
SciMark
Botan
JPEG XL
libavif avifenc
FFTW
AOM AV1
SVT-AV1
LZ4 Compression
FFTW
simdjson
oneDNN
AOM AV1
PostgreSQL pgbench:
  100 - 1 - Read Only
  100 - 1 - Read Only - Average Latency
JPEG XL
WebP Image Encode
Opus Codec Encoding
oneDNN
FFTW
libavif avifenc
FFTW
Tachyon
AOM AV1
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
libavif avifenc
SecureMark
WebP Image Encode:
  Quality 100, Lossless, Highest Compression
  Quality 100
oneDNN
GraphicsMagick
C-Ray
LZ4 Compression
AOM AV1
WebP2 Image Encode
JPEG XL
AOM AV1
FLAC Audio Encoding
JPEG XL
LZ4 Compression
SVT-HEVC
AOM AV1
Gcrypt Library
SVT-VP9
Liquid-DSP
GraphicsMagick
SVT-VP9
x265
POV-Ray
PostgreSQL pgbench
FFTW
libavif avifenc
JPEG XL
Liquid-DSP
PostgreSQL pgbench
FFTW
LZ4 Compression
SVT-AV1
ONNX Runtime
WebP Image Encode
PostgreSQL pgbench:
  100 - 1 - Read Write - Average Latency
  100 - 1 - Read Write
oneDNN
AOM AV1
x265
Liquid-DSP
AOM AV1
GraphicsMagick
QuantLib
dav1d
Liquid-DSP
FFTW
AOM AV1
FinanceBench
oneDNN
SVT-AV1
dav1d
ASTC Encoder
oneDNN
PostgreSQL pgbench
AOM AV1
FFTW
SVT-HEVC
PostgreSQL pgbench
SVT-VP9
WebP Image Encode
dav1d
Botan
Coremark
oneDNN
libavif avifenc
Botan
FinanceBench
SVT-HEVC
ASTC Encoder
GraphicsMagick
FFTW
WebP2 Image Encode:
  Default
  Quality 75, Compression Effort 7
oneDNN
LAME MP3 Encoding
FFTW
oneDNN
Botan
SciMark
ASTC Encoder
PostgreSQL pgbench
libavif avifenc
SciMark
PostgreSQL pgbench
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 1080p
  Speed 0 Two-Pass - Bosphorus 4K
ONNX Runtime:
  super-resolution-10 - OpenMP CPU
  bertsquad-10 - OpenMP CPU
ViennaCL:
  CPU BLAS - dGEMV-N
  CPU BLAS - dDOT
  CPU BLAS - dAXPY
  CPU BLAS - dCOPY
  CPU BLAS - sDOT
  CPU BLAS - sAXPY
  CPU BLAS - sCOPY
GraphicsMagick
FFTW