AMD EPYC 7763 64-Core testing with a Supermicro H12SSL-i v1.01 (2.0 BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.
Clang 12.0
Processor: AMD EPYC 7763 64-Core @ 2.45GHz (64 Cores / 128 Threads), Motherboard: Supermicro H12SSL-i v1.01 (2.0 BIOS), Chipset: AMD Starship/Matisse, Memory: 126GB, Disk: 3841GB Micron_9300_MTFDHAL3T8TDP, Graphics: ASPEED, Network: 2 x Broadcom NetXtreme BCM5720 2-port PCIe
OS: Ubuntu 20.04, Kernel: 5.12.0-051200rc6daily20210408-generic (x86_64) 20210407, Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: Clang 12.0.0-++20210409092622+fa0971b87fb2-1~exp1~20210409193326.73, File-System: ext4, Screen Resolution: 1024x768
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Processor Notes: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0xa001119
Python Notes: Python 3.8.2
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Clang 11.0
OS: Ubuntu 20.04, Kernel: 5.12.0-051200rc6daily20210408-generic (x86_64) 20210407, Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: Clang 11.0.0-2~ubuntu20.04.1, File-System: ext4, Screen Resolution: 1024x768
Clang 12.0 LTO
OS: Ubuntu 20.04, Kernel: 5.12.0-051200rc6daily20210408-generic (x86_64) 20210407, Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: Clang 12.0.0-++20210409092622+fa0971b87fb2-1~exp1~20210409193326.73, File-System: ext4, Screen Resolution: 1024x768
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native -flto" CFLAGS="-O3 -march=native -flto"
Processor Notes: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0xa001119
Python Notes: Python 3.8.2
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
QuantLib QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports a QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MFLOPS, More Is Better QuantLib 1.21 Clang 12.0 Clang 11.0 Clang 12.0 LTO 600 1200 1800 2400 3000 SE +/- 1.92, N = 3 SE +/- 1.01, N = 3 SE +/- 1.62, N = 3 2653.8 2640.2 2657.8 1. (CXX) g++ options: -O3 -march=native -rdynamic
Etcpak Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Mpx/s, More Is Better Etcpak 0.7 Configuration: DXT1 Clang 12.0 Clang 11.0 Clang 12.0 LTO 600 1200 1800 2400 3000 SE +/- 2.64, N = 3 SE +/- 1.69, N = 3 SE +/- 6.09, N = 3 2718.53 1872.76 2719.99 1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.org Mpx/s, More Is Better Etcpak 0.7 Configuration: ETC1 Clang 12.0 Clang 11.0 Clang 12.0 LTO 60 120 180 240 300 SE +/- 0.11, N = 3 SE +/- 0.03, N = 3 SE +/- 0.06, N = 3 284.64 205.07 284.76 1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.org Mpx/s, More Is Better Etcpak 0.7 Configuration: ETC2 Clang 12.0 Clang 11.0 Clang 12.0 LTO 40 80 120 160 200 SE +/- 0.02, N = 3 SE +/- 0.02, N = 3 SE +/- 0.04, N = 3 202.09 168.82 202.10 1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.org ms, Fewer Is Better toyBrot Fractal Generator 2020-11-18 Implementation: C++ Tasks Clang 12.0 Clang 11.0 Clang 12.0 LTO 1600 3200 4800 6400 8000 SE +/- 33.67, N = 3 SE +/- 7.31, N = 3 SE +/- 17.21, N = 3 7437 6836 7367 -lm -lgcc -lgcc_s -lc -lm -lgcc -lgcc_s -lc -flto 1. (CXX) g++ options: -O3 -march=native -lpthread
OpenBenchmarking.org ms, Fewer Is Better toyBrot Fractal Generator 2020-11-18 Implementation: C++ Threads Clang 12.0 Clang 11.0 Clang 12.0 LTO 1500 3000 4500 6000 7500 SE +/- 30.90, N = 3 SE +/- 25.04, N = 3 SE +/- 15.06, N = 3 7220 6395 7143 -lm -lgcc -lgcc_s -lc -lm -lgcc -lgcc_s -lc -flto 1. (CXX) g++ options: -O3 -march=native -lpthread
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Stock - Size: 1D FFT Size 1024 Clang 12.0 Clang 11.0 2K 4K 6K 8K 10K SE +/- 27.10, N = 3 SE +/- 35.53, N = 3 10805 10564 1. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Stock - Size: 1D FFT Size 2048 Clang 12.0 Clang 11.0 2K 4K 6K 8K 10K SE +/- 7.75, N = 3 SE +/- 28.76, N = 3 10467.0 10004.2 1. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Stock - Size: 1D FFT Size 4096 Clang 12.0 Clang 11.0 2K 4K 6K 8K 10K SE +/- 101.36, N = 3 SE +/- 15.16, N = 3 9862.0 9438.6 1. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Stock - Size: 2D FFT Size 1024 Clang 12.0 Clang 11.0 2K 4K 6K 8K 10K SE +/- 48.25, N = 3 SE +/- 45.95, N = 3 9088.3 8809.6 1. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Stock - Size: 2D FFT Size 2048 Clang 12.0 Clang 11.0 2K 4K 6K 8K 10K SE +/- 65.76, N = 3 SE +/- 27.38, N = 3 7789.9 7878.5 1. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Stock - Size: 2D FFT Size 4096 Clang 12.0 Clang 11.0 1500 3000 4500 6000 7500 SE +/- 35.20, N = 3 SE +/- 60.67, N = 3 6744.1 6823.8 1. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Float + SSE - Size: 1D FFT Size 32 Clang 12.0 Clang 11.0 3K 6K 9K 12K 15K SE +/- 48.79, N = 3 SE +/- 129.55, N = 3 15649 14590 1. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Float + SSE - Size: 1D FFT Size 1024 Clang 12.0 Clang 11.0 11K 22K 33K 44K 55K SE +/- 952.64, N = 12 SE +/- 585.78, N = 3 50350 50740 1. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Float + SSE - Size: 1D FFT Size 2048 Clang 12.0 Clang 11.0 11K 22K 33K 44K 55K SE +/- 439.50, N = 3 SE +/- 582.34, N = 3 51254 50084 1. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Float + SSE - Size: 1D FFT Size 4096 Clang 12.0 Clang 11.0 10K 20K 30K 40K 50K SE +/- 671.66, N = 15 SE +/- 413.24, N = 15 45428 46676 1. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Float + SSE - Size: 2D FFT Size 1024 Clang 12.0 Clang 11.0 8K 16K 24K 32K 40K SE +/- 165.99, N = 3 SE +/- 530.09, N = 4 36239 36181 1. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Float + SSE - Size: 2D FFT Size 2048 Clang 12.0 Clang 11.0 7K 14K 21K 28K 35K SE +/- 77.17, N = 3 SE +/- 146.10, N = 3 31935 31741 1. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better FFTW 3.3.6 Build: Float + SSE - Size: 2D FFT Size 4096 Clang 12.0 Clang 11.0 5K 10K 15K 20K 25K SE +/- 348.10, N = 9 SE +/- 220.77, N = 3 22797 22913 1. (CC) gcc options: -pthread -O3 -march=native -lm
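The FFTW results above cover its Stock and Float + SSE builds at various 1D and 2D sizes. As a point of reference for what is being timed, below is a minimal, hypothetical sketch of planning and executing a single-precision 1D transform with FFTW's API; the 4096-point size mirrors the largest 1D case above, and the build line is an assumption rather than the flags used by this test profile.

    #include <fftw3.h>
    #include <cstdio>

    int main() {
        const int n = 4096;  // mirrors the largest 1D FFT size benchmarked above

        fftwf_complex *in  = (fftwf_complex *) fftwf_malloc(sizeof(fftwf_complex) * n);
        fftwf_complex *out = (fftwf_complex *) fftwf_malloc(sizeof(fftwf_complex) * n);

        // Planning is where FFTW selects an algorithm; FFTW_MEASURE actually times
        // candidate plans on the host CPU before settling on one.
        fftwf_plan plan = fftwf_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_MEASURE);

        for (int i = 0; i < n; ++i) { in[i][0] = (float) i; in[i][1] = 0.0f; }

        fftwf_execute(plan);  // the transform itself, reusable across many input buffers

        std::printf("bin 0 = %f %+fi\n", out[0][0], out[0][1]);

        fftwf_destroy_plan(plan);
        fftwf_free(in);
        fftwf_free(out);
        return 0;
    }
    // Build sketch (assumption): g++ -O3 -march=native fft_example.cpp -lfftw3f -lm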
Timed MrBayes Analysis This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Timed MrBayes Analysis 3.2.7 Primate Phylogeny Analysis Clang 12.0 Clang 11.0 Clang 12.0 LTO 20 40 60 80 100 SE +/- 0.98, N = 3 SE +/- 0.98, N = 3 SE +/- 1.09, N = 3 89.12 88.62 93.63 -flto 1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -O3 -std=c99 -pedantic -march=native -lm
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
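For context on what cwebp exercises underneath, libwebp also exposes a one-shot encoding API. The sketch below is a hypothetical minimal use of WebPEncodeRGB, assuming an RGB buffer has already been decoded from the source JPEG; the buffer contents, dimensions, and quality value are placeholders, not the settings used by this test profile.

    #include <webp/encode.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        // Placeholder image: 6000x4000 RGB, matching the sample input's dimensions.
        const int width = 6000, height = 4000;
        std::vector<uint8_t> rgb(static_cast<size_t>(width) * height * 3, 128);

        uint8_t *output = nullptr;
        // Quality 75 is an arbitrary example value; the "Default" run in this
        // comparison simply uses cwebp's own defaults.
        size_t size = WebPEncodeRGB(rgb.data(), width, height, width * 3, 75.0f, &output);
        if (size == 0) {
            std::fprintf(stderr, "encode failed\n");
            return 1;
        }
        std::printf("encoded %zu bytes\n", size);
        WebPFree(output);
        return 0;
    }
    // Link sketch (assumption): g++ -O3 webp_example.cpp -lwebp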
OpenBenchmarking.org Encode Time - Seconds, Fewer Is Better WebP Image Encode 1.1 Encode Settings: Default Clang 12.0 Clang 11.0 0.3006 0.6012 0.9018 1.2024 1.503 SE +/- 0.001, N = 3 SE +/- 0.001, N = 3 1.331 1.336 1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg
OpenBenchmarking.org Encode Time - Seconds, Fewer Is Better WebP Image Encode 1.1 Encode Settings: Quality 100 Clang 12.0 Clang 11.0 0.504 1.008 1.512 2.016 2.52 SE +/- 0.001, N = 3 SE +/- 0.000, N = 3 2.199 2.240 1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg
OpenBenchmarking.org Encode Time - Seconds, Fewer Is Better WebP Image Encode 1.1 Encode Settings: Quality 100, Lossless Clang 12.0 Clang 11.0 5 10 15 20 25 SE +/- 0.02, N = 3 SE +/- 0.13, N = 3 19.02 18.57 1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg
OpenBenchmarking.org Encode Time - Seconds, Fewer Is Better WebP Image Encode 1.1 Encode Settings: Quality 100, Highest Compression Clang 12.0 Clang 11.0 2 4 6 8 10 SE +/- 0.004, N = 3 SE +/- 0.018, N = 3 6.309 6.243 1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg
OpenBenchmarking.org Encode Time - Seconds, Fewer Is Better WebP Image Encode 1.1 Encode Settings: Quality 100, Lossless, Highest Compression Clang 12.0 Clang 11.0 9 18 27 36 45 SE +/- 0.07, N = 3 SE +/- 0.08, N = 3 38.45 37.73 1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg
simdjson This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects such as Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
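As a rough illustration of the parsing interface being benchmarked, here is a minimal sketch using simdjson's DOM API as of the 0.8 series; the file name and JSON keys are placeholders and error handling is abbreviated.

    #include "simdjson.h"
    #include <cstdint>
    #include <iostream>

    int main() {
        simdjson::dom::parser parser;
        simdjson::dom::element doc;

        // Load and parse a JSON file in one step; the parser reuses internal
        // buffers across documents, which is part of what the throughput
        // tests measure. The file name is a placeholder.
        if (parser.load("twitter.json").get(doc)) {
            std::cerr << "parse error\n";
            return 1;
        }

        // Pull one array out and walk it; the keys are illustrative only.
        simdjson::dom::array statuses;
        if (doc["statuses"].get(statuses)) {
            std::cerr << "unexpected document shape\n";
            return 1;
        }
        for (simdjson::dom::element tweet : statuses) {
            uint64_t id;
            if (!tweet["id"].get(id))
                std::cout << id << "\n";
        }
        return 0;
    }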
OpenBenchmarking.org GB/s, More Is Better simdjson 0.8.2 Throughput Test: Kostya Clang 12.0 Clang 11.0 0.6188 1.2376 1.8564 2.4752 3.094 SE +/- 0.01, N = 3 SE +/- 0.00, N = 3 2.75 2.68 1. (CXX) g++ options: -O3 -march=native -pthread
OpenBenchmarking.org GB/s, More Is Better simdjson 0.8.2 Throughput Test: LargeRandom Clang 12.0 Clang 11.0 0.189 0.378 0.567 0.756 0.945 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 0.84 0.81 1. (CXX) g++ options: -O3 -march=native -pthread
OpenBenchmarking.org GB/s, More Is Better simdjson 0.8.2 Throughput Test: PartialTweets Clang 12.0 Clang 11.0 1.035 2.07 3.105 4.14 5.175 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 4.60 4.41 1. (CXX) g++ options: -O3 -march=native -pthread
OpenBenchmarking.org GB/s, More Is Better simdjson 0.8.2 Throughput Test: DistinctUserID Clang 12.0 Clang 11.0 1.0395 2.079 3.1185 4.158 5.1975 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 4.62 4.41 1. (CXX) g++ options: -O3 -march=native -pthread
OpenBenchmarking.org MB/s, More Is Better LZ4 Compression 1.9.3 Compression Level: 3 - Decompression Speed Clang 12.0 Clang 11.0 Clang 12.0 LTO 3K 6K 9K 12K 15K SE +/- 71.01, N = 3 SE +/- 15.91, N = 3 SE +/- 60.82, N = 3 13911.5 13840.3 13715.0 1. (CC) gcc options: -O3
OpenBenchmarking.org MB/s, More Is Better LZ4 Compression 1.9.3 Compression Level: 9 - Compression Speed Clang 12.0 Clang 11.0 Clang 12.0 LTO 11 22 33 44 55 SE +/- 0.42, N = 3 SE +/- 0.46, N = 3 SE +/- 0.74, N = 3 48.50 49.01 48.47 1. (CC) gcc options: -O3
OpenBenchmarking.org MB/s, More Is Better LZ4 Compression 1.9.3 Compression Level: 9 - Decompression Speed Clang 12.0 Clang 11.0 Clang 12.0 LTO 3K 6K 9K 12K 15K SE +/- 65.90, N = 3 SE +/- 23.21, N = 3 SE +/- 46.50, N = 3 13926.5 13927.9 13698.7 1. (CC) gcc options: -O3
JPEG XL The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MP/s, More Is Better JPEG XL 0.3.3 Input: PNG - Encode Speed: 5 Clang 12.0 Clang 11.0 20 40 60 80 100 SE +/- 0.17, N = 3 SE +/- 0.24, N = 3 74.27 78.41 1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.org MP/s, More Is Better JPEG XL 0.3.3 Input: PNG - Encode Speed: 7 Clang 12.0 Clang 11.0 3 6 9 12 15 SE +/- 0.05, N = 3 SE +/- 0.02, N = 3 12.15 12.01 1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.org MP/s, More Is Better JPEG XL 0.3.3 Input: PNG - Encode Speed: 8 Clang 12.0 Clang 11.0 0.1845 0.369 0.5535 0.738 0.9225 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 0.82 0.80 1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.org MP/s, More Is Better JPEG XL 0.3.3 Input: JPEG - Encode Speed: 5 Clang 12.0 Clang 11.0 15 30 45 60 75 SE +/- 0.14, N = 3 SE +/- 0.20, N = 3 66.66 65.58 1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.org MP/s, More Is Better JPEG XL 0.3.3 Input: JPEG - Encode Speed: 7 Clang 12.0 Clang 11.0 15 30 45 60 75 SE +/- 0.16, N = 3 SE +/- 0.08, N = 3 66.38 65.43 1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.org MP/s, More Is Better JPEG XL 0.3.3 Input: JPEG - Encode Speed: 8 Clang 12.0 Clang 11.0 7 14 21 28 35 SE +/- 0.03, N = 3 SE +/- 0.01, N = 3 28.13 27.24 1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl
SciMark This test runs the ANSI C version of SciMark 2.0, a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. The test is made up of Fast Fourier Transform, Jacobi Successive Over-Relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.
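SciMark's Monte Carlo kernel estimates pi by sampling random points in the unit square, which is why it stresses scalar floating-point work and the random number generator rather than vectorizable loops. The sketch below is an illustrative re-implementation of that idea, not SciMark's actual source.

    #include <cstdio>
    #include <random>

    // Estimate pi by counting how many uniformly random points in the unit
    // square fall inside the quarter circle of radius 1.
    static double monte_carlo_pi(long samples) {
        std::mt19937_64 rng(12345);                        // fixed seed: repeatable runs
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        long inside = 0;
        for (long i = 0; i < samples; ++i) {
            double x = uni(rng), y = uni(rng);
            if (x * x + y * y <= 1.0) ++inside;
        }
        return 4.0 * static_cast<double>(inside) / static_cast<double>(samples);
    }

    int main() {
        std::printf("pi ~= %.6f\n", monte_carlo_pi(10000000L));
        return 0;
    }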
OpenBenchmarking.org Mflops, More Is Better SciMark 2.0 Computational Test: Composite Clang 12.0 Clang 11.0 700 1400 2100 2800 3500 SE +/- 1.11, N = 3 SE +/- 15.12, N = 3 3190.62 3319.34 1. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better SciMark 2.0 Computational Test: Monte Carlo Clang 12.0 Clang 11.0 150 300 450 600 750 SE +/- 0.40, N = 3 SE +/- 0.40, N = 3 675.13 674.86 1. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better SciMark 2.0 Computational Test: Fast Fourier Transform Clang 12.0 Clang 11.0 90 180 270 360 450 SE +/- 0.46, N = 3 SE +/- 0.67, N = 3 363.85 399.16 1. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better SciMark 2.0 Computational Test: Sparse Matrix Multiply Clang 12.0 Clang 11.0 1000 2000 3000 4000 5000 SE +/- 10.41, N = 3 SE +/- 3.87, N = 3 4280.22 4590.37 1. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better SciMark 2.0 Computational Test: Dense LU Matrix Factorization Clang 12.0 Clang 11.0 2K 4K 6K 8K 10K SE +/- 7.16, N = 3 SE +/- 77.81, N = 3 8848.40 9146.88 1. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.org Mflops, More Is Better SciMark 2.0 Computational Test: Jacobi Successive Over-Relaxation Clang 12.0 Clang 11.0 400 800 1200 1600 2000 SE +/- 0.08, N = 3 SE +/- 0.12, N = 3 1785.50 1785.42 1. (CC) gcc options: -O3 -march=native -lm
Botan Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.
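To give a sense of the API behind the cipher results that follow, here is a minimal sketch of AES-256 encryption with Botan 2's Cipher_Mode interface; the mode string, key handling, and message are illustrative and are not how the library's internal speed test drives the ciphers.

    #include <botan/auto_rng.h>
    #include <botan/cipher_mode.h>
    #include <botan/hex.h>
    #include <botan/secmem.h>
    #include <iostream>
    #include <string>

    int main() {
        Botan::AutoSeeded_RNG rng;

        // AES-256 in CBC mode with PKCS#7 padding; the benchmark itself times the
        // raw block cipher, so the mode choice here is purely illustrative.
        auto enc = Botan::Cipher_Mode::create("AES-256/CBC/PKCS7", Botan::ENCRYPTION);
        if (!enc) {
            std::cerr << "cipher not available\n";
            return 1;
        }

        enc->set_key(rng.random_vec(32));                    // 256-bit key
        Botan::secure_vector<uint8_t> iv = rng.random_vec(enc->default_nonce_length());

        std::string plaintext = "example message";           // placeholder input
        Botan::secure_vector<uint8_t> buf(plaintext.begin(), plaintext.end());

        enc->start(iv);
        enc->finish(buf);                                    // encrypts buf in place

        std::cout << Botan::hex_encode(buf) << "\n";
        return 0;
    }
    // Link sketch (assumption): g++ -O3 botan_example.cpp -lbotan-2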
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: KASUMI Clang 12.0 Clang 11.0 20 40 60 80 100 SE +/- 0.01, N = 3 SE +/- 0.06, N = 3 82.64 79.15 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: KASUMI - Decrypt Clang 12.0 Clang 11.0 20 40 60 80 100 SE +/- 0.06, N = 3 SE +/- 0.04, N = 3 84.23 80.22 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: AES-256 Clang 12.0 Clang 11.0 1100 2200 3300 4400 5500 SE +/- 2.14, N = 3 SE +/- 2.16, N = 3 4659.34 4901.13 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: AES-256 - Decrypt Clang 12.0 Clang 11.0 1000 2000 3000 4000 5000 SE +/- 4.78, N = 3 SE +/- 1.35, N = 3 4682.46 4895.56 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: Twofish Clang 12.0 Clang 11.0 70 140 210 280 350 SE +/- 0.13, N = 3 SE +/- 0.09, N = 3 315.41 299.21 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: Twofish - Decrypt Clang 12.0 Clang 11.0 70 140 210 280 350 SE +/- 0.16, N = 3 SE +/- 0.15, N = 3 321.19 302.41 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: Blowfish Clang 12.0 Clang 11.0 80 160 240 320 400 SE +/- 0.05, N = 3 SE +/- 1.73, N = 3 380.05 319.23 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: Blowfish - Decrypt Clang 12.0 Clang 11.0 80 160 240 320 400 SE +/- 0.04, N = 3 SE +/- 2.03, N = 3 351.28 351.08 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: CAST-256 Clang 12.0 Clang 11.0 30 60 90 120 150 SE +/- 0.02, N = 3 SE +/- 0.02, N = 3 132.82 128.59 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: CAST-256 - Decrypt Clang 12.0 Clang 11.0 30 60 90 120 150 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 133.05 127.74 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: ChaCha20Poly1305 Clang 12.0 Clang 11.0 200 400 600 800 1000 SE +/- 4.85, N = 3 SE +/- 0.62, N = 3 850.50 848.24 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better Botan 2.17.3 Test: ChaCha20Poly1305 - Decrypt Clang 12.0 Clang 11.0 200 400 600 800 1000 SE +/- 4.64, N = 3 SE +/- 0.16, N = 3 843.40 840.64 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
GraphicsMagick This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
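The GraphicsMagick test drives the library's C API through OpenMP-parallelized operations. As a loose illustration of the same kinds of operations, here is a hypothetical sketch using the Magick++ C++ binding; the file names and parameters are placeholders, and this is not the code path the benchmark itself exercises.

    #include <Magick++.h>

    int main(int, char **argv) {
        Magick::InitializeMagick(argv[0]);

        // Load a sample image, apply operations of the kind timed below,
        // and write the result back out.
        Magick::Image image("input.jpg");     // placeholder file name
        image.swirl(90.0);                    // "Swirl"-style operation
        image.rotate(45.0);                   // "Rotate"-style operation
        image.sharpen(0.0, 1.0);              // "Sharpen"-style operation: radius, sigma
        image.write("output.jpg");
        return 0;
    }
    // Build sketch (assumption): g++ gm_example.cpp `GraphicsMagick++-config --cppflags --ldflags --libs`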
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: Swirl Clang 12.0 Clang 11.0 400 800 1200 1600 2000 SE +/- 6.57, N = 3 SE +/- 12.41, N = 3 1993 1915 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: Rotate Clang 12.0 Clang 11.0 150 300 450 600 750 SE +/- 2.60, N = 3 SE +/- 1.33, N = 3 712 665 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: Sharpen Clang 12.0 Clang 11.0 130 260 390 520 650 614 613 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: Enhanced Clang 12.0 Clang 11.0 200 400 600 800 1000 SE +/- 1.86, N = 3 1076 1068 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: Resizing Clang 12.0 Clang 11.0 500 1000 1500 2000 2500 SE +/- 41.63, N = 12 SE +/- 27.29, N = 3 2136 2034 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: Noise-Gaussian Clang 12.0 Clang 11.0 100 200 300 400 500 SE +/- 1.00, N = 3 457 463 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.33 Operation: HWB Color Space Clang 12.0 Clang 11.0 130 260 390 520 650 SE +/- 0.67, N = 3 SE +/- 0.88, N = 3 605 616 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
dav1d Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better dav1d 0.8.2 Video Input: Chimera 1080p Clang 12.0 Clang 11.0 300 600 900 1200 1500 SE +/- 2.95, N = 3 SE +/- 6.69, N = 3 1198.22 1190.41 MIN: 700.24 / MAX: 1494.16 -lm - MIN: 685.16 / MAX: 1496.36 1. (CC) gcc options: -O3 -march=native -pthread
OpenBenchmarking.org FPS, More Is Better dav1d 0.8.2 Video Input: Summer Nature 4K Clang 12.0 Clang 11.0 120 240 360 480 600 SE +/- 1.79, N = 3 SE +/- 1.43, N = 3 541.56 543.43 MIN: 252.01 / MAX: 587.53 -lm - MIN: 256.75 / MAX: 593.99 1. (CC) gcc options: -O3 -march=native -pthread
OpenBenchmarking.org FPS, More Is Better dav1d 0.8.2 Video Input: Summer Nature 1080p Clang 12.0 Clang 11.0 300 600 900 1200 1500 SE +/- 7.87, N = 3 SE +/- 2.13, N = 3 1244.11 1251.25 MIN: 549.81 / MAX: 1390.03 -lm - MIN: 556.46 / MAX: 1394.06 1. (CC) gcc options: -O3 -march=native -pthread
OpenBenchmarking.org FPS, More Is Better dav1d 0.8.2 Video Input: Chimera 1080p 10-bit Clang 12.0 Clang 11.0 70 140 210 280 350 SE +/- 0.93, N = 3 SE +/- 0.48, N = 3 308.32 184.19 MIN: 220.53 / MAX: 490.51 -lm - MIN: 114.52 / MAX: 310.5 1. (CC) gcc options: -O3 -march=native -pthread
AOM AV1 This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.0 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K Clang 12.0 Clang 11.0 0.0473 0.0946 0.1419 0.1892 0.2365 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 0.21 0.21 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.0 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K Clang 12.0 Clang 11.0 1.1138 2.2276 3.3414 4.4552 5.569 SE +/- 0.04, N = 3 SE +/- 0.07, N = 3 4.87 4.95 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.0 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K Clang 12.0 Clang 11.0 4 8 12 16 20 SE +/- 0.11, N = 3 SE +/- 0.11, N = 3 17.22 17.13 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.0 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K Clang 12.0 Clang 11.0 3 6 9 12 15 SE +/- 0.10, N = 3 SE +/- 0.03, N = 3 8.99 9.14 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.0 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K Clang 12.0 Clang 11.0 8 16 24 32 40 SE +/- 0.48, N = 3 SE +/- 0.22, N = 3 33.39 33.14 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.0 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K Clang 12.0 Clang 11.0 9 18 27 36 45 SE +/- 0.43, N = 3 SE +/- 0.31, N = 3 38.11 37.28 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.0 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p Clang 12.0 Clang 11.0 0.1193 0.2386 0.3579 0.4772 0.5965 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 0.53 0.53 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.0 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p Clang 12.0 Clang 11.0 2 4 6 8 10 SE +/- 0.04, N = 3 SE +/- 0.01, N = 3 7.10 7.20 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.0 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p Clang 12.0 Clang 11.0 6 12 18 24 30 SE +/- 0.27, N = 3 SE +/- 0.13, N = 3 26.85 26.61 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.0 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p Clang 12.0 Clang 11.0 5 10 15 20 25 SE +/- 0.05, N = 3 SE +/- 0.15, N = 3 22.13 22.00 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.0 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p Clang 12.0 Clang 11.0 20 40 60 80 100 SE +/- 1.07, N = 3 SE +/- 0.51, N = 3 88.78 86.09 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.0 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p Clang 12.0 Clang 11.0 20 40 60 80 100 SE +/- 0.31, N = 3 SE +/- 0.53, N = 3 103.17 100.55 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
SVT-AV1 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 0.8 Encoder Mode: Enc Mode 0 - Input: 1080p Clang 12.0 Clang 11.0 0.0412 0.0824 0.1236 0.1648 0.206 SE +/- 0.000, N = 3 SE +/- 0.000, N = 3 0.183 0.181 1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 0.8 Encoder Mode: Enc Mode 4 - Input: 1080p Clang 12.0 Clang 11.0 3 6 9 12 15 SE +/- 0.17, N = 3 SE +/- 0.16, N = 4 11.47 11.82 1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 0.8 Encoder Mode: Enc Mode 8 - Input: 1080p Clang 12.0 Clang 11.0 30 60 90 120 150 SE +/- 0.10, N = 3 SE +/- 0.46, N = 3 118.07 117.39 1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 1 - Input: Bosphorus 1080p Clang 12.0 Clang 11.0 9 18 27 36 45 SE +/- 0.17, N = 3 SE +/- 0.09, N = 3 41.09 41.01 1. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 7 - Input: Bosphorus 1080p Clang 12.0 Clang 11.0 80 160 240 320 400 SE +/- 1.56, N = 3 SE +/- 3.43, N = 3 345.30 346.89 1. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 10 - Input: Bosphorus 1080p Clang 12.0 Clang 11.0 140 280 420 560 700 SE +/- 3.01, N = 3 SE +/- 5.55, N = 3 643.58 652.74 1. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: VMAF Optimized - Input: Bosphorus 1080p Clang 12.0 Clang 11.0 110 220 330 440 550 SE +/- 1.37, N = 3 SE +/- 0.23, N = 3 487.43 481.05 1. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p Clang 12.0 Clang 11.0 110 220 330 440 550 SE +/- 0.73, N = 3 SE +/- 1.76, N = 3 488.23 482.02 1. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: Visual Quality Optimized - Input: Bosphorus 1080p Clang 12.0 Clang 11.0 80 160 240 320 400 SE +/- 1.11, N = 3 SE +/- 1.91, N = 3 372.49 373.99 1. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
x265 This is a simple test of the x265 encoder run on the CPU, measuring H.265 video encode performance with 1080p and 4K inputs. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better x265 3.4 Video Input: Bosphorus 4K Clang 12.0 Clang 11.0 7 14 21 28 35 SE +/- 0.23, N = 3 SE +/- 0.25, N = 3 30.32 29.94 1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Frames Per Second, More Is Better x265 3.4 Video Input: Bosphorus 1080p Clang 12.0 Clang 11.0 16 32 48 64 80 SE +/- 0.49, N = 3 SE +/- 0.49, N = 3 74.00 73.36 1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.9.0 Encoder Speed: 10 Clang 12.0 Clang 11.0 0.7715 1.543 2.3145 3.086 3.8575 SE +/- 0.014, N = 3 SE +/- 0.010, N = 3 3.361 3.429 1. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.9.0 Encoder Speed: 10, Lossless Clang 12.0 Clang 11.0 1.3228 2.6456 3.9684 5.2912 6.614 SE +/- 0.013, N = 3 SE +/- 0.011, N = 3 5.746 5.879 1. (CXX) g++ options: -O3 -fPIC -lm
C-Ray This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.
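The inner loop of a ray tracer like C-Ray is dominated by ray-object intersection tests. As a worked illustration of the floating-point math involved (not C-Ray's actual code), the sketch below solves the quadratic for a ray-sphere intersection.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    static double dot(const Vec3 &a, const Vec3 &b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Returns true and the nearest positive distance t if the ray
    // origin + t*dir hits a sphere at `center` with radius `r`.
    static bool ray_sphere(const Vec3 &origin, const Vec3 &dir,
                           const Vec3 &center, double r, double &t) {
        Vec3 oc = { origin.x - center.x, origin.y - center.y, origin.z - center.z };
        double a = dot(dir, dir);
        double b = 2.0 * dot(oc, dir);
        double c = dot(oc, oc) - r * r;
        double disc = b * b - 4.0 * a * c;          // discriminant of the quadratic
        if (disc < 0.0) return false;               // ray misses the sphere
        double t0 = (-b - std::sqrt(disc)) / (2.0 * a);
        double t1 = (-b + std::sqrt(disc)) / (2.0 * a);
        t = (t0 > 0.0) ? t0 : t1;                   // nearest hit in front of the origin
        return t > 0.0;
    }

    int main() {
        Vec3 origin{0, 0, 0}, dir{0, 0, 1}, center{0, 0, 5};
        double t;
        if (ray_sphere(origin, dir, center, 1.0, t))
            std::printf("hit at t = %.3f\n", t);    // expected: t = 4.000
        return 0;
    }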
OpenBenchmarking.org Seconds, Fewer Is Better C-Ray 1.1 Total Time - 4K, 16 Rays Per Pixel Clang 12.0 Clang 11.0 4 8 12 16 20 SE +/- 0.02, N = 3 SE +/- 0.01, N = 3 15.87 15.60 1. (CC) gcc options: -lm -lpthread -O3 -march=native
POV-Ray This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better POV-Ray 3.7.0.7 Trace Time Clang 12.0 Clang 11.0 3 6 9 12 15 SE +/- 0.041, N = 3 SE +/- 0.032, N = 3 9.296 9.408 1. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lXpm -lSM -lICE -lX11 -lIlmImf -lImath -lHalf -lIex -lIexMath -lIlmThread -lpthread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
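The harness names in the results that follow map to oneDNN primitives: the "IP Shapes" cases exercise the inner-product (fully-connected) primitive, "Convolution Batch Shapes" the convolution primitive, and the RNN cases recurrent cells. As a plain-C++ illustration of what an inner-product layer computes (deliberately not using the oneDNN API itself, whose setup is considerably more involved), see the sketch below.

    #include <cstdio>
    #include <vector>

    // A fully-connected (inner product) layer:
    //   out[n][oc] = bias[oc] + sum over ic of in[n][ic] * w[oc][ic]
    // benchdnn times heavily blocked and vectorized versions of this computation.
    static void inner_product(const std::vector<float> &in,   // batch x in_ch
                              const std::vector<float> &w,    // out_ch x in_ch
                              const std::vector<float> &bias, // out_ch
                              std::vector<float> &out,        // batch x out_ch
                              int batch, int in_ch, int out_ch) {
        for (int n = 0; n < batch; ++n)
            for (int oc = 0; oc < out_ch; ++oc) {
                float acc = bias[oc];
                for (int ic = 0; ic < in_ch; ++ic)
                    acc += in[n * in_ch + ic] * w[oc * in_ch + ic];
                out[n * out_ch + oc] = acc;
            }
    }

    int main() {
        const int batch = 2, in_ch = 4, out_ch = 3;
        std::vector<float> in(batch * in_ch, 1.0f), w(out_ch * in_ch, 0.5f),
                           bias(out_ch, 0.1f), out(batch * out_ch);
        inner_product(in, w, bias, out, batch, in_ch, out_ch);
        std::printf("out[0][0] = %.2f\n", out[0]);   // 4 * 0.5 + 0.1 = 2.10
        return 0;
    }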
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU Clang 12.0 Clang 11.0 0.243 0.486 0.729 0.972 1.215 SE +/- 0.00199, N = 3 SE +/- 0.00127, N = 3 1.07701 1.08011 MIN: 1.04 MIN: 1.03 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU Clang 12.0 Clang 11.0 0.7938 1.5876 2.3814 3.1752 3.969 SE +/- 0.01639, N = 3 SE +/- 0.04735, N = 3 3.28507 3.52787 MIN: 3.15 MIN: 3.29 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU Clang 12.0 Clang 11.0 0.242 0.484 0.726 0.968 1.21 SE +/- 0.00286, N = 3 SE +/- 0.00395, N = 3 1.07507 1.07577 MIN: 0.87 MIN: 0.86 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU Clang 12.0 Clang 11.0 0.1598 0.3196 0.4794 0.6392 0.799 SE +/- 0.011383, N = 3 SE +/- 0.008914, N = 3 0.710124 0.594729 MIN: 0.64 MIN: 0.53 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU Clang 12.0 Clang 11.0 0.2748 0.5496 0.8244 1.0992 1.374 SE +/- 0.018279, N = 4 SE +/- 0.000480, N = 3 1.221320 0.841169 MIN: 1.13 MIN: 0.82 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU Clang 12.0 Clang 11.0 0.328 0.656 0.984 1.312 1.64 SE +/- 0.00123, N = 3 SE +/- 0.00568, N = 3 1.44425 1.45757 MIN: 1.34 MIN: 1.35 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU Clang 12.0 Clang 11.0 0.5328 1.0656 1.5984 2.1312 2.664 SE +/- 0.02100, N = 3 SE +/- 0.02389, N = 3 2.36797 2.31859 MIN: 2.01 MIN: 1.92 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU Clang 12.0 Clang 11.0 0.4581 0.9162 1.3743 1.8324 2.2905 SE +/- 0.01922, N = 12 SE +/- 0.00118, N = 3 2.03606 1.60540 MIN: 1.81 MIN: 1.55 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU Clang 12.0 Clang 11.0 0.1107 0.2214 0.3321 0.4428 0.5535 SE +/- 0.002843, N = 3 SE +/- 0.001652, N = 3 0.491940 0.489278 MIN: 0.47 MIN: 0.46 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU Clang 12.0 Clang 11.0 0.1754 0.3508 0.5262 0.7016 0.877 SE +/- 0.004246, N = 3 SE +/- 0.001200, N = 3 0.779776 0.779101 MIN: 0.73 MIN: 0.73 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU Clang 12.0 Clang 11.0 300 600 900 1200 1500 SE +/- 3.92, N = 3 SE +/- 9.46, N = 3 1302.70 1276.04 MIN: 1289.86 MIN: 1249.65 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU Clang 12.0 Clang 11.0 130 260 390 520 650 SE +/- 9.50, N = 3 SE +/- 0.83, N = 3 593.97 563.20 MIN: 570.44 MIN: 550.23 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU Clang 12.0 Clang 11.0 300 600 900 1200 1500 SE +/- 3.61, N = 3 SE +/- 7.11, N = 3 1307.49 1277.62 MIN: 1293.38 MIN: 1252.39 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU Clang 12.0 Clang 11.0 130 260 390 520 650 SE +/- 1.89, N = 3 SE +/- 0.25, N = 3 590.18 562.97 MIN: 575.41 MIN: 551.49 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU Clang 12.0 Clang 11.0 0.071 0.142 0.213 0.284 0.355 SE +/- 0.000321, N = 3 SE +/- 0.000247, N = 3 0.313689 0.315522 MIN: 0.3 MIN: 0.3 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU Clang 12.0 Clang 11.0 300 600 900 1200 1500 SE +/- 1.78, N = 3 SE +/- 9.75, N = 3 1305.10 1271.91 MIN: 1294.76 MIN: 1252.33 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU Clang 12.0 Clang 11.0 130 260 390 520 650 SE +/- 3.02, N = 3 SE +/- 0.10, N = 3 597.48 563.25 MIN: 580.8 MIN: 551.31 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU Clang 12.0 Clang 11.0 0.2638 0.5276 0.7914 1.0552 1.319 SE +/- 0.00458, N = 3 SE +/- 0.00653, N = 3 1.17258 1.15140 MIN: 1.12 MIN: 1.09 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp=libomp -msse4.1 -fPIC -pie -lpthread -ldl
Opus Codec Encoding Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
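opusenc wraps libopus; to show the shape of the underlying encode API that ends up being timed, here is a minimal, hypothetical sketch of encoding one 20 ms frame of 48 kHz stereo PCM with libopus. The bitrate and the silent input buffer are placeholders, not the settings used by this test profile.

    #include <opus/opus.h>
    #include <cstdio>
    #include <vector>

    int main() {
        int err = 0;
        // 48 kHz, 2 channels, general audio application.
        OpusEncoder *enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
        if (err != OPUS_OK) {
            std::fprintf(stderr, "encoder init failed: %d\n", err);
            return 1;
        }
        opus_encoder_ctl(enc, OPUS_SET_BITRATE(128000));   // example bitrate

        // One 20 ms frame at 48 kHz is 960 samples per channel; silence here.
        const int frame_size = 960;
        std::vector<opus_int16> pcm(frame_size * 2, 0);
        std::vector<unsigned char> packet(4000);

        opus_int32 bytes = opus_encode(enc, pcm.data(), frame_size,
                                       packet.data(), (opus_int32) packet.size());
        if (bytes < 0)
            std::fprintf(stderr, "encode failed: %d\n", bytes);
        else
            std::printf("encoded frame: %d bytes\n", bytes);

        opus_encoder_destroy(enc);
        return 0;
    }
    // Link sketch (assumption): g++ -O3 opus_example.cpp -lopus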
OpenBenchmarking.org Seconds, Fewer Is Better Opus Codec Encoding 1.3.1 WAV To Opus Encode Clang 12.0 Clang 11.0 2 4 6 8 10 SE +/- 0.013, N = 5 SE +/- 0.002, N = 5 7.567 7.392 1. (CXX) g++ options: -O3 -march=native -logg -lm
Gcrypt Library Libgcrypt is a general purpose cryptographic library developed as part of the GnuPG project. This is a benchmark of libgcrypt's integrated benchmark, measuring the time to run the benchmark command with the cipher/MAC/hash repetition count set to 50, as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.
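Libgcrypt's benchmark command iterates its cipher, MAC, and hash implementations. For a sense of the API surface, here is a minimal, hypothetical sketch of hashing a buffer with libgcrypt's one-shot interface; the input string is a placeholder.

    #include <gcrypt.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        // Library initialization: verify the version once before use.
        if (!gcry_check_version(GCRYPT_VERSION)) {
            std::fprintf(stderr, "libgcrypt version mismatch\n");
            return 1;
        }
        gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);

        const char *msg = "example message";          // placeholder input
        unsigned char digest[32];                     // SHA-256 output is 32 bytes

        // One-shot hash of the buffer into digest.
        gcry_md_hash_buffer(GCRY_MD_SHA256, digest, msg, std::strlen(msg));

        for (int i = 0; i < 32; ++i) std::printf("%02x", digest[i]);
        std::printf("\n");
        return 0;
    }
    // Link sketch (assumption): g++ -O3 gcrypt_example.cpp -lgcrypt -lgpg-error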
OpenBenchmarking.org Seconds, Fewer Is Better Gcrypt Library 1.9 Clang 12.0 Clang 11.0 50 100 150 200 250 SE +/- 0.44, N = 3 SE +/- 0.28, N = 3 236.92 240.21 1. (CC) gcc options: -O3 -march=native -fvisibility=hidden
Ngspice Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Ngspice 34 Circuit: C2670 Clang 12.0 Clang 11.0 30 60 90 120 150 SE +/- 0.53, N = 3 SE +/- 0.06, N = 3 118.87 103.83 1. (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
OpenBenchmarking.org Seconds, Fewer Is Better Ngspice 34 Circuit: C7552 Clang 12.0 Clang 11.0 20 40 60 80 100 SE +/- 1.11, N = 6 SE +/- 1.37, N = 3 95.96 90.53 1. (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
WebP2 Image Encode This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better WebP2 Image Encode 20210126 Encode Settings: Default Clang 12.0 Clang 11.0 0.6172 1.2344 1.8516 2.4688 3.086 SE +/- 0.027, N = 3 SE +/- 0.031, N = 3 2.739 2.743 1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
OpenBenchmarking.org Seconds, Fewer Is Better WebP2 Image Encode 20210126 Encode Settings: Quality 75, Compression Effort 7 Clang 12.0 Clang 11.0 20 40 60 80 100 SE +/- 0.10, N = 3 SE +/- 0.10, N = 3 109.53 109.64 1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
OpenBenchmarking.org Seconds, Fewer Is Better WebP2 Image Encode 20210126 Encode Settings: Quality 95, Compression Effort 7 Clang 12.0 Clang 11.0 50 100 150 200 250 SE +/- 0.07, N = 3 SE +/- 0.66, N = 3 207.01 203.63 1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
OpenBenchmarking.org Seconds, Fewer Is Better WebP2 Image Encode 20210126 Encode Settings: Quality 100, Compression Effort 5 Clang 12.0 Clang 11.0 2 4 6 8 10 SE +/- 0.006, N = 3 SE +/- 0.022, N = 3 6.690 7.366 1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
OpenBenchmarking.org Seconds, Fewer Is Better WebP2 Image Encode 20210126 Encode Settings: Quality 100, Lossless Compression Clang 12.0 Clang 11.0 90 180 270 360 450 SE +/- 0.49, N = 3 SE +/- 0.17, N = 3 374.04 392.85 1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
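The Liquid-DSP results that follow come from its multi-threaded FIR filtering benchmark (a 57-tap filter over 256-sample buffers). As a rough sketch of the library's single-threaded filtering API, and with the caveat that this is an illustration rather than the benchmark's own code, a firfilt object can be driven as below; the coefficients, input data, and installed header path are assumptions.

    #include <liquid/liquid.h>
    #include <complex>
    #include <cstdio>
    #include <vector>

    int main() {
        const unsigned int h_len = 57;     // filter length, matching the benchmarked configuration
        const unsigned int n = 256;        // buffer length, matching the benchmarked configuration

        // Placeholder coefficients: a crude moving-average filter.
        std::vector<float> h(h_len, 1.0f / h_len);

        // Complex input, real coefficients, complex output ("crcf" object).
        firfilt_crcf q = firfilt_crcf_create(h.data(), h_len);

        std::vector<std::complex<float>> x(n, std::complex<float>(1.0f, 0.0f)), y(n);
        for (unsigned int i = 0; i < n; ++i) {
            firfilt_crcf_push(q, x[i]);                // shift one sample into the window
            firfilt_crcf_execute(q, &y[i]);            // compute one output sample
        }

        std::printf("y[%u] = %.3f\n", n - 1, y[n - 1].real());
        firfilt_crcf_destroy(q);
        return 0;
    }
    // Link sketch (assumption): g++ -O3 liquid_example.cpp -lliquid -lm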
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 2021.01.31 Threads: 1 - Buffer Length: 256 - Filter Length: 57 Clang 12.0 Clang 11.0 12M 24M 36M 48M 60M SE +/- 790005.27, N = 3 SE +/- 40360.87, N = 3 55663000 56307000 1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 2021.01.31 Threads: 32 - Buffer Length: 256 - Filter Length: 57 Clang 12.0 Clang 11.0 300M 600M 900M 1200M 1500M SE +/- 2255610.29, N = 3 SE +/- 1331665.62, N = 3 1564833333 1578400000 1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 2021.01.31 Threads: 64 - Buffer Length: 256 - Filter Length: 57 Clang 12.0 Clang 11.0 700M 1400M 2100M 2800M 3500M SE +/- 6045475.81, N = 3 SE +/- 2452436.43, N = 3 3070633333 3051366667 1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 2021.01.31 Threads: 128 - Buffer Length: 256 - Filter Length: 57 Clang 12.0 Clang 11.0 800M 1600M 2400M 3200M 4000M SE +/- 883804.91, N = 3 SE +/- 1559202.08, N = 3 3643766667 3596533333 1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid
FinanceBench FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an Analytic European Option engine, the QMC (Sobol) Monte-Carlo method (equity option example), fixed-rate bonds with a flat forward curve, and repo securities repurchase agreements. FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
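Since the Repo and Bonds kernels benchmarked here build on the same option-pricing machinery, a worked example of the closed-form Black-Scholes-Merton call price helps show the kind of math FinanceBench parallelizes across options with OpenMP; the inputs below are arbitrary example values, not FinanceBench's data set.

    #include <cmath>
    #include <cstdio>

    // Standard normal CDF via the complementary error function.
    static double norm_cdf(double x) {
        return 0.5 * std::erfc(-x / std::sqrt(2.0));
    }

    // Black-Scholes-Merton price of a European call:
    //   C = S*N(d1) - K*exp(-r*T)*N(d2)
    //   d1 = (ln(S/K) + (r + 0.5*sigma^2)*T) / (sigma*sqrt(T)),  d2 = d1 - sigma*sqrt(T)
    static double bs_call(double S, double K, double r, double sigma, double T) {
        double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
        double d2 = d1 - sigma * std::sqrt(T);
        return S * norm_cdf(d1) - K * std::exp(-r * T) * norm_cdf(d2);
    }

    int main() {
        // Example inputs: spot 100, strike 100, 1% rate, 20% volatility, 1 year to expiry.
        std::printf("call price = %.4f\n", bs_call(100.0, 100.0, 0.01, 0.20, 1.0));
        return 0;
    }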
OpenBenchmarking.org ms, Fewer Is Better FinanceBench 2016-07-25 Benchmark: Repo OpenMP Clang 12.0 Clang 11.0 7K 14K 21K 28K 35K SE +/- 64.93, N = 3 SE +/- 0.81, N = 3 33246.84 33178.50 1. (CXX) g++ options: -O3 -march=native -fopenmp
OpenBenchmarking.org ms, Fewer Is Better FinanceBench 2016-07-25 Benchmark: Bonds OpenMP Clang 12.0 Clang 11.0 11K 22K 33K 44K 55K SE +/- 10.95, N = 3 SE +/- 4.51, N = 3 51596.87 51900.43 1. (CXX) g++ options: -O3 -march=native -fopenmp
ViennaCL ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.
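The test names below abbreviate standard BLAS kernels: COPY duplicates a vector, AXPY computes y = a*x + y, DOT is the inner product, GEMV is a matrix-vector product, and GEMM a matrix-matrix product, each in single (s) or double (d) precision. As a plain-C++ reminder of what the level-1 kernels compute (not ViennaCL's OpenMP implementation), see the sketch below.

    #include <cstdio>
    #include <vector>

    // dAXPY: y = a*x + y, in double precision.
    static void axpy(double a, const std::vector<double> &x, std::vector<double> &y) {
        for (size_t i = 0; i < x.size(); ++i) y[i] += a * x[i];
    }

    // dDOT: inner product of x and y.
    static double dot(const std::vector<double> &x, const std::vector<double> &y) {
        double sum = 0.0;
        for (size_t i = 0; i < x.size(); ++i) sum += x[i] * y[i];
        return sum;
    }

    int main() {
        std::vector<double> x(1000, 1.0), y(1000, 2.0);
        axpy(0.5, x, y);                               // each y[i] becomes 2.5
        std::printf("dot = %.1f\n", dot(x, y));        // 1000 * 1.0 * 2.5 = 2500.0
        return 0;
    }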
OpenBenchmarking.org GB/s, More Is Better ViennaCL 1.7.1 Test: CPU BLAS - sCOPY Clang 12.0 Clang 11.0 110 220 330 440 550 SE +/- 15.30, N = 12 SE +/- 36.50, N = 15 471 495 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.org GB/s, More Is Better ViennaCL 1.7.1 Test: CPU BLAS - sAXPY Clang 12.0 Clang 11.0 90 180 270 360 450 SE +/- 15.69, N = 12 SE +/- 34.43, N = 15 357 412 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.org GB/s, More Is Better ViennaCL 1.7.1 Test: CPU BLAS - sDOT Clang 12.0 Clang 11.0 100 200 300 400 500 SE +/- 35.24, N = 12 SE +/- 38.96, N = 15 434 462 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.org GB/s, More Is Better ViennaCL 1.7.1 Test: CPU BLAS - dCOPY Clang 12.0 Clang 11.0 400 800 1200 1600 2000 SE +/- 15.32, N = 11 SE +/- 8.32, N = 15 604 1877 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.org GB/s, More Is Better ViennaCL 1.7.1 Test: CPU BLAS - dAXPY Clang 12.0 Clang 11.0 200 400 600 800 1000 SE +/- 20.06, N = 12 SE +/- 1.59, N = 15 878 1043 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.org GB/s, More Is Better ViennaCL 1.7.1 Test: CPU BLAS - dDOT Clang 12.0 Clang 11.0 200 400 600 800 1000 SE +/- 17.06, N = 12 SE +/- 1.49, N = 15 819 933 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.org GB/s, More Is Better ViennaCL 1.7.1 Test: CPU BLAS - dGEMV-N Clang 12.0 Clang 11.0 15 30 45 60 75 SE +/- 2.22, N = 12 SE +/- 3.65, N = 15 69.1 51.2 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.org GB/s, More Is Better ViennaCL 1.7.1 Test: CPU BLAS - dGEMV-T Clang 12.0 Clang 11.0 150 300 450 600 750 SE +/- 4.04, N = 12 SE +/- 1.41, N = 14 626 677 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.org GFLOPs/s, More Is Better ViennaCL 1.7.1 Test: CPU BLAS - dGEMM-NN Clang 12.0 Clang 11.0 20 40 60 80 100 SE +/- 0.05, N = 12 SE +/- 0.06, N = 15 48.6 83.6 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.org GFLOPs/s, More Is Better ViennaCL 1.7.1 Test: CPU BLAS - dGEMM-NT Clang 12.0 Clang 11.0 20 40 60 80 100 SE +/- 0.56, N = 12 SE +/- 0.03, N = 15 65.7 79.3 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.org GFLOPs/s, More Is Better ViennaCL 1.7.1 Test: CPU BLAS - dGEMM-TN Clang 12.0 Clang 11.0 20 40 60 80 100 SE +/- 0.09, N = 12 SE +/- 0.02, N = 15 51.9 88.3 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
OpenBenchmarking.org GFLOPs/s, More Is Better ViennaCL 1.7.1 Test: CPU BLAS - dGEMM-TT Clang 12.0 Clang 11.0 20 40 60 80 100 SE +/- 0.07, N = 12 SE +/- 0.02, N = 14 73.0 84.0 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -rdynamic -lOpenCL
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better ASTC Encoder 2.4 Preset: Medium Clang 12.0 Clang 11.0 0.9013 1.8026 2.7039 3.6052 4.5065 SE +/- 0.0116, N = 3 SE +/- 0.0013, N = 3 4.0058 3.9837 1. (CXX) g++ options: -O3 -march=native -flto -pthread
OpenBenchmarking.org Seconds, Fewer Is Better ASTC Encoder 2.4 Preset: Thorough Clang 12.0 Clang 11.0 2 4 6 8 10 SE +/- 0.0028, N = 3 SE +/- 0.0026, N = 3 6.7647 6.7674 1. (CXX) g++ options: -O3 -march=native -flto -pthread
OpenBenchmarking.org Seconds, Fewer Is Better ASTC Encoder 2.4 Preset: Exhaustive Clang 12.0 Clang 11.0 5 10 15 20 25 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 18.99 19.03 1. (CXX) g++ options: -O3 -march=native -flto -pthread
ONNX Runtime ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Inferences Per Minute, More Is Better ONNX Runtime 1.6 Model: yolov4 - Device: OpenMP CPU Clang 12.0 Clang 11.0 80 160 240 320 400 SE +/- 4.15, N = 4 SE +/- 1.42, N = 3 333 346 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -ffunction-sections -fdata-sections -ldl -lrt
OpenBenchmarking.org Inferences Per Minute, More Is Better ONNX Runtime 1.6 Model: bertsquad-10 - Device: OpenMP CPU Clang 12.0 Clang 11.0 110 220 330 440 550 SE +/- 10.30, N = 12 SE +/- 5.55, N = 3 498 471 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -ffunction-sections -fdata-sections -ldl -lrt
OpenBenchmarking.org Inferences Per Minute, More Is Better ONNX Runtime 1.6 Model: fcn-resnet101-11 - Device: OpenMP CPU Clang 12.0 Clang 11.0 30 60 90 120 150 SE +/- 0.50, N = 3 SE +/- 0.29, N = 3 112 108 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -ffunction-sections -fdata-sections -ldl -lrt
OpenBenchmarking.org Inferences Per Minute, More Is Better ONNX Runtime 1.6 Model: shufflenet-v2-10 - Device: OpenMP CPU Clang 12.0 Clang 11.0 2K 4K 6K 8K 10K SE +/- 88.25, N = 12 SE +/- 102.76, N = 8 9904 9797 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -ffunction-sections -fdata-sections -ldl -lrt
OpenBenchmarking.org Inferences Per Minute, More Is Better ONNX Runtime 1.6 Model: super-resolution-10 - Device: OpenMP CPU Clang 12.0 Clang 11.0 1000 2000 3000 4000 5000 SE +/- 126.29, N = 12 SE +/- 169.87, N = 9 4456 4523 1. (CXX) g++ options: -O3 -march=native -fopenmp=libomp -ffunction-sections -fdata-sections -ldl -lrt
SecureMark SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org marks, More Is Better SecureMark 1.0.4 Benchmark: SecureMark-TLS Clang 12.0 Clang 11.0 60K 120K 180K 240K 300K SE +/- 1778.47, N = 3 SE +/- 407.86, N = 3 265204 260119 1. (CC) gcc options: -pedantic -O3
PostgreSQL pgbench This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org TPS, More Is Better PostgreSQL pgbench 13.0 Scaling Factor: 100 - Clients: 1 - Mode: Read Only Clang 12.0 Clang 11.0 5K 10K 15K 20K 25K SE +/- 303.43, N = 3 SE +/- 289.16, N = 3 24310 24943 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL pgbench 13.0 Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency Clang 12.0 Clang 11.0 0.0092 0.0184 0.0276 0.0368 0.046 SE +/- 0.001, N = 3 SE +/- 0.001, N = 3 0.041 0.040 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL pgbench 13.0 Scaling Factor: 100 - Clients: 1 - Mode: Read Write Clang 12.0 Clang 11.0 700 1400 2100 2800 3500 SE +/- 3.48, N = 3 SE +/- 14.62, N = 3 3281 3312 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL pgbench 13.0 Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency Clang 12.0 Clang 11.0 0.0686 0.1372 0.2058 0.2744 0.343 SE +/- 0.000, N = 3 SE +/- 0.002, N = 3 0.305 0.302 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL pgbench 13.0 Scaling Factor: 100 - Clients: 100 - Mode: Read Only Clang 12.0 Clang 11.0 200K 400K 600K 800K 1000K SE +/- 720.87, N = 3 SE +/- 1740.88, N = 3 1069022 1069367 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL pgbench 13.0 Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency Clang 12.0 Clang 11.0 0.0212 0.0424 0.0636 0.0848 0.106 SE +/- 0.000, N = 3 SE +/- 0.000, N = 3 0.094 0.094 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL pgbench 13.0 Scaling Factor: 100 - Clients: 250 - Mode: Read Only Clang 12.0 Clang 11.0 200K 400K 600K 800K 1000K SE +/- 6289.60, N = 3 SE +/- 13844.42, N = 3 1071209 1065506 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL pgbench 13.0 Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency Clang 12.0 Clang 11.0 0.0529 0.1058 0.1587 0.2116 0.2645 SE +/- 0.001, N = 3 SE +/- 0.003, N = 3 0.234 0.235 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL pgbench 13.0 Scaling Factor: 100 - Clients: 100 - Mode: Read Write Clang 12.0 Clang 11.0 13K 26K 39K 52K 65K SE +/- 162.92, N = 3 SE +/- 400.92, N = 3 62319 61616 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL pgbench 13.0 Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency Clang 12.0 Clang 11.0 0.3659 0.7318 1.0977 1.4636 1.8295 SE +/- 0.004, N = 3 SE +/- 0.011, N = 3 1.607 1.626 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL pgbench 13.0 Scaling Factor: 100 - Clients: 250 - Mode: Read Write Clang 12.0 Clang 11.0 12K 24K 36K 48K 60K SE +/- 702.52, N = 15 SE +/- 883.12, N = 3 56684 54488 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL pgbench 13.0 Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency Clang 12.0 Clang 11.0 1.0357 2.0714 3.1071 4.1428 5.1785 SE +/- 0.054, N = 15 SE +/- 0.074, N = 3 4.431 4.603 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
Clang 12.0
Processor: AMD EPYC 7763 64-Core @ 2.45GHz (64 Cores / 128 Threads), Motherboard: Supermicro H12SSL-i v1.01 (2.0 BIOS), Chipset: AMD Starship/Matisse, Memory: 126GB, Disk: 3841GB Micron_9300_MTFDHAL3T8TDP, Graphics: ASPEED, Network: 2 x Broadcom NetXtreme BCM5720 2-port PCIe
OS: Ubuntu 20.04, Kernel: 5.12.0-051200rc6daily20210408-generic (x86_64) 20210407, Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: Clang 12.0.0-++20210409092622+fa0971b87fb2-1~exp1~20210409193326.73, File-System: ext4, Screen Resolution: 1024x768
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Processor Notes: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0xa001119
Python Notes: Python 3.8.2
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 10 April 2021 12:16 by user phoronix.
Clang 11.0
Processor: AMD EPYC 7763 64-Core @ 2.45GHz (64 Cores / 128 Threads), Motherboard: Supermicro H12SSL-i v1.01 (2.0 BIOS), Chipset: AMD Starship/Matisse, Memory: 126GB, Disk: 3841GB Micron_9300_MTFDHAL3T8TDP, Graphics: ASPEED, Network: 2 x Broadcom NetXtreme BCM5720 2-port PCIe
OS: Ubuntu 20.04, Kernel: 5.12.0-051200rc6daily20210408-generic (x86_64) 20210407, Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: Clang 11.0.0-2~ubuntu20.04.1, File-System: ext4, Screen Resolution: 1024x768
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Processor Notes: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0xa001119
Python Notes: Python 3.8.2
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 11 April 2021 06:09 by user phoronix.
Clang 12.0 LTO
Processor: AMD EPYC 7763 64-Core @ 2.45GHz (64 Cores / 128 Threads), Motherboard: Supermicro H12SSL-i v1.01 (2.0 BIOS), Chipset: AMD Starship/Matisse, Memory: 126GB, Disk: 3841GB Micron_9300_MTFDHAL3T8TDP, Graphics: ASPEED, Network: 2 x Broadcom NetXtreme BCM5720 2-port PCIe
OS: Ubuntu 20.04, Kernel: 5.12.0-051200rc6daily20210408-generic (x86_64) 20210407, Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: Clang 12.0.0-++20210409092622+fa0971b87fb2-1~exp1~20210409193326.73, File-System: ext4, Screen Resolution: 1024x768
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native -flto" CFLAGS="-O3 -march=native -flto"
Processor Notes: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0xa001119
Python Notes: Python 3.8.2
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 12 April 2021 09:50 by user phoronix.