Intel Core i3-10100 testing with a Gigabyte B460M DS3H (F2 BIOS) and Gigabyte Intel UHD 630 3GB on Ubuntu 20.04 via the Phoronix Test Suite.
Linux 5.9-rc1
  Processor: Intel Core i3-10100 @ 4.30GHz (4 Cores / 8 Threads), Motherboard: Gigabyte B460M DS3H (F2 BIOS), Chipset: Intel Device 9b63, Memory: 16GB, Disk: 500GB Western Digital WDS500G3X0C-00SJG0, Graphics: Gigabyte Intel UHD 630 3GB (1100MHz), Audio: Realtek ALC887-VD, Monitor: G237HL, Network: Realtek RTL8111/8168/8411
  OS: Ubuntu 20.04, Kernel: 5.9.0-050900rc1daily20200819-generic (x86_64) 20200818, Desktop: GNOME Shell 3.36.3, Display Server: X Server 1.20.8, Display Driver: modesetting 1.20.8, OpenGL: 4.6 Mesa 20.0.8, Vulkan: 1.2.131, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1080
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xcc
Python Notes: Python 3.8.2
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Linux 5.9-rc7 / Linux 5.9-rc7 + mitigations=off
  OS: Ubuntu 20.04, Kernel: 5.9.0-050900rc7daily20201002-generic (x86_64) 20201001, Desktop: GNOME Shell 3.36.3, Display Server: X Server 1.20.8, Display Driver: modesetting 1.20.8, OpenGL: 4.6 Mesa 20.0.8, Vulkan: 1.2.131, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1080
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xcc
Python Notes: Python 3.8.2
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable IBPB: disabled STIBP: disabled + srbds: Not affected + tsx_async_abort: Not affected
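Since the two Linux 5.9-rc7 runs differ only in the mitigations=off boot parameter, the cost of the CPU security mitigations can be summarized as a percent change between paired results. A minimal sketch in Python (the helper name is ours; the TPS figures are the pgbench Scaling Factor 100 / Clients 100 / Read Write results reported later in this article):

```python
# Percent change between a mitigations=off run and the default run on
# the same kernel. TPS values are the pgbench Scaling Factor: 100 -
# Clients: 100 - Mode: Read Write results from this article.

def percent_change(value, baseline):
    """Signed percent difference of value relative to baseline."""
    return (value - baseline) / baseline * 100.0

tps_rc7_default = 12128          # Linux 5.9-rc7
tps_rc7_mitigations_off = 12164  # Linux 5.9-rc7 + mitigations=off

gain = percent_change(tps_rc7_mitigations_off, tps_rc7_default)
print(f"mitigations=off TPS gain on 5.9-rc7: {gain:+.2f}%")
```

The same comparison can be applied to any of the result pairs below; for this workload the gap between the two rc7 runs is well under one percent.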
PostgreSQL pgbench This is a benchmark of PostgreSQL using its pgbench utility to drive the database workloads. Learn more via the OpenBenchmarking.org test page.
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, more is better)
  Linux 5.9-rc1: 13482 (SE +/- 16.47, N = 3)
  Linux 5.9-rc7 + mitigations=off: 12164 (SE +/- 46.82, N = 3)
  Linux 5.9-rc7: 12128 (SE +/- 10.04, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, fewer is better)
  Linux 5.9-rc1: 7.420 (SE +/- 0.010, N = 3)
  Linux 5.9-rc7 + mitigations=off: 8.225 (SE +/- 0.032, N = 3)
  Linux 5.9-rc7: 8.248 (SE +/- 0.007, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
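For a closed-loop benchmark like pgbench, average latency follows almost directly from throughput: with a fixed number of clients each waiting on one transaction at a time, latency is roughly clients / TPS. A quick sanity check of the figures above (a sketch; the values are from this article, the helper name is ours):

```python
# Sanity check: pgbench average latency should be close to
# clients / TPS, converted to ms. Values are the Scaling Factor: 100 -
# Clients: 100 - Mode: Read Write results from this article.

def approx_latency_ms(clients, tps):
    """Estimated per-transaction latency in milliseconds."""
    return clients / tps * 1000.0

results = [
    ("Linux 5.9-rc1", 13482, 7.420),
    ("Linux 5.9-rc7 + mitigations=off", 12164, 8.225),
    ("Linux 5.9-rc7", 12128, 8.248),
]
for kernel, tps, reported_ms in results:
    est = approx_latency_ms(100, tps)
    print(f"{kernel}: estimated {est:.3f} ms, reported {reported_ms:.3f} ms")
```

The estimates land within a few microseconds of the reported averages, so the TPS and latency charts are two views of the same measurement.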
Kripke Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.
Kripke 1.2.4 (Throughput FoM, more is better)
  Linux 5.9-rc1: 11231933 (SE +/- 98710.99, N = 3)
  Linux 5.9-rc7: 10449837 (SE +/- 60699.02, N = 3)
  Linux 5.9-rc7 + mitigations=off: 10123507 (SE +/- 54228.54, N = 3)
  1. (CXX) g++ options: -O3 -fopenmp
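Each result in this article is reported as a mean with "SE +/- x, N = y". Assuming the conventional definition, that is the standard error of the mean across N runs: the sample standard deviation divided by the square root of N. A minimal sketch (the three run values are hypothetical, for illustration only, not taken from this article):

```python
# Standard error of the mean, as conventionally reported in the
# "SE +/- x, N = y" annotations. The sample run values below are
# hypothetical, for illustration only.
import math
import statistics

def standard_error(samples):
    """Sample standard deviation divided by sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

runs = [11150000.0, 11230000.0, 11320000.0]  # hypothetical FoM runs
print(f"mean = {statistics.mean(runs):.0f}, "
      f"SE +/- {standard_error(runs):.2f}, N = {len(runs)}")
```

A small SE relative to the gap between two configurations is what makes a ranking such as the Kripke one above meaningful rather than run-to-run noise.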
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency (ms, fewer is better)
  Linux 5.9-rc1: 0.538 (SE +/- 0.001, N = 3)
  Linux 5.9-rc7 + mitigations=off: 0.583 (SE +/- 0.002, N = 3)
  Linux 5.9-rc7: 0.587 (SE +/- 0.002, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write (TPS, more is better)
  Linux 5.9-rc1: 1857 (SE +/- 2.63, N = 3)
  Linux 5.9-rc7 + mitigations=off: 1716 (SE +/- 4.47, N = 3)
  Linux 5.9-rc7: 1704 (SE +/- 6.30, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, more is better)
  Linux 5.9-rc1: 14208 (SE +/- 49.30, N = 3)
  Linux 5.9-rc7 + mitigations=off: 13047 (SE +/- 83.62, N = 3)
  Linux 5.9-rc7: 13047 (SE +/- 42.18, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, fewer is better)
  Linux 5.9-rc1: 3.520 (SE +/- 0.012, N = 3)
  Linux 5.9-rc7: 3.833 (SE +/- 0.012, N = 3)
  Linux 5.9-rc7 + mitigations=off: 3.833 (SE +/- 0.025, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency (ms, fewer is better)
  Linux 5.9-rc7 + mitigations=off: 0.039 (SE +/- 0.000, N = 3)
  Linux 5.9-rc7: 0.040 (SE +/- 0.000, N = 3)
  Linux 5.9-rc1: 0.041 (SE +/- 0.000, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
KeyDB KeyDB is a multi-threaded fork of the Redis server. This benchmark of KeyDB is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.
KeyDB 6.0.16 (Ops/sec, more is better)
  Linux 5.9-rc7 + mitigations=off: 535529.45 (SE +/- 837.19, N = 3)
  Linux 5.9-rc1: 517377.47 (SE +/- 499.31, N = 3)
  Linux 5.9-rc7: 510641.79 (SE +/- 2349.50, N = 3)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
eSpeak-NG Speech Engine This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, fewer is better)
  Linux 5.9-rc1: 29.95 (SE +/- 0.12, N = 4)
  Linux 5.9-rc7 + mitigations=off: 30.25 (SE +/- 0.09, N = 4)
  Linux 5.9-rc7: 31.28 (SE +/- 0.04, N = 4)
  1. (CC) gcc options: -O2 -std=c99
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, more is better)
  Linux 5.9-rc7 + mitigations=off: 124644 (SE +/- 1544.88, N = 15)
  Linux 5.9-rc7: 122211 (SE +/- 1149.67, N = 15)
  Linux 5.9-rc1: 119863 (SE +/- 1891.40, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better)
  Linux 5.9-rc7 + mitigations=off: 0.804 (SE +/- 0.010, N = 15)
  Linux 5.9-rc7: 0.820 (SE +/- 0.007, N = 15)
  Linux 5.9-rc1: 0.835 (SE +/- 0.013, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
LeelaChessZero LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second, more is better)
  Linux 5.9-rc7: 530 (SE +/- 0.67, N = 3)
  Linux 5.9-rc1: 519 (SE +/- 4.10, N = 3)
  Linux 5.9-rc7 + mitigations=off: 513 (SE +/- 2.65, N = 3)
  1. (CXX) g++ options: -flto -pthread
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only (TPS, more is better)
  Linux 5.9-rc7 + mitigations=off: 29132 (SE +/- 183.92, N = 3)
  Linux 5.9-rc1: 28293 (SE +/- 109.30, N = 3)
  Linux 5.9-rc7: 28241 (SE +/- 11.07, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
AOM AV1 This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.
AOM AV1 2.0 - Encoder Mode: Speed 8 Realtime (Frames Per Second, more is better)
  Linux 5.9-rc1: 40.03 (SE +/- 0.01, N = 3)
  Linux 5.9-rc7: 39.77 (SE +/- 0.03, N = 3)
  Linux 5.9-rc7 + mitigations=off: 38.81 (SE +/- 0.04, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only (TPS, more is better)
  Linux 5.9-rc7 + mitigations=off: 25471 (SE +/- 67.75, N = 3)
  Linux 5.9-rc7: 24842 (SE +/- 160.82, N = 3)
  Linux 5.9-rc1: 24698 (SE +/- 38.89, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency (ms, fewer is better)
  Linux 5.9-rc7 + mitigations=off: 0.034 (SE +/- 0.000, N = 3)
  Linux 5.9-rc1: 0.035 (SE +/- 0.000, N = 3)
  Linux 5.9-rc7: 0.035 (SE +/- 0.000, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, fewer is better)
  Linux 5.9-rc7 + mitigations=off: 0.376 (SE +/- 0.005, N = 3)
  Linux 5.9-rc1: 0.382 (SE +/- 0.006, N = 3)
  Linux 5.9-rc7: 0.387 (SE +/- 0.001, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, more is better)
  Linux 5.9-rc7 + mitigations=off: 133033 (SE +/- 1611.71, N = 3)
  Linux 5.9-rc1: 130872 (SE +/- 1982.14, N = 3)
  Linux 5.9-rc7: 129303 (SE +/- 380.85, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
VkFFT VkFFT is a Fast Fourier Transform (FFT) library that is GPU-accelerated by means of the Vulkan API. The VkFFT benchmark measures FFT performance across many different transform sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.
VkFFT 2020-09-29 (Benchmark Score, more is better)
  Linux 5.9-rc1: 1113 (SE +/- 1.00, N = 3)
  Linux 5.9-rc7: 1109 (SE +/- 0.33, N = 3)
  Linux 5.9-rc7 + mitigations=off: 1082
NCNN NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
NCNN 20200916 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better)
  Linux 5.9-rc7: 43.40 (SE +/- 0.10, N = 3, MIN: 42.14 / MAX: 45.48)
  Linux 5.9-rc7 + mitigations=off: 43.91 (SE +/- 0.35, N = 3, MIN: 42.42 / MAX: 47.16)
  Linux 5.9-rc1: 44.47 (SE +/- 0.19, N = 3, MIN: 43.12 / MAX: 47.28)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
  Linux 5.9-rc7: 8.64 (SE +/- 0.01, N = 3, MIN: 8.3 / MAX: 8.88)
  Linux 5.9-rc1: 8.71 (SE +/- 0.07, N = 3, MIN: 7.96 / MAX: 8.98)
  Linux 5.9-rc7 + mitigations=off: 8.83 (SE +/- 0.21, N = 3, MIN: 8.49 / MAX: 9.46)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, more is better)
  Linux 5.9-rc1: 2448 (SE +/- 2.55, N = 3)
  Linux 5.9-rc7 + mitigations=off: 2410 (SE +/- 1.32, N = 3)
  Linux 5.9-rc7: 2399 (SE +/- 0.43, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, fewer is better)
  Linux 5.9-rc1: 20.43 (SE +/- 0.02, N = 3)
  Linux 5.9-rc7 + mitigations=off: 20.75 (SE +/- 0.01, N = 3)
  Linux 5.9-rc7: 20.84 (SE +/- 0.00, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, fewer is better)
  Linux 5.9-rc7: 17.42 (SE +/- 0.05, N = 3)
  Linux 5.9-rc7 + mitigations=off: 17.48 (SE +/- 0.04, N = 3)
  Linux 5.9-rc1: 17.75 (SE +/- 0.08, N = 3)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency (ms, fewer is better)
  Linux 5.9-rc1: 0.493 (SE +/- 0.002, N = 3)
  Linux 5.9-rc7 + mitigations=off: 0.496 (SE +/- 0.001, N = 3)
  Linux 5.9-rc7: 0.502 (SE +/- 0.000, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write (TPS, more is better)
  Linux 5.9-rc1: 2027 (SE +/- 8.21, N = 3)
  Linux 5.9-rc7 + mitigations=off: 2017 (SE +/- 3.53, N = 3)
  Linux 5.9-rc7: 1991 (SE +/- 2.10, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency (ms, fewer is better)
  Linux 5.9-rc7 + mitigations=off: 0.441 (SE +/- 0.001, N = 3)
  Linux 5.9-rc7: 0.445 (SE +/- 0.001, N = 3)
  Linux 5.9-rc1: 0.448 (SE +/- 0.001, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, fewer is better)
  Linux 5.9-rc1: 45.96 (SE +/- 0.13, N = 3)
  Linux 5.9-rc7: 46.21 (SE +/- 0.29, N = 3)
  Linux 5.9-rc7 + mitigations=off: 46.67 (SE +/- 0.53, N = 7)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
LibRaw LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.
LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better)
  Linux 5.9-rc1: 31.96 (SE +/- 0.04, N = 3)
  Linux 5.9-rc7: 31.82 (SE +/- 0.08, N = 3)
  Linux 5.9-rc7 + mitigations=off: 31.47 (SE +/- 0.03, N = 3)
  1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm
AOM AV1
AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, more is better)
  Linux 5.9-rc1: 3.30 (SE +/- 0.01, N = 3)
  Linux 5.9-rc7: 3.28 (SE +/- 0.00, N = 3)
  Linux 5.9-rc7 + mitigations=off: 3.25 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, more is better)
  Linux 5.9-rc1: 2177 (SE +/- 6.25, N = 3)
  Linux 5.9-rc7: 2165 (SE +/- 13.31, N = 3)
  Linux 5.9-rc7 + mitigations=off: 2145 (SE +/- 23.77, N = 7)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
FFTE FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, more is better)
  Linux 5.9-rc7 + mitigations=off: 17027.51 (SE +/- 47.52, N = 3)
  Linux 5.9-rc7: 16977.21 (SE +/- 38.47, N = 3)
  Linux 5.9-rc1: 16781.81 (SE +/- 85.45, N = 3)
  1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only (TPS, more is better)
  Linux 5.9-rc7 + mitigations=off: 113279 (SE +/- 173.39, N = 3)
  Linux 5.9-rc7: 112480 (SE +/- 237.20, N = 3)
  Linux 5.9-rc1: 111733 (SE +/- 344.72, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
Mobile Neural Network MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, fewer is better)
  Linux 5.9-rc7: 7.885 (SE +/- 0.038, N = 3, MIN: 7.8 / MAX: 9.55)
  Linux 5.9-rc7 + mitigations=off: 7.934 (SE +/- 0.054, N = 3, MIN: 7.8 / MAX: 20.69)
  Linux 5.9-rc1: 7.989 (SE +/- 0.009, N = 3, MIN: 7.93 / MAX: 10.34)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Apache CouchDB This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
Apache CouchDB 3.1.1 - Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds, fewer is better)
  Linux 5.9-rc7 + mitigations=off: 145.44 (SE +/- 0.46, N = 3)
  Linux 5.9-rc7: 145.55 (SE +/- 0.71, N = 3)
  Linux 5.9-rc1: 147.19 (SE +/- 0.84, N = 3)
  1. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD
Caffe This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and Googlenet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better)
  Linux 5.9-rc1: 119373 (SE +/- 59.58, N = 3)
  Linux 5.9-rc7 + mitigations=off: 120100 (SE +/- 54.03, N = 3)
  Linux 5.9-rc7: 120776 (SE +/- 22.88, N = 3)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
AOM AV1
AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, more is better)
  Linux 5.9-rc7: 16.83 (SE +/- 0.09, N = 3)
  Linux 5.9-rc1: 16.82 (SE +/- 0.08, N = 3)
  Linux 5.9-rc7 + mitigations=off: 16.64 (SE +/- 0.06, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Caffe
Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better)
  Linux 5.9-rc1: 59712 (SE +/- 16.42, N = 3)
  Linux 5.9-rc7 + mitigations=off: 60061 (SE +/- 22.93, N = 3)
  Linux 5.9-rc7: 60393 (SE +/- 16.65, N = 3)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
AOM AV1
AOM AV1 2.0 - Encoder Mode: Speed 4 Two-Pass (Frames Per Second, more is better)
  Linux 5.9-rc7: 2.06 (SE +/- 0.01, N = 3)
  Linux 5.9-rc1: 2.06 (SE +/- 0.00, N = 3)
  Linux 5.9-rc7 + mitigations=off: 2.04 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Mobile Neural Network
Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, fewer is better)
  Linux 5.9-rc7: 43.01 (SE +/- 0.16, N = 3, MIN: 42.74 / MAX: 58.11)
  Linux 5.9-rc7 + mitigations=off: 43.09 (SE +/- 0.14, N = 3, MIN: 42.85 / MAX: 45.61)
  Linux 5.9-rc1: 43.35 (SE +/- 0.03, N = 3, MIN: 43.19 / MAX: 56.37)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
GROMACS The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.
GROMACS 2020.3 - Water Benchmark (Ns Per Day, more is better)
  Linux 5.9-rc1: 0.522 (SE +/- 0.003, N = 3)
  Linux 5.9-rc7 + mitigations=off: 0.519 (SE +/- 0.002, N = 3)
  Linux 5.9-rc7: 0.518 (SE +/- 0.001, N = 3)
  1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better)
  Linux 5.9-rc7: 329.10 (SE +/- 0.19, N = 3, MIN: 327.92 / MAX: 336.27)
  Linux 5.9-rc1: 331.02 (SE +/- 0.64, N = 3, MIN: 328.98 / MAX: 335.04)
  Linux 5.9-rc7 + mitigations=off: 331.54 (SE +/- 1.83, N = 3, MIN: 328.47 / MAX: 335.84)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
WebP Image Encode
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better)
  Linux 5.9-rc7 + mitigations=off: 43.30 (SE +/- 0.04, N = 3)
  Linux 5.9-rc7: 43.36 (SE +/- 0.05, N = 3)
  Linux 5.9-rc1: 43.61 (SE +/- 0.01, N = 3)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
NCNN
NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  Linux 5.9-rc7 + mitigations=off: 4.17 (SE +/- 0.00, N = 3, MIN: 4.14 / MAX: 5.3)
  Linux 5.9-rc1: 4.18 (SE +/- 0.01, N = 3, MIN: 4.15 / MAX: 5.12)
  Linux 5.9-rc7: 4.20 (SE +/- 0.01, N = 3, MIN: 4.16 / MAX: 5.28)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Dolfyn Dolfyn is a Computational Fluid Dynamics (CFD) code using modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.
Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, fewer is better)
  Linux 5.9-rc1: 19.26 (SE +/- 0.04, N = 3)
  Linux 5.9-rc7 + mitigations=off: 19.26 (SE +/- 0.04, N = 3)
  Linux 5.9-rc7: 19.38 (SE +/- 0.06, N = 3)
Mobile Neural Network
Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, fewer is better)
  Linux 5.9-rc7: 48.70 (SE +/- 0.19, N = 3, MIN: 48.17 / MAX: 62.63)
  Linux 5.9-rc7 + mitigations=off: 48.83 (SE +/- 0.19, N = 3, MIN: 48.32 / MAX: 58.14)
  Linux 5.9-rc1: 49.01 (SE +/- 0.05, N = 3, MIN: 48.65 / MAX: 62.18)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
InfluxDB This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better)
  Linux 5.9-rc1: 1134964.5 (SE +/- 1658.61, N = 3)
  Linux 5.9-rc7: 1131288.1 (SE +/- 1718.08, N = 3)
  Linux 5.9-rc7 + mitigations=off: 1128771.5 (SE +/- 4835.91, N = 3)
NCNN
NCNN 20200916 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better)
  Linux 5.9-rc7 + mitigations=off: 199.92 (SE +/- 0.58, N = 3, MIN: 197.78 / MAX: 202.64)
  Linux 5.9-rc1: 200.89 (SE +/- 0.45, N = 3, MIN: 199.13 / MAX: 202.69)
  Linux 5.9-rc7: 200.92 (SE +/- 0.04, N = 3, MIN: 199.69 / MAX: 202.6)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
MPV MPV is an open-source, cross-platform media player. This test profile measures the frame-rate that can be achieved with playback running in a desynchronized (unsynchronized) mode. Learn more via the OpenBenchmarking.org test page.
MPV - Video Input: Big Buck Bunny Sunflower 1080p - Decode: Software Only (FPS, more is better)
  Linux 5.9-rc7 + mitigations=off: 215.32 (SE +/- 0.53, N = 3, MIN: 193.55 / MAX: 222.23)
  Linux 5.9-rc7: 215.24 (SE +/- 0.17, N = 3, MIN: 193.55 / MAX: 222.23)
  Linux 5.9-rc1: 214.31 (SE +/- 0.48, N = 3, MIN: 190.48 / MAX: 222.23)
  1. mpv 0.32.0
WebP Image Encode
WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, fewer is better)
  Linux 5.9-rc7 + mitigations=off: 7.169 (SE +/- 0.011, N = 3)
  Linux 5.9-rc1: 7.177 (SE +/- 0.013, N = 3)
  Linux 5.9-rc7: 7.202 (SE +/- 0.006, N = 3)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, more is better)
  Linux 5.9-rc1: 107131 (SE +/- 789.44, N = 3)
  Linux 5.9-rc7: 106731 (SE +/- 235.90, N = 3)
  Linux 5.9-rc7 + mitigations=off: 106651 (SE +/- 381.08, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
LeelaChessZero
LeelaChessZero 0.26 - Backend: BLAS (Nodes Per Second, more is better)
  Linux 5.9-rc7: 229 (SE +/- 3.71, N = 3)
  Linux 5.9-rc1: 229 (SE +/- 1.20, N = 3)
  Linux 5.9-rc7 + mitigations=off: 228
  1. (CXX) g++ options: -flto -pthread
Mobile Neural Network
Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, fewer is better)
  Linux 5.9-rc7: 4.895 (SE +/- 0.005, N = 3, MIN: 4.86 / MAX: 7.16)
  Linux 5.9-rc1: 4.914 (SE +/- 0.002, N = 3, MIN: 4.88 / MAX: 7.28)
  Linux 5.9-rc7 + mitigations=off: 4.916 (SE +/- 0.006, N = 3, MIN: 4.87 / MAX: 17.67)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better)
  Linux 5.9-rc1: 0.934 (SE +/- 0.007, N = 3)
  Linux 5.9-rc7: 0.937 (SE +/- 0.002, N = 3)
  Linux 5.9-rc7 + mitigations=off: 0.938 (SE +/- 0.003, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
WebP Image Encode
WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better)
  Linux 5.9-rc7: 2.377 (SE +/- 0.001, N = 3)
  Linux 5.9-rc7 + mitigations=off: 2.382 (SE +/- 0.003, N = 3)
  Linux 5.9-rc1: 2.386 (SE +/- 0.004, N = 3)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
NCNN
NCNN is a high performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2
  Linux 5.9-rc1:                    8.03  (SE +/- 0.01, N = 3; MIN: 7.95 / MAX: 9.23)
  Linux 5.9-rc7 + mitigations=off:  8.05  (SE +/- 0.01, N = 3; MIN: 7.97 / MAX: 9.63)
  Linux 5.9-rc7:                    8.06  (SE +/- 0.00, N = 3; MIN: 7.97 / MAX: 9.75)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Monte Carlo Simulations of Ionised Nebulae
MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better
Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0
  Linux 5.9-rc7 + mitigations=off:  270  (SE +/- 0.67, N = 3)
  Linux 5.9-rc1:                    271
  Linux 5.9-rc7:                    271
1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
NCNN
OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: Vulkan GPU - Model: mobilenet
  Linux 5.9-rc7:                    38.20  (SE +/- 0.44, N = 3; MIN: 36.45 / MAX: 46.01)
  Linux 5.9-rc1:                    38.26  (SE +/- 0.36, N = 3; MIN: 36.04 / MAX: 46.85)
  Linux 5.9-rc7 + mitigations=off:  38.34  (SE +/- 0.47, N = 3; MIN: 35.99 / MAX: 46.82)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Caffe
This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Milli-Seconds, Fewer Is Better
Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200
  Linux 5.9-rc1:                    289313  (SE +/- 76.51, N = 3)
  Linux 5.9-rc7 + mitigations=off:  290054  (SE +/- 18.52, N = 3)
  Linux 5.9-rc7:                    290372  (SE +/- 436.14, N = 3)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
RNNoise
RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This single-threaded test profile measures the time to denoise a sample 26-minute 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better
RNNoise 2020-06-28
  Linux 5.9-rc1:                    25.14  (SE +/- 0.01, N = 3)
  Linux 5.9-rc7:                    25.17  (SE +/- 0.04, N = 3)
  Linux 5.9-rc7 + mitigations=off:  25.23  (SE +/- 0.04, N = 3)
1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden
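Because the RNNoise test denoises a fixed 26-minute audio clip, its run time converts directly into a realtime factor. A quick sketch using the fastest result above:

```python
clip_seconds = 26 * 60         # length of the sample audio file
run_seconds = 25.14            # Linux 5.9-rc1 result from above

# How many seconds of audio are denoised per second of wall-clock time
realtime_factor = clip_seconds / run_seconds
print(round(realtime_factor, 1))   # about 62x faster than realtime
```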
Caffe
OpenBenchmarking.org Milli-Seconds, Fewer Is Better
Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100
  Linux 5.9-rc1:                    144673  (SE +/- 22.11, N = 3)
  Linux 5.9-rc7 + mitigations=off:  145001  (SE +/- 20.43, N = 3)
  Linux 5.9-rc7:                    145093  (SE +/- 44.11, N = 3)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
NCNN
OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU - Model: vgg16
  Linux 5.9-rc7:                    111.43  (SE +/- 0.02, N = 3; MIN: 111.01 / MAX: 123.94)
  Linux 5.9-rc7 + mitigations=off:  111.70  (SE +/- 0.09, N = 3; MIN: 111.2 / MAX: 123.34)
  Linux 5.9-rc1:                    111.73  (SE +/- 0.03, N = 3; MIN: 111.37 / MAX: 122.13)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NAMD
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org days/ns, Fewer Is Better
NAMD 2.14 - ATPase Simulation - 327,506 Atoms
  Linux 5.9-rc7 + mitigations=off:  3.42570  (SE +/- 0.00509, N = 3)
  Linux 5.9-rc7:                    3.42653  (SE +/- 0.00499, N = 3)
  Linux 5.9-rc1:                    3.43492  (SE +/- 0.00319, N = 3)
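NAMD reports days/ns, the wall-clock days needed to simulate one nanosecond, which is the fewer-is-better reciprocal of the more familiar ns/day metric. Converting the best result above:

```python
days_per_ns = 3.42570          # Linux 5.9-rc7 + mitigations=off result above

# ns/day is simply the reciprocal of days/ns
ns_per_day = 1.0 / days_per_ns
print(round(ns_per_day, 3))    # about 0.292 ns of simulation per day
```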
InfluxDB
This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org val/sec, More Is Better
InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
  Linux 5.9-rc7:                    1169460.9  (SE +/- 238.09, N = 3)
  Linux 5.9-rc7 + mitigations=off:  1167644.9  (SE +/- 1127.04, N = 3)
  Linux 5.9-rc1:                    1166424.5  (SE +/- 369.57, N = 3)

OpenBenchmarking.org val/sec, More Is Better
InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
  Linux 5.9-rc1:                    1155511.6  (SE +/- 1466.20, N = 3)
  Linux 5.9-rc7:                    1154840.1  (SE +/- 3995.71, N = 3)
  Linux 5.9-rc7 + mitigations=off:  1152691.2  (SE +/- 381.22, N = 3)
RealSR-NCNN
RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR is Real-World Super Resolution via Kernel Estimation and Noise Injection; NCNN is a high performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better
RealSR-NCNN 20200818 - Scale: 4x - TAA: No
  Linux 5.9-rc7:                    258.09  (SE +/- 0.14, N = 3)
  Linux 5.9-rc7 + mitigations=off:  258.51  (SE +/- 0.23, N = 3)
  Linux 5.9-rc1:                    258.69  (SE +/- 0.19, N = 3)
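One axis of this comparison is the cost of the default CPU security mitigations versus booting with mitigations=off, and across these results the gap is consistently a fraction of a percent. As an example, the relative difference for the RealSR-NCNN run above (negative means the default, mitigated kernel was marginally faster in this particular run):

```python
mitigated = 258.09             # seconds, Linux 5.9-rc7 with default mitigations
unmitigated = 258.51           # seconds, Linux 5.9-rc7 + mitigations=off

delta_pct = (mitigated - unmitigated) / unmitigated * 100.0
print(round(delta_pct, 2))     # about -0.16 percent
```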
Mobile Neural Network
OpenBenchmarking.org ms, Fewer Is Better
Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0
  Linux 5.9-rc7:                    6.319  (SE +/- 0.005, N = 3; MIN: 6.28 / MAX: 7.95)
  Linux 5.9-rc1:                    6.324  (SE +/- 0.013, N = 3; MIN: 6.27 / MAX: 8.77)
  Linux 5.9-rc7 + mitigations=off:  6.331  (SE +/- 0.012, N = 3; MIN: 6.28 / MAX: 19.69)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
NCNN
OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU - Model: efficientnet-b0
  Linux 5.9-rc7 + mitigations=off:  10.64  (SE +/- 0.00, N = 3; MIN: 10.59 / MAX: 11.36)
  Linux 5.9-rc1:                    10.66  (SE +/- 0.00, N = 3; MIN: 10.6 / MAX: 12.92)
  Linux 5.9-rc7:                    10.66  (SE +/- 0.02, N = 3; MIN: 10.58 / MAX: 21.44)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
  Linux 5.9-rc7 + mitigations=off:  12.42  (SE +/- 0.00, N = 3; MIN: 11.97 / MAX: 12.75)
  Linux 5.9-rc1:                    12.43  (SE +/- 0.01, N = 3; MIN: 11.96 / MAX: 12.74)
  Linux 5.9-rc7:                    12.44  (SE +/- 0.01, N = 3; MIN: 12.23 / MAX: 13.62)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
MPV
MPV is an open-source, cross-platform media player. This test profile measures the frame-rate that can be achieved when playback runs in a desynchronized mode, unsynchronized to the display. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better
MPV - Video Input: Big Buck Bunny Sunflower 4K - Decode: Software Only
  Linux 5.9-rc7 + mitigations=off:  82.51  (SE +/- 0.12, N = 3; MIN: 79.47 / MAX: 83.92)
  Linux 5.9-rc1:                    82.51  (SE +/- 0.01, N = 3; MIN: 78.95 / MAX: 83.92)
  Linux 5.9-rc7:                    82.38  (SE +/- 0.13, N = 3; MIN: 79.47 / MAX: 83.92)
1. mpv 0.32.0
NCNN
OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU - Model: squeezenet
  Linux 5.9-rc7:                    25.69  (SE +/- 0.01, N = 3; MIN: 25.58 / MAX: 26.97)
  Linux 5.9-rc1:                    25.70  (SE +/- 0.02, N = 3; MIN: 25.6 / MAX: 26.49)
  Linux 5.9-rc7 + mitigations=off:  25.73  (SE +/- 0.05, N = 3; MIN: 25.58 / MAX: 27.95)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN
OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU - Model: mobilenet
  Linux 5.9-rc1:                    30.35  (SE +/- 0.05, N = 3; MIN: 30.19 / MAX: 41.39)
  Linux 5.9-rc7 + mitigations=off:  30.36  (SE +/- 0.03, N = 3; MIN: 30.25 / MAX: 41.42)
  Linux 5.9-rc7:                    30.39  (SE +/- 0.01, N = 3; MIN: 30.25 / MAX: 41.35)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU - Model: alexnet
  Linux 5.9-rc7 + mitigations=off:  24.28  (SE +/- 0.02, N = 3; MIN: 24.19 / MAX: 26.61)
  Linux 5.9-rc7:                    24.30  (SE +/- 0.01, N = 3; MIN: 24.19 / MAX: 35.23)
  Linux 5.9-rc1:                    24.31  (SE +/- 0.03, N = 3; MIN: 24.17 / MAX: 32.09)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: Vulkan GPU - Model: squeezenet
  Linux 5.9-rc7 + mitigations=off:  43.79  (SE +/- 0.11, N = 3; MIN: 43.2 / MAX: 79.94)
  Linux 5.9-rc1:                    43.82  (SE +/- 0.05, N = 3; MIN: 43.27 / MAX: 45.47)
  Linux 5.9-rc7:                    43.84  (SE +/- 0.02, N = 3; MIN: 43.36 / MAX: 44.83)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: Vulkan GPU - Model: yolov4-tiny
  Linux 5.9-rc1:                    79.96  (SE +/- 0.03, N = 3; MIN: 63.74 / MAX: 83.15)
  Linux 5.9-rc7 + mitigations=off:  80.00  (SE +/- 0.01, N = 3; MIN: 78.65 / MAX: 84.67)
  Linux 5.9-rc7:                    80.05  (SE +/- 0.03, N = 3; MIN: 78.58 / MAX: 86.33)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU - Model: resnet18
  Linux 5.9-rc7:                    21.75  (SE +/- 0.02, N = 3; MIN: 21.66 / MAX: 24.09)
  Linux 5.9-rc1:                    21.76  (SE +/- 0.02, N = 3; MIN: 21.63 / MAX: 22.1)
  Linux 5.9-rc7 + mitigations=off:  21.77  (SE +/- 0.02, N = 3; MIN: 21.62 / MAX: 23.88)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU - Model: googlenet
  Linux 5.9-rc7 + mitigations=off:  22.02  (SE +/- 0.01, N = 3; MIN: 21.94 / MAX: 23.51)
  Linux 5.9-rc7:                    22.03  (SE +/- 0.01, N = 3; MIN: 21.95 / MAX: 24.29)
  Linux 5.9-rc1:                    22.04  (SE +/- 0.00, N = 3; MIN: 21.95 / MAX: 23.97)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU - Model: resnet50
  Linux 5.9-rc7:                    45.89  (SE +/- 0.02, N = 3; MIN: 45.75 / MAX: 48.52)
  Linux 5.9-rc1:                    45.93  (SE +/- 0.01, N = 3; MIN: 45.76 / MAX: 56.57)
  Linux 5.9-rc7 + mitigations=off:  45.93  (SE +/- 0.02, N = 3; MIN: 45.75 / MAX: 56.77)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
GPAW
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better
GPAW 20.1 - Input: Carbon Nanotube
  Linux 5.9-rc7:                    992.73  (SE +/- 0.27, N = 3)
  Linux 5.9-rc7 + mitigations=off:  992.94  (SE +/- 1.33, N = 3)
  Linux 5.9-rc1:                    993.46  (SE +/- 0.88, N = 3)
1. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi
WebP Image Encode
OpenBenchmarking.org Encode Time - Seconds, Fewer Is Better
WebP Image Encode 1.1 - Encode Settings: Default
  Linux 5.9-rc1:                    1.546  (SE +/- 0.001, N = 3)
  Linux 5.9-rc7:                    1.547  (SE +/- 0.001, N = 3)
  Linux 5.9-rc7 + mitigations=off:  1.547  (SE +/- 0.000, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
NCNN
OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: Vulkan GPU - Model: googlenet
  Linux 5.9-rc7:                    34.94  (SE +/- 0.01, N = 3; MIN: 34.31 / MAX: 35.12)
  Linux 5.9-rc7 + mitigations=off:  34.94  (SE +/- 0.01, N = 3; MIN: 34.66 / MAX: 35.22)
  Linux 5.9-rc1:                    34.96  (SE +/- 0.02, N = 3; MIN: 34.69 / MAX: 35.06)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: Vulkan GPU - Model: resnet50
  Linux 5.9-rc7 + mitigations=off:  74.16  (SE +/- 0.00, N = 3; MIN: 73.75 / MAX: 74.72)
  Linux 5.9-rc7:                    74.18  (SE +/- 0.01, N = 3; MIN: 73.92 / MAX: 74.41)
  Linux 5.9-rc1:                    74.20  (SE +/- 0.01, N = 3; MIN: 73.88 / MAX: 74.6)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU - Model: yolov4-tiny
  Linux 5.9-rc1:                    42.05  (SE +/- 0.02, N = 3; MIN: 41.95 / MAX: 42.75)
  Linux 5.9-rc7:                    42.07  (SE +/- 0.02, N = 3; MIN: 41.96 / MAX: 44.26)
  Linux 5.9-rc7 + mitigations=off:  42.07  (SE +/- 0.02, N = 3; MIN: 41.96 / MAX: 44.5)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: Vulkan GPU - Model: efficientnet-b0
  Linux 5.9-rc7:                    25.67  (SE +/- 0.00, N = 3; MIN: 25.03 / MAX: 25.83)
  Linux 5.9-rc1:                    25.68  (SE +/- 0.01, N = 3; MIN: 25.44 / MAX: 25.75)
  Linux 5.9-rc7 + mitigations=off:  25.68  (SE +/- 0.01, N = 3; MIN: 25.45 / MAX: 26.39)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
TNN
TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better
TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1
  Linux 5.9-rc7 + mitigations=off:  312.98  (SE +/- 0.10, N = 3; MIN: 312.05 / MAX: 313.86)
  Linux 5.9-rc7:                    313.00  (SE +/- 0.07, N = 3; MIN: 312.25 / MAX: 313.9)
  Linux 5.9-rc1:                    313.08  (SE +/- 0.12, N = 3; MIN: 312.16 / MAX: 314.14)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
NCNN
OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: Vulkan GPU - Model: resnet18
  Linux 5.9-rc1:                    30.26  (SE +/- 0.00, N = 3; MIN: 30.05 / MAX: 30.36)
  Linux 5.9-rc7 + mitigations=off:  30.26  (SE +/- 0.01, N = 3; MIN: 30.05 / MAX: 30.38)
  Linux 5.9-rc7:                    30.27  (SE +/- 0.01, N = 3; MIN: 30 / MAX: 30.38)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Timed HMMer Search
This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better
Timed HMMer Search 3.3.1 - Pfam Database Search
  Linux 5.9-rc1:                    110.41  (SE +/- 0.02, N = 3)
  Linux 5.9-rc7 + mitigations=off:  110.44  (SE +/- 0.05, N = 3)
  Linux 5.9-rc7:                    110.44  (SE +/- 0.02, N = 3)
1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm
NCNN
OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: Vulkan GPU - Model: mnasnet
  Linux 5.9-rc1:                    12.62  (SE +/- 0.00, N = 3; MIN: 12.59 / MAX: 12.69)
  Linux 5.9-rc7:                    12.62  (SE +/- 0.00, N = 3; MIN: 12.1 / MAX: 13.18)
  Linux 5.9-rc7 + mitigations=off:  12.62  (SE +/- 0.01, N = 3; MIN: 12.12 / MAX: 12.86)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
  Linux 5.9-rc1:                    13.74  (SE +/- 0.01, N = 3; MIN: 13.52 / MAX: 13.99)
  Linux 5.9-rc7:                    13.74  (SE +/- 0.01, N = 3; MIN: 13.57 / MAX: 13.93)
  Linux 5.9-rc7 + mitigations=off:  13.74  (SE +/- 0.01, N = 3; MIN: 13.52 / MAX: 14.03)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU - Model: blazeface
  Linux 5.9-rc1:                    1.67  (SE +/- 0.01, N = 3; MIN: 1.65 / MAX: 1.71)
  Linux 5.9-rc7:                    1.67  (SE +/- 0.00, N = 3; MIN: 1.65 / MAX: 1.87)
  Linux 5.9-rc7 + mitigations=off:  1.67  (SE +/- 0.01, N = 3; MIN: 1.65 / MAX: 1.69)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU - Model: mnasnet
  Linux 5.9-rc1:                    6.35  (SE +/- 0.01, N = 3; MIN: 6.3 / MAX: 7.39)
  Linux 5.9-rc7:                    6.35  (SE +/- 0.01, N = 3; MIN: 6.31 / MAX: 7.4)
  Linux 5.9-rc7 + mitigations=off:  6.35  (SE +/- 0.01, N = 3; MIN: 6.3 / MAX: 8.64)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3
  Linux 5.9-rc1:                    6.86  (SE +/- 0.01, N = 3; MIN: 6.79 / MAX: 8.38)
  Linux 5.9-rc7:                    6.86  (SE +/- 0.01, N = 3; MIN: 6.79 / MAX: 8.22)
  Linux 5.9-rc7 + mitigations=off:  6.86  (SE +/- 0.01, N = 3; MIN: 6.79 / MAX: 8.31)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
AOM AV1
This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better
AOM AV1 2.0 - Encoder Mode: Speed 0 Two-Pass
  Linux 5.9-rc7 + mitigations=off:  0.24  (SE +/- 0.00, N = 3)
  Linux 5.9-rc7:                    0.24  (SE +/- 0.00, N = 3)
  Linux 5.9-rc1:                    0.24  (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
NCNN
OpenBenchmarking.org ms, Fewer Is Better
NCNN 20200916 - Target: Vulkan GPU - Model: blazeface
  Linux 5.9-rc7 + mitigations=off:  2.67  (SE +/- 0.01, N = 3; MIN: 2.3 / MAX: 3.13)
  Linux 5.9-rc7:                    2.72  (SE +/- 0.14, N = 3; MIN: 2.4 / MAX: 3.3)
  Linux 5.9-rc1:                    2.82  (SE +/- 0.20, N = 3; MIN: 2.38 / MAX: 3.38)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Linux 5.9-rc1
Processor: Intel Core i3-10100 @ 4.30GHz (4 Cores / 8 Threads), Motherboard: Gigabyte B460M DS3H (F2 BIOS), Chipset: Intel Device 9b63, Memory: 16GB, Disk: 500GB Western Digital WDS500G3X0C-00SJG0, Graphics: Gigabyte Intel UHD 630 3GB (1100MHz), Audio: Realtek ALC887-VD, Monitor: G237HL, Network: Realtek RTL8111/8168/8411
OS: Ubuntu 20.04, Kernel: 5.9.0-050900rc1daily20200819-generic (x86_64) 20200818, Desktop: GNOME Shell 3.36.3, Display Server: X Server 1.20.8, Display Driver: modesetting 1.20.8, OpenGL: 4.6 Mesa 20.0.8, Vulkan: 1.2.131, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1080
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xcc
Python Notes: Python 3.8.2
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 2 October 2020 06:48 by user phoronix.
Linux 5.9-rc7
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xcc
Python Notes: Python 3.8.2
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 2 October 2020 15:29 by user phoronix.
Linux 5.9-rc7 + mitigations=off
Processor: Intel Core i3-10100 @ 4.30GHz (4 Cores / 8 Threads), Motherboard: Gigabyte B460M DS3H (F2 BIOS), Chipset: Intel Device 9b63, Memory: 16GB, Disk: 500GB Western Digital WDS500G3X0C-00SJG0, Graphics: Gigabyte Intel UHD 630 3GB (1100MHz), Audio: Realtek ALC887-VD, Monitor: G237HL, Network: Realtek RTL8111/8168/8411
OS: Ubuntu 20.04, Kernel: 5.9.0-050900rc7daily20201002-generic (x86_64) 20201001, Desktop: GNOME Shell 3.36.3, Display Server: X Server 1.20.8, Display Driver: modesetting 1.20.8, OpenGL: 4.6 Mesa 20.0.8, Vulkan: 1.2.131, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1080
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xcc
Python Notes: Python 3.8.2
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable IBPB: disabled STIBP: disabled + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 3 October 2020 05:41 by user phoronix.