Intel Xeon E3-1280 v5 testing with an MSI Z170A SLI PLUS (MS-7998) v1.0 (2.A0 BIOS) and ASUS AMD Radeon HD 7850 / R7 265 R9 270 1024SP on Ubuntu 20.04 via the Phoronix Test Suite.
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xdc
Graphics Notes: GLAMOR
Python Notes: Python 3.8.2
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
Processor: Intel Xeon E3-1280 v5 @ 4.00GHz (4 Cores / 8 Threads), Motherboard: MSI Z170A SLI PLUS (MS-7998) v1.0 (2.A0 BIOS), Chipset: Intel Xeon E3-1200 v5/E3-1500, Memory: 32GB, Disk: 256GB TOSHIBA RD400, Graphics: ASUS AMD Radeon HD 7850 / R7 265 R9 270 1024SP, Audio: Realtek ALC1150, Monitor: VA2431, Network: Intel I219-V
OS: Ubuntu 20.04, Kernel: 5.9.0-050900rc2daily20200826-generic (x86_64) 20200825, Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Display Driver: modesetting 1.20.8, OpenGL: 4.5 Mesa 20.0.8 (LLVM 10.0.0), Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1080
PostgreSQL pgbench
This is a benchmark of PostgreSQL using its bundled pgbench tool to drive the database workloads. Learn more via the OpenBenchmarking.org test page.
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, fewer is better): Run 1: 359.56 (SE +/- 1.58); Run 2: 390.03 (SE +/- 0.67); Run 3: 399.09 (SE +/- 3.22); N = 3 for each run. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
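Each result line reports "SE +/- x, N = y", the standard error of the mean across the N runs of that test. A minimal sketch of the computation (the run values below are made up purely for illustration, not taken from these results):

```python
import math
import statistics

def standard_error(samples):
    # Standard error of the mean: sample standard deviation / sqrt(N).
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical latencies (ms) from three runs, for illustration only:
runs = [357.2, 359.5, 362.0]
print(f"{statistics.mean(runs):.2f} ms (SE +/- {standard_error(runs):.2f}, N = {len(runs)})")
```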
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, more is better): Run 1: 278 (SE +/- 1.21); Run 2: 256 (SE +/- 0.44); Run 3: 251 (SE +/- 2.05); N = 3 for each run.
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average Latency (ms, fewer is better): Run 1: 942.92 (SE +/- 2.51, N = 3); Run 2: 1019.73 (SE +/- 9.31, N = 3); Run 3: 1037.52 (SE +/- 15.27, N = 4).
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write (TPS, more is better): Run 1: 265 (SE +/- 0.70, N = 3); Run 2: 245 (SE +/- 2.26, N = 3); Run 3: 241 (SE +/- 3.57, N = 4).
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
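The latency and TPS charts are two views of the same measurement: with a fixed number of clients each issuing transactions back-to-back, Little's law gives average latency of roughly clients / TPS. A small sanity check against the read-write numbers above:

```python
def expected_latency_ms(clients, tps):
    # Little's law for a closed system: concurrency = throughput x latency,
    # so average latency (ms) = clients / TPS * 1000.
    return clients / tps * 1000.0

# Run 1 of the 250-client read-write test reported 265 TPS alongside
# 942.92 ms average latency; the two agree to within rounding.
print(expected_latency_ms(250, 265))
```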
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, fewer is better): Run 1: 175.93 (SE +/- 0.28); Run 2: 183.65 (SE +/- 0.31); Run 3: 190.86 (SE +/- 2.94); N = 3 for each run.
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, more is better): Run 1: 284 (SE +/- 0.45); Run 2: 272 (SE +/- 0.46); Run 3: 262 (SE +/- 4.01); N = 3 for each run.
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, more is better): Run 1: 3636 (SE +/- 52.45, N = 4); Run 2: 3459 (SE +/- 51.71, N = 4); Run 3: 3365 (SE +/- 38.07, N = 3).
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, fewer is better): Run 1: 13.76 (SE +/- 0.20, N = 4); Run 2: 14.47 (SE +/- 0.21, N = 4); Run 3: 14.87 (SE +/- 0.17, N = 3).
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write (TPS, more is better): Run 1: 196 (SE +/- 1.55); Run 2: 191 (SE +/- 1.54); Run 3: 184 (SE +/- 3.02); N = 3 for each run.
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency (ms, fewer is better): Run 1: 5.109 (SE +/- 0.041); Run 2: 5.231 (SE +/- 0.042); Run 3: 5.424 (SE +/- 0.088); N = 3 for each run.
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency (ms, fewer is better): Run 1: 4.796 (SE +/- 0.020); Run 2: 4.987 (SE +/- 0.039); Run 3: 5.063 (SE +/- 0.040); N = 3 for each run.
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write (TPS, more is better): Run 1: 209 (SE +/- 0.84); Run 2: 201 (SE +/- 1.59); Run 3: 198 (SE +/- 1.56); N = 3 for each run.
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, more is better): Run 3: 83711 (SE +/- 936.11, N = 7); Run 2: 82741 (SE +/- 223.08, N = 3); Run 1: 81251 (SE +/- 1041.95, N = 3).
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, more is better): Run 1: 110927 (SE +/- 1362.67, N = 5); Run 2: 109654 (SE +/- 1227.66, N = 3); Run 3: 107694 (SE +/- 1157.60, N = 3).
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better): Run 1: 0.902 (SE +/- 0.011, N = 5); Run 2: 0.912 (SE +/- 0.010, N = 3); Run 3: 0.929 (SE +/- 0.010, N = 3).
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, fewer is better): Run 3: 2.990 (SE +/- 0.034, N = 7); Run 2: 3.022 (SE +/- 0.008, N = 3); Run 1: 3.079 (SE +/- 0.040, N = 3).
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
FFTE
FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, more is better): Run 1: 20211.79 (SE +/- 68.64); Run 2: 19804.66 (SE +/- 25.77); Run 3: 19709.17 (SE +/- 37.30); N = 3 for each run. 1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp
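FFTE only handles transform lengths whose prime factors are 2, 3, and 5; the N=256 used here qualifies as 2^8. A small checker (the helper name is my own, for illustration):

```python
def is_ffte_length(n):
    # FFTE supports lengths of the form (2^p)*(3^q)*(5^r):
    # divide out those factors and check nothing else remains.
    if n < 1:
        return False
    for f in (2, 3, 5):
        while n % f == 0:
            n //= f
    return n == 1

print(is_ffte_length(256))  # True: 256 = 2^8, the N used in this run
print(is_ffte_length(7))    # False: 7 is not a supported factor
```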
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency (ms, fewer is better): Run 2: 0.048 (SE +/- 0.000); Run 1: 0.049 (SE +/- 0.000); Run 3: 0.049 (SE +/- 0.000); N = 3 for each run. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
InfluxDB
This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): Run 1: 889455.4 (SE +/- 3581.92); Run 2: 876094.3 (SE +/- 3852.84); Run 3: 872007.4 (SE +/- 3829.04); N = 3 for each run.
Apache CouchDB
This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
Apache CouchDB 3.1.1 - Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds, fewer is better): Run 1: 147.52 (SE +/- 1.53); Run 2: 149.15 (SE +/- 1.00); Run 3: 150.37 (SE +/- 0.69); N = 3 for each run. 1. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only (TPS, more is better): Run 1: 92077 (SE +/- 830.93, N = 11); Run 3: 91934 (SE +/- 917.40, N = 15); Run 2: 90359 (SE +/- 1424.81, N = 3).
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average Latency (ms, fewer is better): Run 1: 2.718 (SE +/- 0.024, N = 11); Run 3: 2.724 (SE +/- 0.027, N = 15); Run 2: 2.769 (SE +/- 0.043, N = 3).
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, fewer is better): Run 1: 0.436 (SE +/- 0.004); Run 2: 0.437 (SE +/- 0.005); Run 3: 0.442 (SE +/- 0.001); N = 3 for each run. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
TNN
TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better): Run 2: 337.74 (SE +/- 0.07; min 336.52 / max 340.1); Run 3: 337.82 (SE +/- 0.10; min 337.14 / max 341.29); Run 1: 342.39 (SE +/- 5.07; min 335.31 / max 400.55); N = 3 for each run. 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, more is better): Run 1: 114854 (SE +/- 1174.02); Run 2: 114466 (SE +/- 1266.71); Run 3: 113306 (SE +/- 158.67); N = 3 for each run. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
KeyDB
A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.
KeyDB 6.0.16 (Ops/sec, more is better): Run 1: 385851.58 (SE +/- 668.35); Run 2: 382973.15 (SE +/- 2020.02); Run 3: 382203.75 (SE +/- 386.83); N = 3 for each run. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better): Run 1: 1.050 (SE +/- 0.001); Run 2: 1.058 (SE +/- 0.002); Run 3: 1.060 (SE +/- 0.004); N = 3 for each run. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
Kripke
Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.
Kripke 1.2.4 (Throughput FoM, more is better): Run 1: 16736247 (SE +/- 104643.97); Run 2: 16595780 (SE +/- 52933.71); Run 3: 16583477 (SE +/- 52236.76); N = 3 for each run. 1. (CXX) g++ options: -O3 -fopenmp
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, more is better): Run 1: 95282 (SE +/- 79.84); Run 2: 94493 (SE +/- 150.99); Run 3: 94424 (SE +/- 409.00); N = 3 for each run.
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only (TPS, more is better): Run 1: 97849 (SE +/- 96.80); Run 2: 97079 (SE +/- 53.15); Run 3: 97011 (SE +/- 11.66); N = 3 for each run.
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
WebP Image Encode
This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, fewer is better): Run 1: 18.80 (SE +/- 0.03); Run 2: 18.88 (SE +/- 0.01); Run 3: 18.95 (SE +/- 0.03); N = 3 for each run. 1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
PostgreSQL pgbench
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency (ms, fewer is better): Run 1: 0.511 (SE +/- 0.001); Run 2: 0.515 (SE +/- 0.000); Run 3: 0.515 (SE +/- 0.000); N = 3 for each run. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
NCNN
NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): Run 2: 7.78 (SE +/- 0.00; min 7.72 / max 9.38); Run 3: 7.78 (SE +/- 0.01; min 7.71 / max 9.57); Run 1: 7.83 (SE +/- 0.06; min 7.7 / max 27.26); N = 3 for each run. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Mobile Neural Network
MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, fewer is better): Run 1: 10.04 (SE +/- 0.09, N = 11; min 9.52 / max 33.64); Run 2: 10.06 (SE +/- 0.15, N = 4; min 9.56 / max 33.37); Run 3: 10.10 (SE +/- 0.12, N = 5; min 9.56 / max 33.15).
Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, fewer is better): Run 2: 55.81 (SE +/- 0.38, N = 4; min 54.55 / max 78.78); Run 3: 55.95 (SE +/- 0.36, N = 5; min 53.86 / max 81.65); Run 1: 56.11 (SE +/- 0.32, N = 11; min 54.43 / max 198.77).
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
MPV
MPV is an open-source, cross-platform media player. This test profile measures the frame rate that can be achieved with playback running desynchronized from the display (uncapped). Learn more via the OpenBenchmarking.org test page.
MPV - Video Input: Big Buck Bunny Sunflower 4K - Decode: Software Only (FPS, more is better): Run 1: 377.56 (SE +/- 0.43; min 230.76 / max 545.43); Run 2: 377.03 (SE +/- 0.96; min 235.28 / max 545.43); Run 3: 375.64 (SE +/- 0.41; min 230.76 / max 545.43); N = 3 for each run. 1. mpv 0.32.0
InfluxDB
InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): Run 2: 1028323.2 (SE +/- 1131.06); Run 1: 1027433.3 (SE +/- 2049.73); Run 3: 1023332.0 (SE +/- 1680.62); N = 3 for each run.
NCNN
NCNN 20200916 - Target: CPU - Model: blazeface (ms, fewer is better): Run 1: 2.09 (SE +/- 0.00; min 2.07 / max 2.19); Run 2: 2.09 (SE +/- 0.00; min 2.07 / max 2.91); Run 3: 2.10 (SE +/- 0.00; min 2.08 / max 2.33); N = 3 for each run. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
WebP Image Encode
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better): Run 1: 46.86 (SE +/- 0.09); Run 2: 46.99 (SE +/- 0.11); Run 3: 47.08 (SE +/- 0.10); N = 3 for each run. 1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
NCNN
NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): Run 1: 6.50 (SE +/- 0.00; min 6.44 / max 8.75); Run 3: 6.52 (SE +/- 0.00; min 6.44 / max 8.06); Run 2: 6.53 (SE +/- 0.02; min 6.44 / max 18.87); N = 3 for each run.
NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better): Run 1: 4.56 (SE +/- 0.00; min 4.51 / max 7.21); Run 2: 4.57 (SE +/- 0.01; min 4.53 / max 6.2); Run 3: 4.58 (SE +/- 0.01; min 4.52 / max 6.22); N = 3 for each run.
NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms, fewer is better): Run 2: 39.92 (SE +/- 0.01; min 39.8 / max 41.77); Run 3: 39.92 (SE +/- 0.02; min 39.78 / max 42.74); Run 1: 40.09 (SE +/- 0.06; min 39.88 / max 53.45); N = 3 for each run.
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN
NCNN 20200916 - Target: CPU - Model: mobilenet (ms, fewer is better): Run 2: 28.76 (SE +/- 0.01; min 28.7 / max 29.48); Run 3: 28.76 (SE +/- 0.01; min 28.67 / max 30.45); Run 1: 28.87 (SE +/- 0.05; min 28.75 / max 29.64); N = 3 for each run. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
MPV
MPV - Video Input: Big Buck Bunny Sunflower 1080p - Decode: Software Only (FPS, more is better): Run 3: 1181.91 (SE +/- 2.51; min 705.84 / max 1999.9); Run 1: 1180.75 (SE +/- 5.14; min 666.63 / max 1999.92); Run 2: 1177.43 (SE +/- 2.75; min 666.63 / max 1999.92); N = 3 for each run. 1. mpv 0.32.0
Incompact3D
Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Incompact3D 2020-09-17 - Input: Cylinder (Seconds, fewer is better): Run 2: 734.33 (SE +/- 0.52); Run 1: 734.54 (SE +/- 0.92); Run 3: 737.05 (SE +/- 0.78); N = 3 for each run. 1. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
Mobile Neural Network
Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, fewer is better): Run 2: 5.570 (SE +/- 0.007, N = 4; min 5.5 / max 10.38); Run 3: 5.576 (SE +/- 0.008, N = 5; min 5.5 / max 8.52); Run 1: 5.590 (SE +/- 0.011, N = 11; min 5.47 / max 28.96). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
InfluxDB
InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): Run 1: 1016169.1 (SE +/- 4142.78); Run 2: 1013846.1 (SE +/- 3135.08); Run 3: 1012647.3 (SE +/- 4099.63); N = 3 for each run.
Caffe
This is a benchmark of the Caffe deep learning framework, currently supporting the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better): Run 2: 322330 (SE +/- 99.10); Run 1: 323152 (SE +/- 117.16); Run 3: 323334 (SE +/- 166.75); N = 3 for each run. 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
DeepSpeech
Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
DeepSpeech 0.6 - Acceleration: CPU (Seconds, fewer is better): Run 1: 77.70 (SE +/- 0.05); Run 3: 77.70 (SE +/- 0.05); Run 2: 77.94 (SE +/- 0.15); N = 3 for each run.
RNNoise
RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute-long 16-bit RAW audio file using this library. Learn more via the OpenBenchmarking.org test page.
RNNoise 2020-06-28 (Seconds, fewer is better): Run 1: 27.17 (SE +/- 0.04); Run 2: 27.18 (SE +/- 0.05); Run 3: 27.25 (SE +/- 0.05); N = 3 for each run. 1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden
Caffe
Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better): Run 1: 130722 (SE +/- 84.02); Run 3: 130891 (SE +/- 79.44); Run 2: 131091 (SE +/- 170.17); N = 3 for each run. 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
NCNN
NCNN 20200916 - Target: CPU - Model: squeezenet (ms, fewer is better): Run 2: 24.86 (SE +/- 0.01; min 24.78 / max 25.75); Run 3: 24.89 (SE +/- 0.03; min 24.77 / max 37.72); Run 1: 24.92 (SE +/- 0.08; min 24.77 / max 26.68); N = 3 for each run. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
GPAW
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
GPAW 20.1 - Input: Carbon Nanotube (Seconds, fewer is better): Run 1: 602.62 (SE +/- 0.46); Run 3: 602.97 (SE +/- 0.51); Run 2: 604.07 (SE +/- 0.26); N = 3 for each run. 1. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi
LeelaChessZero
LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
LeelaChessZero 0.26 - Backend: Random (Nodes Per Second, more is better): Run 1: 207336 (SE +/- 165.24); Run 2: 207014 (SE +/- 216.46); Run 3: 206946 (SE +/- 346.16); N = 3 for each run. 1. (CXX) g++ options: -flto -pthread
Dolfyn
Dolfyn is a Computational Fluid Dynamics (CFD) code based on modern numerical simulation techniques. The Dolfyn test profile measures the execution time of its bundled computational fluid dynamics demos. Learn more via the OpenBenchmarking.org test page.
Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, fewer is better): Run 3: 20.91 (SE +/- 0.04); Run 2: 20.92 (SE +/- 0.04); Run 1: 20.95 (SE +/- 0.03); N = 3 for each run.
WebP Image Encode
WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, fewer is better): Run 1: 7.972 (SE +/- 0.006); Run 2: 7.985 (SE +/- 0.011); Run 3: 7.986 (SE +/- 0.007); N = 3 for each run. 1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
NCNN
NCNN 20200916 - Target: CPU - Model: googlenet (ms, fewer is better): Run 1: 23.14 (SE +/- 0.02; min 23.04 / max 34.42); Run 2: 23.14 (SE +/- 0.03; min 23.03 / max 36.37); Run 3: 23.18 (SE +/- 0.03; min 23.07 / max 33.92); N = 3 for each run.
NCNN 20200916 - Target: CPU - Model: mnasnet (ms, fewer is better): Run 1: 6.56 (SE +/- 0.01; min 6.51 / max 8.26); Run 2: 6.56 (SE +/- 0.00; min 6.51 / max 9.26); Run 3: 6.57 (SE +/- 0.00; min 6.53 / max 8.19); N = 3 for each run.
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
TNN
TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better): Run 1: 357.32 (SE +/- 0.43; min 355.94 / max 359.3); Run 3: 357.36 (SE +/- 0.32; min 356.03 / max 359.25); Run 2: 357.85 (SE +/- 0.33; min 356.17 / max 360.63); N = 3 for each run. 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
Caffe
Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better): Run 3: 161112 (SE +/- 109.27); Run 2: 161133 (SE +/- 51.19); Run 1: 161333 (SE +/- 124.23); N = 3 for each run. 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
TSCP
This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.
TSCP 1.81 - AI Chess Performance (Nodes Per Second, more is better): Run 2: 1163456 (SE +/- 1252.48); Run 3: 1162474 (SE +/- 1204.17); Run 1: 1161980 (SE +/- 775.23); N = 5 for each run. 1. (CC) gcc options: -O3 -march=native
WebP Image Encode
WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better): Run 2: 2.674 (SE +/- 0.000); Run 1: 2.677 (SE +/- 0.004); Run 3: 2.677 (SE +/- 0.000); N = 3 for each run. 1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
NCNN
NCNN 20200916 - Target: CPU - Model: resnet50 (ms, fewer is better): Run 1: 47.02 (SE +/- 0.01; min 46.88 / max 49.96); Run 3: 47.05 (SE +/- 0.03; min 46.9 / max 49.62); Run 2: 47.07 (SE +/- 0.03; min 46.89 / max 57.05); N = 3 for each run.
NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better): Run 2: 10.45 (SE +/- 0.00; min 10.41 / max 13.2); Run 1: 10.46 (SE +/- 0.01; min 10.42 / max 10.78); Run 3: 10.46 (SE +/- 0.01; min 10.42 / max 11.3); N = 3 for each run.
NCNN 20200916 - Target: CPU - Model: resnet18 (ms, fewer is better): Run 2: 23.50 (SE +/- 0.01; min 23.38 / max 24.04); Run 3: 23.50 (SE +/- 0.01; min 23.39 / max 25.33); Run 1: 23.52 (SE +/- 0.02; min 23.4 / max 25.9); N = 3 for each run.
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
WebP Image Encode
WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, fewer is better): Run 1: 1.706 (SE +/- 0.002); Run 2: 1.706 (SE +/- 0.001); Run 3: 1.707 (SE +/- 0.000); N = 3 for each run. 1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
NAMD
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
NAMD 2.14 - ATPase Simulation - 327,506 Atoms (OpenBenchmarking.org days/ns, Fewer Is Better)
  System 1: 3.80757 (SE +/- 0.00692, N = 3)
  System 3: 3.80919 (SE +/- 0.01581, N = 3)
  System 2: 3.80980 (SE +/- 0.00405, N = 3)
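The NAMD metric is days of wall-clock time per nanosecond of simulated time, so lower is better; taking the reciprocal gives the more commonly quoted ns/day figure:

```python
def days_per_ns_to_ns_per_day(days_per_ns):
    # days/ns and ns/day are reciprocals of one another
    return 1.0 / days_per_ns

# System 1's result above:
print(round(days_per_ns_to_ns_per_day(3.80757), 3))  # about 0.263 ns/day
```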
Caffe
This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogLeNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (OpenBenchmarking.org Milli-Seconds, Fewer Is Better)
  System 1: 65515 (SE +/- 63.17, N = 3)
  System 3: 65520 (SE +/- 55.92, N = 3)
  System 2: 65541 (SE +/- 27.22, N = 3)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
Mobile Neural Network
MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 2020-09-17 - Model: inception-v3 (OpenBenchmarking.org ms, Fewer Is Better)
  System 3: 61.63 (SE +/- 0.27, N = 5; MIN: 60.1 / MAX: 85.52)
  System 1: 61.63 (SE +/- 0.22, N = 11; MIN: 60.24 / MAX: 85.44)
  System 2: 61.65 (SE +/- 0.36, N = 4; MIN: 60.21 / MAX: 138.47)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
NCNN
NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
NCNN 20200916 - Target: CPU - Model: alexnet (OpenBenchmarking.org ms, Fewer Is Better)
  System 1: 22.64 (SE +/- 0.03, N = 3; MIN: 22.54 / MAX: 24.97)
  System 2: 22.64 (SE +/- 0.01, N = 3; MIN: 22.56 / MAX: 24.94)
  System 3: 22.64 (SE +/- 0.02, N = 3; MIN: 22.56 / MAX: 25.21)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
PostgreSQL pgbench
This is a benchmark of PostgreSQL using its pgbench tool for database benchmarking. Learn more via the OpenBenchmarking.org test page.
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency (OpenBenchmarking.org ms, Fewer Is Better)
  System 1: 0.049 (SE +/- 0.000, N = 3)
  System 2: 0.049 (SE +/- 0.000, N = 3)
  System 3: 0.049 (SE +/- 0.000, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
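As a sketch of how a configuration like the one above (scaling factor 100, 1 client, read-only) is typically driven with pgbench, assuming a hypothetical database name and run duration not stated in this report:

```shell
# Initialize test tables at scaling factor 100 (100,000 account rows per
# scale unit, i.e. 10 million rows); "pgtest" is a hypothetical database name.
pgbench -i -s 100 pgtest

# Read-only run: -S selects the built-in select-only script, -c sets the
# client count, -j the worker threads, -T the duration in seconds (assumed),
# and -r reports per-statement average latency.
pgbench -S -c 1 -j 1 -T 60 -r pgtest
```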
Monte Carlo Simulations of Ionised Nebulae
MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.
Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0 (OpenBenchmarking.org Seconds, Fewer Is Better)
  System 1: 300 (SE +/- 0.58, N = 3)
  System 2: 300 (SE +/- 0.33, N = 3)
  System 3: 300
  1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
NCNN
NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
NCNN 20200916 - Target: CPU - Model: vgg16 (OpenBenchmarking.org ms, Fewer Is Better)
  System 3: 93.29 (SE +/- 0.03, N = 3; MIN: 93.1 / MAX: 105.92)
  System 2: 93.30 (SE +/- 0.03, N = 3; MIN: 93.07 / MAX: 106.12)
  System 1: 103.72 (SE +/- 10.25, N = 3; MIN: 93.04 / MAX: 2721.22)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Mobile Neural Network
MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (OpenBenchmarking.org ms, Fewer Is Better)
  System 2: 7.674 (SE +/- 0.009, N = 4; MIN: 7.6 / MAX: 31)
  System 3: 7.683 (SE +/- 0.005, N = 5; MIN: 7.64 / MAX: 12.03)
  System 1: 8.022 (SE +/- 0.334, N = 11; MIN: 7.51 / MAX: 64.47)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
PostgreSQL pgbench
This is a benchmark of PostgreSQL using its pgbench tool for database benchmarking. Learn more via the OpenBenchmarking.org test page.
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (OpenBenchmarking.org ms, Fewer Is Better)
  System 1: 51.84 (SE +/- 0.16, N = 3)
  System 2: 54.62 (SE +/- 0.85, N = 14)
  System 3: 55.78 (SE +/- 0.95, N = 15)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (OpenBenchmarking.org TPS, More Is Better)
  System 1: 4824 (SE +/- 14.83, N = 3)
  System 2: 4592 (SE +/- 69.14, N = 14)
  System 3: 4501 (SE +/- 72.37, N = 15)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (OpenBenchmarking.org ms, Fewer Is Better)
  System 1: 21.99 (SE +/- 0.49, N = 15)
  System 2: 22.68 (SE +/- 0.41, N = 15)
  System 3: 23.75 (SE +/- 0.52, N = 15)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (OpenBenchmarking.org TPS, More Is Better)
  System 1: 4579 (SE +/- 93.25, N = 15)
  System 2: 4429 (SE +/- 74.38, N = 15)
  System 3: 4240 (SE +/- 93.27, N = 15)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
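The pgbench TPS and average-latency pairs are two views of the same runs: with a fixed pool of clients each waiting on its own transaction, latency is approximately clients / TPS (Little's law for a closed system). A quick sanity check against the figures above:

```python
def expected_latency_ms(clients, tps):
    """Little's law for a closed system: latency ~= concurrency / throughput."""
    return clients / tps * 1000.0

# Reported (clients, TPS, average latency in ms) triples from the runs above:
runs = [(250, 4824, 51.84), (250, 4592, 54.62), (250, 4501, 55.78),
        (100, 4579, 21.99), (100, 4429, 22.68), (100, 4240, 23.75)]
for clients, tps, reported in runs:
    est = expected_latency_ms(clients, tps)
    # Estimates land within roughly 1%; the residual is client-side overhead.
    print(f"{clients} clients: estimated {est:.2f} ms vs reported {reported:.2f} ms")
```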
Systems 1, 2, and 3 shared the identical configuration detailed above (same processor, motherboard, OS, compiler, Python, and security mitigation notes).
Testing initiated by user phoronix: System 1 at 30 September 2020 20:51, System 2 at 1 October 2020 07:30, and System 3 at 1 October 2020 18:06.