2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Ubuntu 20.04 via the Phoronix Test Suite.
r1 Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads), Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS), Chipset: Intel Device 0998, Memory: 16 x 32 GB DDR4-3200MT/s Hynix HMA84GR7CJR4N-XN, Disk: 2 x 7682GB INTEL SSDPF2KX076TZ + 2 x 800GB INTEL SSDPF21Q800GB + 3841GB Micron_9300_MTFDHAL3T8TDP + 960GB INTEL SSDSC2KG96, Graphics: ASPEED, Monitor: VE228, Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 20.04, Kernel: 5.11.0-051100-generic (x86_64), Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance - CPU Microcode: 0xd000270
Python Notes: Python 2.7.18 + Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
r1a r2 r2a Kernel, Compiler, Python, and Security Notes: identical to the r1 notes above. The only difference is the Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xd000270
r2b r3 r4 r5 Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads), Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS), Chipset: Intel Device 0998, Memory: 16 x 32 GB DDR4-3200MT/s Hynix HMA84GR7CJR4N-XN, Disk: 2 x 7682GB INTEL SSDPF2KX076TZ + 2 x 800GB INTEL SSDPF21Q800GB + 3841GB Micron_9300_MTFDHAL3T8TDP + 960GB INTEL SSDSC2KG96, Graphics: ASPEED, Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 20.04, Kernel: 5.11.0-051100-generic (x86_64), Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1024x768
HammerDB - MariaDB 10.5.9 - Virtual Users: 128 - Warehouses: 250 (New Orders Per Minute, more is better):
  r1: 55415 (SE +/- 857.30, N = 9)
HammerDB - MariaDB 10.5.9 - Virtual Users: 128 - Warehouses: 500 (Transactions Per Minute, more is better):
  r1a: 173228 (SE +/- 1389.03, N = 9)
  r1: 173288 (SE +/- 2691.06, N = 9)
HammerDB - MariaDB 10.5.9 - Virtual Users: 128 - Warehouses: 500 (New Orders Per Minute, more is better):
  r1: 57190 (SE +/- 891.59, N = 9)
  r1a: 57242 (SE +/- 484.29, N = 9)
HammerDB - MariaDB 10.5.9 - Virtual Users: 64 - Warehouses: 250 (Transactions Per Minute, more is better):
  r1: 191397 (SE +/- 2831.11, N = 9)
HammerDB - MariaDB 10.5.9 - Virtual Users: 64 - Warehouses: 250 (New Orders Per Minute, more is better):
  r1: 63279 (SE +/- 937.55, N = 9)
HammerDB - MariaDB 10.5.9 - Virtual Users: 32 - Warehouses: 250 (Transactions Per Minute, more is better):
  r1: 209254 (SE +/- 3390.81, N = 9)
HammerDB - MariaDB 10.5.9 - Virtual Users: 32 - Warehouses: 250 (New Orders Per Minute, more is better):
  r1: 69054 (SE +/- 1078.76, N = 9)
HammerDB - MariaDB 10.5.9 - Virtual Users: 16 - Warehouses: 500 (Transactions Per Minute, more is better):
  r1: 195258 (SE +/- 3159.46, N = 9)
HammerDB - MariaDB 10.5.9 - Virtual Users: 16 - Warehouses: 500 (New Orders Per Minute, more is better):
  r1: 64477 (SE +/- 1031.07, N = 9)
HammerDB - MariaDB 10.5.9 - Virtual Users: 32 - Warehouses: 500 (Transactions Per Minute, more is better):
  r1: 208419 (SE +/- 2885.40, N = 9)
HammerDB - MariaDB 10.5.9 - Virtual Users: 32 - Warehouses: 500 (New Orders Per Minute, more is better):
  r1: 68818 (SE +/- 921.11, N = 9)
All of the above compiled with: (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
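Each result in these charts is reported as a mean over N runs together with its standard error (the "SE +/-" figure). A minimal sketch of how that SE is derived from raw per-run numbers; the sample values below are hypothetical, not the actual runs behind any chart:

```python
import statistics

def mean_and_se(samples):
    """Mean and standard error of the mean: sample stdev / sqrt(N)."""
    n = len(samples)
    return statistics.fmean(samples), statistics.stdev(samples) / n ** 0.5

# Hypothetical per-run New Orders Per Minute figures.
runs = [54000, 56500, 55800, 54900, 55900]
mean, se = mean_and_se(runs)
print(f"{mean:.1f} (SE +/- {se:.1f}, N = {len(runs)})")
```

A small SE relative to the mean (as in most charts here) indicates the run-to-run variance is low enough for the ordering of configurations to be meaningful.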
MariaDB 10.5.2 - Clients: 512 (Queries Per Second, more is better):
  r2b: 166 (SE +/- 0.87, N = 3)
MariaDB 10.5.2 - Clients: 128 (Queries Per Second, more is better):
  r3: 189 (SE +/- 0.35, N = 3)
  r2b: 192 (SE +/- 0.65, N = 3)
Compiled with: (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt
HammerDB - MariaDB This is a MariaDB MySQL database server benchmark making use of the HammerDB benchmarking / load testing tool. Learn more via the OpenBenchmarking.org test page.
HammerDB - MariaDB 10.5.9 - Virtual Users: 64 - Warehouses: 500 (Transactions Per Minute, more is better):
  r1a: 188761 (SE +/- 2084.32, N = 9)
  r1: 194684 (SE +/- 2149.33, N = 3)
HammerDB - MariaDB 10.5.9 - Virtual Users: 64 - Warehouses: 500 (New Orders Per Minute, more is better):
  r1a: 62311 (SE +/- 730.55, N = 9)
  r1: 64298 (SE +/- 620.04, N = 3)
Compiled with: (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
Xcompact3d Incompact3d This is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d (Seconds, fewer is better):
  r4: 389.70 (SE +/- 3.91, N = 9)
  r3: 386.39 (SE +/- 4.39, N = 9)
  r1: 313.92 (SE +/- 0.46, N = 3)
  r1a: 311.96 (SE +/- 0.12, N = 3)
  r2b: 307.62 (SE +/- 2.73, N = 9)
Compiled with: (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
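Incompact3d builds its solver on finite-difference approximations of spatial derivatives. As a loose illustration of the underlying idea only (the real code uses high-order compact schemes, not this simple stencil), a second-order central difference applied to a known function:

```python
import math

def central_diff(f, x, h=1e-5):
    """Second-order central finite difference: (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx sin(x) at x = 1.0 should be very close to cos(1.0).
approx = central_diff(math.sin, 1.0)
exact = math.cos(1.0)
print(approx, exact)
```

The truncation error of this stencil shrinks quadratically with h, which is why halving the grid spacing in such codes pays off quickly in accuracy.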
GNU Radio GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.
GNU Radio - Test: Hilbert Transform (MiB/s, more is better):
  r2b: 357.4 (SE +/- 47.90, N = 3)
  r4: 373.8 (SE +/- 24.71, N = 9)
  r3: 408.0 (SE +/- 17.46, N = 9)
  r1a: 459.1 (SE +/- 1.66, N = 3)
  r1: 459.3 (SE +/- 2.02, N = 3)
GNU Radio - Test: FM Deemphasis Filter (MiB/s, more is better):
  r3: 621.0 (SE +/- 31.57, N = 9)
  r4: 622.0 (SE +/- 32.02, N = 9)
  r2b: 645.8 (SE +/- 53.33, N = 3)
  r1a: 727.4 (SE +/- 1.04, N = 3)
  r1: 734.0 (SE +/- 1.94, N = 3)
GNU Radio - Test: IIR Filter (MiB/s, more is better):
  r3: 487.4 (SE +/- 26.49, N = 9)
  r4: 487.7 (SE +/- 25.67, N = 9)
  r2b: 498.2 (SE +/- 45.07, N = 3)
  r1a: 609.5 (SE +/- 0.46, N = 3)
  r1: 610.6 (SE +/- 0.38, N = 3)
GNU Radio - Test: FIR Filter (MiB/s, more is better):
  r2b: 470.0 (SE +/- 44.41, N = 3)
  r3: 502.0 (SE +/- 16.19, N = 9)
  r4: 515.6 (SE +/- 11.25, N = 9)
  r1: 603.0 (SE +/- 1.45, N = 3)
  r1a: 604.8 (SE +/- 0.20, N = 3)
GNU Radio - Test: Signal Source (Cosine) (MiB/s, more is better):
  r4: 1619.2 (SE +/- 82.03, N = 9)
  r2b: 1684.4 (SE +/- 168.17, N = 3)
  r3: 1723.9 (SE +/- 72.44, N = 9)
  r1a: 2175.3 (SE +/- 2.24, N = 3)
  r1: 2183.5 (SE +/- 0.93, N = 3)
GNU Radio - Test: Five Back to Back FIR Filters (MiB/s, more is better):
  r2b: 111.2 (SE +/- 1.12, N = 3)
  r4: 487.9 (SE +/- 48.36, N = 9)
  r3: 580.5 (SE +/- 39.63, N = 9)
  r1a: 1015.2 (SE +/- 2.30, N = 3)
  r1: 1024.3 (SE +/- 2.54, N = 3)
GNU Radio version: 3.8.1.0
AOM AV1 This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
  r2b: 2.01 (SE +/- 0.03, N = 3)
  r3: 2.05 (SE +/- 0.02, N = 9)
  r4: 2.10 (SE +/- 0.01, N = 3)
  r1a: 4.17 (SE +/- 0.03, N = 3)
Compiled with: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
HammerDB - MariaDB 10.5.9 - Virtual Users: 16 - Warehouses: 250 (New Orders Per Minute, more is better):
  r1: 63757 (SE +/- 880.35, N = 3)
Compiled with: (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
LuaRadio LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.
LuaRadio 0.9.1 - Test: Complex Phase (MiB/s, more is better):
  r4: 452.7 (SE +/- 4.50, N = 6)
  r3: 458.2 (SE +/- 4.31, N = 6)
  r2b: 458.7 (SE +/- 3.61, N = 9)
  r1: 546.8 (SE +/- 0.25, N = 3)
  r1a: 548.2 (SE +/- 0.71, N = 3)
LuaRadio 0.9.1 - Test: Hilbert Transform (MiB/s, more is better):
  r2b: 78.2 (SE +/- 0.41, N = 9)
  r3: 78.2 (SE +/- 0.47, N = 6)
  r4: 78.4 (SE +/- 0.61, N = 6)
  r1: 80.3 (SE +/- 0.00, N = 3)
  r1a: 80.3 (SE +/- 0.00, N = 3)
LuaRadio 0.9.1 - Test: FM Deemphasis Filter (MiB/s, more is better):
  r4: 368.0 (SE +/- 1.19, N = 6)
  r2b: 370.1 (SE +/- 5.30, N = 9)
  r3: 370.3 (SE +/- 4.83, N = 6)
  r1a: 409.6 (SE +/- 1.40, N = 3)
  r1: 410.0 (SE +/- 0.21, N = 3)
LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters (MiB/s, more is better):
  r3: 662.8 (SE +/- 74.31, N = 6)
  r4: 706.1 (SE +/- 73.21, N = 6)
  r2b: 804.5 (SE +/- 22.87, N = 9)
  r1a: 1094.5 (SE +/- 0.62, N = 3)
  r1: 1094.8 (SE +/- 2.24, N = 3)
HammerDB - MariaDB 10.5.9 - Virtual Users: 8 - Warehouses: 250 (New Orders Per Minute, more is better):
  r1: 95768 (SE +/- 675.05, N = 3)
HammerDB - MariaDB 10.5.9 - Virtual Users: 8 - Warehouses: 500 (Transactions Per Minute, more is better):
  r1: 285984 (SE +/- 2338.98, N = 3)
HammerDB - MariaDB 10.5.9 - Virtual Users: 8 - Warehouses: 500 (New Orders Per Minute, more is better):
  r1: 94379 (SE +/- 693.36, N = 3)
Compiled with: (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
AOM AV1
AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
  r3: 3.20 (SE +/- 0.04, N = 3)
  r2b: 3.22 (SE +/- 0.03, N = 9)
  r4: 3.23 (SE +/- 0.03, N = 5)
  r1: 7.37 (SE +/- 0.09, N = 15)
  r1a: 7.55 (SE +/- 0.06, N = 3)
Compiled with: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Mobile Neural Network MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 1.1.3 - Model: inception-v3 (ms, fewer is better):
  r2b: 53.07 (SE +/- 1.54, N = 3; min 49.59 / max 69.62)
  r4: 52.23 (SE +/- 0.75, N = 12; min 47.47 / max 94.69)
Mobile Neural Network 1.1.3 - Model: mobilenet-v1-1.0 (ms, fewer is better):
  r4: 3.362 (SE +/- 0.021, N = 12; min 2.98 / max 6.66)
  r2b: 3.213 (SE +/- 0.089, N = 3; min 2.8 / max 6.7)
Mobile Neural Network 1.1.3 - Model: MobileNetV2_224 (ms, fewer is better):
  r4: 4.100 (SE +/- 0.135, N = 12; min 2.97 / max 12.98)
  r2b: 4.078 (SE +/- 0.333, N = 3; min 2.9 / max 13.17)
Mobile Neural Network 1.1.3 - Model: resnet-v2-50 (ms, fewer is better):
  r2b: 48.73 (SE +/- 2.59, N = 3; min 43.19 / max 69.59)
  r4: 48.04 (SE +/- 1.07, N = 12; min 42.13 / max 145.2)
Mobile Neural Network 1.1.3 - Model: SqueezeNetV1.0 (ms, fewer is better):
  r2b: 7.174 (SE +/- 0.002, N = 3; min 6.95 / max 7.88)
  r4: 7.170 (SE +/- 0.078, N = 12; min 6.38 / max 9.97)
Compiled with: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
SecureMark SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.
SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks, more is better):
  r4: 222747 (SE +/- 2769.20, N = 3)
  r3: 225291 (SE +/- 267.95, N = 3)
  r2b: 225343 (SE +/- 84.15, N = 3)
  r1a: 225366 (SE +/- 236.12, N = 3)
  r1: 225412 (SE +/- 234.37, N = 3)
Compiled with: (CC) gcc options: -pedantic -O3
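SecureMark-TLS exercises TLS-style cryptographic primitives such as hashing, MACs, and public-key operations. Two of those primitives are easy to illustrate with Python's standard library; the key and message below are arbitrary placeholders, and this is only a sketch of the kind of operation timed, not EEMBC's workload:

```python
import hashlib
import hmac

message = b"client hello"   # placeholder payload
key = b"0123456789abcdef"   # placeholder MAC key

digest = hashlib.sha256(message).hexdigest()            # SHA-256 hash
mac = hmac.new(key, message, hashlib.sha256).hexdigest()  # HMAC-SHA256
print(digest)
print(mac)
```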
AOM AV1
AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
  r2b: 0.14 (SE +/- 0.00, N = 12)
  r4: 0.14 (SE +/- 0.00, N = 3)
  r3: 0.15 (SE +/- 0.00, N = 3)
  r1a: 0.19 (SE +/- 0.00, N = 5)
Compiled with: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
AOM AV1
AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better):
  r2b: 3.30 (SE +/- 0.03, N = 3)
  r3: 3.36 (SE +/- 0.04, N = 5)
  r4: 3.36 (SE +/- 0.01, N = 3)
  r1a: 6.89 (SE +/- 0.02, N = 3)
Compiled with: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
LuxCoreRender LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.
LuxCoreRender 2.5 - Scene: Orange Juice - Acceleration: CPU (M samples/sec, more is better):
  r3: 13.89 (SE +/- 0.12, N = 15; min 11.08 / max 17.77)
  r4: 13.94 (SE +/- 0.13, N = 15; min 11.06 / max 17.84)
  r1a: 14.26 (SE +/- 0.21, N = 3; min 11.6 / max 19.3)
  r2b: 14.28 (SE +/- 0.18, N = 3; min 11.93 / max 17.73)
  r1: 14.36 (SE +/- 0.13, N = 3; min 11.58 / max 19.44)
LuxCoreRender
LuxCoreRender 2.5 - Scene: DLSC - Acceleration: CPU (M samples/sec, more is better):
  r3: 9.24 (SE +/- 0.10, N = 3; min 8.74 / max 11.37)
  r4: 9.25 (SE +/- 0.09, N = 3; min 8.59 / max 11.4)
  r2b: 9.27 (SE +/- 0.08, N = 15; min 8.31 / max 11.98)
  r1a: 9.61 (SE +/- 0.09, N = 15; min 8 / max 12.27)
  r1: 9.70 (SE +/- 0.09, N = 3; min 8.98 / max 12.22)
Intel Memory Latency Checker Intel Memory Latency Checker (MLC) is a binary-only system memory bandwidth and latency benchmark. Learn more via the OpenBenchmarking.org test page.
Intel Memory Latency Checker - Test: Max Bandwidth - Stream-Triad Like (MB/s, more is better):
  r1a: 325184.58 (SE +/- 11.61, N = 3)
  r3: 325218.50 (SE +/- 50.80, N = 3)
  r2a: 325260.41 (SE +/- 53.08, N = 3)
  r5: 325312.30 (SE +/- 22.58, N = 3)
  r4: 325314.62 (SE +/- 7.71, N = 3)
  r2b: 325409.99 (SE +/- 50.20, N = 3)
  r1: 325766.94 (SE +/- 25.05, N = 3)
Intel Memory Latency Checker - Test: Max Bandwidth - 1:1 Reads-Writes (MB/s, more is better):
  r1: 439496.74 (SE +/- 821.19, N = 3)
  r5: 440205.22 (SE +/- 1051.98, N = 3)
  r4: 440315.41 (SE +/- 2322.32, N = 3)
  r3: 440939.22 (SE +/- 276.68, N = 3)
  r1a: 441408.09 (SE +/- 1093.30, N = 3)
  r2b: 441732.77 (SE +/- 3117.58, N = 3)
  r2a: 442460.05 (SE +/- 1844.14, N = 3)
Intel Memory Latency Checker - Test: Max Bandwidth - 2:1 Reads-Writes (MB/s, more is better):
  r2a: 456545.88 (SE +/- 54.98, N = 3)
  r1a: 456629.89 (SE +/- 129.26, N = 3)
  r3: 457141.24 (SE +/- 89.89, N = 3)
  r5: 458756.46 (SE +/- 53.22, N = 3)
  r4: 458790.96 (SE +/- 8.60, N = 3)
  r2b: 459226.53 (SE +/- 51.02, N = 3)
  r1: 459455.38 (SE +/- 33.49, N = 3)
Intel Memory Latency Checker - Test: Max Bandwidth - 3:1 Reads-Writes (MB/s, more is better):
  r1a: 424612.62 (SE +/- 465.24, N = 3)
  r2a: 424818.83 (SE +/- 392.90, N = 3)
  r3: 424925.84 (SE +/- 109.66, N = 3)
  r5: 425467.51 (SE +/- 133.64, N = 3)
  r4: 425848.09 (SE +/- 67.02, N = 3)
  r2b: 425997.22 (SE +/- 71.38, N = 3)
  r1: 426148.96 (SE +/- 105.41, N = 3)
Intel Memory Latency Checker - Test: Max Bandwidth - All Reads (MB/s, more is better):
  r1: 357285.28 (SE +/- 67.01, N = 3)
  r5: 357550.82 (SE +/- 46.23, N = 3)
  r2b: 357774.43 (SE +/- 83.63, N = 3)
  r4: 357925.98 (SE +/- 83.70, N = 3)
  r3: 358268.00 (SE +/- 59.61, N = 3)
  r1a: 358364.56 (SE +/- 142.76, N = 3)
  r2a: 358456.09 (SE +/- 107.35, N = 3)
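MLC's "Stream-Triad Like" test patterns its memory traffic after the classic STREAM triad kernel. A rough, hypothetical Python sketch of that kernel with a naive bandwidth estimate; MLC's actual tuned, vectorized implementation is nothing like this, so treat the number printed as illustrative only:

```python
import time

def triad(a, b, c, scalar):
    """STREAM triad kernel: a[i] = b[i] + scalar * c[i]."""
    for i in range(len(a)):
        a[i] = b[i] + scalar * c[i]

n = 1_000_000
a = [0.0] * n
b = [1.0] * n
c = [2.0] * n
start = time.perf_counter()
triad(a, b, c, 3.0)
elapsed = time.perf_counter() - start
# Three 8-byte floats touched per iteration -> a crude MB/s figure.
mb_moved = 3 * n * 8 / 1e6
print(f"~{mb_moved / elapsed:.0f} MB/s (interpreter overhead dominates here)")
```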
AOM AV1
AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better):
  r2b: 5.97 (SE +/- 0.06, N = 3)
  r3: 5.97 (SE +/- 0.07, N = 12)
  r4: 6.00 (SE +/- 0.01, N = 3)
  r1: 15.09 (SE +/- 0.05, N = 3)
  r1a: 15.19 (SE +/- 0.03, N = 3)
Compiled with: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  r4: 811.94 (SE +/- 16.86, N = 14; min 761.61)
  r1: 804.39 (SE +/- 7.01, N = 3; min 763.49)
  r3: 793.92 (SE +/- 0.83, N = 3; min 769)
  r1a: 793.36 (SE +/- 1.56, N = 3; min 765.14)
  r2b: 791.70 (SE +/- 0.61, N = 3; min 769.61)
Compiled with: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
AOM AV1
AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better):
  r3: 11.94 (SE +/- 0.12, N = 15)
  r2b: 12.03 (SE +/- 0.08, N = 15)
  r4: 12.10 (SE +/- 0.17, N = 3)
  r1a: 28.99 (SE +/- 0.29, N = 5)
  r1: 29.20 (SE +/- 0.19, N = 3)
Compiled with: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
GNU GMP GMPbench This is a test of the GNU Multiple Precision Arithmetic (GMP) Library via GMPbench, a single-threaded integer benchmark that leverages GMP to stress the CPU with widening integer multiplication. Learn more via the OpenBenchmarking.org test page.
GNU GMP GMPbench 6.2.1 - Total Time (GMPbench Score, more is better):
  r3: 4504.5
  r2b: 4524.5
  r4: 4525.7
  r1: 4642.1
  r1a: 4642.8
Compiled with: (CC) gcc options: -O3 -fomit-frame-pointer -lm
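"Widening" integer multiplication means the product needs roughly twice as many bits (limbs) as either operand, which is the case GMP's low-level multiply routines are built around. Python's built-in arbitrary-precision integers illustrate the semantics (GMP is of course far faster; the operand values here are arbitrary):

```python
# Two 1024-bit operands multiply into a ~2048-bit product.
a = (1 << 1023) + 12345
b = (1 << 1023) + 67890
product = a * b
print(a.bit_length(), b.bit_length(), product.bit_length())
```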
Blender Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.
Blender 2.92 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better):
  r2b: 110.02 (SE +/- 0.18, N = 3)
  r4: 109.96 (SE +/- 0.59, N = 3)
Timed Node.js Compilation This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.
Timed Node.js Compilation 15.11 - Time To Compile (Seconds, fewer is better):
  r3: 111.79 (SE +/- 0.68, N = 3)
  r4: 111.67 (SE +/- 0.78, N = 3)
  r2b: 110.93 (SE +/- 0.50, N = 3)
  r1: 101.10 (SE +/- 0.27, N = 3)
  r1a: 100.45 (SE +/- 0.29, N = 3)
ViennaCL ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TT (GFLOPs/s, more is better):
  r2b: 54.7 (SE +/- 1.75, N = 15)
  r3: 61.7 (SE +/- 2.33, N = 15)
  r4: 63.7 (SE +/- 2.94, N = 15)
  r1: 76.3 (SE +/- 1.45, N = 13)
  r1a: 77.2 (SE +/- 0.90, N = 3)
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TN (GFLOPs/s, more is better):
  r2b: 62.3 (SE +/- 2.02, N = 15)
  r3: 66.9 (SE +/- 1.88, N = 15)
  r4: 67.6 (SE +/- 2.43, N = 14)
  r1: 76.0 (SE +/- 1.67, N = 13)
  r1a: 77.4 (SE +/- 0.69, N = 3)
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NT (GFLOPs/s, more is better):
  r2b: 59.8 (SE +/- 1.14, N = 15)
  r3: 68.9 (SE +/- 1.99, N = 15)
  r4: 72.4 (SE +/- 1.98, N = 15)
  r1: 75.6 (SE +/- 1.88, N = 13)
  r1a: 76.8 (SE +/- 1.01, N = 3)
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NN (GFLOPs/s, more is better):
  r2b: 61.9 (SE +/- 2.06, N = 15)
  r3: 66.4 (SE +/- 2.18, N = 15)
  r4: 70.8 (SE +/- 1.95, N = 15)
  r1a: 72.3 (SE +/- 3.11, N = 3)
  r1: 73.5 (SE +/- 1.42, N = 14)
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-T (GB/s, more is better):
  r1a: 319.0 (SE +/- 5.04, N = 3)
  r2b: 389.9 (SE +/- 27.49, N = 15)
  r3: 647.0 (SE +/- 2.02, N = 15)
  r4: 647.0 (SE +/- 3.20, N = 15)
  r1: 719.0 (SE +/- 2.46, N = 13)
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-N (GB/s, more is better):
  r2b: 62.3 (SE +/- 3.75, N = 15)
  r1a: 63.6 (SE +/- 2.90, N = 3)
  r3: 64.3 (SE +/- 3.93, N = 15)
  r4: 70.2 (SE +/- 0.25, N = 15)
  r1: 72.3 (SE +/- 0.36, N = 14)
ViennaCL 1.7.1 - Test: CPU BLAS - dDOT (GB/s, more is better):
  r1a: 371.00 (SE +/- 34.44, N = 3)
  r2b: 447.65 (SE +/- 34.40, N = 14)
  r3: 713.47 (SE +/- 50.57, N = 15)
  r1: 720.00 (SE +/- 6.43, N = 14)
  r4: 765.00 (SE +/- 2.76, N = 15)
ViennaCL 1.7.1 - Test: CPU BLAS - dAXPY (GB/s, more is better):
  r1a: 392.0 (SE +/- 23.02, N = 3)
  r2b: 507.1 (SE +/- 40.80, N = 15)
  r3: 1024.2 (SE +/- 82.34, N = 15)
  r1: 1058.0 (SE +/- 20.63, N = 14)
  r4: 1158.0 (SE +/- 5.62, N = 15)
ViennaCL 1.7.1 - Test: CPU BLAS - dCOPY (GB/s, more is better):
  r1a: 335.0 (SE +/- 29.90, N = 3)
  r2b: 422.2 (SE +/- 35.11, N = 15)
  r1: 843.0 (SE +/- 25.47, N = 14)
  r3: 913.0 (SE +/- 26.97, N = 15)
  r4: 936.0 (SE +/- 9.73, N = 15)
ViennaCL 1.7.1 - Test: CPU BLAS - sDOT (GB/s, more is better):
  r1a: 277 (SE +/- 11.67, N = 3)
  r2b: 349 (SE +/- 5.60, N = 15)
  r3: 532 (SE +/- 2.55, N = 15)
  r4: 535 (SE +/- 2.45, N = 15)
  r1: 620 (SE +/- 2.34, N = 14)
ViennaCL 1.7.1 - Test: CPU BLAS - sAXPY (GB/s, more is better):
  r1a: 370 (SE +/- 15.25, N = 3)
  r2b: 474 (SE +/- 10.36, N = 15)
  r4: 855 (SE +/- 11.35, N = 15)
  r3: 862 (SE +/- 8.11, N = 15)
  r1: 1003 (SE +/- 6.62, N = 14)
ViennaCL 1.7.1 - Test: CPU BLAS - sCOPY (GB/s, more is better):
  r1a: 504 (SE +/- 4.10, N = 3)
  r2b: 691 (SE +/- 22.07, N = 15)
  r3: 1135 (SE +/- 51.32, N = 15)
  r4: 1167 (SE +/- 54.62, N = 15)
  r1: 1834 (SE +/- 16.63, N = 14)
Compiled with: (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
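The kernels above are standard BLAS operations (d = double precision, s = single; the -TT/-TN/-NT/-NN suffixes denote which GEMM operands are transposed). Hypothetical pure-Python definitions of what each routine computes, purely to clarify the naming, nothing like ViennaCL's optimized code:

```python
def axpy(alpha, x, y):   # y <- alpha*x + y        (dAXPY / sAXPY)
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):           # x . y                   (dDOT / sDOT)
    return sum(xi * yi for xi, yi in zip(x, y))

def copy(x):             # y <- x                  (dCOPY / sCOPY)
    return list(x)

def gemv(A, x):          # y <- A*x                (dGEMV-N; -T applies A transposed)
    return [dot(row, x) for row in A]

def gemm(A, B):          # C <- A*B                (dGEMM; NN/NT/TN/TT transpose operands)
    cols = list(zip(*B))
    return [[dot(row, col) for col in cols] for row in A]

print(gemm([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]
```

Level-1 routines (AXPY, DOT, COPY) are memory-bound, which is why the document reports them in GB/s, while the compute-bound level-3 GEMM charts use GFLOPs/s.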
Xmrig Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
Xmrig 6.12.1 - Variant: Monero - Hash Count: 1M (H/s, more is better):
  r1: 19299.5 (SE +/- 23.28, N = 3)
  r2b: 19311.1 (SE +/- 151.73, N = 3)
  r1a: 19452.0 (SE +/- 20.55, N = 3)
  r4: 20574.6 (SE +/- 243.31, N = 15)
  r3: 20652.9 (SE +/- 245.77, N = 3)
Compiled with: (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
AOM AV1
AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better):
  r3: 7.38 (SE +/- 0.06, N = 3)
  r4: 7.43 (SE +/- 0.05, N = 3)
  r2b: 7.45 (SE +/- 0.01, N = 3)
  r1a: 21.25 (SE +/- 0.17, N = 3)
Compiled with: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Sysbench This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
Sysbench 1.0.20 - Test: CPU (Events Per Second, more is better):
  r2b: 214210.83 (SE +/- 247.29, N = 3)
  r4: 214241.34 (SE +/- 269.51, N = 3)
Compiled with: (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
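Sysbench's CPU sub-test spends its time verifying prime numbers up to a configurable limit (its --cpu-max-prime option). A rough Python sketch of that kind of trial-division loop; the details of sysbench's actual loop and its default limit should be checked against your sysbench version:

```python
def count_primes(limit):
    """Count primes up to `limit` by trial division, similar in spirit to
    the loop sysbench's CPU test times (its real loop differs in detail)."""
    count = 0
    for n in range(2, limit + 1):
        d = 2
        while d * d <= n:
            if n % d == 0:
                break  # composite
            d += 1
        else:
            count += 1  # no divisor found -> prime
    return count

print(count_primes(10_000))
```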
Blender
Blender 2.92 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better):
  r4: 88.68 (SE +/- 0.28, N = 3)
  r2b: 88.57 (SE +/- 0.08, N = 3)
Timed Wasmer Compilation This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
Timed Wasmer Compilation 1.0.2 - Time To Compile (Seconds, fewer is better):
  r2b: 71.93 (SE +/- 0.42, N = 3)
  r3: 71.13 (SE +/- 0.66, N = 7)
  r4: 70.76 (SE +/- 0.51, N = 3)
  r1: 62.16 (SE +/- 0.22, N = 3)
  r1a: 61.93 (SE +/- 0.62, N = 3)
Compiled with: (CC) gcc options: -m64 -pie -nodefaultlibs -ldl -lrt -lpthread -lgcc_s -lc -lm -lutil
oneDNN
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better):
  r2b: 808.29 (SE +/- 9.76, N = 3; min 767.97)
  r1a: 804.32 (SE +/- 4.49, N = 3; min 765.37)
  r1: 801.41 (SE +/- 7.46, N = 3; min 767.38)
  r3: 796.69 (SE +/- 1.09, N = 3; min 771.28)
  r4: 792.30 (SE +/- 2.67, N = 3; min 763.96)
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  r3: 793.08 (SE +/- 2.18, N = 3; min 768.2)
  r1: 792.83 (SE +/- 2.07, N = 3; min 763.76)
  r4: 792.05 (SE +/- 1.96, N = 3; min 765.9)
  r1a: 791.93 (SE +/- 3.65, N = 3; min 765.01)
  r2b: 789.84 (SE +/- 1.48, N = 3; min 767.03)
oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  r3: 1.24508 (SE +/- 0.01066, N = 15; min 0.89)
  r4: 1.24116 (SE +/- 0.00891, N = 15; min 0.85)
  r2b: 1.23796 (SE +/- 0.01174, N = 15; min 0.87)
  r1a: 1.22278 (SE +/- 0.01126, N = 15; min 0.85)
  r1: 1.21594 (SE +/- 0.01080, N = 15; min 0.84)
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better):
  r3: 450.65 (SE +/- 2.40, N = 3; min 432.96)
  r1: 447.97 (SE +/- 0.58, N = 3; min 433.22)
  r1a: 447.31 (SE +/- 0.90, N = 3; min 432.33)
  r4: 446.54 (SE +/- 1.10, N = 3; min 429.71)
  r2b: 446.39 (SE +/- 0.78, N = 3; min 432.04)
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  r4: 448.91 (SE +/- 3.51, N = 3; min 431.33)
  r2b: 447.29 (SE +/- 0.65, N = 3; min 433.06)
  r3: 447.14 (SE +/- 1.24, N = 3; min 432.42)
  r1a: 446.94 (SE +/- 1.79, N = 3; min 430.47)
  r1: 445.14 (SE +/- 0.58, N = 3; min 431.52)
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  r4: 447.96 (SE +/- 2.63, N = 3; min 429.99)
  r2b: 447.70 (SE +/- 1.13, N = 3; min 433.04)
  r1a: 447.44 (SE +/- 2.18, N = 3; min 429.4)
  r3: 446.92 (SE +/- 0.04, N = 3; min 433.64)
  r1: 445.52 (SE +/- 0.85, N = 3; min 431.18)
Compiled with: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
AOM AV1
This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better. AOM AV1 3.0, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K
  r3: 14.06 (SE +/- 0.18, N = 4)
  r2b: 14.30 (SE +/- 0.15, N = 15)
  r4: 14.73 (SE +/- 0.08, N = 3)
  r1a: 32.51 (SE +/- 0.28, N = 3)
  r1: 33.07 (SE +/- 0.28, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Blender
Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. Blender 2.92, Blend File: Classroom - Compute: CPU-Only
  r4: 72.29 (SE +/- 0.13, N = 3)
  r2b: 71.78 (SE +/- 0.08, N = 3)
KTX-Software toktx
This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides the "toktx" tool for creating/converting image textures in the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. KTX-Software toktx 4.0, Settings: UASTC 4 + Zstd Compression 19
  r4: 56.77 (SE +/- 0.74, N = 3)
  r2b: 56.66 (SE +/- 0.68, N = 4)
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
  r4: 28.46130 (SE +/- 0.38629, N = 12, MIN: 14.76)
  r2b: 28.40230 (SE +/- 0.31773, N = 13, MIN: 14.66)
  r3: 28.18150 (SE +/- 0.30585, N = 15, MIN: 14.34)
  r1a: 7.50059 (SE +/- 0.01835, N = 3, MIN: 6.91)
  r1: 7.49467 (SE +/- 0.02080, N = 3, MIN: 6.98)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
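The spread in this Deconvolution f32 harness is one of the largest in the report: the r2b/r3/r4 runs sit near 28.4 ms while r1/r1a finish in about 7.5 ms. A quick sanity check of the slowdown factor, computed directly from the figures above:

```python
# Mean times (ms) from the Deconvolution Batch shapes_1d f32 results above.
slowest = 28.46130  # r4
fastest = 7.49467   # r1

# Ratio of the two means gives the slowdown factor between configurations.
slowdown = slowest / fastest
print(f"r4 is {slowdown:.2f}x slower than r1")  # roughly 3.8x
```

A gap this large between otherwise similar runs is far outside the reported standard errors, so it points at a real configuration difference rather than run-to-run noise.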
LuxCoreRender
LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org M samples/sec, More Is Better. LuxCoreRender 2.5, Scene: Danish Mood - Acceleration: CPU
  r3: 5.65 (SE +/- 0.07, N = 3, MIN: 1.24 / MAX: 7.63)
  r4: 5.68 (SE +/- 0.04, N = 3, MIN: 1.26 / MAX: 7.6)
  r2b: 5.73 (SE +/- 0.04, N = 3, MIN: 1.3 / MAX: 7.65)
  r1: 7.42 (SE +/- 0.08, N = 3, MIN: 3.2 / MAX: 8.74)
  r1a: 7.55 (SE +/- 0.10, N = 3, MIN: 3.28 / MAX: 8.86)
OpenBenchmarking.org M samples/sec, More Is Better. LuxCoreRender 2.5, Scene: LuxCore Benchmark - Acceleration: CPU
  r2b: 5.84 (SE +/- 0.02, N = 3, MIN: 1.16 / MAX: 7.97)
  r4: 5.87 (SE +/- 0.03, N = 3, MIN: 1.15 / MAX: 7.95)
  r3: 5.92 (SE +/- 0.01, N = 3, MIN: 1.15 / MAX: 7.98)
  r1: 7.84 (SE +/- 0.05, N = 3, MIN: 3.44 / MAX: 9.2)
  r1a: 8.04 (SE +/- 0.01, N = 3, MIN: 3.51 / MAX: 9.33)
AOM AV1
This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better. AOM AV1 3.0, Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p
  r2b: 0.32 (SE +/- 0.00, N = 3)
  r3: 0.33 (SE +/- 0.00, N = 3)
  r4: 0.33 (SE +/- 0.00, N = 3)
  r1a: 0.51 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
LuxCoreRender
LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org M samples/sec, More Is Better. LuxCoreRender 2.5, Scene: Rainbow Colors and Prism - Acceleration: CPU
  r1a: 13.34 (SE +/- 0.47, N = 15, MIN: 10.32 / MAX: 17.45)
  r2b: 13.42 (SE +/- 0.87, N = 13, MIN: 8.28 / MAX: 21.15)
  r4: 14.79 (SE +/- 0.79, N = 12, MIN: 9.85 / MAX: 20.95)
  r3: 16.47 (SE +/- 1.13, N = 12, MIN: 10.39 / MAX: 21.43)
  r1: 17.04 (SE +/- 1.05, N = 15, MIN: 11.27 / MAX: 22.05)
AOM AV1
This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better. AOM AV1 3.0, Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p
  r2b: 10.39 (SE +/- 0.03, N = 3)
  r3: 10.39 (SE +/- 0.01, N = 3)
  r4: 10.54 (SE +/- 0.05, N = 3)
  r1a: 28.66 (SE +/- 0.06, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Blender
Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. Blender 2.92, Blend File: Fishy Cat - Compute: CPU-Only
  r4: 46.73 (SE +/- 0.25, N = 3)
  r2b: 46.38 (SE +/- 0.15, N = 3)
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
  r3: 3.00929 (SE +/- 0.02478, N = 14, MIN: 2.84)
  r4: 3.00907 (SE +/- 0.02449, N = 14, MIN: 2.84)
  r2b: 3.00464 (SE +/- 0.02287, N = 13, MIN: 2.84)
  r1a: 2.96857 (SE +/- 0.00276, N = 3, MIN: 2.84)
  r1: 2.96135 (SE +/- 0.00128, N = 3, MIN: 2.84)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
VOSK Speech Recognition Toolkit
VOSK is an open-source offline speech recognition API/toolkit. VOSK supports speech recognition in 17 languages and has a variety of models available and interfaces for different programming languages. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. VOSK Speech Recognition Toolkit 0.3.21
  r2b: 36.42 (SE +/- 0.43, N = 3)
  r1: 35.92 (SE +/- 0.32, N = 3)
  r3: 35.58 (SE +/- 0.43, N = 3)
  r4: 35.50 (SE +/- 0.32, N = 3)
  r1a: 35.01 (SE +/- 0.29, N = 8)
Stockfish
This is a test of Stockfish, an advanced open-source C++11 chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Nodes Per Second, More Is Better. Stockfish 13, Total Time
  r2b: 181554218 (SE +/- 1982639.48, N = 3)
  r1: 181644819 (SE +/- 1585265.68, N = 15)
  r4: 186013261 (SE +/- 2183262.34, N = 4)
  r1a: 186263552 (SE +/- 2404481.41, N = 3)
  r3: 189214499 (SE +/- 1924842.52, N = 3)
  1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver
srsLTE
srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org UE Mb/s, More Is Better. srsLTE 20.10.1, Test: PHY_DL_Test
  r2b: 75.0 (SE +/- 0.38, N = 3)
  r3: 76.1 (SE +/- 1.14, N = 3)
  r1: 76.9 (SE +/- 0.76, N = 3)
  r1a: 77.3 (SE +/- 1.16, N = 3)
  r4: 78.3 (SE +/- 0.62, N = 3)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f
OpenBenchmarking.org eNb Mb/s, More Is Better. srsLTE 20.10.1, Test: PHY_DL_Test
  r2b: 181.6 (SE +/- 1.23, N = 3)
  r3: 181.6 (SE +/- 2.42, N = 3)
  r1: 183.4 (SE +/- 1.15, N = 3)
  r4: 183.7 (SE +/- 0.58, N = 3)
  r1a: 184.2 (SE +/- 0.36, N = 3)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f
srsLTE
srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Samples / Second, More Is Better. srsLTE 20.10.1, Test: OFDM_Test
  r1a: 120133333 (SE +/- 240370.09, N = 3)
  r1: 120300000 (SE +/- 611010.09, N = 3)
  r4: 120666667 (SE +/- 233333.33, N = 3)
  r2b: 120733333 (SE +/- 366666.67, N = 3)
  r3: 120833333 (SE +/- 600925.21, N = 3)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f
Sysbench
This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MiB/sec, More Is Better. Sysbench 1.0.20, Test: RAM / Memory
  r2b: 12510.56 (SE +/- 125.16, N = 15)
  r4: 12553.44 (SE +/- 118.72, N = 15)
  1. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
Botan
Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: AES-256 - Decrypt
  r4: 5650.14 (SE +/- 12.66, N = 3)
  r3: 5662.34 (SE +/- 1.10, N = 3)
  r2b: 5662.76 (SE +/- 0.94, N = 3)
  r1: 5663.06 (SE +/- 1.20, N = 3)
  r1a: 5663.61 (SE +/- 0.12, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: AES-256
  r3: 5593.37 (SE +/- 42.23, N = 3)
  r2b: 5606.97 (SE +/- 55.60, N = 3)
  r4: 5612.00 (SE +/- 51.03, N = 3)
  r1: 5669.70 (SE +/- 0.92, N = 3)
  r1a: 5670.81 (SE +/- 0.28, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
Basis Universal
Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. Basis Universal 1.13, Settings: ETC1S
  r4: 34.42 (SE +/- 0.42, N = 3)
  r2b: 34.24 (SE +/- 0.21, N = 3)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.org Seconds, Fewer Is Better. Basis Universal 1.13, Settings: UASTC Level 0
  r2b: 11.25 (SE +/- 0.08, N = 15)
  r4: 11.23 (SE +/- 0.08, N = 3)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
AOM AV1
This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better. AOM AV1 3.0, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p
  r4: 42.37 (SE +/- 0.28, N = 3)
  r2b: 43.26 (SE +/- 0.49, N = 3)
  r3: 43.42 (SE +/- 0.31, N = 15)
  r1a: 125.25 (SE +/- 0.82, N = 15)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Blender
Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. Blender 2.92, Blend File: BMW27 - Compute: CPU-Only
  r4: 29.69 (SE +/- 0.32, N = 3)
  r2b: 29.56 (SE +/- 0.08, N = 3)
Botan
Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: ChaCha20Poly1305 - Decrypt
  r3: 612.15 (SE +/- 3.74, N = 3)
  r2b: 612.44 (SE +/- 3.49, N = 3)
  r4: 615.98 (SE +/- 2.81, N = 3)
  r1: 619.46 (SE +/- 0.40, N = 3)
  r1a: 619.54 (SE +/- 0.57, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: ChaCha20Poly1305
  r2b: 615.81 (SE +/- 3.48, N = 3)
  r3: 616.50 (SE +/- 3.19, N = 3)
  r4: 619.64 (SE +/- 2.98, N = 3)
  r1a: 623.20 (SE +/- 0.17, N = 3)
  r1: 623.49 (SE +/- 0.03, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: Blowfish - Decrypt
  r2b: 363.20 (SE +/- 0.04, N = 3)
  r1: 363.26 (SE +/- 0.05, N = 3)
  r4: 363.28 (SE +/- 0.07, N = 3)
  r3: 363.31 (SE +/- 0.03, N = 3)
  r1a: 363.33 (SE +/- 0.06, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: Blowfish
  r3: 359.45 (SE +/- 3.73, N = 3)
  r4: 359.57 (SE +/- 3.51, N = 3)
  r2b: 362.93 (SE +/- 0.11, N = 3)
  r1: 363.04 (SE +/- 0.56, N = 3)
  r1a: 363.62 (SE +/- 0.05, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: Twofish - Decrypt
  r1a: 292.37 (SE +/- 0.11, N = 3)
  r2b: 292.40 (SE +/- 0.12, N = 3)
  r4: 292.61 (SE +/- 0.04, N = 3)
  r1: 292.74 (SE +/- 0.14, N = 3)
  r3: 292.83 (SE +/- 0.06, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: Twofish
  r4: 286.00 (SE +/- 2.83, N = 3)
  r3: 286.18 (SE +/- 2.66, N = 3)
  r2b: 288.56 (SE +/- 0.11, N = 3)
  r1a: 288.85 (SE +/- 0.14, N = 3)
  r1: 289.13 (SE +/- 0.14, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: CAST-256 - Decrypt
  r3: 115.72 (SE +/- 0.35, N = 3)
  r1a: 116.07 (SE +/- 0.01, N = 3)
  r4: 116.07 (SE +/- 0.01, N = 3)
  r1: 116.07 (SE +/- 0.01, N = 3)
  r2b: 116.08 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: CAST-256
  r3: 114.52 (SE +/- 1.33, N = 3)
  r4: 114.65 (SE +/- 1.17, N = 3)
  r2b: 114.66 (SE +/- 1.15, N = 3)
  r1a: 115.97 (SE +/- 0.01, N = 3)
  r1: 115.97 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: KASUMI - Decrypt
  r2b: 74.28 (SE +/- 0.03, N = 3)
  r1a: 74.29 (SE +/- 0.01, N = 3)
  r4: 74.29 (SE +/- 0.02, N = 3)
  r3: 74.31 (SE +/- 0.01, N = 3)
  r1: 74.32 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: KASUMI
  r2b: 76.29 (SE +/- 1.01, N = 3)
  r4: 76.40 (SE +/- 0.87, N = 3)
  r3: 76.41 (SE +/- 0.77, N = 3)
  r1: 77.29 (SE +/- 0.02, N = 3)
  r1a: 77.31 (SE +/- 0.04, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
Xmrig
Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org H/s, More Is Better. Xmrig 6.12.1, Variant: Wownero - Hash Count: 1M
  r1: 48051.5 (SE +/- 425.40, N = 7)
  r3: 49813.4 (SE +/- 358.18, N = 3)
  r2b: 49908.3 (SE +/- 238.38, N = 3)
  r4: 49937.3 (SE +/- 235.04, N = 3)
  r1a: 50166.1 (SE +/- 588.34, N = 3)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
Intel Memory Latency Checker
Intel Memory Latency Checker (MLC) is a binary-only system memory bandwidth and latency benchmark. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MB/s, More Is Better. Intel Memory Latency Checker, Test: Peak Injection Bandwidth - Stream-Triad Like
  r2a: 323826.9 (SE +/- 34.05, N = 3)
  r1a: 323924.2 (SE +/- 38.10, N = 3)
  r4: 324112.8 (SE +/- 60.42, N = 3)
  r2b: 324209.8 (SE +/- 12.95, N = 3)
  r3: 324227.4 (SE +/- 32.03, N = 3)
  r5: 324234.5 (SE +/- 55.81, N = 3)
  r1: 324377.2 (SE +/- 177.93, N = 3)
OpenBenchmarking.org MB/s, More Is Better. Intel Memory Latency Checker, Test: Peak Injection Bandwidth - 1:1 Reads-Writes
  r2b: 440454.7 (SE +/- 314.54, N = 3)
  r2a: 442144.2 (SE +/- 212.40, N = 3)
  r1: 442422.3 (SE +/- 1187.16, N = 3)
  r1a: 442843.2 (SE +/- 148.63, N = 3)
  r4: 446396.0 (SE +/- 1601.80, N = 3)
  r5: 448800.1 (SE +/- 847.23, N = 3)
  r3: 449554.1 (SE +/- 138.13, N = 3)
OpenBenchmarking.org MB/s, More Is Better. Intel Memory Latency Checker, Test: Peak Injection Bandwidth - 2:1 Reads-Writes
  r1a: 456260.3 (SE +/- 130.28, N = 3)
  r2a: 456408.6 (SE +/- 115.55, N = 3)
  r3: 457190.5 (SE +/- 73.04, N = 3)
  r5: 458830.6 (SE +/- 12.06, N = 3)
  r4: 458941.9 (SE +/- 36.24, N = 3)
  r1: 459038.6 (SE +/- 274.15, N = 3)
  r2b: 459309.8 (SE +/- 64.32, N = 3)
OpenBenchmarking.org MB/s, More Is Better. Intel Memory Latency Checker, Test: Peak Injection Bandwidth - 3:1 Reads-Writes
  r2a: 424077.3 (SE +/- 236.99, N = 3)
  r1a: 424096.6 (SE +/- 94.95, N = 3)
  r3: 424904.5 (SE +/- 88.34, N = 3)
  r5: 425508.1 (SE +/- 23.30, N = 3)
  r4: 425822.1 (SE +/- 23.30, N = 3)
  r2b: 425925.6 (SE +/- 25.04, N = 3)
  r1: 425933.7 (SE +/- 163.24, N = 3)
OpenBenchmarking.org MB/s, More Is Better. Intel Memory Latency Checker, Test: Peak Injection Bandwidth - All Reads
  r1: 356476.2 (SE +/- 709.43, N = 3)
  r5: 357722.7 (SE +/- 23.85, N = 3)
  r2b: 357742.9 (SE +/- 14.54, N = 3)
  r4: 358110.5 (SE +/- 26.62, N = 3)
  r2a: 358269.7 (SE +/- 37.47, N = 3)
  r1a: 358385.5 (SE +/- 14.58, N = 3)
  r3: 358463.7 (SE +/- 24.95, N = 3)
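Across all seven runs the MLC bandwidth figures cluster very tightly. As a quick illustration using the Stream-Triad numbers above, the max-to-min spread works out to well under one percent:

```python
# Peak Injection Bandwidth, Stream-Triad Like (MB/s), from the results above.
triad = {
    "r2a": 323826.9, "r1a": 323924.2, "r4": 324112.8, "r2b": 324209.8,
    "r3": 324227.4, "r5": 324234.5, "r1": 324377.2,
}

# Relative spread between the best and worst run, as a percentage of the minimum.
spread_pct = (max(triad.values()) - min(triad.values())) / min(triad.values()) * 100
print(f"max-to-min spread: {spread_pct:.2f}%")  # ~0.17%
```

A spread this small suggests memory bandwidth was essentially unchanged across the tested configurations, in contrast to the compute-bound tests earlier in the report.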
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
  r3: 0.341955 (SE +/- 0.003372, N = 6, MIN: 0.31)
  r2b: 0.341893 (SE +/- 0.003448, N = 5, MIN: 0.3)
  r1a: 0.341663 (SE +/- 0.002562, N = 3, MIN: 0.31)
  r4: 0.340243 (SE +/- 0.004121, N = 3, MIN: 0.3)
  r1: 0.338327 (SE +/- 0.000853, N = 3, MIN: 0.3)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
  r2b: 0.216806 (SE +/- 0.001893, N = 8, MIN: 0.19)
  r3: 0.216586 (SE +/- 0.002019, N = 7, MIN: 0.19)
  r1: 0.215115 (SE +/- 0.000867, N = 3, MIN: 0.19)
  r4: 0.215085 (SE +/- 0.001544, N = 12, MIN: 0.19)
  r1a: 0.213643 (SE +/- 0.000781, N = 3, MIN: 0.19)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
libjpeg-turbo tjbench
tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Megapixels/sec, More Is Better. libjpeg-turbo tjbench 2.1.0, Test: Decompression Throughput
  r1a: 156.97 (SE +/- 0.39, N = 3)
  r3: 159.19 (SE +/- 1.04, N = 3)
  r4: 159.24 (SE +/- 0.47, N = 3)
  r2b: 160.26 (SE +/- 0.07, N = 3)
  r1: 161.63 (SE +/- 0.15, N = 3)
  1. (CC) gcc options: -O3 -rdynamic
ASTC Encoder
ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. ASTC Encoder 2.4, Preset: Thorough
  r4: 9.3091 (SE +/- 0.0879, N = 7)
  r2b: 9.2907 (SE +/- 0.0796, N = 8)
  1. (CXX) g++ options: -O3 -flto -pthread
ASTC Encoder
ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. ASTC Encoder 2.4, Preset: Medium
  r2b: 7.1887 (SE +/- 0.0906, N = 15)
  r4: 7.1472 (SE +/- 0.0290, N = 3)
  1. (CXX) g++ options: -O3 -flto -pthread
Timed Mesa Compilation
This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. Timed Mesa Compilation 21.0, Time To Compile
  r2b: 21.58 (SE +/- 0.04, N = 3)
  r3: 21.37 (SE +/- 0.15, N = 3)
  r4: 21.31 (SE +/- 0.11, N = 3)
  r1: 20.95 (SE +/- 0.02, N = 3)
  r1a: 20.38 (SE +/- 0.12, N = 3)
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
  r3: 3.56224 (SE +/- 0.01280, N = 3, MIN: 3.39)
  r4: 3.54783 (SE +/- 0.00650, N = 3, MIN: 3.37)
  r1a: 3.54367 (SE +/- 0.00732, N = 3, MIN: 3.38)
  r2b: 3.53121 (SE +/- 0.00854, N = 3, MIN: 3.37)
  r1: 3.53026 (SE +/- 0.00193, N = 3, MIN: 3.38)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
  r3: 0.406877 (SE +/- 0.003204, N = 10, MIN: 0.37)
  r2b: 0.403409 (SE +/- 0.004259, N = 4, MIN: 0.36)
  r4: 0.402919 (SE +/- 0.002415, N = 14, MIN: 0.36)
  r1: 0.398282 (SE +/- 0.001135, N = 3, MIN: 0.37)
  r1a: 0.395588 (SE +/- 0.001124, N = 3, MIN: 0.36)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
SVT-HEVC
This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better. SVT-HEVC 1.5.0, Tuning: 1 - Input: Bosphorus 1080p
  r2b: 27.80 (SE +/- 0.09, N = 3)
  r4: 28.01 (SE +/- 0.31, N = 3)
  r3: 28.22 (SE +/- 0.14, N = 3)
  r1: 36.91 (SE +/- 0.29, N = 3)
  r1a: 37.34 (SE +/- 0.24, N = 3)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
AOM AV1
This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better. AOM AV1 3.0, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p
  r3: 36.06 (SE +/- 0.26, N = 3)
  r2b: 36.20 (SE +/- 0.19, N = 3)
  r4: 36.35 (SE +/- 0.27, N = 3)
  r1a: 103.92 (SE +/- 1.01, N = 15)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Liquid-DSP
LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 160 - Buffer Length: 256 - Filter Length: 57
  r2b: 3131866667 (SE +/- 14685858.66, N = 3)
  r4: 3140266667 (SE +/- 16411005.79, N = 3)
  r3: 3143300000 (SE +/- 14901789.60, N = 3)
  r1: 3144800000 (SE +/- 17047384.94, N = 3)
  r1a: 3162066667 (SE +/- 2062630.47, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 128 - Buffer Length: 256 - Filter Length: 57
  r1a: 3352733333 (SE +/- 38975091.76, N = 3)
  r4: 3398800000 (SE +/- 16537936.19, N = 3)
  r2b: 3400066667 (SE +/- 14312737.14, N = 3)
  r3: 3411000000 (SE +/- 6896617.53, N = 3)
  r1: 3415933333 (SE +/- 8088331.79, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 64 - Buffer Length: 256 - Filter Length: 57
  r2b: 3227433333 (SE +/- 17049079.48, N = 3)
  r3: 3232700000 (SE +/- 14893734.70, N = 3)
  r4: 3245666667 (SE +/- 12876378.03, N = 3)
  r1a: 3263700000 (SE +/- 2150193.79, N = 3)
  r1: 3267133333 (SE +/- 5206513.02, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 32 - Buffer Length: 256 - Filter Length: 57
  r4: 1697500000
  r2b: 1699333333
  r3: 1704500000
  r1: 1735100000
  r1a: 1736800000
  SE values as reported, in listed order: +/- 6582552.70, 10121648.97, 3951371.07, 2515949.13 (N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 16 - Buffer Length: 256 - Filter Length: 57
  r4: 860046667 (SE +/- 10609570.10, N = 3)
  r2b: 862890000 (SE +/- 3620722.76, N = 3)
  r3: 865410000 (SE +/- 859903.10, N = 3)
  r1: 885320000 (SE +/- 691953.76, N = 3)
  r1a: 890273333 (SE +/- 669162.00, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 8 - Buffer Length: 256 - Filter Length: 57
  r2b: 428100000 (SE +/- 2458908.97, N = 3)
  r4: 432013333 (SE +/- 2739929.03, N = 3)
  r3: 432170000 (SE +/- 1240739.03, N = 3)
  r1: 441953333 (SE +/- 422150.58, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 4 - Buffer Length: 256 - Filter Length: 57
  r2b: 213203333 (SE +/- 824809.74, N = 3)
  r3: 215343333 (SE +/- 1663583.82, N = 3)
  r4: 216773333 (SE +/- 1956802.95, N = 3)
  r1: 217643333 (SE +/- 1090112.12, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 2 - Buffer Length: 256 - Filter Length: 57
  r4: 109430000 (SE +/- 132035.35, N = 3)
  r2b: 110173333 (SE +/- 907677.13, N = 3)
  r1: 110713333 (SE +/- 729984.78, N = 3)
  r3: 111510000 (SE +/- 430348.70, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 1 - Buffer Length: 256 - Filter Length: 57
  r4: 55251667 (SE +/- 534784.17, N = 3)
  r2b: 56230333 (SE +/- 613156.95, N = 3)
  r3: 57197667 (SE +/- 550708.74, N = 3)
  r1: 57792000 (SE +/- 173700.89, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
KTX-Software toktx
This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides the "toktx" tool for creating/converting image textures in the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. KTX-Software toktx 4.0, Settings: Zstd Compression 19
  r4: 20.08 (SE +/- 0.20, N = 3)
  r2b: 19.78 (SE +/- 0.22, N = 3)
Basis Universal
Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. Basis Universal 1.13, Settings: UASTC Level 3
  r4: 17.19 (SE +/- 0.01, N = 3)
  r2b: 17.16 (SE +/- 0.02, N = 3)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
  r3: 0.243308 (SE +/- 0.002507, N = 5, MIN: 0.22)
  r2b: 0.243026 (SE +/- 0.003187, N = 3, MIN: 0.22)
  r4: 0.242450 (SE +/- 0.002245, N = 7, MIN: 0.22)
  r1a: 0.240122 (SE +/- 0.000662, N = 3, MIN: 0.23)
  r1: 0.239989 (SE +/- 0.000856, N = 3, MIN: 0.22)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better
SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p
  r2b: 182.26 (SE +/- 4.05, N = 12)
  r4: 184.07 (SE +/- 0.65, N = 3)
  r3: 185.53 (SE +/- 1.57, N = 3)
  r1: 386.29 (SE +/- 15.40, N = 12)
  r1a: 393.46 (SE +/- 16.03, N = 12)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org Seconds, Fewer Is Better
KTX-Software toktx 4.0 - Settings: UASTC 3
  r2b: 5.664 (SE +/- 0.053, N = 15)
  r4: 5.562 (SE +/- 0.008, N = 3)
OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
  r2b: 1.25313 (SE +/- 0.00964, N = 3, MIN: 1.2)
  r1a: 1.25267 (SE +/- 0.01592, N = 15, MIN: 1.19)
  r1: 1.24809 (SE +/- 0.00180, N = 3, MIN: 1.2)
  r4: 1.24222 (SE +/- 0.01282, N = 3, MIN: 1.19)
  r3: 1.24176 (SE +/- 0.01211, N = 3, MIN: 1.18)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
  r2b: 0.943624 (SE +/- 0.011253, N = 3, MIN: 0.86)
  r4: 0.940714 (SE +/- 0.008450, N = 3, MIN: 0.86)
  r3: 0.936941 (SE +/- 0.007264, N = 3, MIN: 0.85)
  r1: 0.918568 (SE +/- 0.002101, N = 3, MIN: 0.85)
  r1a: 0.912279 (SE +/- 0.002111, N = 3, MIN: 0.86)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org Seconds, Fewer Is Better
Basis Universal 1.13 - Settings: UASTC Level 2
  r4: 14.16 (SE +/- 0.15, N = 3)
  r2b: 13.98 (SE +/- 0.18, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction
  r4: 14.66 (SE +/- 0.02, N = 3)
  r3: 14.60 (SE +/- 0.03, N = 3)
  r2b: 11.56 (SE +/- 0.04, N = 3)
  r1: 11.36 (SE +/- 0.02, N = 3)
  r1a: 11.27 (SE +/- 0.03, N = 3)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better
KTX-Software toktx 4.0 - Settings: UASTC 3 + Zstd Compression 19
  r4: 10.03 (SE +/- 0.11, N = 5)
  r2b: 10.01 (SE +/- 0.06, N = 3)
OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
  r3: 0.218349 (SE +/- 0.003384, N = 15, MIN: 0.19)
  r4: 0.217941 (SE +/- 0.004970, N = 15, MIN: 0.19)
  r1: 0.210919 (SE +/- 0.002205, N = 15, MIN: 0.19)
  r1a: 0.210728 (SE +/- 0.001109, N = 3, MIN: 0.2)
  r2b: 0.210324 (SE +/- 0.004449, N = 15, MIN: 0.18)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
  r3: 0.602314 (SE +/- 0.004400, N = 3, MIN: 0.56)
  r2b: 0.602122 (SE +/- 0.004180, N = 3, MIN: 0.56)
  r4: 0.602038 (SE +/- 0.003648, N = 3, MIN: 0.56)
  r1a: 0.595661 (SE +/- 0.000780, N = 3, MIN: 0.56)
  r1: 0.593042 (SE +/- 0.001703, N = 3, MIN: 0.56)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org Seconds, Fewer Is Better
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction
  r4: 3.57278153 (SE +/- 0.02850005, N = 15)
  r3: 3.56592774 (SE +/- 0.03072276, N = 15)
  r2b: 3.02281992 (SE +/- 0.02799890, N = 3)
  r1: 2.74370996 (SE +/- 0.00774937, N = 3)
  r1a: 2.73859096 (SE +/- 0.01532048, N = 3)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
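The two Incompact3d inputs differ only in grid size, which allows a rough scaling check: going from 129 to 193 cells per direction multiplies the cell count by about 3.35, while r1's reported mean wall time grows from about 2.74 s to 11.36 s, a factor of roughly 4.1. A small sketch of that arithmetic, using only the means reported above (per-run data is not available here):

```python
# Reported r1 means (seconds) for the two Xcompact3d Incompact3d inputs.
time_129 = 2.74370996
time_193 = 11.36

# Work grows with the cube of cells per direction for a cubic grid.
cell_ratio = (193 / 129) ** 3
time_ratio = time_193 / time_129

print(f"cell ratio: {cell_ratio:.2f}x, time ratio: {time_ratio:.2f}x")
```
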
OpenBenchmarking.org Seconds, Fewer Is Better
KTX-Software toktx 4.0 - Settings: Zstd Compression 9
  r4: 3.697 (SE +/- 0.064, N = 15)
  r2b: 3.470 (SE +/- 0.003, N = 3)
OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
  r4: 3.64319 (SE +/- 0.05617, N = 14, MIN: 3.5)
  r2b: 3.64232 (SE +/- 0.05421, N = 14, MIN: 3.51)
  r3: 3.64033 (SE +/- 0.05675, N = 14, MIN: 3.47)
  r1a: 3.57662 (SE +/- 0.00795, N = 3, MIN: 3.5)
  r1: 3.57247 (SE +/- 0.00924, N = 3, MIN: 3.53)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
  r4: 0.876227 (SE +/- 0.007461, N = 14, MIN: 0.84)
  r3: 0.874968 (SE +/- 0.007890, N = 14, MIN: 0.84)
  r2b: 0.874080 (SE +/- 0.008361, N = 14, MIN: 0.83)
  r1: 0.864164 (SE +/- 0.002419, N = 3, MIN: 0.84)
  r1a: 0.863214 (SE +/- 0.002055, N = 3, MIN: 0.84)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Google Draco Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better
Google Draco 1.4.1 - Model: Church Facade
  r4: 7082 (SE +/- 3.33, N = 3)
  r2b: 7001 (SE +/- 20.01, N = 3)
1. (CXX) g++ options: -O3
OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
  r3: 1.84339 (SE +/- 0.02043, N = 3, MIN: 1.67)
  r4: 1.81913 (SE +/- 0.00968, N = 3, MIN: 1.68)
  r2b: 1.81774 (SE +/- 0.01382, N = 3, MIN: 1.69)
  r1: 1.80046 (SE +/- 0.00580, N = 3, MIN: 1.68)
  r1a: 1.79881 (SE +/- 0.00121, N = 3, MIN: 1.69)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better
Google Draco 1.4.1 - Model: Lion
  r4: 6170 (SE +/- 21.15, N = 3)
  r2b: 6126 (SE +/- 25.21, N = 3)
1. (CXX) g++ options: -O3

OpenBenchmarking.org ms, Fewer Is Better
toyBrot Fractal Generator 2020-11-18 - Implementation: C++ Threads
  r3: 7203 (SE +/- 98.76, N = 3)
  r2b: 7149 (SE +/- 89.67, N = 3)
  r4: 7141 (SE +/- 76.94, N = 4)
  r1: 7018 (SE +/- 49.12, N = 3)
  r1a: 6980 (SE +/- 29.96, N = 3)
1. (CXX) g++ options: -O3 -lpthread -lm -lgcc -lgcc_s -lc
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better
SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p
  r4: 162.21 (SE +/- 1.59, N = 3)
  r2b: 164.32 (SE +/- 1.13, N = 3)
  r3: 164.51 (SE +/- 1.63, N = 3)
  r1: 327.87 (SE +/- 1.20, N = 3)
  r1a: 329.53 (SE +/- 1.10, N = 3)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OpenBenchmarking.org Frames Per Second, More Is Better
SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p
  r4: 179.13 (SE +/- 0.47, N = 3)
  r3: 181.52 (SE +/- 2.25, N = 3)
  r2b: 182.17 (SE +/- 0.90, N = 3)
  r1: 401.29 (SE +/- 1.44, N = 3)
  r1a: 408.24 (SE +/- 0.66, N = 3)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
  r3: 1.14578 (SE +/- 0.00975, N = 3, MIN: 1.04)
  r1a: 1.12224 (SE +/- 0.00124, N = 3, MIN: 1.02)
  r2b: 1.11874 (SE +/- 0.00330, N = 3, MIN: 1.02)
  r4: 1.11811 (SE +/- 0.01182, N = 3, MIN: 1.02)
  r1: 1.10991 (SE +/- 0.00274, N = 3, MIN: 1.02)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
  r3: 0.901823 (SE +/- 0.006631, N = 3, MIN: 0.84)
  r1a: 0.879137 (SE +/- 0.003986, N = 3, MIN: 0.83)
  r1: 0.877815 (SE +/- 0.006225, N = 3, MIN: 0.82)
  r4: 0.875421 (SE +/- 0.005244, N = 3, MIN: 0.82)
  r2b: 0.869978 (SE +/- 0.004902, N = 3, MIN: 0.82)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
  r2b: 2.11712 (SE +/- 0.01980, N = 3, MIN: 2.03)
  r3: 2.10841 (SE +/- 0.01943, N = 3, MIN: 2.03)
  r4: 2.10837 (SE +/- 0.01801, N = 3, MIN: 2.03)
  r1a: 2.08532 (SE +/- 0.00168, N = 3, MIN: 2.03)
  r1: 2.07944 (SE +/- 0.00138, N = 3, MIN: 2.03)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better
SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p
  r4: 233.96 (SE +/- 1.14, N = 3)
  r3: 234.39 (SE +/- 1.80, N = 10)
  r2b: 234.51 (SE +/- 2.64, N = 4)
  r1a: 493.51 (SE +/- 4.78, N = 3)
  r1: 499.23 (SE +/- 3.80, N = 3)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

OpenBenchmarking.org Frames Per Second, More Is Better
SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p
  r4: 156.26 (SE +/- 1.22, N = 3)
  r3: 157.83 (SE +/- 1.64, N = 3)
  r2b: 158.16 (SE +/- 1.76, N = 5)
  r1a: 288.99 (SE +/- 1.37, N = 3)
  r1: 290.67 (SE +/- 1.68, N = 3)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
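Per the system notes, r1 and r1a ran with the intel_pstate performance scaling governor while r2b, r3, and r4 used powersave, which lines up with the roughly 2x gap in the heavily threaded SVT encoder results. A quick sketch quantifying that gap from the SVT-HEVC tuning-10 means reported above:

```python
# Reported SVT-HEVC 1.5.0 means (FPS), Tuning: 10 - Input: Bosphorus 1080p.
fps = {"r4": 233.96, "r3": 234.39, "r2b": 234.51, "r1a": 493.51, "r1": 499.23}

def speedup(a, b):
    """How many times faster configuration a is than configuration b."""
    return fps[a] / fps[b]

print(f"r1 over r2b: {speedup('r1', 'r2b'):.2f}x")
print(f"r1a over r2b: {speedup('r1a', 'r2b'):.2f}x")
```
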
r1 Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads), Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS), Chipset: Intel Device 0998, Memory: 16 x 32 GB DDR4-3200MT/s Hynix HMA84GR7CJR4N-XN, Disk: 2 x 7682GB INTEL SSDPF2KX076TZ + 2 x 800GB INTEL SSDPF21Q800GB + 3841GB Micron_9300_MTFDHAL3T8TDP + 960GB INTEL SSDSC2KG96, Graphics: ASPEED, Monitor: VE228, Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 20.04, Kernel: 5.11.0-051100-generic (x86_64), Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance - CPU Microcode: 0xd000270
Python Notes: Python 2.7.18 + Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 28 April 2021 08:40 by user root.
r1a Notes: identical to r1 (Scaling Governor: intel_pstate performance).
Testing initiated at 29 April 2021 06:04 by user root.
r2 Notes: identical to r1 (Scaling Governor: intel_pstate performance).
Testing initiated at 29 April 2021 16:12 by user root.
r2a Notes: identical to r1 except Scaling Governor: intel_pstate powersave.
Testing initiated at 29 April 2021 16:16 by user root.
r2b Notes: identical to r1 except Scaling Governor: intel_pstate powersave.
Testing initiated at 29 April 2021 18:24 by user root.
r3 Notes: identical to r1 except Scaling Governor: intel_pstate powersave.
Testing initiated at 30 April 2021 08:26 by user root.
r4 Notes: identical to r1 except Scaling Governor: intel_pstate powersave.
Testing initiated at 30 April 2021 21:13 by user root.
r5 Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads), Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS), Chipset: Intel Device 0998, Memory: 16 x 32 GB DDR4-3200MT/s Hynix HMA84GR7CJR4N-XN, Disk: 2 x 7682GB INTEL SSDPF2KX076TZ + 2 x 800GB INTEL SSDPF21Q800GB + 3841GB Micron_9300_MTFDHAL3T8TDP + 960GB INTEL SSDSC2KG96, Graphics: ASPEED, Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 20.04, Kernel: 5.11.0-051100-generic (x86_64), Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1024x768
Notes: identical to r1 except Scaling Governor: intel_pstate powersave.
Testing initiated at 1 May 2021 07:03 by user root.