Tests for a future article. Intel Xeon E-2336 testing with an ASRockRack E3C252D4U (1.22 BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.
a
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x57 - Thermald 2.4.9
Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
b c Processor: Intel Xeon E-2388G @ 3.20GHz (8 Cores / 16 Threads), Motherboard: ASRockRack E3C252D4U (1.22 BIOS), Chipset: Intel Tiger Lake-H, Memory: 64GB, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS, Graphics: Intel RocketLake-S [UHD], Monitor: VA2431, Network: 2 x Intel I210
OS: Ubuntu 22.04, Kernel: 6.2.0-26-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1080
d e Processor: Intel Xeon E-2336 @ 2.90GHz (6 Cores / 12 Threads), Motherboard: ASRockRack E3C252D4U (1.22 BIOS), Chipset: Intel Tiger Lake-H, Memory: 64GB, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS, Graphics: ASPEED, Network: 2 x Intel I210
OS: Ubuntu 22.04, Kernel: 6.2.0-26-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1024x768
Algebraic Multi-Grid Benchmark AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Figure Of Merit, More Is Better Algebraic Multi-Grid Benchmark 1.2 e d a b c 40M 80M 120M 160M 200M 192950100 192808700 192619900 190611300 190589800 1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better Apache CouchDB 3.3.2 Bulk Size: 300 - Inserts: 1000 - Rounds: 30 a b d e 40 80 120 160 200 148.00 149.70 183.66 184.80 1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD
OpenBenchmarking.org Seconds, Fewer Is Better Apache CouchDB 3.3.2 Bulk Size: 500 - Inserts: 1000 - Rounds: 30 a b e d 60 120 180 240 300 208.65 212.42 262.24 267.83 1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD
Apache IoTDB OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 b a d e 200K 400K 600K 800K 1000K SE +/- 5696.93, N = 12 SE +/- 6733.33, N = 8 783425.31 780967.47 693352.96 671390.07
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 b a d e 4 8 12 16 20 SE +/- 0.18, N = 12 SE +/- 0.18, N = 8 11.82 11.83 14.42 15.45 MAX: 788.57 MAX: 845.33 MAX: 871.6 MAX: 841.24
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 a b e d 300K 600K 900K 1200K 1500K SE +/- 17375.90, N = 3 SE +/- 14317.27, N = 6 1447958.54 1446859.96 1349022.28 1342250.61
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 a b e d 6 12 18 24 30 SE +/- 0.33, N = 3 SE +/- 0.32, N = 6 20.68 20.85 22.80 23.25 MAX: 843.04 MAX: 789.31 MAX: 895.66 MAX: 840.31
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 b a d e 300K 600K 900K 1200K 1500K SE +/- 1722.69, N = 3 SE +/- 2691.26, N = 3 1187104.23 1175572.13 1082993.14 1080456.40
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 b a e d 3 6 9 12 15 SE +/- 0.05, N = 3 SE +/- 0.01, N = 3 9.97 10.09 11.39 11.48 MAX: 637.17 MAX: 670.1 MAX: 661.12 MAX: 654.25
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 a b e d 400K 800K 1200K 1600K 2000K SE +/- 9885.78, N = 3 SE +/- 15232.42, N = 3 1869315.57 1868505.59 1779490.51 1768171.84
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 b a e d 5 10 15 20 25 SE +/- 0.19, N = 3 SE +/- 0.16, N = 3 19.80 19.82 21.05 21.11 MAX: 675.64 MAX: 689.69 MAX: 687.47 MAX: 729.87
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 a b e d 400K 800K 1200K 1600K 2000K SE +/- 22284.36, N = 3 SE +/- 14334.70, N = 3 1810868.13 1767214.52 1699283.73 1609911.22
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 a b e d 3 6 9 12 15 SE +/- 0.14, N = 3 SE +/- 0.10, N = 3 8.08 8.39 8.86 9.44 MAX: 829.62 MAX: 837.13 MAX: 875.98 MAX: 857.31
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 a b e d 500K 1000K 1500K 2000K 2500K SE +/- 30745.57, N = 4 SE +/- 22735.34, N = 7 2558694.08 2554316.08 2166865.16 2139318.30
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 b a e d 5 10 15 20 25 SE +/- 0.15, N = 7 SE +/- 0.22, N = 4 16.29 16.36 19.55 20.38 MAX: 874.18 MAX: 861.67 MAX: 928.08 MAX: 917.41
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 b a e d 7M 14M 21M 28M 35M SE +/- 309841.40, N = 3 SE +/- 306044.35, N = 6 31446324.62 31169657.42 27941009.12 27658814.36
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 b a e d 13 26 39 52 65 SE +/- 0.45, N = 3 SE +/- 0.61, N = 6 48.76 49.28 55.42 56.44 MAX: 1002.85 MAX: 1102.28 MAX: 1292.79 MAX: 1369.78
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 b a d e 8M 16M 24M 32M 40M SE +/- 40579.48, N = 3 SE +/- 206780.32, N = 3 35881779.03 35731058.31 31204403.42 30519342.37
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 b a d e 30 60 90 120 150 SE +/- 0.14, N = 3 SE +/- 0.60, N = 3 122.59 122.86 139.93 144.80 MAX: 1193.3 MAX: 1107.59 MAX: 1287.5 MAX: 1580.7
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 b a d e 7M 14M 21M 28M 35M SE +/- 236903.12, N = 3 SE +/- 308105.07, N = 3 33835269.57 33440678.10 29837035.03 29645835.71
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 b a e d 13 26 39 52 65 SE +/- 0.46, N = 3 SE +/- 0.51, N = 3 51.04 51.82 58.71 58.92 MAX: 1204.88 MAX: 1143.96 MAX: 1143 MAX: 1277.7
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 b a e d 8M 16M 24M 32M 40M SE +/- 103626.85, N = 3 SE +/- 244332.82, N = 3 36624182.35 36535518.62 30665605.41 30570884.25
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 b a d e 30 60 90 120 150 SE +/- 0.82, N = 3 SE +/- 0.77, N = 3 127.24 128.26 152.42 152.54 MAX: 1175.96 MAX: 1123.23 MAX: 1686.91 MAX: 1269.54
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 b a e d 8M 16M 24M 32M 40M SE +/- 239793.06, N = 3 SE +/- 243373.38, N = 3 38375799.87 37703435.51 30686841.47 30418152.44
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 b a d e 14 28 42 56 70 SE +/- 0.32, N = 3 SE +/- 0.09, N = 3 48.04 48.28 60.56 60.80 MAX: 1030.49 MAX: 1114.29 MAX: 1126.74 MAX: 1125.57
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 b a d e 7M 14M 21M 28M 35M SE +/- 243572.43, N = 3 SE +/- 306835.51, N = 3 32929010.53 32700015.96 26725280.14 26463397.35
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 b a d e 40 80 120 160 200 SE +/- 1.53, N = 3 SE +/- 1.26, N = 3 144.40 145.58 179.03 181.42 MAX: 1247.59 MAX: 1258.59 MAX: 1288.79 MAX: 1854.32
Apache Spark This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and benchmarking various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
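For a sense of what the SHA-512 benchmark time below is actually timing, here is a minimal PySpark sketch in the same spirit: build a DataFrame of synthetic rows, hash a string column with SHA-512, and time the action. It assumes a working pyspark installation; the row count, partition count, and column names are illustrative stand-ins rather than the exact pyspark-benchmark driver.

    import time

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("sha512-sketch").getOrCreate()

    rows, partitions = 1_000_000, 100  # mirrors the "Row Count / Partitions" options
    df = (
        spark.range(rows)
        .repartition(partitions)
        .withColumn("payload", F.concat(F.lit("value-"), F.col("id").cast("string")))
    )

    start = time.time()
    # sha2(col, 512) is Spark SQL's built-in SHA-512; count() forces the work to run.
    df.withColumn("digest", F.sha2(F.col("payload"), 512)).select("digest").count()
    print(f"SHA-512 benchmark time: {time.time() - start:.2f} s")

    spark.stop()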
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time a b e d 1.0058 2.0116 3.0174 4.0232 5.029 3.40 3.41 4.36 4.47
ASKAP ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Iterations Per Second, More Is Better ASKAP 1.0 Test: Hogbom Clean OpenMP a b e d 40 80 120 160 200 165.02 164.75 160.26 159.49 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.org Million Grid Points Per Second, More Is Better ASKAP 1.0 Test: tConvolve MT - Gridding d e a b 200 400 600 800 1000 978.58 976.79 965.57 964.26 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.org Million Grid Points Per Second, More Is Better ASKAP 1.0 Test: tConvolve MT - Degridding b a d e 300 600 900 1200 1500 1353.70 1353.70 1340.21 1338.53 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.org Mpix/sec, More Is Better ASKAP 1.0 Test: tConvolve MPI - Degridding a b e d 300 600 900 1200 1500 1482.46 1474.13 1192.71 1178.42 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.org Mpix/sec, More Is Better ASKAP 1.0 Test: tConvolve MPI - Gridding a b e d 300 600 900 1200 1500 1441.73 1433.86 1229.98 1222.34 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.org Million Grid Points Per Second, More Is Better ASKAP 1.0 Test: tConvolve OpenMP - Gridding b a e d 200 400 600 800 1000 1052.40 1044.14 975.30 971.74 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.org Million Grid Points Per Second, More Is Better ASKAP 1.0 Test: tConvolve OpenMP - Degridding b a e d 400 800 1200 1600 2000 1799.03 1763.28 1643.56 1613.67 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Fast b a e d 30 60 90 120 150 154.35 154.04 116.02 115.97 1. (CXX) g++ options: -O3 -flto -pthread
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Fishy Cat - Compute: CPU-Only b a d e 70 140 210 280 350 SE +/- 0.15, N = 3 215.41 215.45 312.67 312.98
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.36 VGR Performance Metric a b d e 30K 60K 90K 120K 150K 130412 129573 89349 88988 1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
ClickHouse ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value aggregates the individual query processing times as a geometric mean and is expressed as queries per minute. Learn more via the OpenBenchmarking.org test page.
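As a quick illustration of the aggregation described above (this is not ClickHouse tooling, just the arithmetic), the sketch below folds a handful of made-up per-query times into a geo-mean figure and the corresponding queries-per-minute rate.

    import math

    def geo_mean(values):
        # Geometric mean of positive numbers.
        return math.exp(sum(math.log(v) for v in values) / len(values))

    # Hypothetical per-query runtimes in seconds from one pass over the dataset.
    query_times_s = [0.12, 0.48, 1.95, 0.07, 3.40]

    mean_time_s = geo_mean(query_times_s)
    print(f"geo-mean query time: {mean_time_s:.3f} s")
    print(f"aggregate rate: {60.0 / mean_time_s:.1f} queries per minute")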
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.12.3.5 100M Rows Hits Dataset, First Run / Cold Cache b a e d 20 40 60 80 100 108.92 108.20 99.39 95.87 MIN: 5.72 / MAX: 8571.43 MIN: 5.7 / MAX: 8571.43 MIN: 3.98 / MAX: 7500 MIN: 3.96 / MAX: 7500
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.12.3.5 100M Rows Hits Dataset, Second Run a b d e 30 60 90 120 150 113.63 112.91 103.13 102.31 MIN: 5.82 / MAX: 8571.43 MIN: 5.81 / MAX: 8571.43 MIN: 4 / MAX: 7500 MIN: 4.01 / MAX: 8571.43
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.12.3.5 100M Rows Hits Dataset, Third Run a b e d 30 60 90 120 150 113.53 111.97 103.64 103.15 MIN: 5.79 / MAX: 8571.43 MIN: 5.77 / MAX: 7500 MIN: 4.01 / MAX: 7500 MIN: 4 / MAX: 8571.43
Cpuminer-Opt Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Magi c b a d e 70 140 210 280 350 334.68 334.68 334.52 237.94 233.12 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Deepcoin b c a e d 2K 4K 6K 8K 10K 8215.17 8152.24 8128.92 5850.18 5748.68 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Ringcoin b a c d e 400 800 1200 1600 2000 1686.70 1685.16 1683.88 1203.63 1190.30 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Blake-2 S c b a e d 130K 260K 390K 520K 650K 594430 593650 592320 417510 417080 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Garlicoin c a b d e 800 1600 2400 3200 4000 3516.83 3305.88 3273.74 2610.22 2493.12 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Myriad-Groestl a b c e d 6K 12K 18K 24K 30K 27670 27390 27240 19560 19530 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: LBC, LBRY Credits c b a d e 12K 24K 36K 48K 60K 54450 54340 54340 39570 38470 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Quad SHA-256, Pyrite a c b e d 30K 60K 90K 120K 150K 127140 124050 123930 88000 87680 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Triple SHA-256, Onecoin b c a d e 40K 80K 120K 160K 200K 181190 180730 177750 127480 127470 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Dragonflydb Dragonfly is an open-source database server positioned as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.
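The "Set To Get Ratio" in the results below is simply the write/read mix that memtier_benchmark generates. As a rough sketch of that idea only (memtier_benchmark itself is a C tool; this uses the redis-py client, and the host, port, and key space are assumptions), a 1:5 workload against a Redis-protocol server such as Dragonfly could look like:

    import random

    import redis  # redis-py speaks the same protocol Dragonfly implements

    client = redis.Redis(host="localhost", port=6379)  # assumed local Dragonfly instance
    set_weight, get_weight = 1, 5                      # "Set To Get Ratio: 1:5"

    for _ in range(10_000):
        key = f"memtier-{random.randrange(1_000)}"
        if random.random() < set_weight / (set_weight + get_weight):
            client.set(key, "x" * 32)   # one SET...
        else:
            client.get(key)             # ...for every five GETs, on average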
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients: 50 - Set To Get Ratio: 1:1 a b e d 700K 1400K 2100K 2800K 3500K 3442545.53 3386413.69 2436674.73 2416447.01 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients: 50 - Set To Get Ratio: 1:5 b a e d 700K 1400K 2100K 2800K 3500K 3407584.22 3390692.31 2452945.72 2447513.39 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients: 50 - Set To Get Ratio: 5:1 b a e d 800K 1600K 2400K 3200K 4000K 3551639.05 3509365.98 2513794.98 2500811.71 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 10 - Set To Get Ratio: 1:5 b a d e 600K 1200K 1800K 2400K 3000K 3012041.19 3000846.79 2037833.52 2031408.76 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 20 - Set To Get Ratio: 1:5 b a d e 800K 1600K 2400K 3200K 4000K 3580150.29 3569222.70 2548641.80 2419203.43 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 50 - Set To Get Ratio: 1:5 a b d e 700K 1400K 2100K 2800K 3500K 3406857.67 3403187.91 2455266.37 2449997.94 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 60 - Set To Get Ratio: 1:5 a b d e 700K 1400K 2100K 2800K 3500K 3339072.06 3331423.51 2412723.51 2411587.54 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 10 - Set To Get Ratio: 1:10 b a d e 600K 1200K 1800K 2400K 3000K 3009409.13 2955396.42 2109010.30 2055869.52 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 20 - Set To Get Ratio: 1:10 b a d e 800K 1600K 2400K 3200K 4000K 3582493.35 3296080.94 2565892.85 2544125.94 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 50 - Set To Get Ratio: 1:10 b a d e 700K 1400K 2100K 2800K 3500K 3418062.57 3310390.02 2468187.63 2466340.57 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 60 - Set To Get Ratio: 1:10 b a d e 700K 1400K 2100K 2800K 3500K 3335026.97 3331550.72 2424431.95 2423927.40 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 10 - Set To Get Ratio: 1:100 a b e d 600K 1200K 1800K 2400K 3000K 3004715.33 3004192.87 2117511.76 2084872.04 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 20 - Set To Get Ratio: 1:100 a b e d 800K 1600K 2400K 3200K 4000K 3654927.90 3457130.11 2595364.56 2571654.79 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 50 - Set To Get Ratio: 1:100 b a d e 700K 1400K 2100K 2800K 3500K 3454075.39 3453155.48 2503115.23 2500272.78 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 60 - Set To Get Ratio: 1:100 b a e d 700K 1400K 2100K 2800K 3500K 3389154.48 3382163.21 2462268.97 2458737.20 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
EnCodec EnCodec is a Facebook/Meta-developed neural codec for compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using its novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input and measures the time taken to encode the WAV file to the EnCodec format. Learn more via the OpenBenchmarking.org test page.
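The measurement below boils down to how long it takes to push a WAV file through the EnCodec model at a given target bandwidth. A rough sketch with the encodec Python package follows; the API calls are recalled from the package's documented usage and the input filename is a placeholder, so treat it as illustrative rather than the exact test harness.

    import time

    import torch
    import torchaudio
    from encodec import EncodecModel
    from encodec.utils import convert_audio

    model = EncodecModel.encodec_model_24khz()
    model.set_target_bandwidth(3.0)          # the "Target Bandwidth: 3 kbps" case

    wav, sr = torchaudio.load("speech.wav")  # placeholder input file
    wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

    start = time.time()
    with torch.no_grad():
        model.encode(wav)                    # compress the waveform into EnCodec frames
    print(f"encode time: {time.time() - start:.2f} s")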
OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 3 kbps b a d e 7 14 21 28 35 25.89 25.89 29.29 29.85
FFmpeg This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
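Each scenario is ultimately a timed transcode. A minimal Python sketch of that shape is below; it shells out to ffmpeg with libx265, but the input file, preset, and output name are placeholders rather than vbench's actual per-scenario settings.

    import subprocess
    import time

    cmd = [
        "ffmpeg", "-y", "-i", "input.mp4",       # placeholder source clip
        "-c:v", "libx265", "-preset", "medium",  # H.265/HEVC software encode
        "-an", "output.mkv",                     # skip audio, write the HEVC result
    ]

    start = time.time()
    subprocess.run(cmd, check=True, capture_output=True)
    print(f"libx265 transcode: {time.time() - start:.2f} s")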
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Live a b c e d 10 20 30 40 50 38.05 38.17 38.25 45.61 45.84 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Live a b c e d 30 60 90 120 150 132.73 132.30 132.03 110.72 110.17 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Upload a b c d e 30 60 90 120 150 113.49 113.53 113.77 138.66 138.67 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Upload a b c e d 5 10 15 20 25 22.25 22.24 22.19 18.21 18.21 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Platform c a b e d 40 80 120 160 200 169.84 170.20 170.46 204.47 204.50 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Platform c a b e d 10 20 30 40 50 44.60 44.51 44.44 37.05 37.04 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Video On Demand c b a e d 40 80 120 160 200 170.09 170.37 170.63 204.68 204.80 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Video On Demand c b a e d 10 20 30 40 50 44.54 44.46 44.39 37.01 36.99 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Geekbench This is a benchmark of Geekbench 6 Pro. The test profile automates the execution of Geekbench 6 under the Phoronix Test Suite, assuming you have a valid license key for Geekbench 6 Pro. THIS TEST PROFILE WILL NOT WORK WITHOUT A VALID GEEKBENCH 6 PRO LICENSE KEY; test automation / CLI support is only available with the paid version of Geekbench. Learn more via the OpenBenchmarking.org test page.
GROMACS The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Ns Per Day, More Is Better GROMACS 2023 Implementation: MPI CPU - Input: water_GMX50_bare b a d e 0.1782 0.3564 0.5346 0.7128 0.891 0.792 0.788 0.671 0.667 1. (CXX) g++ options: -O3
Kvazaar This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better Kvazaar 2.2 Video Input: Bosphorus 4K - Video Preset: Medium b c a d e 2 4 6 8 10 6.79 6.74 6.72 4.77 4.76 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.org Frames Per Second, More Is Better Kvazaar 2.2 Video Input: Bosphorus 4K - Video Preset: Very Fast b c a e d 4 8 12 16 20 17.04 17.03 16.91 12.12 12.11 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.org Frames Per Second, More Is Better Kvazaar 2.2 Video Input: Bosphorus 4K - Video Preset: Super Fast b a c e d 5 10 15 20 25 21.23 21.14 21.05 14.99 14.96 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.org Frames Per Second, More Is Better Kvazaar 2.2 Video Input: Bosphorus 4K - Video Preset: Ultra Fast a b c d e 7 14 21 28 35 29.37 29.35 29.31 20.92 20.89 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
libxsmm Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
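The "M N K" cases below are small general matrix multiplications, and GFLOPS/s is simply floating-point operations per second for that GEMM shape. The back-of-envelope sketch below illustrates the metric with NumPy standing in for libxsmm, so its absolute numbers are not comparable to the results.

    import time

    import numpy as np

    m = n = k = 32                   # the "M N K: 32" case
    a = np.random.rand(m, k)
    b = np.random.rand(k, n)

    iters = 200_000
    start = time.time()
    for _ in range(iters):
        a @ b                        # one small GEMM per iteration
    elapsed = time.time() - start

    flops = 2.0 * m * n * k * iters  # multiply-adds counted as two FLOPs each
    print(f"{flops / elapsed / 1e9:.2f} GFLOPS/s")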
OpenBenchmarking.org GFLOPS/s, More Is Better libxsmm 2-1.17-3645 M N K: 32 e d a c b 13 26 39 52 65 56.6 56.4 55.8 55.4 55.4 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -pedantic -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2
OpenBenchmarking.org GFLOPS/s, More Is Better libxsmm 2-1.17-3645 M N K: 64 e a d c b 20 40 60 80 100 110.4 110.2 110.0 109.5 109.4 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -pedantic -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 32 b a d e 12M 24M 36M 48M 60M 54017000 54005000 50850000 50829000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 57 b a e d 20M 40M 60M 80M 100M 86235000 86164000 81272000 81257000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 32 b a e d 40M 80M 120M 160M 200M 200200000 199620000 177750000 177370000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 57 b a d e 60M 120M 180M 240M 300M 265830000 260980000 232090000 227420000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 32 b a e d 70M 140M 210M 280M 350M 331500000 331140000 277180000 276670000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 57 b a d e 90M 180M 270M 360M 450M 419540000 409640000 299170000 299160000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 512 a b d e 5M 10M 15M 20M 25M 21614000 21593000 20307000 20269000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 32 b a d e 110M 220M 330M 440M 550M 513440000 502810000 361340000 361220000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 57 b a d e 90M 180M 270M 360M 450M 433120000 427490000 308360000 308330000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 512 b a d e 20M 40M 60M 80M 100M 79964000 79299000 71110000 71098000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 512 b a d e 30M 60M 90M 120M 150M 132110000 131760000 100700000 99730000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 512 b a e d 40M 80M 120M 160M 200M 165290000 162520000 115710000 115700000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 32 b a d e 20M 40M 60M 80M 100M SE +/- 90000.00, N = 3 105040000 104940000 98855000 98751000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 57 a b e d 30M 60M 90M 120M 150M SE +/- 57831.17, N = 3 147466667 144810000 136420000 136380000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 512 b a e d 9M 18M 27M 36M 45M SE +/- 4630.81, N = 3 44040000 44031333 41172000 41039000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
LuxCoreRender LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org M samples/sec, More Is Better LuxCoreRender 2.6 Scene: DLSC - Acceleration: CPU b c a e d 0.4118 0.8236 1.2354 1.6472 2.059 1.83 1.79 1.79 1.27 1.25 MIN: 1.68 / MAX: 2.12 MIN: 1.66 / MAX: 2.06 MIN: 1.66 / MAX: 2.06 MIN: 1.16 / MAX: 1.57 MIN: 1.16 / MAX: 1.53
OpenBenchmarking.org M samples/sec, More Is Better LuxCoreRender 2.6 Scene: Danish Mood - Acceleration: CPU a c b d e 0.261 0.522 0.783 1.044 1.305 1.16 1.09 1.08 0.66 0.63 MIN: 0.32 / MAX: 1.44 MIN: 0.29 / MAX: 1.39 MIN: 0.26 / MAX: 1.38 MIN: 0.14 / MAX: 0.9 MIN: 0.13 / MAX: 0.87
OpenBenchmarking.org M samples/sec, More Is Better LuxCoreRender 2.6 Scene: Orange Juice - Acceleration: CPU b a c e d 0.6413 1.2826 1.9239 2.5652 3.2065 2.85 2.85 2.84 1.97 1.97 MIN: 2.72 / MAX: 3.29 MIN: 2.71 / MAX: 3.3 MIN: 2.71 / MAX: 3.28 MIN: 1.87 / MAX: 2.4 MIN: 1.87 / MAX: 2.4
OpenBenchmarking.org M samples/sec, More Is Better LuxCoreRender 2.6 Scene: LuxCore Benchmark - Acceleration: CPU a c b e d 0.2925 0.585 0.8775 1.17 1.4625 1.30 1.29 1.29 0.78 0.78 MIN: 0.38 / MAX: 1.6 MIN: 0.37 / MAX: 1.59 MIN: 0.37 / MAX: 1.58 MIN: 0.19 / MAX: 1.01 MIN: 0.19 / MAX: 1.03
OpenBenchmarking.org M samples/sec, More Is Better LuxCoreRender 2.6 Scene: Rainbow Colors and Prism - Acceleration: CPU b a c d e 2 4 6 8 10 7.63 7.63 7.49 5.18 5.07 MIN: 6.88 / MAX: 8.28 MIN: 6.75 / MAX: 8.34 MIN: 6.78 / MAX: 8.19 MIN: 4.7 / MAX: 6.12 MIN: 4.62 / MAX: 6.1
OpenBenchmarking.org Queries Per Second, More Is Better MariaDB 11.0.1 Clients: 128 a b e d 200 400 600 800 1000 1117 1111 857 842 1. (CXX) g++ options: -pie -fPIC -fstack-protector -O3 -lnuma -lpcre2-8 -lcrypt -lz -lm -lssl -lcrypto -lpthread -ldl
Memcached Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
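As with Dragonfly above, the set-to-get ratio describes the read/write mix that memtier_benchmark issues. A hedged sketch of a 1:10 mix using the pymemcache client (server address, key space, and op count are assumptions; the real load generator is memtier_benchmark) might look like:

    import random
    import time

    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))  # assumed local memcached instance
    set_weight, get_weight = 1, 10         # "Set To Get Ratio: 1:10"
    ops = 50_000

    start = time.time()
    for _ in range(ops):
        key = f"memtier-{random.randrange(1_000)}"
        if random.random() < set_weight / (set_weight + get_weight):
            client.set(key, b"x" * 32)     # roughly one SET...
        else:
            client.get(key)                # ...per ten GETs
    print(f"{ops / (time.time() - start):,.0f} ops/sec (single connection)")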
OpenBenchmarking.org Ops/sec, More Is Better Memcached 1.6.19 Set To Get Ratio: 1:5 b a e d 500K 1000K 1500K 2000K 2500K 2359355.53 2339109.34 1640940.11 1624284.67 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Memcached 1.6.19 Set To Get Ratio: 1:10 b a e d 500K 1000K 1500K 2000K 2500K 2294635.17 2257435.18 1574207.88 1565861.27 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Memcached 1.6.19 Set To Get Ratio: 1:100 b a e d 500K 1000K 1500K 2000K 2500K 2212508.18 2205533.41 1560732.04 1555313.89 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
miniBUDE MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GFInst/s, More Is Better miniBUDE 20210901 Implementation: OpenMP - Input Deck: BM1 a b c e d 80 160 240 320 400 381.97 380.11 377.48 260.86 259.43 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
OpenBenchmarking.org Billion Interactions/s, More Is Better miniBUDE 20210901 Implementation: OpenMP - Input Deck: BM1 a b c e d 4 8 12 16 20 15.28 15.20 15.10 10.43 10.38 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
NAMD NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
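Note the unit in the result that follows: NAMD reports days/ns, so lower is better, and it is simply the reciprocal of the more familiar ns/day. A two-line conversion, using one of the measured values:

    def days_per_ns_to_ns_per_day(days_per_ns: float) -> float:
        # one simulated ns costing X days  <=>  1/X ns simulated per day
        return 1.0 / days_per_ns

    # e.g. the 1.84476 days/ns result works out to roughly 0.54 ns/day
    print(f"{days_per_ns_to_ns_per_day(1.84476):.3f} ns/day")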
OpenBenchmarking.org days/ns, Fewer Is Better NAMD 2.14 ATPase Simulation - 327,506 Atoms a b c e d 0.5972 1.1944 1.7916 2.3888 2.986 1.84476 1.84751 1.84863 2.64431 2.65403
NCNN NCNN is a high performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: mobilenet a b e d 4 8 12 16 20 SE +/- 0.06, N = 3 14.02 14.02 15.83 15.85 MIN: 13.85 / MAX: 16.02 MIN: 13.9 / MAX: 15.7 MIN: 15.7 / MAX: 16.1 MIN: 15.74 / MAX: 17.68 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU-v2-v2 - Model: mobilenet-v2 a b d e 0.8955 1.791 2.6865 3.582 4.4775 SE +/- 0.00, N = 3 3.52 3.54 3.96 3.98 MIN: 3.41 / MAX: 3.81 MIN: 3.43 / MAX: 3.83 MIN: 3.84 / MAX: 4.3 MIN: 3.88 / MAX: 4.25 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU-v3-v3 - Model: mobilenet-v3 a b d e 0.7088 1.4176 2.1264 2.8352 3.544 SE +/- 0.01, N = 3 2.84 2.85 3.14 3.14 MIN: 2.8 / MAX: 3.29 MIN: 2.82 / MAX: 3.21 MIN: 3.12 / MAX: 3.34 MIN: 3.11 / MAX: 3.39 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: shufflenet-v2 a b e d 0.5355 1.071 1.6065 2.142 2.6775 SE +/- 0.01, N = 3 2.18 2.18 2.30 2.38 MIN: 2.15 / MAX: 3.92 MIN: 2.16 / MAX: 2.43 MIN: 2.28 / MAX: 2.59 MIN: 2.28 / MAX: 10.72 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: mnasnet a b d e 0.621 1.242 1.863 2.484 3.105 SE +/- 0.00, N = 3 2.54 2.54 2.76 2.76 MIN: 2.5 / MAX: 2.86 MIN: 2.51 / MAX: 2.66 MIN: 2.72 / MAX: 3.22 MIN: 2.72 / MAX: 2.99 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: efficientnet-b0 a b e d 1.1948 2.3896 3.5844 4.7792 5.974 SE +/- 0.01, N = 3 4.29 4.30 5.14 5.31 MIN: 4.24 / MAX: 5.21 MIN: 4.26 / MAX: 4.55 MIN: 4.87 / MAX: 6.62 MIN: 4.86 / MAX: 6.33 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: blazeface b a d e 0.189 0.378 0.567 0.756 0.945 SE +/- 0.02, N = 3 0.75 0.79 0.82 0.83 MAX: 1.13 MIN: 0.74 / MAX: 1.14 MIN: 0.8 / MAX: 0.88 MAX: 0.95 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: googlenet b a e d 3 6 9 12 15 SE +/- 0.13, N = 3 10.07 10.30 11.48 11.55 MIN: 9.97 / MAX: 10.38 MIN: 9.97 / MAX: 10.91 MIN: 11.37 / MAX: 11.79 MIN: 11.39 / MAX: 20.77 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: vgg16 a b e d 14 28 42 56 70 SE +/- 0.16, N = 3 59.79 59.88 61.31 61.38 MIN: 59.11 / MAX: 68.65 MIN: 59.57 / MAX: 60.2 MIN: 61.13 / MAX: 63.09 MIN: 61.23 / MAX: 62.51 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: resnet18 b a e d 3 6 9 12 15 SE +/- 0.09, N = 3 7.82 8.01 9.00 9.03 MIN: 7.75 / MAX: 8.12 MIN: 7.78 / MAX: 16.95 MIN: 8.92 / MAX: 9.28 MIN: 8.94 / MAX: 9.21 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: alexnet b a e d 2 4 6 8 10 SE +/- 0.18, N = 3 7.85 8.06 8.52 8.55 MIN: 7.76 / MAX: 8 MIN: 7.76 / MAX: 8.77 MIN: 8.47 / MAX: 8.81 MIN: 8.47 / MAX: 10.27 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: resnet50 b a d e 5 10 15 20 25 SE +/- 0.11, N = 3 16.36 16.49 20.65 20.67 MIN: 16.25 / MAX: 16.67 MIN: 16.29 / MAX: 17.14 MIN: 20.52 / MAX: 21.15 MIN: 20.38 / MAX: 30 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: yolov4-tiny a b e d 6 12 18 24 30 SE +/- 0.04, N = 3 23.76 23.81 26.35 26.39 MIN: 23.59 / MAX: 24.18 MIN: 23.64 / MAX: 31.75 MIN: 26.24 / MAX: 27.01 MIN: 26.24 / MAX: 35.28 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: squeezenet_ssd a b d e 3 6 9 12 15 SE +/- 0.05, N = 3 10.34 10.41 11.90 11.92 MIN: 10.17 / MAX: 19.27 MIN: 10.31 / MAX: 10.73 MIN: 11.8 / MAX: 12.29 MIN: 11.82 / MAX: 12.46 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: regnety_400m b a d e 2 4 6 8 10 SE +/- 0.09, N = 3 6.68 6.82 7.72 7.75 MIN: 6.64 / MAX: 6.98 MIN: 6.6 / MAX: 7.62 MIN: 7.61 / MAX: 8.99 MIN: 7.65 / MAX: 8.43 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: vision_transformer a b d e 20 40 60 80 100 SE +/- 0.12, N = 3 72.05 72.07 89.85 90.05 MIN: 70.65 / MAX: 99.29 MIN: 71.66 / MAX: 80.49 MIN: 89.35 / MAX: 91.15 MIN: 89.52 / MAX: 92.76 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: FastestDet b a d e 0.9 1.8 2.7 3.6 4.5 SE +/- 0.03, N = 3 3.51 3.52 3.97 4.00 MIN: 3.37 / MAX: 3.78 MIN: 3.35 / MAX: 3.83 MIN: 3.9 / MAX: 4.21 MIN: 3.94 / MAX: 4.38 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
nginx This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
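The load generator here is wrk, but the shape of the test is easy to picture: hit the local HTTPS endpoint with a fixed number of concurrent connections for a fixed period and count completed requests. A crude Python approximation follows (the URL, port, and client count are assumptions, certificate verification is disabled because of the self-signed certificate, and this is nowhere near as efficient as wrk):

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests
    import urllib3

    urllib3.disable_warnings()                 # self-signed certificate in use
    URL = "https://localhost:8443/index.html"  # placeholder local nginx endpoint

    def worker(deadline: float) -> int:
        done = 0
        while time.time() < deadline:
            requests.get(URL, verify=False)
            done += 1
        return done

    clients, duration = 100, 10                # e.g. the "Connections: 100" case
    deadline = time.time() + duration
    with ThreadPoolExecutor(max_workers=clients) as pool:
        totals = list(pool.map(worker, [deadline] * clients))
    print(f"{sum(totals) / duration:,.0f} requests/sec")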
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 100 b a e d 15K 30K 45K 60K 75K 68779.84 68098.55 47003.02 47001.04 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 200 b a e d 13K 26K 39K 52K 65K 61686.55 61494.71 43482.87 43258.70 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 500 b a e d 11K 22K 33K 44K 55K 53009.35 52883.38 40726.68 40695.40 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 1000 b a d e 10K 20K 30K 40K 50K 48357.65 47841.37 38750.55 38521.51 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenCV This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
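The test profile runs OpenCV's own C++ performance suites, so the sketch below is only meant to give a flavor of what a per-test millisecond figure represents: timing one representative image-processing operation on a synthetic 1080p frame via the cv2 Python bindings.

    import time

    import cv2
    import numpy as np

    img = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)  # synthetic frame

    start = time.time()
    for _ in range(100):
        cv2.GaussianBlur(img, (7, 7), 0)  # one representative image-processing op
    elapsed_ms = (time.time() - start) * 1000 / 100
    print(f"GaussianBlur 7x7 on 1080p: {elapsed_ms:.2f} ms per frame")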
OpenBenchmarking.org ms, Fewer Is Better OpenCV 4.7 Test: Core b a d e 13K 26K 39K 52K 65K 55182 55345 58795 59340 1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt
OpenBenchmarking.org ms, Fewer Is Better OpenCV 4.7 Test: Stitching b a d e 60K 120K 180K 240K 300K 269533 269695 290303 290774 1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt
OpenBenchmarking.org ms, Fewer Is Better OpenCV 4.7 Test: Image Processing a b e d 20K 40K 60K 80K 100K 80674 80783 99596 100448 1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt
OpenFOAM OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Mesh Time a c b e d 15 30 45 60 75 58.46 58.48 58.53 67.38 67.40 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lgenericPatchFields -lmeshTools -lsampling -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Execution Time a c b e d 160 320 480 640 800 708.09 710.07 710.27 749.44 749.90 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lgenericPatchFields -lmeshTools -lsampling -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Medium Mesh Size - Mesh Time a c b e d 110 220 330 440 550 439.65 441.50 441.61 496.10 496.74 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lgenericPatchFields -lmeshTools -lsampling -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Medium Mesh Size - Execution Time a b c e d 1300 2600 3900 5200 6500 5743.30 5770.72 5770.77 5877.55 5889.87 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lgenericPatchFields -lmeshTools -lsampling -lOpenFOAM -ldl -lm
OpenRadioss OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Bumper Beam a b c d e 60 120 180 240 300 210.90 211.14 211.84 264.81 267.00
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
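Because the profile wraps "openssl speed", the same measurement can be reproduced directly; the sketch below just shells out to it from Python. The chosen -evp algorithm, the three-second window, and the worker count of 16 (matching the larger CPU's thread count) are assumptions, not the profile's exact invocation.

    import subprocess

    # -evp selects the algorithm, -multi forks parallel workers.
    cmd = ["openssl", "speed", "-evp", "aes-128-gcm", "-seconds", "3", "-multi", "16"]
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    print(result.stdout.splitlines()[-1])  # last line summarizes the aggregate throughput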
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.1 Algorithm: SHA256 c a b e d 1600M 3200M 4800M 6400M 8000M 7670067840 7666522930 7571848920 5371686670 5363079590 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.1 Algorithm: SHA512 a b c d e 600M 1200M 1800M 2400M 3000M 2982380940 2971930440 2958467610 2090214690 2073192130 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org sign/s, More Is Better OpenSSL 3.1 Algorithm: RSA4096 b c a e d 1000 2000 3000 4000 5000 4803.9 4781.2 4750.7 3353.4 3304.7 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org verify/s, More Is Better OpenSSL 3.1 Algorithm: RSA4096 b c a e d 30K 60K 90K 120K 150K 155963.2 155507.5 155418.1 109117.9 109044.0 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.1 Algorithm: ChaCha20 b a d e 11000M 22000M 33000M 44000M 55000M 53408719120 53275943460 37684059340 37631001400 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.1 Algorithm: AES-128-GCM a b e d 20000M 40000M 60000M 80000M 100000M 104474526240 103818760740 74137558490 73615414070 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.1 Algorithm: AES-256-GCM b a e d 20000M 40000M 60000M 80000M 100000M 92618860950 92098198730 65455871040 65168081990 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.1 Algorithm: ChaCha20-Poly1305 b a e d 8000M 16000M 24000M 32000M 40000M 36372738320 36073985910 25784432780 25772756170 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenVINO This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
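The profile itself drives OpenVINO's bundled benchmark_app; for orientation only, a stripped-down throughput loop with the OpenVINO Python API might look like the following. The model path is a placeholder, a single static-shaped input is assumed, and a fixed random tensor is fed just to keep the CPU busy.

    import time

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")     # placeholder IR model path
    compiled = core.compile_model(model, "CPU")
    request = compiled.create_infer_request()

    port = compiled.input(0)                 # assumes one static-shaped input
    data = np.random.rand(*port.shape).astype(np.float32)

    start, frames = time.time(), 0
    while time.time() - start < 10:          # ~10 second measurement window
        request.infer({port: data})
        frames += 1
    elapsed = time.time() - start
    print(f"{frames / elapsed:.2f} FPS, {1000 * elapsed / frames:.2f} ms average")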
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Detection FP16 - Device: CPU a b d e 0.3015 0.603 0.9045 1.206 1.5075 1.34 1.33 0.99 0.98 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Detection FP16 - Device: CPU a b e d 900 1800 2700 3600 4500 2973.15 2976.94 4021.01 4033.28 MIN: 2488.55 / MAX: 3154.09 MIN: 2489.57 / MAX: 3136.21 MIN: 3277.03 / MAX: 4252.51 MIN: 3327.28 / MAX: 4221.11 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Face Detection FP16-INT8 - Device: CPU a b e d 3 6 9 12 15 9.16 9.14 6.54 6.51 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Face Detection FP16-INT8 - Device: CPU a b e d 130 260 390 520 650 435.99 436.28 610.66 612.57 MIN: 252.02 / MAX: 599.26 MIN: 352.34 / MAX: 460.22 MIN: 445.57 / MAX: 645.59 MIN: 320.73 / MAX: 641.45 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16-INT8 - Device: CPU b a d e 90 180 270 360 450 419.36 415.14 311.26 309.25 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16-INT8 - Device: CPU b a d e 3 6 9 12 15 9.53 9.62 12.83 12.92 MIN: 5.85 / MAX: 24.48 MIN: 6.57 / MAX: 24.31 MIN: 5.69 / MAX: 29.68 MIN: 7.1 / MAX: 29.13 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Machine Translation EN To DE FP16 - Device: CPU a b e d 6 12 18 24 30 27.40 27.33 20.64 20.62 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Machine Translation EN To DE FP16 - Device: CPU a b e d 40 80 120 160 200 145.90 146.26 193.60 193.94 MIN: 89.12 / MAX: 161.2 MIN: 121.53 / MAX: 161.59 MIN: 122.33 / MAX: 206.18 MIN: 150.32 / MAX: 215.81 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16-INT8 - Device: CPU b a e d 200 400 600 800 1000 914.26 907.43 648.80 647.74 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16-INT8 - Device: CPU b a e d 3 6 9 12 15 8.73 8.79 9.23 9.24 MIN: 4.62 / MAX: 37 MIN: 4.51 / MAX: 68.51 MIN: 4.94 / MAX: 18.86 MIN: 4.83 / MAX: 22.73 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Vehicle Bike Detection FP16 - Device: CPU a b d e 50 100 150 200 250 214.40 213.56 172.78 171.38 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Vehicle Bike Detection FP16 - Device: CPU a b d e 6 12 18 24 30 18.64 18.71 23.13 23.32 MIN: 11.82 / MAX: 40.83 MIN: 13.18 / MAX: 24.78 MIN: 10.66 / MAX: 43.31 MIN: 10.72 / MAX: 42.07 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b d e 2K 4K 6K 8K 10K 8184.72 8117.06 5067.31 5049.65 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b d e 0.2633 0.5266 0.7899 1.0532 1.3165 0.97 0.98 1.16 1.17 MIN: 0.5 / MAX: 13.7 MIN: 0.48 / MAX: 13.47 MIN: 0.53 / MAX: 14.34 MIN: 0.64 / MAX: 15.77 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVKL OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Items / Sec, More Is Better OpenVKL 1.3.1 Benchmark: vklBenchmark ISPC b c a e d 30 60 90 120 150 155 154 154 117 117 MIN: 18 / MAX: 2419 MIN: 18 / MAX: 2421 MIN: 18 / MAX: 2431 MIN: 13 / MAX: 1914 MIN: 13 / MAX: 1922
OpenBenchmarking.org Items / Sec, More Is Better OpenVKL 1.3.1 Benchmark: vklBenchmark Scalar c b a e d 15 30 45 60 75 69 69 69 52 52 MIN: 8 / MAX: 1000 MIN: 8 / MAX: 1000 MIN: 8 / MAX: 1000 MIN: 6 / MAX: 990 MIN: 6 / MAX: 989
OSPRay Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/ao/real_time b c a e d 0.8483 1.6966 2.5449 3.3932 4.2415 3.77037 3.74212 3.72924 2.67891 2.67678
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/scivis/real_time b c a e d 0.8454 1.6908 2.5362 3.3816 4.227 3.75720 3.75544 3.75234 2.66344 2.64427
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/pathtracer/real_time b c a e d 30 60 90 120 150 148.97 148.50 148.26 117.26 117.07
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/ao/real_time b c a e d 0.7271 1.4542 2.1813 2.9084 3.6355 3.23141 3.22055 3.17345 2.25929 2.22151
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/scivis/real_time c b a d e 0.7183 1.4366 2.1549 2.8732 3.5915 3.19263 3.18652 3.17488 2.22361 2.21786
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time b a c d e 0.8509 1.7018 2.5527 3.4036 4.2545 3.78168 3.77067 3.77000 2.66212 2.65809
OSPRay Studio Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer c a b e d 800 1600 2400 3200 4000 2691 2696 2704 3884 3894 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer a b c d e 1000 2000 3000 4000 5000 3256 3263 3264 4702 4708 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer a c b d e 14K 28K 42K 56K 70K 42834 42853 43011 65121 65391 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer b c a d e 30K 60K 90K 120K 150K 88710 88742 88765 126949 127177 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer c b a e d 20K 40K 60K 80K 100K 51903 52002 52071 78221 78324 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer c b a d e 30K 60K 90K 120K 150K 106444 106674 107002 153142 153814 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency a b e d 0.3071 0.6142 0.9213 1.2284 1.5355 0.862 0.914 1.260 1.275 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Only a b e d 110K 220K 330K 440K 550K 516884 431809 322751 312923 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency a b e d 0.6111 1.2222 1.8333 2.4444 3.0555 1.548 1.853 2.151 2.557 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Write b a d e 5K 10K 15K 20K 25K 23725 23466 17896 17875 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency b a d e 7 14 21 28 35 21.08 21.31 27.94 27.96 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Write a b d e 4K 8K 12K 16K 20K 17926 17895 13903 13796 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency b a d e 13 26 39 52 65 44.35 44.41 57.54 57.99 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Only a b d e 120K 240K 360K 480K 600K 580068 546888 392256 389396 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Only a b e d 90K 180K 270K 360K 450K 421150 413607 371978 294566 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Write a b e d 5K 10K 15K 20K 25K 23394 23377 17884 17766 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Write b a d e 4K 8K 12K 16K 20K 18040 18016 13811 13786 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
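The PostgreSQL numbers above come from pgbench-style workloads (scaling factor 100, 500/800 concurrent clients, read-only vs. read-write). As a rough illustration of how such runs can be driven, here is a hedged Python sketch that shells out to pgbench; the database name, thread count, and durations are assumptions, not the exact parameters used by the test profile.

    # Illustrative pgbench driver; assumes a local PostgreSQL server, a database named
    # "pgbenchdb", and a max_connections setting high enough for the client counts below.
    import subprocess

    DB = "pgbenchdb"  # hypothetical database name

    # Initialize with scaling factor 100 (about 10M rows in pgbench_accounts).
    subprocess.run(["pgbench", "-i", "-s", "100", DB], check=True)

    # Read-only (SELECT-only) run: 500 clients, 8 worker threads, 60 seconds.
    subprocess.run(["pgbench", "-c", "500", "-j", "8", "-S", "-T", "60", DB], check=True)

    # Read-write (default TPC-B-like) run: 800 clients, 8 worker threads, 60 seconds.
    subprocess.run(["pgbench", "-c", "800", "-j", "8", "-T", "60", DB], check=True)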
PyBench This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Milliseconds, Fewer Is Better PyBench 2018-02-16 Total For Average Test Times b a d e 150 300 450 600 750 662 664 705 711
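PyBench's individual tests are simple Python micro-benchmarks. To give a flavor of what it measures, here is a small sketch in the same spirit that times a nested-for-loop body with timeit; the loop sizes and round counts are arbitrary and not taken from PyBench itself.

    # Micro-benchmark in the spirit of PyBench's NestedForLoops test; sizes are arbitrary.
    import timeit

    def nested_for_loops():
        total = 0
        for i in range(100):
            for j in range(100):
                total += j
        return total

    # Report the average time per call in milliseconds over several rounds.
    rounds, calls = 20, 1000
    per_round = [timeit.timeit(nested_for_loops, number=calls) for _ in range(rounds)]
    print(f"avg: {sum(per_round) / rounds / calls * 1000:.4f} ms per call")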
QMCPACK QMCPACK is a modern, high-performance, open-source Quantum Monte Carlo (QMC) simulation code; this benchmark uses its MPI build with the H2O example input. QMCPACK is a production-level, many-body ab initio QMC code for computing the electronic structure of atoms, molecules, and solids, and is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Total Execution Time - Seconds, Fewer Is Better QMCPACK 3.16 Input: Li2_STO_ae c b a e d 90 180 270 360 450 296.75 300.84 301.65 408.56 411.70 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 a b e d 600K 1200K 1800K 2400K 3000K SE +/- 4575.83, N = 3 2630980.74 2475916.43 2188722.61 2145215.42 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 a b e d 600K 1200K 1800K 2400K 3000K SE +/- 27602.28, N = 3 2627627.80 2625534.74 2325921.50 2265510.42 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 a b e d 500K 1000K 1500K 2000K 2500K SE +/- 17758.85, N = 15 2447990.86 2391013.39 2131409.25 2058810.43 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 b a d e 500K 1000K 1500K 2000K 2500K SE +/- 8021.75, N = 3 2565838.51 2544206.44 2218941.46 2209577.35 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 a b d e 500K 1000K 1500K 2000K 2500K SE +/- 13714.16, N = 3 2493512.81 2464830.47 2092415.77 2091970.43 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 500 b a e d 600K 1200K 1800K 2400K 3000K 2857194.00 2854095.50 2637178.25 2633566.50 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
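The memtier_benchmark results above use mixed SET/GET ratios (1:5 and 1:10) against Redis. For illustration only, a minimal redis-py sketch of a 1:10 set-to-get pattern follows; it assumes a local Redis server on the default port and the third-party redis package, and it is not the memtier_benchmark workload itself.

    # Illustrative 1:10 SET:GET pattern against a local Redis; not the memtier_benchmark tool.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)
    ops = 0
    start = time.perf_counter()
    for i in range(10_000):
        key = f"key:{i % 1000}"
        if i % 11 == 0:          # roughly 1 SET for every 10 GETs
            r.set(key, "x" * 32)
        else:
            r.get(key)
        ops += 1
    elapsed = time.perf_counter() - start
    print(f"{ops / elapsed:,.0f} ops/sec (single client, unpipelined)")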
OpenBenchmarking.org Op/s, More Is Better RocksDB 8.0 Test: Update Random b a e d 110K 220K 330K 440K 550K 493105 487163 424647 422807 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better RocksDB 8.0 Test: Read While Writing a b d e 400K 800K 1200K 1600K 2000K 1716478 1714690 1195846 1181409 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better RocksDB 8.0 Test: Read Random Write Random b a e d 300K 600K 900K 1200K 1500K 1541034 1527745 1133720 1133541 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
SecureMark SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org marks, More Is Better SecureMark 1.0.4 Benchmark: SecureMark-TLS c a b d e 70K 140K 210K 280K 350K 341479 341310 341164 321452 320661 1. (CC) gcc options: -pedantic -O3
simdjson This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GB/s, More Is Better simdjson 2.0 Throughput Test: Kostya c b a e d 0.9045 1.809 2.7135 3.618 4.5225 4.02 4.01 4.00 3.83 3.83 1. (CXX) g++ options: -O3
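The simdjson metric is parse throughput in GB/s over a fixed JSON document (Kostya). A rough way to reproduce the idea of the measurement in Python is sketched below using the standard-library json module as a baseline; the file name is a placeholder, and the pysimdjson binding (not shown) could be swapped in as a faster parser.

    # Measures JSON parse throughput in GB/s; "kostya.json" is a placeholder file name.
    import json, time

    with open("kostya.json", "rb") as f:
        payload = f.read()

    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        json.loads(payload)
    elapsed = time.perf_counter() - start

    gb = len(payload) * runs / 1e9
    print(f"{gb / elapsed:.2f} GB/s with the stdlib json parser")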
SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and using a variety of their built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Mount St. Helens c b a e d 30 60 90 120 150 89.10 90.65 91.74 116.95 119.62 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Layered Halfspace c a b d e 70 140 210 280 350 232.39 238.30 240.62 308.79 314.22 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Tomographic Model a b c e d 30 60 90 120 150 87.21 88.22 90.71 117.07 117.92 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Homogeneous Halfspace c b a e d 30 60 90 120 150 111.02 111.56 113.86 148.18 149.67 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Water-layered Halfspace a c b e d 60 120 180 240 300 231.64 233.71 234.44 291.60 292.05 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
srsRAN Project srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Mbps, More Is Better srsRAN Project 23.5 Test: Downlink Processor Benchmark c b a d e 200 400 600 800 1000 936.4 934.8 932.7 877.9 874.0 1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
OpenBenchmarking.org Mbps, More Is Better srsRAN Project 23.5 Test: PUSCH Processor Benchmark, Throughput Total a c b e d 200 400 600 800 1000 1159.4 1135.6 1083.3 893.4 859.1 1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
OpenBenchmarking.org Mbps, More Is Better srsRAN Project 23.5 Test: PUSCH Processor Benchmark, Throughput Thread b c a d e 70 140 210 280 350 317.9 317.1 312.5 299.4 296.0 1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: CPU Cache a b d e 600K 1200K 1800K 2400K 3000K 2781513.07 2662853.59 2102565.10 2101251.45 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Wide Vector Math a b e d 110K 220K 330K 440K 550K 535128.11 534062.66 385865.21 382456.62 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Context Switching a b d e 800K 1600K 2400K 3200K 4000K 3654011.84 3638488.52 2531776.40 2527236.02 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Glibc C String Functions a b e d 1.5M 3M 4.5M 6M 7.5M 7013916.66 7009619.95 4856545.20 4792580.04 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: System V Message Passing a b e d 3M 6M 9M 12M 15M 13144909.56 13127961.92 9823844.73 9811611.36 1. (CXX) g++ options: -O2 -std=gnu99 -lc
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.6 Encoder Mode: Preset 4 - Input: Bosphorus 4K c a b d e 0.709 1.418 2.127 2.836 3.545 3.151 3.141 3.126 2.332 2.328 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
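The SVT-AV1 test drives the SvtAv1EncApp command-line encoder at a given preset over a 4K Bosphorus clip. A hedged sketch of timing such an encode from Python follows; the input file name and frame count are placeholders, and the flags reflect common SvtAv1EncApp usage rather than the exact test-profile invocation.

    # Times an SVT-AV1 encode and reports frames per second; file name and frame count are placeholders.
    import subprocess, time

    FRAMES = 600                              # assumed clip length
    cmd = ["SvtAv1EncApp",
           "-i", "Bosphorus_3840x2160.y4m",   # placeholder input
           "-b", "output.ivf",
           "--preset", "4"]

    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    elapsed = time.perf_counter() - start
    print(f"{FRAMES / elapsed:.2f} FPS at preset 4")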
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 1 - Input: Bosphorus 4K c b a e d 0.3668 0.7336 1.1004 1.4672 1.834 1.63 1.62 1.62 1.12 1.12 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 7 - Input: Bosphorus 4K c a b d e 8 16 24 32 40 33.42 33.42 33.09 24.34 23.99 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 10 - Input: Bosphorus 4K b a c d e 15 30 45 60 75 65.85 65.83 65.60 50.63 50.49 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: VMAF Optimized - Input: Bosphorus 4K c b a d e 10 20 30 40 50 43.07 42.84 42.65 37.09 36.22 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K a b c d e 11 22 33 44 55 48.42 48.26 48.18 40.38 40.10 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: Visual Quality Optimized - Input: Bosphorus 4K b a c d e 9 18 27 36 45 39.73 39.72 39.71 30.42 30.41 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also provides pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if complementary metrics are desired. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 32 - Model: AlexNet a b e d 20 40 60 80 100 96.89 96.85 78.48 78.27
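The TensorFlow figure above is CPU throughput in images/sec for AlexNet at batch size 32 via tf_cnn_benchmarks.py. As a simplified stand-in (not the reference benchmark itself), the sketch below times forward passes of a small Keras CNN on synthetic data; the model is deliberately tiny and its numbers are not comparable to the result above.

    # Simplified images/sec measurement for a tiny CNN on CPU; not tf_cnn_benchmarks.py itself.
    import time
    import tensorflow as tf

    batch = 32
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 11, strides=4, activation="relu", input_shape=(224, 224, 3)),
        tf.keras.layers.MaxPooling2D(3, strides=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1000),
    ])

    images = tf.random.normal((batch, 224, 224, 3))
    model(images)                      # warm-up / graph build
    steps = 20
    start = time.perf_counter()
    for _ in range(steps):
        model(images, training=False)
    elapsed = time.perf_counter() - start
    print(f"{batch * steps / elapsed:.1f} images/sec")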
Timed Wasmer Compilation This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Timed Wasmer Compilation 2.3 Time To Compile a c b d e 20 40 60 80 100 61.38 62.08 62.11 81.03 81.73 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
OpenBenchmarking.org Frames Per Second, More Is Better uvg266 0.4.1 Video Input: Bosphorus 4K - Video Preset: Medium b c a d e 1.0733 2.1466 3.2199 4.2932 5.3665 4.77 4.76 4.75 3.34 3.32
OpenBenchmarking.org Frames Per Second, More Is Better uvg266 0.4.1 Video Input: Bosphorus 1080p - Video Preset: Super Fast b a c e d 20 40 60 80 100 74.90 74.64 74.48 50.14 50.03
OpenBenchmarking.org Frames Per Second, More Is Better uvg266 0.4.1 Video Input: Bosphorus 1080p - Video Preset: Ultra Fast a b c d e 20 40 60 80 100 96.33 95.58 93.03 62.01 61.59
VVenC VVenC, the Fraunhofer Versatile Video Encoder, is a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 4K - Video Preset: Fast a b c e d 0.794 1.588 2.382 3.176 3.97 3.529 3.527 3.513 2.578 2.577 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 4K - Video Preset: Faster c b a d e 2 4 6 8 10 7.472 7.458 7.432 5.662 5.637 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 1080p - Video Preset: Fast c b a d e 3 6 9 12 15 11.257 11.236 11.094 8.313 8.306 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 1080p - Video Preset: Faster b c a d e 6 12 18 24 30 25.47 25.42 25.34 19.62 19.47 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100, Highest Compression b a c d e 0.9653 1.9306 2.8959 3.8612 4.8265 4.29 4.29 4.28 4.05 4.03 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100, Lossless, Highest Compression c b a e d 0.162 0.324 0.486 0.648 0.81 0.72 0.71 0.71 0.64 0.64 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
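The WebP encode settings above ("Quality 100, Highest Compression" and its lossless variant) correspond roughly to libwebp's quality and method parameters. A small Pillow-based sketch of equivalent settings is shown below; the input image name is a placeholder, and Pillow built with WebP support is assumed.

    # Encodes an image to WebP at quality 100 with the slowest/highest-compression method (6).
    # "sample.png" is a placeholder; requires Pillow built with WebP support.
    from PIL import Image

    img = Image.open("sample.png")
    img.save("lossy_q100.webp", "WEBP", quality=100, method=6)                      # quality 100, highest compression
    img.save("lossless_q100.webp", "WEBP", lossless=True, quality=100, method=6)    # lossless variant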
OpenBenchmarking.org Frames Per Second, More Is Better x265 3.4 Video Input: Bosphorus 1080p b c a e d 13 26 39 52 65 59.97 59.70 58.45 50.47 50.06 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations and as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Xcompact3d Incompact3d 2021-03-11 Input: input.i3d 129 Cells Per Direction a c b e d 11 22 33 44 55 44.70 44.89 44.90 46.80 48.47 1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
Xmrig Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure Xmrig's CPU mining performance. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org H/s, More Is Better Xmrig 6.18.1 Variant: Monero - Hash Count: 1M a b c e d 400 800 1200 1600 2000 1946.8 1934.8 1933.8 1798.3 1792.1 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.18.1 Variant: Wownero - Hash Count: 1M c a b e d 700 1400 2100 2800 3500 3418.8 3410.3 3349.9 2809.5 2773.3 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org Seconds, Fewer Is Better Z3 Theorem Prover 4.12.1 SMT File: 2.smt2 c b a e d 20 40 60 80 100 71.35 71.44 72.46 76.57 77.37 1. (CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC
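The Z3 test measures wall-clock time to solve a bundled SMT-LIB file (2.smt2). To give a sense of the API behind it, here is a minimal z3-solver (z3py) sketch; the tiny constraint set is made up for illustration, and loading the actual .smt2 input would instead use Solver.from_file.

    # Minimal Z3 usage via the z3-solver Python bindings; constraints are illustrative only.
    from z3 import Int, Solver, sat

    x, y = Int("x"), Int("y")
    s = Solver()
    s.add(x > 0, y > x, x + y == 10)
    if s.check() == sat:
        print(s.model())        # prints a satisfying assignment for x and y

    # For the benchmark's actual workload, one would load the SMT-LIB file instead:
    # s.from_file("2.smt2")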
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.4 Compression Level: 19, Long Mode - Decompression Speed c a b e d 400 800 1200 1600 2000 1643.8 1638.7 1636.5 1514.3 1507.9 1. (CC) gcc options: -O3 -pthread -lz -llzma
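The Zstd result above reports decompression throughput after compressing at level 19 with long mode. A rough Python sketch using the third-party zstandard package is shown below; it exercises level 19 but omits the long-distance-matching ("long mode") window setting, and the input file name is a placeholder.

    # Level-19 zstd compress/decompress timing via the "zstandard" package; input path is a placeholder.
    import time
    import zstandard as zstd

    with open("corpus.bin", "rb") as f:
        data = f.read()

    compressed = zstd.ZstdCompressor(level=19).compress(data)

    dctx = zstd.ZstdDecompressor()
    start = time.perf_counter()
    out = dctx.decompress(compressed)
    elapsed = time.perf_counter() - start
    print(f"{len(out) / elapsed / 1e6:.1f} MB/s decompression")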
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x57 - Thermald 2.4.9Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)Python Notes: Python 3.10.12Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 16 August 2023 20:23 by user phoronix.
b Kernel, Compiler, Processor, Java, Python, and Security Notes: identical to configuration a.
Testing initiated at 17 August 2023 10:29 by user phoronix.
c Processor: Intel Xeon E-2388G @ 3.20GHz (8 Cores / 16 Threads), Motherboard: ASRockRack E3C252D4U (1.22 BIOS), Chipset: Intel Tiger Lake-H, Memory: 64GB, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS, Graphics: Intel RocketLake-S [UHD ], Monitor: VA2431, Network: 2 x Intel I210
OS: Ubuntu 22.04, Kernel: 6.2.0-26-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel, Compiler, Processor, Java, Python, and Security Notes: identical to configuration a.
Testing initiated at 18 August 2023 07:42 by user phoronix.
d Kernel, Compiler, Processor, Java, Python, and Security Notes: identical to configuration a.
Testing initiated at 20 August 2023 07:43 by user phoronix.
e Processor: Intel Xeon E-2336 @ 2.90GHz (6 Cores / 12 Threads), Motherboard: ASRockRack E3C252D4U (1.22 BIOS), Chipset: Intel Tiger Lake-H, Memory: 64GB, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS, Graphics: ASPEED, Network: 2 x Intel I210
OS: Ubuntu 22.04, Kernel: 6.2.0-26-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1024x768
Kernel, Compiler, Processor, Java, Python, and Security Notes: identical to configuration a.
Testing initiated at 21 August 2023 05:01 by user phoronix.