Intel Xeon Ice Lake Mitigation Comparison

Benchmarks of the CPU security mitigation impact on Intel Xeon Scalable "Ice Lake" processors, run by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2105232-IB-XEONICELA80

Tests in this comparison fall within: Timed Code Compilation (4 tests), C/C++ Compiler Tests (3 tests), CPU Massive (7 tests), Creator Workloads (2 tests), Database Test Suite (6 tests), HPC - High Performance Computing (2 tests), Java (2 tests), Common Kernel Benchmarks (4 tests), Machine Learning (2 tests), Multi-Core (5 tests), Programmer / Developer System Benchmarks (5 tests), Python Tests (5 tests), Server (6 tests), Server CPU Tests (5 tests), Telephony (2 tests).


Run Details

- Default: run May 21 2021, test duration 8 Hours, 36 Minutes
- mitigations=off: run May 22 2021, test duration 9 Hours, 23 Minutes


Intel Xeon Ice Lake Mitigation Comparison - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Device 0998
Memory: 504GB
Disk: 800GB INTEL SSDPF21Q800GB
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 21.04
Kernel: 5.11.0-17-generic (x86_64)
Desktop: GNOME Shell 3.38.4
Display Server: X Server
Compiler: GCC 10.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk details: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Scaling Governor: intel_pstate powersave
- CPU Microcode: 0xd000270
- OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2)
- Python 3.9.4
- Security (Default): itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- Security (mitigations=off): itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable IBPB: disabled STIBP: disabled + srbds: Not affected + tsx_async_abort: Not affected

Default vs. mitigations=off Comparison (Phoronix Test Suite) - largest differences relative to baseline:
- Apache HBase, Rand Read - 1: 3.2%
- Apache HBase, Rand Read - 1: 3.2%
- Renaissance, Scala Dotty: 3.1%
- Apache HBase, Rand Write - 1: 2.7%
- Mobile Neural Network, SqueezeNetV1.0: 2.6%
- Redis, SET: 2.5%
- Renaissance, Rand Forest: 2.3%
- PJSIP, OPTIONS, Stateless: 2%
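The percentage deltas in the comparison chart can be reproduced from the per-test averages listed later in this file; a minimal sketch, assuming the chart expresses each gap relative to the smaller of the two values (which matches the spot checks below):

```python
def pct_delta(a, b):
    # Gap between the two results, expressed relative to the smaller value
    # and rounded to one decimal as in the chart.
    hi, lo = max(a, b), min(a, b)
    return round((hi - lo) / lo * 100, 1)

print(pct_delta(907, 879))          # HBase Rand Read - 1, rows/sec: 3.2
print(pct_delta(1791.67, 1737.25))  # Renaissance Scala Dotty: 3.1
print(pct_delta(4.868, 4.743))      # MNN SqueezeNetV1.0: 2.6
```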

Intel Xeon Ice Lake Mitigation Comparison - combined results table (each test's results are broken out individually below). OpenBenchmarking.org

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3, Test: Increment - Clients: 1 (Rows Per Second, More Is Better)
Default: 566 (SE +/- 6.03, N = 3; Min: 559 / Avg: 566 / Max: 578)
mitigations=off: 565 (SE +/- 2.85, N = 3; Min: 559 / Avg: 564.67 / Max: 568)

Apache HBase 2.2.3, Test: Increment - Clients: 1 (Microseconds - Average Latency, Fewer Is Better)
Default: 1762 (SE +/- 19.34, N = 3; Min: 1724 / Avg: 1762.33 / Max: 1786)
mitigations=off: 1767 (SE +/- 8.21, N = 3; Min: 1757 / Avg: 1766.67 / Max: 1783)

Apache HBase 2.2.3, Test: Random Read - Clients: 1 (Rows Per Second, More Is Better)
Default: 907 (SE +/- 6.56, N = 3; Min: 899 / Avg: 907 / Max: 920)
mitigations=off: 879 (SE +/- 7.51, N = 3; Min: 871 / Avg: 879 / Max: 894)

Apache HBase 2.2.3, Test: Random Read - Clients: 1 (Microseconds - Average Latency, Fewer Is Better)
Default: 1097 (SE +/- 7.75, N = 3; Min: 1082 / Avg: 1097.33 / Max: 1107)
mitigations=off: 1132 (SE +/- 9.68, N = 3; Min: 1113 / Avg: 1132.33 / Max: 1143)

Apache HBase 2.2.3, Test: Random Write - Clients: 1 (Rows Per Second, More Is Better)
mitigations=off: 25804 (SE +/- 320.28, N = 14; Min: 22230 / Avg: 25804.29 / Max: 26918)
Default: 25670 (SE +/- 345.85, N = 15; Min: 22364 / Avg: 25669.87 / Max: 26692)

Apache HBase 2.2.3, Test: Random Write - Clients: 1 (Microseconds - Average Latency, Fewer Is Better)
Default: 37 (SE +/- 0.43, N = 15; Min: 36 / Avg: 37.33 / Max: 43)
mitigations=off: 38 (SE +/- 0.57, N = 14; Min: 36 / Avg: 37.5 / Max: 44)

Apache HBase 2.2.3, Test: Sequential Read - Clients: 1 (Rows Per Second, More Is Better)
Default: 922 (SE +/- 10.74, N = 3; Min: 902 / Avg: 921.67 / Max: 939)
mitigations=off: 909 (SE +/- 9.65, N = 4; Min: 885 / Avg: 909 / Max: 927)

Apache HBase 2.2.3, Test: Sequential Read - Clients: 1 (Microseconds - Average Latency, Fewer Is Better)
Default: 1081 (SE +/- 12.77, N = 3; Min: 1060 / Avg: 1080.67 / Max: 1104)
mitigations=off: 1096 (SE +/- 11.92, N = 4; Min: 1074 / Avg: 1096 / Max: 1126)

Apache HBase 2.2.3, Test: Sequential Write - Clients: 1 (Rows Per Second, More Is Better)
mitigations=off: 30602 (SE +/- 418.18, N = 15; Min: 25268 / Avg: 30601.53 / Max: 32026)
Default: 30564 (SE +/- 432.13, N = 15; Min: 25192 / Avg: 30564.27 / Max: 31923)

Apache HBase 2.2.3, Test: Sequential Write - Clients: 1 (Microseconds - Average Latency, Fewer Is Better)
Default: 31 (SE +/- 0.52, N = 15; Min: 30 / Avg: 31.47 / Max: 38)
mitigations=off: 31 (SE +/- 0.51, N = 15; Min: 30 / Avg: 31.47 / Max: 38)
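Each result above carries a standard error over N runs. For the Default Increment result (Min 559 / Avg 566 / Max 578, N = 3), the middle run is implied by the average (3 x 566 - 559 - 578 = 561), and from those three values the reported SE +/- 6.03 can be reproduced:

```python
import math
import statistics

def standard_error(samples):
    # SE = sample standard deviation / sqrt(N)
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Default runs of HBase "Increment - Clients: 1"; the middle run value
# is derived from the reported Min/Avg/Max, not listed in the result file.
runs = [559, 561, 578]
print(statistics.mean(runs), round(standard_error(runs), 2))  # SE matches the reported 6.03
```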

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance using the Chia VDF benchmark; the Chia VDF is the Chia Verifiable Delay Function (Proof of Time). Learn more via the OpenBenchmarking.org test page.

Chia Blockchain VDF 1.0.1, Test: Square Plain C++ (IPS, More Is Better)
mitigations=off: 140200 (SE +/- 400.00, N = 3; Min: 139400 / Avg: 140200 / Max: 140600)
Default: 139967 (SE +/- 835.33, N = 3; Min: 138300 / Avg: 139966.67 / Max: 140900)
Compiler notes: (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
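As a rough illustration of the general technique, the cost of a context switch can be estimated by bouncing a token between two processes so that each round trip forces at least two switches. This is a hypothetical sketch (POSIX-only, timed in nanoseconds rather than the clock cycles ctx_clock reports), not ctx_clock's actual implementation:

```python
import os
import time

def context_switch_ns(iters=2000):
    """Ping-pong one byte between parent and child over two pipes;
    each round trip forces at least two context switches."""
    p2c_r, p2c_w = os.pipe()
    c2p_r, c2p_w = os.pipe()
    pid = os.fork()
    if pid == 0:  # child: echo the token back
        for _ in range(iters):
            os.write(c2p_w, os.read(p2c_r, 1))
        os._exit(0)
    t0 = time.perf_counter_ns()
    for _ in range(iters):  # parent: send token, wait for the echo
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    elapsed = time.perf_counter_ns() - t0
    os.waitpid(pid, 0)
    return elapsed / (2 * iters)

print(f"~{context_switch_ns():.0f} ns per switch")
```

Note the figure includes pipe read/write overhead on top of the raw switch cost, so it is an upper bound.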

ctx_clock, Context Switch Time (Clocks, Fewer Is Better)
Default: 228
mitigations=off: 228

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: H2 (msec, Fewer Is Better)
mitigations=off: 10710 (SE +/- 34.81, N = 4; Min: 10620 / Avg: 10709.5 / Max: 10770)
Default: 10754 (SE +/- 64.56, N = 4; Min: 10584 / Avg: 10753.75 / Max: 10893)

DaCapo Benchmark 9.12-MR1, Java Test: Tradebeans (msec, Fewer Is Better)
mitigations=off: 16796 (SE +/- 139.58, N = 4; Min: 16478 / Avg: 16795.5 / Max: 17035)
Default: 17045 (SE +/- 156.28, N = 7; Min: 16507 / Avg: 17044.57 / Max: 17713)

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2, Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
mitigations=off: 727924.4 (SE +/- 2881.97, N = 3; Min: 724999.7 / Avg: 727924.37 / Max: 733688.1)
Default: 721644.5 (SE +/- 1793.11, N = 3; Min: 718362 / Avg: 721644.53 / Max: 724536.6)

InfluxDB 1.8.2, Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
mitigations=off: 1196123.7 (SE +/- 1536.16, N = 3; Min: 1193765.4 / Avg: 1196123.73 / Max: 1199008.2)
Default: 1188389.4 (SE +/- 4126.24, N = 3; Min: 1182664.9 / Avg: 1188389.43 / Max: 1196399.5)

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (Ops/sec, More Is Better)
Default: 516052.16 (SE +/- 3720.26, N = 3; Min: 509855.47 / Avg: 516052.16 / Max: 522717.27)
mitigations=off: 508188.29 (SE +/- 7304.04, N = 15; Min: 454916.45 / Avg: 508188.29 / Max: 551130.46)
Compiler notes: (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3, Model: SqueezeNetV1.0 (ms, Fewer Is Better)
mitigations=off: 4.743 (SE +/- 0.016, N = 3; Min: 4.71 / Avg: 4.74 / Max: 4.77; per-inference MIN: 4.53 / MAX: 11.47)
Default: 4.868 (SE +/- 0.007, N = 3; Min: 4.86 / Avg: 4.87 / Max: 4.88; per-inference MIN: 4.49 / MAX: 18.5)

Mobile Neural Network 1.1.3, Model: inception-v3 (ms, Fewer Is Better)
mitigations=off: 25.47 (SE +/- 0.20, N = 3; Min: 25.13 / Avg: 25.47 / Max: 25.83; per-inference MIN: 23.79 / MAX: 74.89)
Default: 25.90 (SE +/- 0.12, N = 3; Min: 25.7 / Avg: 25.9 / Max: 26.13; per-inference MIN: 24.95 / MAX: 57.75)

Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

PJSIP

PJSIP is a free and open source multimedia communication library written in the C language implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile makes use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark are available at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.

PJSIP 2.11, Method: INVITE (Responses Per Second, More Is Better)
mitigations=off: 2623 (SE +/- 3.79, N = 3; Min: 2616 / Avg: 2623 / Max: 2629)
Default: 2588 (SE +/- 31.32, N = 3; Min: 2527 / Avg: 2588.33 / Max: 2630)

PJSIP 2.11, Method: OPTIONS, Stateful (Responses Per Second, More Is Better)
Default: 3821 (SE +/- 10.39, N = 3; Min: 3803 / Avg: 3821 / Max: 3839)
mitigations=off: 3813 (SE +/- 3.79, N = 3; Min: 3807 / Avg: 3813 / Max: 3820)

PJSIP 2.11, Method: OPTIONS, Stateless (Responses Per Second, More Is Better)
mitigations=off: 41291 (SE +/- 329.47, N = 3; Min: 40927 / Avg: 41291.33 / Max: 41949)
Default: 40482 (SE +/- 197.93, N = 3; Min: 40093 / Avg: 40482 / Max: 40740)

Compiler notes: (CC) gcc options: -lstdc++ -lssl -lcrypto -luuid -lm -lrt -lpthread -lasound -O2

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, More Is Better)
mitigations=off: 958345 (SE +/- 8471.25, N = 15; Min: 899091.92 / Avg: 958345.44 / Max: 1022540.13)
Default: 957966 (SE +/- 17109.96, N = 13; Min: 856577.88 / Avg: 957965.69 / Max: 1032173.99)

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
mitigations=off: 0.262 (SE +/- 0.002, N = 15; Min: 0.25 / Avg: 0.26 / Max: 0.28)
Default: 0.263 (SE +/- 0.005, N = 13; Min: 0.24 / Avg: 0.26 / Max: 0.29)

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, More Is Better)
Default: 28904 (SE +/- 79.55, N = 3; Min: 28757.75 / Avg: 28903.62 / Max: 29031.56)
mitigations=off: 28865 (SE +/- 54.70, N = 3; Min: 28786.56 / Avg: 28865.46 / Max: 28970.55)

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
Default: 8.682 (SE +/- 0.022, N = 3; Min: 8.65 / Avg: 8.68 / Max: 8.72)
mitigations=off: 8.695 (SE +/- 0.018, N = 3; Min: 8.66 / Avg: 8.69 / Max: 8.72)

Compiler notes: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
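The pgbench latency and TPS figures are consistent with each other: with a fixed client count, Little's law gives average latency of approximately clients / throughput. A quick check against the mitigations=off averages reported above:

```python
# Little's law sanity check on the pgbench results: latency ~ clients / TPS.
clients = 250
tps_read_only = 958345.44   # mitigations=off Read Only average
tps_read_write = 28865.46   # mitigations=off Read Write average

lat_ro_ms = clients / tps_read_only * 1000
lat_rw_ms = clients / tps_read_write * 1000
# Close to the reported 0.262 ms and 8.695 ms average latencies.
print(round(lat_ro_ms, 3), round(lat_rw_ms, 3))
```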

PostMark

This is a test of NetApp's PostMark benchmark designed to simulate small-file testing similar to the tasks endured by web and mail servers. This test profile will set PostMark to perform 25,000 transactions with 500 files simultaneously with the file sizes ranging between 5 and 512 kilobytes. Learn more via the OpenBenchmarking.org test page.

PostMark 1.51, Disk Transaction Performance (TPS, More Is Better)
Default: 6214 (SE +/- 74.77, N = 4; Min: 6097 / Avg: 6213.5 / Max: 6410)
mitigations=off: 6199 (SE +/- 51.00, N = 3; Min: 6097 / Avg: 6199 / Max: 6250)
Compiler notes: (CC) gcc options: -O3

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: SET (Requests Per Second, More Is Better)
mitigations=off: 1762944.93 (SE +/- 21191.99, N = 15; Min: 1653292.88 / Avg: 1762944.93 / Max: 1913881.75)
Default: 1720526.95 (SE +/- 10529.94, N = 3; Min: 1699845.62 / Avg: 1720526.95 / Max: 1734310.12)
Compiler notes: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: Scala Dotty (ms, Fewer Is Better)
mitigations=off: 1737.25 (SE +/- 32.02, N = 20; Min: 1635.52 / Avg: 1737.24 / Max: 2185.25)
Default: 1791.67 (SE +/- 48.82, N = 25; Min: 1627.98 / Avg: 1791.67 / Max: 2657.3)

Renaissance 0.10.0, Test: Random Forest (ms, Fewer Is Better)
mitigations=off: 2925.44 (SE +/- 20.17, N = 5; Min: 2866.81 / Avg: 2925.44 / Max: 2984.3)
Default: 2991.85 (SE +/- 24.69, N = 5; Min: 2925.22 / Avg: 2991.85 / Max: 3068.34)

Renaissance 0.10.0, Test: Apache Spark ALS (ms, Fewer Is Better)
mitigations=off: 3422.74 (SE +/- 30.59, N = 5; Min: 3330.24 / Avg: 3422.74 / Max: 3490.93)
Default: 3448.86 (SE +/- 26.99, N = 5; Min: 3350.07 / Avg: 3448.86 / Max: 3513.37)

Renaissance 0.10.0, Test: Twitter HTTP Requests (ms, Fewer Is Better)
Default: 6118.78 (SE +/- 57.24, N = 5; Min: 6006.62 / Avg: 6118.78 / Max: 6339.16)
mitigations=off: 6172.60 (SE +/- 36.16, N = 5; Min: 6080.55 / Avg: 6172.6 / Max: 6299.13)

Renaissance 0.10.0, Test: In-Memory Database Shootout (ms, Fewer Is Better)
Default: 18901.32 (SE +/- 171.28, N = 7; Min: 18214.25 / Avg: 18901.32 / Max: 19474.53)
mitigations=off: 19055.47 (SE +/- 197.89, N = 15; Min: 17522.12 / Avg: 19055.47 / Max: 20456.66)

Renaissance 0.10.0, Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better)
Default: 30002.43 (SE +/- 213.94, N = 15; Min: 28768.44 / Avg: 30002.43 / Max: 31440.18)
mitigations=off: 30102.52 (SE +/- 325.81, N = 15; Min: 28352.58 / Avg: 30102.52 / Max: 32899.33)

Renaissance 0.10.0, Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better)
mitigations=off: 6637.16 (SE +/- 60.53, N = 5; Min: 6499.02 / Avg: 6637.15 / Max: 6836.12)
Default: 6677.15 (SE +/- 59.27, N = 7; Min: 6343.85 / Avg: 6677.14 / Max: 6804.79)

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4, Test: Throughput (Messages Per Second, More Is Better)
mitigations=off: 320939 (SE +/- 2916.53, N = 5; Min: 312135 / Avg: 320939.2 / Max: 329587)
Default: 315557 (SE +/- 5074.55, N = 25; Min: 251266 / Avg: 315556.84 / Max: 341367)

Sockperf 3.4, Test: Latency Ping Pong (usec, Fewer Is Better)
Default: 8.347 (SE +/- 0.085, N = 5; Min: 8.19 / Avg: 8.35 / Max: 8.67)
mitigations=off: 8.362 (SE +/- 0.091, N = 5; Min: 8.15 / Avg: 8.36 / Max: 8.66)

Compiler notes: (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds, Fewer Is Better)
Default: 59.56 (SE +/- 0.21, N = 3; Min: 59.35 / Avg: 59.56 / Max: 59.98)
mitigations=off: 60.18 (SE +/- 0.09, N = 3; Min: 60.02 / Avg: 60.18 / Max: 60.35)
Compiler notes: (CC) gcc options: -O2 -ldl -lz -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: SqueezeNet (Microseconds, Fewer Is Better)
mitigations=off: 47385.9 (SE +/- 403.24, N = 3; Min: 46579.4 / Avg: 47385.87 / Max: 47791.4)
Default: 47669.7 (SE +/- 202.00, N = 3; Min: 47281.4 / Avg: 47669.73 / Max: 47960.4)

TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, Fewer Is Better)
mitigations=off: 680103 (SE +/- 8121.20, N = 3; Min: 666187 / Avg: 680103.33 / Max: 694315)
Default: 691668 (SE +/- 4490.09, N = 3; Min: 682963 / Avg: 691668 / Max: 697931)

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3, Time To Compile (Seconds, Fewer Is Better)
Default: 74.88 (SE +/- 0.55, N = 3; Min: 73.83 / Avg: 74.88 / Max: 75.71)
mitigations=off: 74.94 (SE +/- 0.45, N = 3; Min: 74.05 / Avg: 74.94 / Max: 75.4)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.10.20, Time To Compile (Seconds, Fewer Is Better)
mitigations=off: 24.76 (SE +/- 0.37, N = 15; Min: 23.98 / Avg: 24.76 / Max: 29.75)
Default: 24.78 (SE +/- 0.34, N = 13; Min: 24.11 / Avg: 24.78 / Max: 28.81)

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0, Time To Compile (Seconds, Fewer Is Better)
Default: 18.96 (SE +/- 0.08, N = 3; Min: 18.84 / Avg: 18.96 / Max: 19.12)
mitigations=off: 19.01 (SE +/- 0.06, N = 3; Min: 18.92 / Avg: 19.01 / Max: 19.14)

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built from the Chrome V8 JavaScript engine while itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 15.11, Time To Compile (Seconds, Fewer Is Better)
Default: 101.31 (SE +/- 0.54, N = 3; Min: 100.27 / Avg: 101.31 / Max: 102.07)
mitigations=off: 102.11 (SE +/- 0.24, N = 3; Min: 101.64 / Avg: 102.11 / Max: 102.42)

VOSK Speech Recognition Toolkit

VOSK is an open-source offline speech recognition API/toolkit. VOSK supports speech recognition in 17 languages and has a variety of models available and interfaces for different programming languages. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.

VOSK Speech Recognition Toolkit 0.3.21 (Seconds, Fewer Is Better)
mitigations=off: 28.40 (SE +/- 0.40, N = 3; Min: 27.92 / Avg: 28.4 / Max: 29.19)
Default: 28.62 (SE +/- 0.38, N = 3; Min: 27.86 / Avg: 28.62 / Max: 29)