Intel Kernel Scaling Optimizations On AMD Genoa

AMD EPYC 9654 benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2304019-NE-INTELKERN69
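
As a minimal sketch of what that looks like in practice (assuming the Phoronix Test Suite is already installed and that you want to append your own system to this comparison):

    # Fetch result file 2304019-NE-INTELKERN69 from OpenBenchmarking.org,
    # choose which of its tests to repeat, and record your system as a
    # new result identifier alongside the Clear Linux runs below.
    phoronix-test-suite benchmark 2304019-NE-INTELKERN69
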
The tests in this result file span the following OpenBenchmarking.org categories: C/C++ Compiler Tests (3 tests), CPU Massive (4 tests), Database Test Suite (5 tests), HPC - High Performance Computing (3 tests), Common Kernel Benchmarks (3 tests), Multi-Core (5 tests), Programmer / Developer System Benchmarks (2 tests), Python Tests (2 tests), Server (5 tests), and Server CPU Tests (3 tests).

Test Runs

Result Identifier           Date            Test Duration
Clear Linux: 3 Threads      March 30 2023   8 Hours, 59 Minutes
Clear Linux: 6 Threads      April 01 2023   7 Hours, 46 Minutes
Clear Linux: 12 Threads     April 01 2023   6 Hours, 13 Minutes
Clear Linux: 24 Threads     March 31 2023   6 Hours, 39 Minutes
Clear Linux: 48 Threads     March 31 2023   8 Hours, 29 Minutes
Clear Linux: 96 Threads     March 30 2023   6 Hours, 48 Minutes
Clear Linux: 192 Threads    March 30 2023   10 Hours, 54 Minutes
Clear Linux: 384 Threads    March 29 2023   7 Hours, 57 Minutes

System Details

Processors: AMD EPYC 9654 96-Core @ 2.40GHz (3 Cores / 6 Cores / 12 Cores / 24 Cores / 48 Cores / 96 Cores); 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores); 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores / 384 Threads)
Motherboard: AMD Titanite_4G (RTI1004D BIOS)
Chipset: AMD Device 14a4
Memory: 1520GB
Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007
Graphics: ASPEED
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Clear Linux OS 38660
Kernel: 6.2.8-1293.native (x86_64)
Display Server: X Server
Compiler: GCC 12.2.1 20230323 releases/gcc-12.2.0-616-g1b6b7f214c + Clang 15.0.7 + LLVM 15.0.7
File-System: ext4
Screen Resolution: 800x600

System Logs

- Transparent Huge Pages: always
- FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags"
- CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mrelax-cmpxchg-loop -fvisibility-inlines-hidden -Wl,--enable-new-dtags -std=gnu++17"
- FCFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags"
- CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mrelax-cmpxchg-loop"
- THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""
- GCC configured with: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-gnu-indirect-function --enable-host-shared --enable-languages=c,c++,fortran,go,jit --enable-ld=default --enable-libstdcxx-pch --enable-linux-futex --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=x86-64-v3 --with-gcc-major-version-only --with-glibc-version=2.35 --with-gnu-ld --with-isl --with-pic --with-ppl=yes --with-tune=sapphirerapids --with-zstd
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa101111
- Python 3.11.2
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

The per-configuration result spreadsheet covers the following tests, which are summarized individually in the sections below: Stress-NG (Poll, Malloc, Semaphores, Context Switching), ONNX Runtime (GPT-2, fcn-resnet101-11, ArcFace ResNet-100, and super-resolution-10 on the CPU with the Standard executor, reported as both inferences per second and inference latency), OpenVKL (vklBenchmark ISPC), C-Blosc (blosclz shuffle and blosclz bitshuffle), GROMACS (MPI CPU, water_GMX50_bare), RocksDB (Random Fill, Random Read, Update Random, Read While Writing, Read Random Write Random), Memcached (1:5, 1:10, and 1:100 set-to-get ratios), Dragonflydb (50 and 200 clients at 1:1 and 1:5 set-to-get ratios), MariaDB mysqlslap (512 to 8192 clients), DAPHNE (OpenMP NDT Mapping, Points2Image, and Euclidean Cluster), PostgreSQL pgbench (scaling factor 1000 with 500, 800, and 1000 clients in read-only and read-write modes, reported as TPS and average latency), and the timed LLVM Ninja build.

Stress-NG

Stress-NG 0.15.06 - Test: Poll (Bogo Ops/s, More Is Better) - Clear Linux, 3 to 384 Threads: Min 207642 / Avg 6775160 / Max 27368221

Stress-NG 0.15.06 - Test: Malloc (Bogo Ops/s, More Is Better) - Clear Linux, 3 to 384 Threads: Min 746208 / Avg 182882102 / Max 605020045

Stress-NG 0.15.06 - Test: Semaphores (Bogo Ops/s, More Is Better) - Clear Linux, 3 to 384 Threads: Min 274743 / Avg 9454518 / Max 38002306

Stress-NG 0.15.06 - Test: Context Switching (Bogo Ops/s, More Is Better) - Clear Linux, 3 to 384 Threads: Min 1598346 / Avg 40570104 / Max 128197100
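
For reference, the four stressors summarized above can also be exercised directly with stress-ng; a rough sketch follows (the option values are illustrative and the pts/stress-ng profile's exact settings may differ):

    # One worker per online CPU (a count of 0 means "match the CPU count"),
    # reporting bogo-ops/s after a fixed run time for each stressor.
    stress-ng --poll 0   --timeout 60s --metrics-brief
    stress-ng --malloc 0 --timeout 60s --metrics-brief
    stress-ng --sem 0    --timeout 60s --metrics-brief
    stress-ng --switch 0 --timeout 60s --metrics-brief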

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
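
To reproduce just these ONNX Runtime numbers non-interactively, something along the following lines should work (assuming the OpenBenchmarking.org profile name is onnx; batch mode skips the interactive model/executor prompts once configured):

    # Configure batch-mode defaults once, then run the ONNX Runtime
    # test profile across its model/executor combinations.
    phoronix-test-suite batch-setup
    phoronix-test-suite batch-benchmark onnx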

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better) - Clear Linux, 3 to 384 Threads: Min 96.1 / Avg 115.8 / Max 132.4

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better) - Clear Linux, 3 to 384 Threads: Min 0.6 / Avg 3.6 / Max 6.2

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better) - Clear Linux, 3 to 384 Threads: Min 13.0 / Avg 24.1 / Max 31.9

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better) - Clear Linux, 3 to 384 Threads: Min 42.1 / Avg 102.7 / Max 122.6

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better) - Clear Linux, 3 to 384 Threads: Min 48 / Avg 579 / Max 1396

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3 - Test: blosclz shuffle (MB/s, More Is Better) - Clear Linux, 3 to 384 Threads: Min 14865 / Avg 23548 / Max 32904

C-Blosc 2.3 - Test: blosclz bitshuffle (MB/s, More Is Better) - Clear Linux, 3 to 384 Threads: Min 6726 / Avg 13601 / Max 20160

GROMACS

This test benchmarks the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package with the water_GMX50 data set. The test profile allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
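
A hand-run equivalent of the CPU water_GMX50_bare case looks roughly like the following; the input file names are those typically shipped in the upstream water benchmark archive and the step count is illustrative:

    # Pre-process the water topology into a run input file, then time a
    # short CPU-only MD run; ns/day is printed at the end of the mdrun log.
    gmx grompp -f pme.mdp -c conf.gro -p topol.top -o topol.tpr
    gmx mdrun -s topol.tpr -nsteps 4000 -nb cpu -noconfout -v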

GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better) - Clear Linux, 3 to 384 Threads: Min 0.6 / Avg 8.1 / Max 19.4

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
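
The five workloads below map onto RocksDB's bundled db_bench tool roughly as follows; key counts and thread counts here are placeholders rather than the pts/rocksdb profile's exact settings:

    # Populate a database, then re-use it for the read/update workloads.
    db_bench --benchmarks=fillrandom --num=10000000 --threads=32
    db_bench --benchmarks=readrandom,updaterandom --use_existing_db=1 --threads=32
    db_bench --benchmarks=readwhilewriting,readrandomwriterandom --use_existing_db=1 --threads=32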

RocksDB 8.0 - Test: Random Fill (Op/s, More Is Better) - Clear Linux, 3 to 384 Threads: Min 488877 / Avg 946580 / Max 1357609

RocksDB 8.0 - Test: Random Read (Op/s, More Is Better) - Clear Linux, 3 to 384 Threads: Min 25468762 / Avg 478817098 / Max 1291741570

RocksDB 8.0 - Test: Update Random (Op/s, More Is Better) - Clear Linux, 3 to 384 Threads: Min 350254 / Avg 688125 / Max 1210840

RocksDB 8.0 - Test: Read While Writing (Op/s, More Is Better) - Clear Linux, 3 to 384 Threads: Min 384607 / Avg 4386946 / Max 12183225

RocksDB 8.0 - Test: Read Random Write Random (Op/s, More Is Better) - Clear Linux, 3 to 384 Threads: Min 648929 / Avg 2121957 / Max 3652165

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
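
Sketched out by hand, the memtier_benchmark side of this test looks approximately like the commands below; thread and client counts are illustrative, and the set:get ratio is the parameter varied across the three results that follow:

    # Start a memcached instance, then drive it with memtier_benchmark
    # at a 1:10 set:get ratio over the text protocol for 60 seconds.
    memcached -u nobody -m 4096 -t 16 &
    memtier_benchmark -s 127.0.0.1 -p 11211 --protocol=memcache_text \
        --ratio=1:10 --threads=16 --clients=50 --test-time=60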

Memcached 1.6.19 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better) - Clear Linux, 3 to 384 Threads: Min 756343 / Avg 2604851 / Max 4000100

Memcached 1.6.19 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better) - Clear Linux, 3 to 384 Threads: Min 737200 / Avg 3283106 / Max 4907076

Memcached 1.6.19 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better) - Clear Linux, 3 to 384 Threads: Min 731870 / Avg 3817032 / Max 8688154

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used, a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
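
Since Dragonfly speaks the Redis protocol, the memtier_benchmark invocation is nearly the same as for Memcached; a rough sketch, assuming a Dragonfly server is already listening on the default Redis port:

    # Client and thread counts are illustrative; the results below vary
    # the client count (50 vs 200) and the set:get ratio (1:1 vs 1:5).
    memtier_benchmark -s 127.0.0.1 -p 6379 --protocol=redis \
        --clients=50 --threads=16 --ratio=1:1 --test-time=60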

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better) - Clear Linux, 3 to 192 Threads: Min 1675280 / Avg 4323016 / Max 6281328

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better) - Clear Linux, 3 to 192 Threads: Min 1846163 / Avg 4507994 / Max 6525050

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better) - Clear Linux, 3 to 48 Threads: Min 1395674 / Avg 3536216 / Max 5763836

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better) - Clear Linux, 3 to 48 Threads: Min 1528577 / Avg 3703197 / Max 5965009

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.
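
By hand, a single concurrency point can be approximated with mysqlslap against a local MariaDB server; the auto-generated query mix below is purely illustrative of the 512-client case:

    # Auto-generate a simple schema and queries, then hammer the server
    # with 512 concurrent clients, repeating the run three times.
    mysqlslap --user=root --auto-generate-sql \
        --concurrency=512 --number-of-queries=100000 --iterations=3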

MariaDB 11.0.1 - Clients: 512 (Queries Per Second, More Is Better) - Clear Linux, 3 to 384 Threads: Min 545 / Avg 739 / Max 948

MariaDB 11.0.1 - Clients: 1024 (Queries Per Second, More Is Better) - Clear Linux, 3 to 384 Threads: Min 277 / Avg 688 / Max 934

MariaDB 11.0.1 - Clients: 2048 (Queries Per Second, More Is Better) - Clear Linux, 3 to 384 Threads: Min 268 / Avg 640 / Max 871

MariaDB 11.0.1 - Clients: 4096 (Queries Per Second, More Is Better) - Clear Linux, 3 to 384 Threads: Min 264 / Avg 504 / Max 672

MariaDB 11.0.1 - Clients: 8192 (Queries Per Second, More Is Better) - Clear Linux, 3 to 384 Threads: Min 258.0 / Avg 433.4 / Max 613.0

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, providing OpenCL / CUDA / OpenMP test cases of automotive workloads for evaluating programming models in the context of vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 - Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, More Is Better) - Clear Linux, 3 to 384 Threads: Min 872 / Avg 1434 / Max 1936

Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 - Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute, More Is Better) - Clear Linux, 3 to 384 Threads: Min 14662 / Avg 38477 / Max 56002

Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 - Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute, More Is Better) - Clear Linux, 3 to 384 Threads: Min 1580 / Avg 1790 / Max 1909

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
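
The read-only and read-write cases correspond to pgbench's SELECT-only and default TPC-B-like scripts; a rough equivalent of the scaling-factor-1000 runs follows, with the database name and run time as placeholders:

    createdb pgtest
    pgbench -i -s 1000 pgtest              # initialize at scaling factor 1000
    pgbench -c 500 -j 64 -T 60 -S pgtest   # read only (SELECT-only script)
    pgbench -c 500 -j 64 -T 60 pgtest      # read write (default TPC-B-like)
    # pgbench prints both TPS and average latency in its end-of-run summary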

PostgreSQL 15 - Scaling Factor: 1000 - Clients: 500 - Mode: Read Only (TPS, More Is Better) - Clear Linux, 3 to 384 Threads: Min 144145 / Avg 819327 / Max 1482735

PostgreSQL 15 - Scaling Factor: 1000 - Clients: 800 - Mode: Read Only (TPS, More Is Better) - Clear Linux, 3 to 384 Threads: Min 131518 / Avg 753836 / Max 1387712

PostgreSQL 15 - Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only (TPS, More Is Better) - Clear Linux, 3 to 384 Threads: Min 123508 / Avg 729503 / Max 1292521

PostgreSQL 15 - Scaling Factor: 1000 - Clients: 500 - Mode: Read Write (TPS, More Is Better) - Clear Linux, 3 to 384 Threads: Min 13926 / Avg 54944 / Max 87562

PostgreSQL 15 - Scaling Factor: 1000 - Clients: 800 - Mode: Read Write (TPS, More Is Better) - Clear Linux, 3 to 384 Threads: Min 12107 / Avg 50470 / Max 82701

PostgreSQL 15 - Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write (TPS, More Is Better) - Clear Linux, 3 to 384 Threads: Min 11332 / Avg 47533 / Max 80760

PostgreSQL 15 - Scaling Factor: 1000 - Clients: 500 - Mode: Read Only - Average Latency (ms, Fewer Is Better) - Clear Linux, 3 to 384 Threads: Min 0.3 / Avg 1.1 / Max 3.5

PostgreSQL 15 - Scaling Factor: 1000 - Clients: 800 - Mode: Read Only - Average Latency (ms, Fewer Is Better) - Clear Linux, 3 to 384 Threads: Min 0.6 / Avg 1.8 / Max 6.1

PostgreSQL 15 - Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only - Average Latency (ms, Fewer Is Better) - Clear Linux, 3 to 384 Threads: Min 0.8 / Avg 2.4 / Max 8.1

PostgreSQL 15 - Scaling Factor: 1000 - Clients: 500 - Mode: Read Write - Average Latency (ms, Fewer Is Better) - Clear Linux, 3 to 384 Threads: Min 5.7 / Avg 12.9 / Max 35.9

PostgreSQL 15 - Scaling Factor: 1000 - Clients: 800 - Mode: Read Write - Average Latency (ms, Fewer Is Better) - Clear Linux, 3 to 384 Threads: Min 9.7 / Avg 23.4 / Max 66.1

PostgreSQL 15 - Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write - Average Latency (ms, Fewer Is Better) - Clear Linux, 3 to 384 Threads: Min 12.4 / Avg 31.3 / Max 88.3

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.
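
The timed build boils down to a standard CMake plus Ninja configure-and-build cycle; a rough sketch (the pts/build-llvm profile pins its own LLVM 16.0 sources and CMake options):

    # Configure a release build of LLVM/Clang with Ninja and time the build.
    cmake -S llvm -B build -G Ninja -DCMAKE_BUILD_TYPE=Release \
          -DLLVM_ENABLE_PROJECTS=clang
    time ninja -C build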

Timed LLVM Compilation 16.0 - Build System: Ninja (Seconds, Fewer Is Better) - Clear Linux, 3 to 384 Threads: Min 193 / Avg 1182 / Max 4282