Intel Kernel Scaling Optimizations On AMD Genoa

AMD EPYC 9654 benchmarks by Michael Larabel for a future article.

HTML result view exported from: https://openbenchmarking.org/result/2304019-NE-INTELKERN69&gru&sor.

System Details (Clear Linux, identical across the eight configurations except for enabled core/thread count):

Processor: AMD EPYC 9654 96-Core @ 2.40GHz — tested with 3, 6, 12, 24, 48, and 96 cores enabled; 2 x AMD EPYC 9654 96-Core @ 2.40GHz for the 192-core and 192-core / 384-thread configurations
Motherboard: AMD Titanite_4G (RTI1004D BIOS)
Chipset: AMD Device 14a4
Memory: 1520GB
Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007
Graphics: ASPEED
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Clear Linux OS 38660
Kernel: 6.2.8-1293.native (x86_64)
Display Server: X Server
Compiler: GCC 12.2.1 20230323 releases/gcc-12.2.0-616-g1b6b7f214c + Clang 15.0.7 + LLVM 15.0.7
File-System: ext4
Screen Resolution: 800x600

Kernel Details: Transparent Huge Pages: always

Environment Details: FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags" CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mrelax-cmpxchg-loop -fvisibility-inlines-hidden -Wl,--enable-new-dtags -std=gnu++17" FCFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags" CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mrelax-cmpxchg-loop" THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""

Compiler Details: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-host-shared --enable-languages=c,c++,fortran,go,jit --enable-ld=default --enable-libstdcxx-pch --enable-linux-futex --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=x86-64-v3 --with-gcc-major-version-only --with-glibc-version=2.35 --with-gnu-ld --with-isl --with-pic --with-ppl=yes --with-tune=sapphirerapids --with-zstd

Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa101111

Python Details: Python 3.11.2

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Benchmarks included in this comparison, with per-configuration results presented in the detailed sections below: Stress-NG 0.15.06 (Poll, Malloc, Semaphores, Context Switching); ONNX Runtime 1.14 (GPT-2, fcn-resnet101-11, ArcFace ResNet-100, and super-resolution-10 on CPU with the Standard executor, measured both as inferences per second and inference time cost); OpenVKL 1.3.1 (vklBenchmark ISPC); C-Blosc 2.3 (blosclz shuffle and blosclz bitshuffle); GROMACS 2023 (MPI CPU, water_GMX50_bare); RocksDB 8.0 (Random Fill, Random Read, Update Random, Read While Writing, Read Random Write Random); Memcached 1.6.19 (1:5, 1:10, and 1:100 set-to-get ratios); Dragonflydb 0.6 (50 and 200 clients at 1:1 and 1:5 set-to-get ratios); MariaDB 11.0.1 mysqlslap (512 to 8192 clients); Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 (OpenMP NDT Mapping, Points2Image, Euclidean Cluster); PostgreSQL 15 pgbench (scaling factor 1000; 500, 800, and 1000 clients; read-only and read-write throughput plus average latency); and Timed LLVM Compilation 16.0 (Ninja build system).

Stress-NG

Test: Poll

Clear Linux: Min 207642 / Avg 6775160 / Max 27368221 Bogo Ops/s (Stress-NG 0.15.06; more is better). Per-configuration ranking, best to worst: 384, 192, 96, 48, 24, 12, 6, 3 Threads.

Stress-NG

Test: Malloc

Clear Linux: Min 746208 / Avg 182882102 / Max 605020045 Bogo Ops/s (Stress-NG 0.15.06; more is better). Per-configuration ranking, best to worst: 384, 192, 96, 48, 24, 12, 6, 3 Threads.

Stress-NG

Test: Semaphores

Clear Linux: Min 274743 / Avg 9454518 / Max 38002306 Bogo Ops/s (Stress-NG 0.15.06; more is better). Per-configuration ranking, best to worst: 384, 192, 96, 48, 24, 12, 6, 3 Threads.

Stress-NG

Test: Context Switching

Clear Linux: Min 1598346 / Avg 40570104 / Max 128197100 Bogo Ops/s (Stress-NG 0.15.06; more is better). Per-configuration ranking, best to worst: 384, 192, 96, 48, 24, 12, 6, 3 Threads.

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

Clear Linux: Min 96.1 / Avg 115.8 / Max 132.4 Inferences Per Second (ONNX Runtime 1.14; more is better). Per-configuration ranking, best to worst: 24, 48, 96, 192, 12, 3, 384, 6 Threads.

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

Clear Linux: Min 0.6 / Avg 3.6 / Max 6.2 Inferences Per Second (ONNX Runtime 1.14; more is better). Per-configuration ranking, best to worst: 96, 48, 192, 384, 24, 12, 6, 3 Threads.

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

Clear Linux: Min 13.0 / Avg 24.1 / Max 31.9 Inferences Per Second (ONNX Runtime 1.14; more is better). Per-configuration ranking, best to worst: 96, 24, 48, 12, 192, 6, 384, 3 Threads.

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

Clear Linux: Min 42.1 / Avg 102.7 / Max 122.6 Inferences Per Second (ONNX Runtime 1.14; more is better). Per-configuration ranking, best to worst: 24, 192, 96, 48, 12, 384, 6, 3 Threads.

OpenVKL

Benchmark: vklBenchmark ISPC

Clear Linux: Min 48 / Avg 579 / Max 1396 Items / Sec (OpenVKL 1.3.1; more is better). Per-configuration ranking, best to worst: 384, 192, 96, 48, 24, 12, 6, 3 Threads.

C-Blosc

Test: blosclz shuffle

Clear Linux: Min 14865 / Avg 23548 / Max 32904 MB/s (C-Blosc 2.3; more is better). Per-configuration ranking, best to worst: 48, 24, 96, 12, 6, 192, 384, 3 Threads.

C-Blosc

Test: blosclz bitshuffle

Clear Linux: Min 6726 / Avg 13601 / Max 20160 MB/s (C-Blosc 2.3; more is better). Per-configuration ranking, best to worst: 96, 48, 24, 192, 12, 384, 6, 3 Threads.

GROMACS

Implementation: MPI CPU - Input: water_GMX50_bare

Clear Linux: Min 0.6 / Avg 8.1 / Max 19.4 Ns Per Day (GROMACS 2023; more is better). Per-configuration ranking, best to worst: 384, 192, 96, 48, 24, 12, 6, 3 Threads.

RocksDB

Test: Random Fill

Clear Linux: Min 488877 / Avg 946580 / Max 1357609 Op/s (RocksDB 8.0; more is better). Per-configuration ranking, best to worst: 96, 48, 6, 24, 12, 3, 192, 384 Threads.

RocksDB

Test: Random Read

Clear Linux: Min 25468762 / Avg 478817098 / Max 1291741570 Op/s (RocksDB 8.0; more is better). Per-configuration ranking, best to worst: 192, 384, 96, 48, 24, 12, 6, 3 Threads.

RocksDB

Test: Update Random

Clear Linux: Min 350254 / Avg 688125 / Max 1210840 Op/s (RocksDB 8.0; more is better). Per-configuration ranking, best to worst: 96, 48, 24, 6, 12, 192, 384, 3 Threads.

RocksDB

Test: Read While Writing

Clear Linux: Min 384607 / Avg 4386946 / Max 12183225 Op/s (RocksDB 8.0; more is better). Per-configuration ranking, best to worst: 384, 192, 96, 48, 24, 12, 6, 3 Threads.

RocksDB

Test: Read Random Write Random

Clear Linux: Min 648929 / Avg 2121957 / Max 3652165 Op/s (RocksDB 8.0; more is better). Per-configuration ranking, best to worst: 96, 48, 24, 192, 384, 12, 6, 3 Threads.

Memcached

Set To Get Ratio: 1:5

Clear Linux: Min 756343 / Avg 2604851 / Max 4000100 Ops/sec (Memcached 1.6.19; more is better). Per-configuration ranking, best to worst: 96, 48, 24, 384, 192, 12, 6, 3 Threads.

Memcached

Set To Get Ratio: 1:10

Clear Linux: Min 737200 / Avg 3283106 / Max 4907076 Ops/sec (Memcached 1.6.19; more is better). Per-configuration ranking, best to worst: 48, 96, 384, 192, 24, 12, 6, 3 Threads.

Memcached

Set To Get Ratio: 1:100

Clear Linux: Min 731870 / Avg 3817032 / Max 8688154 Ops/sec (Memcached 1.6.19; more is better). Per-configuration ranking, best to worst: 384, 192, 96, 48, 24, 12, 6, 3 Threads.

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:1

Clear Linux: Min 1675280 / Avg 4323016 / Max 6281328 Ops/sec (Dragonflydb 0.6; more is better). Per-configuration ranking, best to worst: 96, 192, 48, 24, 12, 6, 3 Threads.

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:5

Clear Linux: Min 1846163 / Avg 4507994 / Max 6525050 Ops/sec (Dragonflydb 0.6; more is better). Per-configuration ranking, best to worst: 96, 192, 48, 24, 12, 6, 3 Threads.

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:1

Clear Linux: Min 1395674 / Avg 3536216 / Max 5763836 Ops/sec (Dragonflydb 0.6; more is better). Per-configuration ranking, best to worst: 48, 24, 12, 6, 3 Threads.

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:5

Clear Linux: Min 1528577 / Avg 3703197 / Max 5965009 Ops/sec (Dragonflydb 0.6; more is better). Per-configuration ranking, best to worst: 48, 24, 12, 6, 3 Threads.

MariaDB

Clients: 512

Clear Linux: Min 545 / Avg 739 / Max 948 Queries Per Second (MariaDB 11.0.1; more is better). Per-configuration ranking, best to worst: 48, 24, 96, 12, 384, 192, 6, 3 Threads.

MariaDB

Clients: 1024

Clear Linux: Min 277 / Avg 688 / Max 934 Queries Per Second (MariaDB 11.0.1; more is better). Per-configuration ranking, best to worst: 48, 96, 24, 384, 12, 192, 6, 3 Threads.

MariaDB

Clients: 2048

Clear Linux: Min 268 / Avg 640 / Max 871 Queries Per Second (MariaDB 11.0.1; more is better). Per-configuration ranking, best to worst: 48, 96, 24, 384, 192, 12, 6, 3 Threads.

MariaDB

Clients: 4096

Clear Linux: Min 264 / Avg 504 / Max 672 Queries Per Second (MariaDB 11.0.1; more is better). Per-configuration ranking, best to worst: 384, 192, 48, 24, 96, 12, 6, 3 Threads.

MariaDB

Clients: 8192

Clear Linux: Min 258.0 / Avg 433.4 / Max 613.0 Queries Per Second (MariaDB 11.0.1; more is better). Per-configuration ranking, best to worst: 384, 192, 24, 48, 96, 12, 6, 3 Threads.

Darmstadt Automotive Parallel Heterogeneous Suite

Backend: OpenMP - Kernel: NDT Mapping

Clear Linux: Min 872 / Avg 1434 / Max 1936 Test Cases Per Minute (DAPHS 2021.11.02; more is better). Per-configuration ranking, best to worst: 24, 12, 6, 48, 3, 96, 384, 192 Threads.

Darmstadt Automotive Parallel Heterogeneous Suite

Backend: OpenMP - Kernel: Points2Image

Clear Linux: Min 14662 / Avg 38477 / Max 56002 Test Cases Per Minute (DAPHS 2021.11.02; more is better). Per-configuration ranking, best to worst: 6, 3, 12, 24, 48, 96, 192, 384 Threads.

Darmstadt Automotive Parallel Heterogeneous Suite

Backend: OpenMP - Kernel: Euclidean Cluster

Clear Linux: Min 1580 / Avg 1790 / Max 1909 Test Cases Per Minute (DAPHS 2021.11.02; more is better). Per-configuration ranking, best to worst: 24, 48, 12, 96, 6, 192, 384, 3 Threads.

PostgreSQL

Scaling Factor: 1000 - Clients: 500 - Mode: Read Only

Clear Linux: Min 144145 / Avg 819327 / Max 1482735 TPS (PostgreSQL 15; more is better). Per-configuration ranking, best to worst: 48, 96, 384, 24, 192, 12, 6, 3 Threads.

PostgreSQL

Scaling Factor: 1000 - Clients: 800 - Mode: Read Only

Clear Linux: Min 131518 / Avg 753836 / Max 1387712 TPS (PostgreSQL 15; more is better). Per-configuration ranking, best to worst: 48, 96, 192, 24, 384, 12, 6, 3 Threads.

PostgreSQL

Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only

Clear Linux: Min 123508 / Avg 729503 / Max 1292521 TPS (PostgreSQL 15; more is better). Per-configuration ranking, best to worst: 48, 96, 384, 192, 24, 12, 6, 3 Threads.

PostgreSQL

Scaling Factor: 1000 - Clients: 500 - Mode: Read Write

Clear Linux: Min 13926 / Avg 54944 / Max 87562 TPS (PostgreSQL 15; more is better). Per-configuration ranking, best to worst: 96, 48, 384, 24, 192, 12, 6, 3 Threads.

PostgreSQL

Scaling Factor: 1000 - Clients: 800 - Mode: Read Write

Clear Linux: Min 12107 / Avg 50470 / Max 82701 TPS (PostgreSQL 15; more is better). Per-configuration ranking, best to worst: 96, 48, 384, 192, 24, 12, 6, 3 Threads.

PostgreSQL

Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write

Clear Linux: Min 11332 / Avg 47533 / Max 80760 TPS (PostgreSQL 15; more is better). Per-configuration ranking, best to worst: 96, 48, 384, 192, 24, 12, 6, 3 Threads.

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

Clear Linux: Min 7.6 / Avg 8.8 / Max 10.4 ms Inference Time Cost (ONNX Runtime 1.14; fewer is better). Per-configuration ranking, best to worst: 24, 48, 96, 12, 192, 3, 384, 6 Threads.

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

Clear Linux: Min 162 / Avg 486 / Max 1567 ms Inference Time Cost (ONNX Runtime 1.14; fewer is better). Per-configuration ranking, best to worst: 96, 48, 192, 384, 24, 12, 6, 3 Threads.

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

Clear Linux: Min 31.3 / Avg 45.4 / Max 76.7 ms Inference Time Cost (ONNX Runtime 1.14; fewer is better). Per-configuration ranking, best to worst: 96, 24, 48, 12, 192, 6, 384, 3 Threads.

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

Clear Linux: Min 8.2 / Avg 11.0 / Max 23.8 ms Inference Time Cost (ONNX Runtime 1.14; fewer is better). Per-configuration ranking, best to worst: 24, 192, 96, 48, 12, 384, 6, 3 Threads.

PostgreSQL

Scaling Factor: 1000 - Clients: 500 - Mode: Read Only - Average Latency

Clear Linux: Min 0.3 / Avg 1.1 / Max 3.5 ms average latency (PostgreSQL 15; fewer is better). Per-configuration ranking, best to worst: 48, 96, 384, 24, 192, 12, 6, 3 Threads.

PostgreSQL

Scaling Factor: 1000 - Clients: 800 - Mode: Read Only - Average Latency

Clear Linux: Min 0.6 / Avg 1.8 / Max 6.1 ms average latency (PostgreSQL 15; fewer is better). Per-configuration ranking, best to worst: 48, 96, 192, 24, 384, 12, 6, 3 Threads.

PostgreSQL

Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only - Average Latency

Clear Linux: Min 0.8 / Avg 2.4 / Max 8.1 ms average latency (PostgreSQL 15; fewer is better). Per-configuration ranking, best to worst: 48, 96, 384, 192, 24, 12, 6, 3 Threads.

PostgreSQL

Scaling Factor: 1000 - Clients: 500 - Mode: Read Write - Average Latency

Clear Linux: Min 5.7 / Avg 12.9 / Max 35.9 ms average latency (PostgreSQL 15; fewer is better). Per-configuration ranking, best to worst: 96, 48, 384, 24, 192, 12, 6, 3 Threads.

PostgreSQL

Scaling Factor: 1000 - Clients: 800 - Mode: Read Write - Average Latency

Clear Linux: Min 9.7 / Avg 23.4 / Max 66.1 ms average latency (PostgreSQL 15; fewer is better). Per-configuration ranking, best to worst: 96, 48, 384, 192, 24, 12, 6, 3 Threads.

PostgreSQL

Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write - Average Latency

Clear Linux: Min 12.4 / Avg 31.3 / Max 88.3 ms average latency (PostgreSQL 15; fewer is better). Per-configuration ranking, best to worst: 96, 48, 384, 192, 24, 12, 6, 3 Threads.

Timed LLVM Compilation

Build System: Ninja

Clear Linux: Min 193 / Avg 1182 / Max 4282 Seconds (Timed LLVM Compilation 16.0; fewer is better). Per-configuration ranking, best to worst: 384, 192, 96, 48, 24, 12, 6, 3 Threads.


Phoronix Test Suite v10.8.4