Intel Kernel Scaling Optimizations On AMD Genoa

AMD EPYC 9654 benchmarks by Michael Larabel for a future article.

HTML result view exported from: https://openbenchmarking.org/result/2304019-NE-INTELKERN69&grr.
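This result file can also be pulled down for local side-by-side comparison runs; a sketch, assuming the Phoronix Test Suite is installed and the public result ID above is still available:

```shell
# Run the same test selection locally and merge with this public result
# (the ID is the OpenBenchmarking.org result referenced above)
phoronix-test-suite benchmark 2304019-NE-INTELKERN69
```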

Test Runs: Clear Linux at 3, 6, 12, 24, 48, 96, 192, and 384 Threads

Processor: AMD EPYC 9654 96-Core @ 2.40GHz (3 / 6 / 12 / 24 / 48 / 96 Cores per run), 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores), 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores / 384 Threads)
Motherboard: AMD Titanite_4G (RTI1004D BIOS)
Chipset: AMD Device 14a4
Memory: 1520GB
Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007
Graphics: ASPEED
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Clear Linux OS 38660
Kernel: 6.2.8-1293.native (x86_64)
Display Server: X Server
Compiler: GCC 12.2.1 20230323 releases/gcc-12.2.0-616-g1b6b7f214c + Clang 15.0.7 + LLVM 15.0.7
File-System: ext4
Screen Resolution: 800x600

Kernel Details: Transparent Huge Pages: always
Environment Details: FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags" CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mrelax-cmpxchg-loop -fvisibility-inlines-hidden -Wl,--enable-new-dtags -std=gnu++17" FCFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags" CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mrelax-cmpxchg-loop" THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""
Compiler Details: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-gnu-indirect-function --enable-host-shared --enable-languages=c,c++,fortran,go,jit --enable-ld=default --enable-libstdcxx-pch --enable-linux-futex --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=x86-64-v3 --with-gcc-major-version-only --with-glibc-version=2.35 --with-gnu-ld --with-isl --with-pic --with-ppl=yes --with-tune=sapphirerapids --with-zstd
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa101111
Python Details: Python 3.11.2
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview

Tests in this comparison (per-test results follow below):
- Timed LLVM Compilation: Build System: Ninja
- PostgreSQL (pgbench): Scaling Factor 1000; Clients 500, 800, and 1000; Read Only and Read Write; TPS and Average Latency
- OpenVKL: vklBenchmark ISPC
- MariaDB (mysqlslap): 512, 1024, 2048, 4096, and 8192 Clients
- ONNX Runtime: ArcFace ResNet-100, GPT-2, fcn-resnet101-11, and super-resolution-10 (CPU, Standard Executor)
- RocksDB: Read While Writing, Update Random, Random Fill, Read Random Write Random, Random Read
- Darmstadt Automotive Parallel Heterogeneous Suite (OpenMP): Points2Image, NDT Mapping, Euclidean Cluster
- Dragonflydb: 50 and 200 Clients at 1:1 and 1:5 Set To Get Ratios
- Memcached: 1:5, 1:10, and 1:100 Set To Get Ratios
- GROMACS: MPI CPU, water_GMX50_bare
- C-Blosc: blosclz shuffle and blosclz bitshuffle
- Stress-NG: Malloc, Poll, Context Switching, Semaphores

Timed LLVM Compilation

Build System: Ninja

Timed LLVM Compilation 16.0. Seconds, Fewer Is Better. Clear Linux, 3 to 384 Threads: Min 193 / Avg 1182 / Max 4282
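Taking the chart endpoints at face value (roughly 4282 seconds at 3 threads and 193 seconds at 384 threads, as read from this result file), the build's scaling efficiency works out as follows; a minimal sketch, assuming the min and max correspond to the 384- and 3-thread runs:

```python
# Estimate parallel speedup and efficiency from the two endpoint runs.
# Times are read off this result file's LLVM-build summary (approximate).
time_3t = 4282.0    # seconds with 3 threads
time_384t = 193.0   # seconds with 384 threads

speedup = time_3t / time_384t        # how much faster the 384-thread run is
thread_ratio = 384 / 3               # 128x more threads
efficiency = speedup / thread_ratio  # fraction of ideal linear scaling

print(f"speedup: {speedup:.1f}x, efficiency: {efficiency:.1%}")
```

A roughly 22x speedup from 128x the threads (about 17% of linear) is typical for a build that becomes link- and I/O-bound at high core counts.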

PostgreSQL

Scaling Factor: 1000 - Clients: 500 - Mode: Read Only - Average Latency

PostgreSQL 15. ms, Fewer Is Better. Clear Linux, 3 to 384 Threads: Min 0.3 / Avg 1.1 / Max 3.5

PostgreSQL

Scaling Factor: 1000 - Clients: 500 - Mode: Read Only

PostgreSQL 15. TPS, More Is Better. Clear Linux, 3 to 384 Threads: Min 144145 / Avg 819327 / Max 1482735
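pgbench's average latency and TPS figures are two views of the same measurement: with a fixed number of concurrent clients, average latency (ms) is approximately clients / TPS x 1000. A quick sanity check against the 500-client read-only numbers above:

```python
# Cross-check pgbench's reported average latency against its TPS figure.
# With N concurrent clients, avg latency (ms) ~= N / TPS * 1000.
clients = 500
tps = 1482735                      # best 500-client read-only TPS in this file
latency_ms = clients / tps * 1000
print(f"{latency_ms:.2f} ms")      # lands near the ~0.3 ms chart minimum
```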

PostgreSQL

Scaling Factor: 1000 - Clients: 800 - Mode: Read Only - Average Latency

PostgreSQL 15. ms, Fewer Is Better. Clear Linux, 3 to 384 Threads: Min 0.6 / Avg 1.8 / Max 6.1

PostgreSQL

Scaling Factor: 1000 - Clients: 800 - Mode: Read Only

PostgreSQL 15. TPS, More Is Better. Clear Linux, 3 to 384 Threads: Min 131518 / Avg 753836 / Max 1387712

OpenVKL

Benchmark: vklBenchmark ISPC

OpenVKL 1.3.1. Items / Sec, More Is Better. Clear Linux, 3 to 384 Threads: Min 48 / Avg 579 / Max 1396

PostgreSQL

Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only - Average Latency

PostgreSQL 15. ms, Fewer Is Better. Clear Linux, 3 to 384 Threads: Min 0.8 / Avg 2.4 / Max 8.1

PostgreSQL

Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only

PostgreSQL 15. TPS, More Is Better. Clear Linux, 3 to 384 Threads: Min 123508 / Avg 729503 / Max 1292521

MariaDB

Clients: 2048

MariaDB 11.0.1. Queries Per Second, More Is Better. Clear Linux, 3 to 384 Threads: Min 268 / Avg 640 / Max 871

MariaDB

Clients: 8192

MariaDB 11.0.1. Queries Per Second, More Is Better. Clear Linux, 3 to 384 Threads: Min 258.0 / Avg 433.4 / Max 613.0

PostgreSQL

Scaling Factor: 1000 - Clients: 800 - Mode: Read Write - Average Latency

PostgreSQL 15. ms, Fewer Is Better. Clear Linux, 3 to 384 Threads: Min 9.7 / Avg 23.4 / Max 66.1

PostgreSQL

Scaling Factor: 1000 - Clients: 800 - Mode: Read Write

PostgreSQL 15. TPS, More Is Better. Clear Linux, 3 to 384 Threads: Min 12107 / Avg 50470 / Max 82701

PostgreSQL

Scaling Factor: 1000 - Clients: 500 - Mode: Read Write - Average Latency

PostgreSQL 15. ms, Fewer Is Better. Clear Linux, 3 to 384 Threads: Min 5.7 / Avg 12.9 / Max 35.9

PostgreSQL

Scaling Factor: 1000 - Clients: 500 - Mode: Read Write

PostgreSQL 15. TPS, More Is Better. Clear Linux, 3 to 384 Threads: Min 13926 / Avg 54944 / Max 87562

MariaDB

Clients: 1024

MariaDB 11.0.1. Queries Per Second, More Is Better. Clear Linux, 3 to 384 Threads: Min 277 / Avg 688 / Max 934

MariaDB

Clients: 4096

MariaDB 11.0.1. Queries Per Second, More Is Better. Clear Linux, 3 to 384 Threads: Min 264 / Avg 504 / Max 672

PostgreSQL

Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write - Average Latency

PostgreSQL 15. ms, Fewer Is Better. Clear Linux, 3 to 384 Threads: Min 12.4 / Avg 31.3 / Max 88.3

PostgreSQL

Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write

PostgreSQL 15. TPS, More Is Better. Clear Linux, 3 to 384 Threads: Min 11332 / Avg 47533 / Max 80760

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

ONNX Runtime 1.14. Inference Time Cost (ms), Fewer Is Better. Clear Linux, 3 to 384 Threads: Min 31.3 / Avg 45.4 / Max 76.7

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

ONNX Runtime 1.14. Inferences Per Second, More Is Better. Clear Linux, 3 to 384 Threads: Min 13.0 / Avg 24.1 / Max 31.9
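ONNX Runtime's two charts per model are reciprocals of each other: inferences per second = 1000 / per-inference latency in ms. The ArcFace endpoints above line up accordingly:

```python
# The throughput chart is the reciprocal of the latency chart.
def inferences_per_second(latency_ms: float) -> float:
    """Inferences per second for a given per-inference latency in ms."""
    return 1000.0 / latency_ms

print(round(inferences_per_second(31.3), 1))  # fastest latency -> ~31.9/sec
print(round(inferences_per_second(76.7), 1))  # slowest latency -> ~13.0/sec
```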

RocksDB

Test: Read While Writing

RocksDB 8.0. Op/s, More Is Better. Clear Linux, 3 to 384 Threads: Min 384607 / Avg 4386946 / Max 12183225

MariaDB

Clients: 512

MariaDB 11.0.1. Queries Per Second, More Is Better. Clear Linux, 3 to 384 Threads: Min 545 / Avg 739 / Max 948

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

ONNX Runtime 1.14. Inference Time Cost (ms), Fewer Is Better. Clear Linux, 3 to 384 Threads: Min 7.6 / Avg 8.8 / Max 10.4

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

ONNX Runtime 1.14. Inferences Per Second, More Is Better. Clear Linux, 3 to 384 Threads: Min 96.1 / Avg 115.8 / Max 132.4

Darmstadt Automotive Parallel Heterogeneous Suite

Backend: OpenMP - Kernel: Points2Image

Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02. Test Cases Per Minute, More Is Better. Clear Linux, 3 to 384 Threads: Min 14662 / Avg 38477 / Max 56002

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:1

Dragonflydb 0.6. Ops/sec, More Is Better. Clear Linux, 3 to 48 Threads: Min 1395674 / Avg 3536216 / Max 5763836

Memcached

Set To Get Ratio: 1:100

Memcached 1.6.19. Ops/sec, More Is Better. Clear Linux, 3 to 384 Threads: Min 731870 / Avg 3817032 / Max 8688154

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

ONNX Runtime 1.14. Inference Time Cost (ms), Fewer Is Better. Clear Linux, 3 to 384 Threads: Min 162 / Avg 486 / Max 1567

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

ONNX Runtime 1.14. Inferences Per Second, More Is Better. Clear Linux, 3 to 384 Threads: Min 0.6 / Avg 3.6 / Max 6.2

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

ONNX Runtime 1.14. Inference Time Cost (ms), Fewer Is Better. Clear Linux, 3 to 384 Threads: Min 8.2 / Avg 11.0 / Max 23.8

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

ONNX Runtime 1.14. Inferences Per Second, More Is Better. Clear Linux, 3 to 384 Threads: Min 42.1 / Avg 102.7 / Max 122.6

GROMACS

Implementation: MPI CPU - Input: water_GMX50_bare

GROMACS 2023. Ns Per Day, More Is Better. Clear Linux, 3 to 384 Threads: Min 0.6 / Avg 8.1 / Max 19.4

C-Blosc

Test: blosclz bitshuffle

C-Blosc 2.3. MB/s, More Is Better. Clear Linux, 3 to 384 Threads: Min 6726 / Avg 13601 / Max 20160

Memcached

Set To Get Ratio: 1:10

Memcached 1.6.19. Ops/sec, More Is Better. Clear Linux, 3 to 384 Threads: Min 737200 / Avg 3283106 / Max 4907076

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:5

Dragonflydb 0.6. Ops/sec, More Is Better. Clear Linux, 3 to 48 Threads: Min 1528577 / Avg 3703197 / Max 5965009

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:1

Dragonflydb 0.6. Ops/sec, More Is Better. Clear Linux, 3 to 192 Threads: Min 1675280 / Avg 4323016 / Max 6281328

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:5

Dragonflydb 0.6. Ops/sec, More Is Better. Clear Linux, 3 to 192 Threads: Min 1846163 / Avg 4507994 / Max 6525050

Memcached

Set To Get Ratio: 1:5

Memcached 1.6.19. Ops/sec, More Is Better. Clear Linux, 3 to 384 Threads: Min 756343 / Avg 2604851 / Max 4000100

RocksDB

Test: Update Random

RocksDB 8.0. Op/s, More Is Better. Clear Linux, 3 to 384 Threads: Min 350254 / Avg 688125 / Max 1210840

RocksDB

Test: Random Fill

RocksDB 8.0. Op/s, More Is Better. Clear Linux, 3 to 384 Threads: Min 488877 / Avg 946580 / Max 1357609

RocksDB

Test: Read Random Write Random

RocksDB 8.0. Op/s, More Is Better. Clear Linux, 3 to 384 Threads: Min 648929 / Avg 2121957 / Max 3652165

RocksDB

Test: Random Read

RocksDB 8.0. Op/s, More Is Better. Clear Linux, 3 to 384 Threads: Min 25468762 / Avg 478817098 / Max 1291741570
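Random reads are the best-scaling RocksDB workload in this file: the summary spans roughly 25.5M to 1.29B op/s, about a 50x gain. The same efficiency arithmetic as for the LLVM build, assuming the chart's min and max correspond to the 3- and 384-thread runs:

```python
# Scaling efficiency for RocksDB random reads across the thread sweep.
min_ops = 25_468_762        # assumed 3-thread endpoint from the summary
max_ops = 1_291_741_570     # assumed 384-thread endpoint from the summary

speedup = max_ops / min_ops
efficiency = speedup / (384 / 3)     # fraction of ideal linear scaling
print(f"speedup: {speedup:.1f}x, efficiency: {efficiency:.1%}")
```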

Darmstadt Automotive Parallel Heterogeneous Suite

Backend: OpenMP - Kernel: NDT Mapping

Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02. Test Cases Per Minute, More Is Better. Clear Linux, 3 to 384 Threads: Min 872 / Avg 1434 / Max 1936

C-Blosc

Test: blosclz shuffle

C-Blosc 2.3. MB/s, More Is Better. Clear Linux, 3 to 384 Threads: Min 14865 / Avg 23548 / Max 32904

Stress-NG

Test: Malloc

Stress-NG 0.15.06. Bogo Ops/s, More Is Better. Clear Linux, 3 to 384 Threads: Min 746208 / Avg 182882102 / Max 605020045

Stress-NG

Test: Poll

Stress-NG 0.15.06. Bogo Ops/s, More Is Better. Clear Linux, 3 to 384 Threads: Min 207642 / Avg 6775160 / Max 27368221

Stress-NG

Test: Context Switching

Stress-NG 0.15.06. Bogo Ops/s, More Is Better. Clear Linux, 3 to 384 Threads: Min 1598346 / Avg 40570104 / Max 128197100

Stress-NG

Test: Semaphores

Stress-NG 0.15.06. Bogo Ops/s, More Is Better. Clear Linux, 3 to 384 Threads: Min 274743 / Avg 9454518 / Max 38002306

Darmstadt Automotive Parallel Heterogeneous Suite

Backend: OpenMP - Kernel: Euclidean Cluster

Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02. Test Cases Per Minute, More Is Better. Clear Linux, 3 to 384 Threads: Min 1580 / Avg 1790 / Max 1909


Phoronix Test Suite v10.8.4