Threadripper 3960X Serve

AMD Ryzen Threadripper 3960X 24-Core testing with an MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS) and Sapphire AMD Radeon RX 5500/5500M / Pro 5500M on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2003124-PTS-THREADRI09

Result Identifier: Threadripper 3960X
Test Date: March 11 2020
Test Run Duration: 12 Hours, 46 Minutes


Processor: AMD Ryzen Threadripper 3960X 24-Core @ 3.80GHz (24 Cores / 48 Threads)
Motherboard: MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 1000GB Sabrent Rocket 4.0 1TB
Graphics: Sapphire AMD Radeon RX 5500/5500M / Pro 5500M
Audio: AMD Navi 10 HDMI Audio
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 20.04
Kernel: 5.3.0-24-generic (x86_64)
Desktop: GNOME Shell 3.34.3
Display Server: X Server 1.20.7
Display Driver: modesetting 1.20.7
Compiler: GCC 9.2.1 20200203
File-System: ext4
Screen Resolution: 3840x2160

System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand
- CPU Microcode: 0x8301025
- OpenJDK Runtime Environment (build 11.0.6+10-post-Ubuntu-2ubuntu2)
- Python 3.8.2
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + tsx_async_abort: Not affected


NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying the problem size. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C - Total Mop/s, More Is Better
Threadripper 3960X: 53732.57 (SE +/- 713.82, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Open MPI 4.0.2

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C - Total Mop/s, More Is Better
Threadripper 3960X: 2228.60 (SE +/- 1.29, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Open MPI 4.0.2

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D - Total Mop/s, More Is Better
Threadripper 3960X: 2214.97 (SE +/- 3.55, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Open MPI 4.0.2

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C - Total Mop/s, More Is Better
Threadripper 3960X: 17787.51 (SE +/- 58.59, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Open MPI 4.0.2

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C - Total Mop/s, More Is Better
Threadripper 3960X: 51758.64 (SE +/- 171.84, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Open MPI 4.0.2

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C - Total Mop/s, More Is Better
Threadripper 3960X: 25031.78 (SE +/- 180.52, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Open MPI 4.0.2

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B - Total Mop/s, More Is Better
Threadripper 3960X: 23617.53 (SE +/- 213.06, N = 15)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Open MPI 4.0.2

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark - Figure Of Merit, More Is Better
Threadripper 3960X: 25659.29 (SE +/- 7.68, N = 3)
1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics benchmark. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 - z/s, More Is Better
Threadripper 3960X: 9.8792948 (SE +/- 0.0027561, N = 3)
1. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

ArrayFire

ArrayFire is a GPU and CPU numeric processing library; this test uses the built-in CPU and OpenCL ArrayFire benchmarks. Learn more via the OpenBenchmarking.org test page.
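To make the BLAS CPU result below more concrete, here is a minimal sketch of timing a large single-precision matrix multiply with ArrayFire's CPU backend and converting the runtime to GFLOPS. The matrix size, warm-up strategy, and build command (linking the unified backend, e.g. g++ sketch.cpp -laf) are illustrative assumptions, not taken from the actual test profile.

// Hypothetical sketch, not the ArrayFire benchmark source: time one large
// SGEMM on the CPU backend and report GFLOPS.
#include <arrayfire.h>
#include <cstdio>

int main() {
    af::setBackend(AF_BACKEND_CPU);           // force the CPU backend (unified lib)
    const int n = 2048;                       // assumed problem size
    af::array a = af::randu(n, n);            // random f32 inputs
    af::array b = af::randu(n, n);

    af::matmul(a, b).eval();                  // warm-up run
    af::sync();

    af::timer t = af::timer::start();
    af::array c = af::matmul(a, b);
    c.eval();                                 // force evaluation (ArrayFire is lazy)
    af::sync();                               // wait for the backend to finish
    double secs = af::timer::stop(t);

    double gflops = 2.0 * n * n * n / secs / 1e9;   // ~2*n^3 flops for matmul
    std::printf("%dx%d SGEMM: %.2f GFLOPS\n", n, n, gflops);
    return 0;
}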

ArrayFire 3.7 - Test: BLAS CPU - GFLOPS, More Is Better
Threadripper 3960X: 679.05 (SE +/- 3.55, N = 3)
1. (CXX) g++ options: -rdynamic

ArrayFire 3.7 - Test: Conjugate Gradient CPU - ms, Fewer Is Better
Threadripper 3960X: 25.74 (SE +/- 0.04, N = 3)
1. (CXX) g++ options: -rdynamic

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.6.0 - Video Input: Chimera 1080p - FPS, More Is Better
Threadripper 3960X: 599.97 (SE +/- 1.23, N = 3, MIN: 446.11 / MAX: 736.51)
1. (CC) gcc options: -pthread

dav1d 0.6.0 - Video Input: Summer Nature 4K - FPS, More Is Better
Threadripper 3960X: 306.00 (SE +/- 0.09, N = 3, MIN: 196.7 / MAX: 324.81)
1. (CC) gcc options: -pthread

dav1d 0.6.0 - Video Input: Summer Nature 1080p - FPS, More Is Better
Threadripper 3960X: 705.32 (SE +/- 1.59, N = 3, MIN: 438.58 / MAX: 775.33)
1. (CC) gcc options: -pthread

dav1d 0.6.0 - Video Input: Chimera 1080p 10-bit - FPS, More Is Better
Threadripper 3960X: 129.70 (SE +/- 0.38, N = 3, MIN: 88.56 / MAX: 239)
1. (CC) gcc options: -pthread

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
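As a rough illustration of what is being timed below, the following C++ sketch compresses an in-memory buffer at zstd level 19 via libzstd and reports the elapsed wall-clock time. The buffer contents and size are placeholders; the test profile itself compresses the ubuntu-16.04.3-server-i386.img file. Build with -lzstd.

// Minimal sketch of the measurement: wall-clock time for a level-19 compress.
#include <zstd.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    std::vector<char> input(64 * 1024 * 1024, 'x');          // stand-in payload
    std::vector<char> output(ZSTD_compressBound(input.size()));

    auto start = std::chrono::steady_clock::now();
    size_t written = ZSTD_compress(output.data(), output.size(),
                                   input.data(), input.size(), 19);
    auto stop = std::chrono::steady_clock::now();

    if (ZSTD_isError(written)) {
        std::fprintf(stderr, "zstd error: %s\n", ZSTD_getErrorName(written));
        return 1;
    }
    double secs = std::chrono::duration<double>(stop - start).count();
    std::printf("level 19: %zu -> %zu bytes in %.2f s\n",
                input.size(), written, secs);
    return 0;
}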

Zstd Compression 1.3.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19 - Seconds, Fewer Is Better
Threadripper 3960X: 10.44 (SE +/- 0.10, N = 3)
1. (CC) gcc options: -O3 -pthread -lz -llzma

LevelDB

LevelDB is a key-value storage library developed by Google that can use Snappy for data compression and offers other modern features. Learn more via the OpenBenchmarking.org test page.
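The gap between the Fill Sync result and the other fill results below comes down to LevelDB's WriteOptions::sync flag, which forces each write to be flushed to stable storage before returning. A minimal sketch of both write paths, not the db_bench tool the test profile actually runs, with an assumed database path and key layout (build with -lleveldb -lpthread):

// Sketch: buffered writes vs. synchronous (fsync'd) writes in LevelDB.
#include <leveldb/db.h>
#include <cassert>
#include <string>

int main() {
    leveldb::DB* db = nullptr;
    leveldb::Options options;
    options.create_if_missing = true;
    leveldb::Status s = leveldb::DB::Open(options, "/tmp/leveldb-sketch", &db);
    assert(s.ok());

    leveldb::WriteOptions buffered;           // default: no fsync per write
    leveldb::WriteOptions durable;
    durable.sync = true;                      // the Fill Sync benchmark's mode

    for (int i = 0; i < 1000; ++i) {
        std::string key = "key" + std::to_string(i);
        db->Put(buffered, key, std::string(100, 'v'));   // fast fill path
    }
    db->Put(durable, "durable-key", "value");            // much slower per op

    std::string value;
    s = db->Get(leveldb::ReadOptions(), "key42", &value); // read path
    assert(s.ok());

    delete db;
    return 0;
}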

LevelDB 1.22 - Benchmark: Hot Read - Microseconds Per Op, Fewer Is Better
Threadripper 3960X: 31.29 (SE +/- 0.33, N = 13)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Fill Sync - MB/s, More Is Better
Threadripper 3960X: 2.7 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Fill Sync - Microseconds Per Op, Fewer Is Better
Threadripper 3960X: 1920.64 (SE +/- 6.49, N = 3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Overwrite - MB/s, More Is Better
Threadripper 3960X: 25.3 (SE +/- 0.06, N = 3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Overwrite - Microseconds Per Op, Fewer Is Better
Threadripper 3960X: 209.69 (SE +/- 0.44, N = 3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Random Fill - MB/s, More Is Better
Threadripper 3960X: 25.3 (SE +/- 0.09, N = 3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Random Fill - Microseconds Per Op, Fewer Is Better
Threadripper 3960X: 210.10 (SE +/- 0.50, N = 3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Random Read - Microseconds Per Op, Fewer Is Better
Threadripper 3960X: 31.22 (SE +/- 0.08, N = 3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Seek Random - Microseconds Per Op, Fewer Is Better
Threadripper 3960X: 48.89 (SE +/- 0.12, N = 3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Random Delete - Microseconds Per Op, Fewer Is Better
Threadripper 3960X: 201.19 (SE +/- 0.40, N = 3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Sequential Fill - MB/s, More Is Better
Threadripper 3960X: 26.0 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Sequential Fill - Microseconds Per Op, Fewer Is Better
Threadripper 3960X: 204.02 (SE +/- 0.25, N = 3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

Intel MPI Benchmarks

Intel MPI Benchmarks is a suite for stressing MPI implementations. At this point the test profile aggregates results for some common MPI functionality. Learn more via the OpenBenchmarking.org test page.
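The PingPong and Sendrecv figures below are latency/bandwidth-style measurements between ranks. A bare-bones C++ ping-pong using plain MPI_Send/MPI_Recv, with an assumed message size and iteration count rather than the IMB parameter sweep, looks roughly like this (compile with mpicxx, run with mpirun -np 2):

// Sketch of a two-rank ping-pong; reports average one-way latency and bandwidth.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {                                  // needs at least two ranks
        if (rank == 0) std::printf("run with at least 2 ranks\n");
        MPI_Finalize();
        return 0;
    }

    const int msg_bytes = 1 << 20;                   // assumed 1 MiB messages
    const int iters = 1000;
    std::vector<char> buf(msg_bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();
    for (int i = 0; i < iters; ++i) {
        if (rank == 0) {
            MPI_Send(buf.data(), msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf.data(), msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf.data(), msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf.data(), msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0) {
        double one_way_usec = elapsed / iters / 2.0 * 1e6;
        double mbytes_per_sec = (2.0 * iters * msg_bytes) / elapsed / 1e6;
        std::printf("avg one-way latency: %.2f usec, bandwidth: %.1f MB/s\n",
                    one_way_usec, mbytes_per_sec);
    }
    MPI_Finalize();
    return 0;
}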

Intel MPI Benchmarks 2019.3 - Test: IMB-P2P PingPong - Average Msg/sec, More Is Better
Threadripper 3960X: 10065840.48 (SE +/- 33485.31, N = 3, MIN: 4142 / MAX: 26830191)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 Exchange - Average Mbytes/sec, More Is Better
Threadripper 3960X: 4872.71 (SE +/- 19.02, N = 3, MAX: 20374.68)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 Exchange - Average usec, Fewer Is Better
Threadripper 3960X: 163.38 (SE +/- 0.26, N = 3, MIN: 1.13 / MAX: 3586.05)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 PingPong - Average Mbytes/sec, More Is Better
Threadripper 3960X: 4711.08 (SE +/- 162.37, N = 12, MIN: 4.14 / MAX: 17926.18)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 Sendrecv - Average Mbytes/sec, More Is Better
Threadripper 3960X: 3708.93 (SE +/- 52.00, N = 3, MAX: 16680.96)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 Sendrecv - Average usec, Fewer Is Better
Threadripper 3960X: 99.24 (SE +/- 0.98, N = 3, MIN: 0.71 / MAX: 1816.5)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Memcached mcperf

This is a test of twemperf/mcperf with memcached, a distributed memory object caching system. Learn more via the OpenBenchmarking.org test page.
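For context on what the Set/Get/Add/Delete methods below exercise, here is a hypothetical C++ snippet that issues one set and one get over memcached's plain text protocol to a local server on the default port 11211. mcperf itself drives many connections and counts operations per second; this only shows the wire-level requests, and the key name and payload are made up.

// Sketch of the memcached text protocol: one "set" and one "get" over a socket.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(11211);                        // default memcached port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }

    // "set <key> <flags> <exptime> <bytes>\r\n<data>\r\n" -> "STORED\r\n"
    const char* set_cmd = "set bench:1 0 0 5\r\nhello\r\n";
    write(fd, set_cmd, strlen(set_cmd));

    char reply[512];
    ssize_t got = read(fd, reply, sizeof(reply) - 1);
    if (got > 0) { reply[got] = '\0'; printf("set reply: %s", reply); }

    // "get <key>\r\n" -> "VALUE <key> <flags> <bytes>\r\n<data>\r\nEND\r\n"
    const char* get_cmd = "get bench:1\r\n";
    write(fd, get_cmd, strlen(get_cmd));

    got = read(fd, reply, sizeof(reply) - 1);
    if (got > 0) { reply[got] = '\0'; printf("get reply: %s", reply); }

    close(fd);
    return 0;
}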

Memcached mcperf 1.6.0 - Method: Add - Connections: 1 - Operations Per Second, More Is Better
Threadripper 3960X: 44030.6 (SE +/- 607.88, N = 4)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Add - Connections: 4 - Operations Per Second, More Is Better
Threadripper 3960X: 44235.9 (SE +/- 611.03, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Get - Connections: 1 - Operations Per Second, More Is Better
Threadripper 3960X: 75360.9 (SE +/- 932.18, N = 15)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Get - Connections: 4 - Operations Per Second, More Is Better
Threadripper 3960X: 75558.3 (SE +/- 1054.34, N = 4)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Set - Connections: 1 - Operations Per Second, More Is Better
Threadripper 3960X: 45948.9 (SE +/- 1500.42, N = 15)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Set - Connections: 4 - Operations Per Second, More Is Better
Threadripper 3960X: 46764.2 (SE +/- 1225.03, N = 12)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Add - Connections: 16 - Operations Per Second, More Is Better
Threadripper 3960X: 45946.7 (SE +/- 712.79, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Add - Connections: 32 - Operations Per Second, More Is Better
Threadripper 3960X: 45428.8 (SE +/- 497.07, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Get - Connections: 16 - Operations Per Second, More Is Better
Threadripper 3960X: 73352.3 (SE +/- 425.21, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Get - Connections: 32 - Operations Per Second, More Is Better
Threadripper 3960X: 73844.6 (SE +/- 340.92, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Set - Connections: 16 - Operations Per Second, More Is Better
Threadripper 3960X: 45556.0 (SE +/- 582.52, N = 4)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Set - Connections: 32 - Operations Per Second, More Is Better
Threadripper 3960X: 45214.8 (SE +/- 539.06, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Append - Connections: 1 - Operations Per Second, More Is Better
Threadripper 3960X: 48558.8 (SE +/- 1258.42, N = 15)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Append - Connections: 4 - Operations Per Second, More Is Better
Threadripper 3960X: 47623.8 (SE +/- 604.05, N = 15)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Delete - Connections: 1 - Operations Per Second, More Is Better
Threadripper 3960X: 73414.5 (SE +/- 667.09, N = 15)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Delete - Connections: 4 - Operations Per Second, More Is Better
Threadripper 3960X: 72396.3 (SE +/- 565.75, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Append - Connections: 16 - Operations Per Second, More Is Better
Threadripper 3960X: 47450.1 (SE +/- 528.16, N = 7)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Append - Connections: 32 - Operations Per Second, More Is Better
Threadripper 3960X: 49217.9 (SE +/- 681.52, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Delete - Connections: 16 - Operations Per Second, More Is Better
Threadripper 3960X: 73663.6 (SE +/- 870.48, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Delete - Connections: 32 - Operations Per Second, More Is Better
Threadripper 3960X: 74603.7 (SE +/- 204.33, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Prepend - Connections: 1 - Operations Per Second, More Is Better
Threadripper 3960X: 46565.0 (SE +/- 666.78, N = 15)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Prepend - Connections: 4 - Operations Per Second, More Is Better
Threadripper 3960X: 48089.2 (SE +/- 627.54, N = 15)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Replace - Connections: 1 - Operations Per Second, More Is Better
Threadripper 3960X: 48226.2 (SE +/- 1308.17, N = 15)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Replace - Connections: 4 - Operations Per Second, More Is Better
Threadripper 3960X: 47646.6 (SE +/- 745.41, N = 15)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Prepend - Connections: 16 - Operations Per Second, More Is Better
Threadripper 3960X: 47283.9 (SE +/- 498.40, N = 8)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Prepend - Connections: 32 - Operations Per Second, More Is Better
Threadripper 3960X: 46803.1 (SE +/- 521.43, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Replace - Connections: 16 - Operations Per Second, More Is Better
Threadripper 3960X: 47158.5 (SE +/- 189.80, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf 1.6.0 - Method: Replace - Connections: 32 - Operations Per Second, More Is Better
Threadripper 3960X: 47257.6 (SE +/- 570.08, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system, inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.

Apache HBase 2.2.3 - Test: Increment - Clients: 1 - Rows Per Second, More Is Better
Threadripper 3960X: 8855 (SE +/- 136.15, N = 12)

Apache HBase 2.2.3 - Test: Increment - Clients: 1 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 112 (SE +/- 1.79, N = 12)

Apache HBase 2.2.3 - Test: Increment - Clients: 32 - Rows Per Second, More Is Better
Threadripper 3960X: 90187 (SE +/- 784.04, N = 3)

Apache HBase 2.2.3 - Test: Increment - Clients: 32 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 352 (SE +/- 2.40, N = 3)

Apache HBase 2.2.3 - Test: Random Read - Clients: 1 - Rows Per Second, More Is Better
Threadripper 3960X: 10047 (SE +/- 172.34, N = 3)

Apache HBase 2.2.3 - Test: Random Read - Clients: 1 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 98

Apache HBase 2.2.3 - Test: Random Read - Clients: 32 - Rows Per Second, More Is Better
Threadripper 3960X: 222716 (SE +/- 2131.09, N = 3)

Apache HBase 2.2.3 - Test: Random Read - Clients: 32 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 141 (SE +/- 1.45, N = 3)

Apache HBase 2.2.3 - Test: Random Write - Clients: 1 - Rows Per Second, More Is Better
Threadripper 3960X: 76185 (SE +/- 847.34, N = 13)

Apache HBase 2.2.3 - Test: Random Write - Clients: 1 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 12 (SE +/- 0.17, N = 13)

Apache HBase 2.2.3 - Test: Random Write - Clients: 32 - Rows Per Second, More Is Better
Threadripper 3960X: 243850 (SE +/- 45663.62, N = 12)

Apache HBase 2.2.3 - Test: Random Write - Clients: 32 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 210 (SE +/- 46.80, N = 12)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 1 - Rows Per Second, More Is Better
Threadripper 3960X: 12739 (SE +/- 176.52, N = 15)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 1 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 77 (SE +/- 1.10, N = 15)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 32 - Rows Per Second, More Is Better
Threadripper 3960X: 177096 (SE +/- 2490.63, N = 3)

Apache HBase 2.2.3 - Test: Sequential Read - Clients: 32 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 179 (SE +/- 2.33, N = 3)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 1 - Rows Per Second, More Is Better
Threadripper 3960X: 95052 (SE +/- 1241.50, N = 15)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 1 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 10 (SE +/- 0.21, N = 15)

Apache HBase 2.2.3 - Test: Async Random Read - Clients: 1 - Rows Per Second, More Is Better
Threadripper 3960X: 12797 (SE +/- 265.89, N = 15)

Apache HBase 2.2.3 - Test: Async Random Read - Clients: 1 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 77 (SE +/- 1.66, N = 15)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 32 - Rows Per Second, More Is Better
Threadripper 3960X: 454247 (SE +/- 17595.01, N = 12)

Apache HBase 2.2.3 - Test: Sequential Write - Clients: 32 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 89 (SE +/- 12.11, N = 12)

Apache HBase 2.2.3 - Test: Async Random Read - Clients: 32 - Rows Per Second, More Is Better
Threadripper 3960X: 96892 (SE +/- 891.29, N = 3)

Apache HBase 2.2.3 - Test: Async Random Read - Clients: 32 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 328 (SE +/- 2.89, N = 3)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 1 - Rows Per Second, More Is Better
Threadripper 3960X: 5262 (SE +/- 64.36, N = 15)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 1 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 189 (SE +/- 2.32, N = 15)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 32 - Rows Per Second, More Is Better
Threadripper 3960X: 52670 (SE +/- 1452.47, N = 12)

Apache HBase 2.2.3 - Test: Async Random Write - Clients: 32 - Microseconds - Average Latency, Fewer Is Better
Threadripper 3960X: 611 (SE +/- 20.54, N = 12)

90 Results Shown

NAS Parallel Benchmarks:
  BT.C
  EP.C
  EP.D
  FT.C
  LU.C
  MG.C
  SP.B
Algebraic Multi-Grid Benchmark
LULESH
ArrayFire:
  BLAS CPU
  Conjugate Gradient CPU
dav1d:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
Zstd Compression
LevelDB:
  Hot Read
  Fill Sync
  Fill Sync
  Overwrite
  Overwrite
  Rand Fill
  Rand Fill
  Rand Read
  Seek Rand
  Rand Delete
  Seq Fill
  Seq Fill
Intel MPI Benchmarks:
  IMB-P2P PingPong
  IMB-MPI1 Exchange
  IMB-MPI1 Exchange
  IMB-MPI1 PingPong
  IMB-MPI1 Sendrecv
  IMB-MPI1 Sendrecv
Memcached mcperf:
  Add - 1
  Add - 4
  Get - 1
  Get - 4
  Set - 1
  Set - 4
  Add - 16
  Add - 32
  Get - 16
  Get - 32
  Set - 16
  Set - 32
  Append - 1
  Append - 4
  Delete - 1
  Delete - 4
  Append - 16
  Append - 32
  Delete - 16
  Delete - 32
  Prepend - 1
  Prepend - 4
  Replace - 1
  Replace - 4
  Prepend - 16
  Prepend - 32
  Replace - 16
  Replace - 32
Apache HBase:
  Increment - 1:
    Rows Per Second
    Microseconds - Average Latency
  Increment - 32:
    Rows Per Second
    Microseconds - Average Latency
  Rand Read - 1:
    Rows Per Second
    Microseconds - Average Latency
  Rand Read - 32:
    Rows Per Second
    Microseconds - Average Latency
  Rand Write - 1:
    Rows Per Second
    Microseconds - Average Latency
  Rand Write - 32:
    Rows Per Second
    Microseconds - Average Latency
  Seq Read - 1:
    Rows Per Second
    Microseconds - Average Latency
  Seq Read - 32:
    Rows Per Second
    Microseconds - Average Latency
  Seq Write - 1:
    Rows Per Second
    Microseconds - Average Latency
  Async Rand Read - 1:
    Rows Per Second
    Microseconds - Average Latency
  Seq Write - 32:
    Rows Per Second
    Microseconds - Average Latency
  Async Rand Read - 32:
    Rows Per Second
    Microseconds - Average Latency
  Async Rand Write - 1:
    Rows Per Second
    Microseconds - Average Latency
  Async Rand Write - 32:
    Rows Per Second
    Microseconds - Average Latency