void-musl-init

Void Musl Baseline II

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2007283-AL-VOIDMUSLI68
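For reference, the comparison is a single command on any machine with the Phoronix Test Suite installed; the result ID below is this file's identifier, taken from the line above:

  # Fetch this result file, install the same tests, run them locally,
  # and append your system's numbers for a side-by-side comparison:
  phoronix-test-suite benchmark 2007283-AL-VOIDMUSLI68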
Tests in this comparison by category: CPU Massive (3 tests), Database Test Suite (2 tests), Server (4 tests), Single-Threaded (2 tests).

Run Management

Result Identifier                                      Date Run        Test Duration
Virtual disk - VMware VMXNET3 - 4 x Intel Xeon Gold    July 22 2020    25 Minutes
Void Linux Musl Baseline                               July 22 2020    6 Hours, 8 Minutes
Void Musl Init Run II                                  July 23 2020    5 Hours, 15 Minutes
Average Test Duration                                                  3 Hours, 56 Minutes



void-musl-init - System Details (shared across all three runs unless noted)

  Processor:          4 x Intel Xeon Gold 5218R (7 Cores)
  Motherboard:        Intel 440BX (VMW71.00V.13989454.B64.1906190538 BIOS)
  Chipset:            Intel 440BX/ZX/DX
  Memory:             16GB (Virtual disk - VMware VMXNET3 run); 32GB (Void Linux Musl Baseline, Void Musl Init Run II)
  Disk:               27GB Virtual disk
  Graphics:           VMware SVGA II
  Network:            VMware VMXNET3
  OS:                 VoidLinux rolling
  Kernel:             5.4.52_1 (x86_64) (Virtual disk - VMware VMXNET3 run); 5.7.10_1 (x86_64) (Void Linux Musl Baseline, Void Musl Init Run II)
  Compiler:           GCC 9.3.0
  File-System:        f2fs
  Screen Resolution:  1176x885
  System Layer:       VMware

Compiler Details: --build=x86_64-linux-musl --disable-libsanitizer --disable-libstdcxx-pch --disable-libunwind-exceptions --disable-multilib --disable-nls --disable-symvers --disable-target-libiberty --disable-werror --enable-__cxa_atexit --enable-checking=release --enable-default-pie --enable-default-ssp --enable-fast-character --enable-languages=c,c++,objc,obj-c++,fortran,lto,go,ada --enable-lto --enable-plugins --enable-serial-configure --enable-shared --enable-threads=posix --enable-vtable-verify --mandir=/usr/share/man --with-isl --with-linker-hash-style=gnu

Disk Details: MQ-DEADLINE / acl,active_logs=6,alloc_mode=default,background_gc=on,discard,extent_cache,flush_merge,fsync_mode=posix,inline_data,inline_dentry,inline_xattr,lazytime,mode=adaptive,no_heap,relatime,rw,user_xattr

Processor Details: CPU Microcode: 0x500002c

Java Details: OpenJDK Runtime Environment (build 1.8.0_252-b09)

Security Details: itlb_multihit: KVM: Vulnerable + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Result Overview chart, Phoronix Test Suite: relative performance of the three runs, normalized 100% to 136%, across Redis LPOP, BlogBench Write, Redis GET, BlogBench Read, Redis SADD, Redis LPUSH, and Redis SET.]

void-musl-init - Results Summary ("-" = test not run in that session)

Test                                         Virtual disk -    Void Linux       Void Musl
                                             VMware VMXNET3    Musl Baseline    Init Run II
blogbench: Read (Final Score)                944059            963816           996743
blogbench: Write (Final Score)               1684              1724             1945
redis: LPOP (Requests/s)                     1458291.08        1380023.38       987136.27
redis: SADD (Requests/s)                     1186532.07        1172609.53       1215920.51
redis: LPUSH (Requests/s)                    464889.50         471773.72        469089.32
redis: GET (Requests/s)                      1492740.75        1332773.13       1396977.37
redis: SET (Requests/s)                      1088408.63        1089324.74       1100579.88
nginx: Static Web Page Serving (Requests/s)  -                 54565.56         58413.07
hbase: Increment - 1 (Rows/s)                -                 5356             5368
hbase: Increment - 1 (Avg Latency, us)       -                 186              185
hbase: Increment - 4 (Rows/s)                -                 11861            12294
hbase: Increment - 4 (Avg Latency, us)       -                 336              324
hbase: Increment - 16 (Rows/s)               -                 16713            17497
hbase: Increment - 16 (Avg Latency, us)      -                 955              912
hbase: Increment - 32 (Rows/s)               -                 19861            21937
hbase: Increment - 32 (Avg Latency, us)      -                 1609             1457
hbase: Increment - 64 (Rows/s)               -                 20526            20866
hbase: Increment - 64 (Avg Latency, us)      -                 3114             3065
hbase: Rand Read - 1 (Rows/s)                -                 8062             8421
hbase: Rand Read - 1 (Avg Latency, us)       -                 123              117
hbase: Rand Read - 4 (Rows/s)                -                 27833            27920
hbase: Rand Read - 4 (Avg Latency, us)       -                 142              142
hbase: Rand Read - 16 (Rows/s)               -                 45188            47044
hbase: Rand Read - 16 (Avg Latency, us)      -                 353              339
hbase: Rand Read - 32 (Rows/s)               -                 50368            53254
hbase: Rand Read - 32 (Avg Latency, us)      -                 633              599
hbase: Rand Read - 64 (Rows/s)               -                 52405            58297
hbase: Rand Read - 64 (Avg Latency, us)      -                 1217             1095
hbase: Rand Write - 1 (Rows/s)               -                 54339            53317
hbase: Rand Write - 1 (Avg Latency, us)      -                 18               18

BlogBench

BlogBench is designed to replicate the load of a real-world busy file server by stressing the file-system with multiple threads of random reads, writes, and rewrites. It mimics the behavior of a blog by creating blogs with content and pictures, modifying blog posts, adding comments, and then reading the blogs back; all of the generated blogs are created locally with fake content and pictures. Learn more via the OpenBenchmarking.org test page.
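Outside of the test suite, the same workload can be driven with the standalone blogbench binary; a minimal sketch, assuming BlogBench 1.1 is installed and the filesystem under test is mounted at /mnt/test (the path is illustrative):

  # blogbench spawns writer, rewriter, commenter, and reader threads against
  # the given directory and prints final read and write scores, as graphed below:
  mkdir -p /mnt/test/blogbench
  blogbench -d /mnt/test/blogbench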

BlogBench 1.1 - Test: Read (Final Score, More Is Better) - OpenBenchmarking.org
  Virtual disk - VMware VMXNET3 - 4 x Intel Xeon Gold: 944059 (SE +/- 3911.40, N = 3; Min: 939584 / Avg: 944058.67 / Max: 951853)
  Void Linux Musl Baseline: 963816 (SE +/- 8675.15, N = 3; Min: 952078 / Avg: 963815.67 / Max: 980750)
  Void Musl Init Run II: 996743 (SE +/- 12752.01, N = 3; Min: 971318 / Avg: 996742.67 / Max: 1011196)

BlogBench 1.1 - Test: Write (Final Score, More Is Better) - OpenBenchmarking.org
  Virtual disk - VMware VMXNET3 - 4 x Intel Xeon Gold: 1684 (SE +/- 51.40, N = 3; Min: 1582 / Avg: 1684.33 / Max: 1744)
  Void Linux Musl Baseline: 1724 (SE +/- 43.51, N = 3; Min: 1671 / Avg: 1723.67 / Max: 1810)
  Void Musl Init Run II: 1945 (SE +/- 58.14, N = 3; Min: 1829 / Avg: 1945 / Max: 2010)

1. (CC) gcc options: -O2

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
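The results below are the kind produced by the stock redis-benchmark client; a rough hand-run equivalent, assuming a redis-server instance on the default local port (the request count is illustrative):

  # Exercise the same five operations measured in this comparison,
  # reporting requests per second for each (-q = quiet, one line per test):
  redis-benchmark -t set,get,lpush,lpop,sadd -n 1000000 -q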

Redis 5.0.5 - Test: LPOP (Requests Per Second, More Is Better) - OpenBenchmarking.org
  Virtual disk - VMware VMXNET3 - 4 x Intel Xeon Gold: 1458291.08 (SE +/- 20173.73, N = 3; Min: 1418439.75 / Avg: 1458291.08 / Max: 1483679.5)
  Void Linux Musl Baseline: 1380023.38 (SE +/- 74903.92, N = 12; Min: 915750.94 / Avg: 1380023.38 / Max: 1587301.62)
  Void Musl Init Run II: 987136.27 (SE +/- 12005.99, N = 3; Min: 964320.19 / Avg: 987136.27 / Max: 1005025.12)

Redis 5.0.5 - Test: SADD (Requests Per Second, More Is Better) - OpenBenchmarking.org
  Virtual disk - VMware VMXNET3 - 4 x Intel Xeon Gold: 1186532.07 (SE +/- 10505.34, N = 12; Min: 1133786.75 / Avg: 1186532.07 / Max: 1273885.25)
  Void Linux Musl Baseline: 1172609.53 (SE +/- 13233.23, N = 15; Min: 1078748.62 / Avg: 1172609.53 / Max: 1246882.88)
  Void Musl Init Run II: 1215920.51 (SE +/- 13241.04, N = 15; Min: 1098901.12 / Avg: 1215920.51 / Max: 1264222.5)

Redis 5.0.5 - Test: LPUSH (Requests Per Second, More Is Better) - OpenBenchmarking.org
  Virtual disk - VMware VMXNET3 - 4 x Intel Xeon Gold: 464889.50 (SE +/- 3773.65, N = 3; Min: 457875.47 / Avg: 464889.5 / Max: 470809.78)
  Void Linux Musl Baseline: 471773.72 (SE +/- 579.59, N = 3; Min: 470809.78 / Avg: 471773.72 / Max: 472813.25)
  Void Musl Init Run II: 469089.32 (SE +/- 3296.99, N = 3; Min: 464037.12 / Avg: 469089.32 / Max: 475285.16)

Redis 5.0.5 - Test: GET (Requests Per Second, More Is Better) - OpenBenchmarking.org
  Virtual disk - VMware VMXNET3 - 4 x Intel Xeon Gold: 1492740.75 (SE +/- 12371.19, N = 3; Min: 1479290 / Avg: 1492740.75 / Max: 1517450.75)
  Void Linux Musl Baseline: 1332773.13 (SE +/- 4621.31, N = 3; Min: 1324503.38 / Avg: 1332773.13 / Max: 1340482.62)
  Void Musl Init Run II: 1396977.37 (SE +/- 12891.50, N = 10; Min: 1302083.38 / Avg: 1396977.37 / Max: 1447178)

Redis 5.0.5 - Test: SET (Requests Per Second, More Is Better) - OpenBenchmarking.org
  Virtual disk - VMware VMXNET3 - 4 x Intel Xeon Gold: 1088408.63 (SE +/- 9396.73, N = 15; Min: 1029866.12 / Avg: 1088408.63 / Max: 1149425.25)
  Void Linux Musl Baseline: 1089324.74 (SE +/- 8945.36, N = 15; Min: 1028806.56 / Avg: 1089324.74 / Max: 1141552.5)
  Void Musl Init Run II: 1100579.88 (SE +/- 16082.90, N = 3; Min: 1072961.38 / Avg: 1100579.88 / Max: 1128668.25)

1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

NGINX Benchmark

This is a test of ab, the Apache Benchmark program, run against nginx. The test profile measures how many requests per second a given system can sustain when carrying out 2,000,000 requests with 500 requests carried out concurrently. Learn more via the OpenBenchmarking.org test page.
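Reproducing this by hand is straightforward, assuming nginx is serving a static page locally (the URL and port are illustrative):

  # 2,000,000 total requests at a concurrency of 500, as in the test profile:
  ab -n 2000000 -c 500 http://localhost:8080/index.html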

NGINX Benchmark 1.9.9 - Static Web Page Serving (Requests Per Second, More Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 54565.56 (SE +/- 187.69, N = 3; Min: 54195.08 / Avg: 54565.56 / Max: 54803.15)
  Void Musl Init Run II: 58413.07 (SE +/- 78.85, N = 3; Min: 58273.17 / Avg: 58413.07 / Max: 58546.05)

1. (CC) gcc options: -lcrypto -lz -O3 -march=native

Apache HBase

This is a benchmark of the Apache HBase non-relational distributed database system, inspired by Google's Bigtable. Learn more via the OpenBenchmarking.org test page.
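The increment, random-read, and random-write numbers below are the sort produced by HBase's bundled PerformanceEvaluation tool; a hand-run sketch, assuming a standalone HBase 2.2.3 instance (the row count and client count are illustrative):

  # Run the randomRead workload in-process (no MapReduce) with 16 client threads:
  hbase pe --nomapred --rows=100000 randomRead 16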

Apache HBase 2.2.3 - Test: Increment - Clients: 1 (Rows Per Second, More Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 5356 (SE +/- 118.32, N = 12; Min: 4650 / Avg: 5356.25 / Max: 5943)
  Void Musl Init Run II: 5368 (SE +/- 87.83, N = 3; Min: 5228 / Avg: 5368.33 / Max: 5530)

Apache HBase 2.2.3 - Test: Increment - Clients: 1 (Microseconds - Average Latency, Fewer Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 186 (SE +/- 4.28, N = 12; Min: 167 / Avg: 186.17 / Max: 213)
  Void Musl Init Run II: 185 (SE +/- 3.18, N = 3; Min: 179 / Avg: 184.67 / Max: 190)

Apache HBase 2.2.3 - Test: Increment - Clients: 4 (Rows Per Second, More Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 11861 (SE +/- 172.96, N = 3; Min: 11538 / Avg: 11860.67 / Max: 12130)
  Void Musl Init Run II: 12294 (SE +/- 167.17, N = 3; Min: 11960 / Avg: 12294 / Max: 12474)

Apache HBase 2.2.3 - Test: Increment - Clients: 4 (Microseconds - Average Latency, Fewer Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 336 (SE +/- 4.98, N = 3; Min: 328 / Avg: 335.67 / Max: 345)
  Void Musl Init Run II: 324 (SE +/- 4.18, N = 3; Min: 319 / Avg: 323.67 / Max: 332)

Apache HBase 2.2.3 - Test: Increment - Clients: 16 (Rows Per Second, More Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 16713 (SE +/- 134.34, N = 3; Min: 16491 / Avg: 16712.67 / Max: 16955)
  Void Musl Init Run II: 17497 (SE +/- 165.28, N = 3; Min: 17263 / Avg: 17496.67 / Max: 17816)

Apache HBase 2.2.3 - Test: Increment - Clients: 16 (Microseconds - Average Latency, Fewer Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 955 (SE +/- 7.84, N = 3; Min: 941 / Avg: 955.33 / Max: 968)
  Void Musl Init Run II: 912 (SE +/- 8.97, N = 3; Min: 895 / Avg: 912.33 / Max: 925)

Apache HBase 2.2.3 - Test: Increment - Clients: 32 (Rows Per Second, More Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 19861 (SE +/- 212.77, N = 3; Min: 19630 / Avg: 19861 / Max: 20286)
  Void Musl Init Run II: 21937 (SE +/- 204.24, N = 3; Min: 21650 / Avg: 21936.67 / Max: 22332)

Apache HBase 2.2.3 - Test: Increment - Clients: 32 (Microseconds - Average Latency, Fewer Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 1609 (SE +/- 17.19, N = 3; Min: 1575 / Avg: 1609.33 / Max: 1628)
  Void Musl Init Run II: 1457 (SE +/- 13.78, N = 3; Min: 1430 / Avg: 1456.67 / Max: 1476)

Apache HBase 2.2.3 - Test: Increment - Clients: 64 (Rows Per Second, More Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 20526 (SE +/- 274.50, N = 2; Min: 20251 / Avg: 20525.5 / Max: 20800)
  Void Musl Init Run II: 20866 (SE +/- 293.38, N = 4; Min: 20267 / Avg: 20866.25 / Max: 21539)

Apache HBase 2.2.3 - Test: Increment - Clients: 64 (Microseconds - Average Latency, Fewer Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 3114 (SE +/- 42.00, N = 2; Min: 3072 / Avg: 3114 / Max: 3156)
  Void Musl Init Run II: 3065 (SE +/- 43.08, N = 4; Min: 2967 / Avg: 3065 / Max: 3154)

Apache HBase 2.2.3 - Test: Random Read - Clients: 1 (Rows Per Second, More Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 8062 (SE +/- 177.69, N = 12; Min: 7298 / Avg: 8062 / Max: 9235)
  Void Musl Init Run II: 8421 (SE +/- 93.57, N = 15; Min: 7683 / Avg: 8421.07 / Max: 8941)

Apache HBase 2.2.3 - Test: Random Read - Clients: 1 (Microseconds - Average Latency, Fewer Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 123 (SE +/- 2.58, N = 12; Min: 107 / Avg: 123.17 / Max: 135)
  Void Musl Init Run II: 117 (SE +/- 1.31, N = 15; Min: 110 / Avg: 117.2 / Max: 128)

Apache HBase 2.2.3 - Test: Random Read - Clients: 4 (Rows Per Second, More Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 27833 (SE +/- 229.18, N = 15; Min: 25104 / Avg: 27833 / Max: 28934)
  Void Musl Init Run II: 27920 (SE +/- 237.63, N = 15; Min: 25206 / Avg: 27919.93 / Max: 29112)

Apache HBase 2.2.3 - Test: Random Read - Clients: 4 (Microseconds - Average Latency, Fewer Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 142 (SE +/- 1.28, N = 15; Min: 137 / Avg: 142.47 / Max: 158)
  Void Musl Init Run II: 142 (SE +/- 1.28, N = 15; Min: 136 / Avg: 141.93 / Max: 157)

Apache HBase 2.2.3 - Test: Random Read - Clients: 16 (Rows Per Second, More Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 45188 (SE +/- 359.11, N = 15; Min: 40724 / Avg: 45188.27 / Max: 46195)
  Void Musl Init Run II: 47044 (SE +/- 421.67, N = 15; Min: 41659 / Avg: 47043.6 / Max: 48457)

Apache HBase 2.2.3 - Test: Random Read - Clients: 16 (Microseconds - Average Latency, Fewer Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 353 (SE +/- 3.04, N = 15; Min: 345 / Avg: 352.53 / Max: 391)
  Void Musl Init Run II: 339 (SE +/- 3.34, N = 15; Min: 328 / Avg: 338.53 / Max: 382)

Apache HBase 2.2.3 - Test: Random Read - Clients: 32 (Rows Per Second, More Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 50368 (SE +/- 613.90, N = 6; Min: 47449 / Avg: 50368.33 / Max: 51509)
  Void Musl Init Run II: 53254 (SE +/- 548.31, N = 8; Min: 49654 / Avg: 53254 / Max: 54589)

Apache HBase 2.2.3 - Test: Random Read - Clients: 32 (Microseconds - Average Latency, Fewer Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 633 (SE +/- 8.12, N = 6; Min: 619 / Avg: 633 / Max: 672)
  Void Musl Init Run II: 599 (SE +/- 6.37, N = 8; Min: 584 / Avg: 598.75 / Max: 641)

Apache HBase 2.2.3 - Test: Random Read - Clients: 64 (Rows Per Second, More Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 52405 (SE +/- 336.33, N = 3; Min: 51762 / Avg: 52404.67 / Max: 52898)
  Void Musl Init Run II: 58297 (SE +/- 677.77, N = 6; Min: 55049 / Avg: 58297 / Max: 59697)

Apache HBase 2.2.3 - Test: Random Read - Clients: 64 (Microseconds - Average Latency, Fewer Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 1217 (SE +/- 7.45, N = 3; Min: 1206 / Avg: 1216.67 / Max: 1231)
  Void Musl Init Run II: 1095 (SE +/- 13.12, N = 6; Min: 1069 / Avg: 1094.67 / Max: 1158)

Apache HBase 2.2.3 - Test: Random Write - Clients: 1 (Rows Per Second, More Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 54339 (SE +/- 874.36, N = 15; Min: 49689 / Avg: 54338.8 / Max: 62500)
  Void Musl Init Run II: 53317 (SE +/- 510.67, N = 9; Min: 51921 / Avg: 53317 / Max: 57061)

Apache HBase 2.2.3 - Test: Random Write - Clients: 1 (Microseconds - Average Latency, Fewer Is Better) - OpenBenchmarking.org
  Void Linux Musl Baseline: 18 (SE +/- 0.29, N = 15; Min: 15 / Avg: 17.67 / Max: 19)
  Void Musl Init Run II: 18 (no per-run spread reported)

30 Results Shown

BlogBench:
  Read
  Write
Redis:
  LPOP
  SADD
  LPUSH
  GET
  SET
NGINX Benchmark
Apache HBase:
  Increment - 1:
    Rows Per Second
    Microseconds - Average Latency
  Increment - 4:
    Rows Per Second
    Microseconds - Average Latency
  Increment - 16:
    Rows Per Second
    Microseconds - Average Latency
  Increment - 32:
    Rows Per Second
    Microseconds - Average Latency
  Increment - 64:
    Rows Per Second
    Microseconds - Average Latency
  Rand Read - 1:
    Rows Per Second
    Microseconds - Average Latency
  Rand Read - 4:
    Rows Per Second
    Microseconds - Average Latency
  Rand Read - 16:
    Rows Per Second
    Microseconds - Average Latency
  Rand Read - 32:
    Rows Per Second
    Microseconds - Average Latency
  Rand Read - 64:
    Rows Per Second
    Microseconds - Average Latency
  Rand Write - 1:
    Rows Per Second
    Microseconds - Average Latency