eo okt: Intel Core i7-10700T testing with a Logic Supply RXM-181 (Z01-0002A026 BIOS) and Intel UHD 630 CML GT2 30GB on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2210175-NE-EOOKT631699.
System configuration (identical across runs A, B, C, and D):
Processor: Intel Core i7-10700T @ 4.50GHz (8 Cores / 16 Threads)
Motherboard: Logic Supply RXM-181 (Z01-0002A026 BIOS)
Chipset: Intel Comet Lake PCH
Memory: 32GB
Disk: 256GB TS256GMTS800
Graphics: Intel UHD 630 CML GT2 30GB (1200MHz)
Audio: Realtek ALC233
Monitor: DELL P2415Q
Network: Intel I219-LM + Intel I210
OS: Ubuntu 22.04
Kernel: 5.15.0-48-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1
Vulkan: 1.3.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0xf0; Thermald 2.4.9
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Mitigation of Clear buffers, SMT vulnerable; retbleed: Mitigation of Enhanced IBRS; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS, IBPB: conditional, RSB filling; srbds: Mitigation of Microcode; tsx_async_abort: Not affected
[Result overview table omitted: raw A/B/C/D values for every OpenRadioss, JPEG XL (libjxl), libavif avifenc, Neural Magic DeepSparse, and Apache HBase test configuration; each result is presented individually in the per-test results below.]
OpenRadioss 2022.10.13 (Seconds, Fewer Is Better):
Model: Bumper Beam: A: 296.74, B: 300.04, C: 303.61, D: 304.00
Model: Cell Phone Drop Test: A: 226.17, B: 231.43, C: 235.84, D: 234.27
Model: Bird Strike on Windshield: A: 551.41, B: 556.03, C: 562.70, D: 562.03
Model: Rubber O-Ring Seal Installation: A: 388.57, B: 394.19, C: 396.70, D: 397.52
Model: INIVOL and Fluid Structure Interaction Drop Container: A: 1139.82, B: 1151.01, C: 1160.66, D: 1166.30
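As an illustrative sketch (not part of the original result export), the run-to-run spread in the OpenRadioss Bumper Beam numbers above can be quantified as a percent slowdown relative to the fastest run; lower is better in seconds, so run A is the baseline here.

```python
# OpenRadioss "Bumper Beam" wall-clock times in seconds, taken from the
# results above. Fewer is better, so A (296.74 s) is the fastest run.
bumper_beam = {"A": 296.74, "B": 300.04, "C": 303.61, "D": 304.00}

baseline = bumper_beam["A"]
# Percent slowdown of each run relative to the fastest run (A).
slowdown_pct = {run: round((t / baseline - 1) * 100, 2)
                for run, t in bumper_beam.items()}
print(slowdown_pct)  # D is ~2.45% slower than A
```

The same comparison applies to any of the fewer-is-better results in this report.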
JPEG XL libjxl 0.7 (MP/s, More Is Better; compiled with g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic):
Input: PNG - Quality: 80: A: 6.76, B: 6.57, C: 6.52, D: 6.53
Input: PNG - Quality: 90: A: 6.62, B: 6.49, C: 6.42, D: 6.42
Input: JPEG - Quality: 80: A: 6.55, B: 6.41, C: 6.37, D: 6.35
Input: JPEG - Quality: 90: A: 6.37, B: 6.29, C: 6.24, D: 6.23
Input: PNG - Quality: 100: A: 0.62, B: 0.62, C: 0.62, D: 0.61
Input: JPEG - Quality: 100: A: 0.61, B: 0.60, C: 0.60, D: 0.60

JPEG XL Decoding libjxl 0.7 (MP/s, More Is Better):
CPU Threads: 1: A: 37.47, B: 36.56, C: 36.90, D: 36.86
CPU Threads: All: A: 165.99, B: 158.62, C: 158.50, D: 158.62
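A rough thread-scaling check (an illustration, not part of the export): run A decodes at about 37.47 MP/s on one thread and 165.99 MP/s on all threads of this 8-core / 16-thread CPU, well short of an 8x speedup.

```python
# libjxl decode throughput (MP/s) for run A, from the results above.
one_thread = 37.47    # CPU Threads: 1
all_threads = 165.99  # CPU Threads: All (8 cores / 16 threads)

scaling = all_threads / one_thread
print(f"{scaling:.2f}x")  # ~4.43x speedup from multi-threaded decode
```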
libavif avifenc 0.11 (Seconds, Fewer Is Better; compiled with g++ options: -O3 -fPIC -lm):
Encoder Speed: 0: A: 321.76, B: 320.90, C: 320.47, D: 323.06
Encoder Speed: 2: A: 146.06, B: 145.91, C: 145.76, D: 146.64
Encoder Speed: 6: A: 13.19, B: 12.69, C: 12.81, D: 13.00
Encoder Speed: 6, Lossless: A: 24.23, B: 21.03, C: 20.72, D: 20.94
Encoder Speed: 10, Lossless: A: 8.207, B: 8.791, C: 8.755, D: 8.692
Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB:
Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 3.3826, B: 3.5187, C: 3.4796, D: 3.5017
Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 1170.37, B: 1135.05, C: 1139.58, D: 1134.62
Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 3.3587, B: 3.4155, C: 3.4147, D: 3.3942
Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 297.73, B: 292.77, C: 292.84, D: 294.61
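In the synchronous single-stream scenario one item is processed at a time, so the reported ms/batch latency should be roughly the reciprocal of the items/sec figure. A quick check (illustrative only) against run A of the oBERT-on-IMDB results above:

```python
# DeepSparse oBERT on IMDB, synchronous single-stream, run A (from above).
items_per_sec = 3.3587

# One item in flight at a time: latency in ms is ~1000 / throughput.
ms_per_batch = 1000 / items_per_sec
print(round(ms_per_batch, 2))  # ~297.73, matching the reported latency
```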
Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90:
Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 13.86, B: 13.71, C: 13.67, D: 13.82
Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 288.65, B: 291.73, C: 292.46, D: 289.44
Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 12.75, B: 12.63, C: 12.65, D: 12.65
Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 78.44, B: 79.18, C: 79.02, D: 79.02

Neural Magic DeepSparse 1.1, Model: CV Detection, YOLOv5s COCO:
Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 21.19, B: 20.92, C: 21.26, D: 21.14
Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 188.76, B: 191.14, C: 188.15, D: 189.19
Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 20.57, B: 20.48, C: 20.84, D: 20.43
Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 48.59, B: 48.81, C: 47.98, D: 48.93

Neural Magic DeepSparse 1.1, Model: CV Classification, ResNet-50 ImageNet:
Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 43.95, B: 44.45, C: 44.29, D: 45.51
Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 90.99, B: 89.96, C: 90.29, D: 87.87
Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 38.82, B: 39.65, C: 38.95, D: 38.86
Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 25.75, B: 25.21, C: 25.66, D: 25.72

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, DistilBERT mnli:
Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 32.87, B: 33.10, C: 32.82, D: 33.99
Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 121.68, B: 120.69, C: 121.83, D: 117.67
Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 28.88, B: 29.52, C: 29.07, D: 28.81
Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 34.61, B: 33.87, C: 34.39, D: 34.70

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, BERT base uncased SST2:
Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 15.74, B: 15.87, C: 15.78, D: 15.69
Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 254.05, B: 252.09, C: 253.45, D: 254.90
Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 14.05, B: 14.01, C: 13.91, D: 13.88
Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 71.17, B: 71.39, C: 71.87, D: 72.06

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003:
Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): A: 3.4382, B: 3.4653, C: 3.5016, D: 3.5351
Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): A: 1150.47, B: 1154.27, C: 1138.74, D: 1128.64
Scenario: Synchronous Single-Stream (items/sec, More Is Better): A: 3.3706, B: 3.4046, C: 3.4070, D: 3.3983
Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): A: 296.68, B: 293.72, C: 293.51, D: 294.25
Apache HBase 2.5.0 (Rows Per Second, More Is Better; Average Latency in Microseconds, Fewer Is Better):
Rows: 10000, Test: Scan, Clients: 1: Rows/sec B: 16181, C: 17606, D: 18727
Rows: 10000, Test: Scan, Clients: 4: Rows/sec B: 49249, C: 64746, D: 70027
Rows: 10000, Test: Scan, Clients: 16: Rows/sec B: 78669, C: 82628, D: 81740
Rows: 1000000, Test: Scan, Clients: 1: Rows/sec B: 103135, C: 109951, D: 97343
Rows: 1000000, Test: Scan, Clients: 4: Rows/sec B: 294848, C: 317204, D: 316049
Rows: 1000000, Test: Scan, Clients: 16: Rows/sec B: 423901, C: 400643, D: 414155
Rows: 10000, Test: Increment, Clients: 1: Rows/sec A: 2999, B: 2612, C: 2559, D: 2529; Latency A: 326, B: 375, C: 383, D: 388
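As a consistency check (illustrative only, not from the export): with a single client there is roughly one request in flight at a time, so throughput should approximate one million divided by the average latency in microseconds. Run A of the single-client Increment result above fits this within a few percent.

```python
# Apache HBase, Rows: 10000, Test: Increment, Clients: 1, run A (above).
measured_rows_per_sec = 2999
avg_latency_us = 326

# One request in flight: implied throughput is ~1e6 / latency_us.
implied_rows_per_sec = 1_000_000 / avg_latency_us
print(round(implied_rows_per_sec))  # ~3067, within a few percent of 2999
```

The small gap is expected, since the average latency excludes some client-side overhead between requests.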
Apache HBase 2.5.0, continued (Rows Per Second, More Is Better; Average Latency in Microseconds, Fewer Is Better):
Rows: 10000, Test: Increment, Clients: 4: Rows/sec A: 10221, B: 9357, C: 9609, D: 10445; Latency A: 383, B: 419, C: 408, D: 374
Rows: 10000, Test: Increment, Clients: 16: Rows/sec A: 24930, B: 24340, C: 23482, D: 24495; Latency A: 628, B: 643, C: 666, D: 640
Rows: 10000, Test: Random Read, Clients: 1: Rows/sec A: 4112, B: 3852, C: 3254, D: 3339; Latency A: 236, B: 252, C: 300, D: 292
Rows: 10000, Test: Random Read, Clients: 4: Rows/sec A: 16490, B: 13008, C: 12430, D: 12460; Latency A: 235, B: 300, C: 315, D: 314
Rows: 1000000, Test: Increment, Clients: 1: Rows/sec A: 9163, B: 7354, C: 7401, D: 7411; Latency A: 108, B: 135, C: 134, D: 134
Rows: 1000000, Test: Increment, Clients: 4: Rows/sec A: 19697, B: 19624, C: 19429, D: 19691; Latency A: 202, B: 202, C: 205, D: 202
Rows: 10000, Test: Random Read, Clients: 16: Rows/sec A: 36932, B: 29189, C: 31858, D: 28968; Latency A: 423, B: 539, C: 492, D: 542
Rows: 10000, Test: Random Write, Clients: 1: Rows/sec A: 19569, B: 22321, C: 21645, D: 21978; Latency A: 45, B: 40, C: 40, D: 41
Rows: 10000, Test: Random Write, Clients: 4: Rows/sec A: 65740, B: 64734, C: 67275, D: 65281; Latency A: 57, B: 58, C: 56, D: 58
Rows: 1000000, Test: Increment, Clients: 16: Rows/sec A: 34375, B: 32538, C: 33249, D: 33430; Latency A: 463, B: 490, C: 479, D: 476
Rows: 10000, Test: Random Write, Clients: 16: Rows/sec A: 109446, B: 123866, C: 116168, D: 123299; Latency A: 138, B: 120, C: 129, D: 122
Rows: 1000000, Test: Random Read, Clients: 1: Rows/sec A: 11522, B: 9134, C: 10118, D: 9014; Latency A: 86, B: 109, C: 98, D: 110
Rows: 1000000, Test: Random Read, Clients: 4: Rows/sec A: 38880, B: 19719, C: 20492, D: 20084; Latency A: 102, B: 201, C: 193, D: 198
Rows: 1000000, Test: Random Read, Clients: 16: Rows/sec A: 64454, B: 16029, C: 13864, D: 14241; Latency A: 247, B: 989, C: 1142, D: 1112
Rows: 1000000, Test: Random Write, Clients: 1: Rows/sec A: 79898, B: 82359, C: 84746, D: 83243; Latency A: 11, B: 12, C: 11, D: 11
Rows: 1000000, Test: Random Write, Clients: 4: Rows/sec A: 69676, B: 70530, C: 42156, D: 73002; Latency A: 35, B: 55, C: 78, D: 53
Apache HBase Rows: 10000 - Test: Sequential Read - Clients: 1 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Sequential Read - Clients: 1 A B C D 800 1600 2400 3200 4000 3876 3862 3946 3288
Apache HBase Rows: 10000 - Test: Sequential Read - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Sequential Read - Clients: 1 A B C D 60 120 180 240 300 252 253 247 298
Apache HBase Rows: 10000 - Test: Sequential Read - Clients: 4 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Sequential Read - Clients: 4 A B C D 3K 6K 9K 12K 15K 12058 14403 13507 12188
Apache HBase Rows: 10000 - Test: Sequential Read - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Sequential Read - Clients: 4 A B C D 70 140 210 280 350 325 271 289 321
Apache HBase Rows: 1000000 - Test: Random Write - Clients: 16 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Random Write - Clients: 16 A B C D 14K 28K 42K 56K 70K 66321 46538 35014 31231
Apache HBase Rows: 1000000 - Test: Random Write - Clients: 16 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Random Write - Clients: 16 A B C D 110 220 330 440 550 233 349 452 501
Apache HBase Rows: 10000 - Test: Sequential Read - Clients: 16 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Sequential Read - Clients: 16 A B C D 7K 14K 21K 28K 35K 29444 30961 34010 33108
Apache HBase Rows: 10000 - Test: Sequential Read - Clients: 16 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Sequential Read - Clients: 16 A B C D 120 240 360 480 600 532 505 457 473
Apache HBase Rows: 10000 - Test: Sequential Write - Clients: 1 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Sequential Write - Clients: 1 A B C D 5K 10K 15K 20K 25K 22272 22472 21277 18657
Apache HBase Rows: 10000 - Test: Sequential Write - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Sequential Write - Clients: 1 A B C D 11 22 33 44 55 38 38 41 49
Apache HBase Rows: 10000 - Test: Sequential Write - Clients: 4 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Sequential Write - Clients: 4 A B C D 14K 28K 42K 56K 70K 64052 50602 49705 51990
Apache HBase Rows: 10000 - Test: Sequential Write - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Sequential Write - Clients: 4 A B C D 20 40 60 80 100 57 75 75 73
Apache HBase Rows: 10000 - Test: Async Random Read - Clients: 1 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Async Random Read - Clients: 1 A B C D 800 1600 2400 3200 4000 3737 3830 3475 3254
Apache HBase Rows: 10000 - Test: Async Random Read - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Async Random Read - Clients: 1 A B C D 70 140 210 280 350 259 253 280 299
Apache HBase Rows: 10000 - Test: Async Random Read - Clients: 4 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Async Random Read - Clients: 4 A B C D 3K 6K 9K 12K 15K 12727 13483 12928 12371
Apache HBase Rows: 10000 - Test: Async Random Read - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Async Random Read - Clients: 4 A B C D 70 140 210 280 350 306 288 300 315
Apache HBase Rows: 10000 - Test: Sequential Write - Clients: 16 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Sequential Write - Clients: 16 A B C D 20K 40K 60K 80K 100K 114140 113383 111661 107953
Apache HBase Rows: 10000 - Test: Sequential Write - Clients: 16 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Sequential Write - Clients: 16 A B C D 30 60 90 120 150 131 132 136 141
Apache HBase Rows: 1000000 - Test: Sequential Read - Clients: 1 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Read - Clients: 1 A B C D 3K 6K 9K 12K 15K 12105 11218 10920 12151
Apache HBase Rows: 1000000 - Test: Sequential Read - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Read - Clients: 1 A B C D 20 40 60 80 100 82 88 91 81
Apache HBase Rows: 1000000 - Test: Sequential Read - Clients: 4 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Read - Clients: 4 A B C D 7K 14K 21K 28K 35K 31107 29614 28185 29041
Apache HBase Rows: 1000000 - Test: Sequential Read - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Read - Clients: 4 A B C D 30 60 90 120 150 128 134 141 137
Apache HBase Rows: 10000 - Test: Async Random Read - Clients: 16 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Async Random Read - Clients: 16 A B C D 7K 14K 21K 28K 35K 32060 33446 35014 33628
Apache HBase Rows: 10000 - Test: Async Random Read - Clients: 16 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Async Random Read - Clients: 16 A B C D 110 220 330 440 550 486 465 445 462
Apache HBase Rows: 10000 - Test: Async Random Write - Clients: 1 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Async Random Write - Clients: 1 A B C D 600 1200 1800 2400 3000 2530 2595 2501 2620
Apache HBase Rows: 10000 - Test: Async Random Write - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Async Random Write - Clients: 1 A B C D 90 180 270 360 450 386 377 392 374
Apache HBase Rows: 10000 - Test: Async Random Write - Clients: 4 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Async Random Write - Clients: 4 A B C D 2K 4K 6K 8K 10K 9199 8950 8555 9357
Apache HBase Rows: 10000 - Test: Async Random Write - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Async Random Write - Clients: 4 A B C D 100 200 300 400 500 426 436 458 418
Apache HBase Rows: 1000000 - Test: Sequential Read - Clients: 16 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Read - Clients: 16 A B C D 11K 22K 33K 44K 55K 52922 50967 49444 50532
Apache HBase Rows: 1000000 - Test: Sequential Read - Clients: 16 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Read - Clients: 16 A B C D 70 140 210 280 350 301 312 322 315
Apache HBase Rows: 1000000 - Test: Sequential Write - Clients: 1 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Write - Clients: 1 A B C D 20K 40K 60K 80K 100K 102934 104243 105075 103252
Apache HBase Rows: 1000000 - Test: Sequential Write - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Write - Clients: 1 A B C D 3 6 9 12 15 9 9 9 9
Apache HBase Rows: 1000000 - Test: Sequential Write - Clients: 4 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Write - Clients: 4 A B C D 40K 80K 120K 160K 200K 188046 131821 113899 132567
Apache HBase Rows: 1000000 - Test: Sequential Write - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Write - Clients: 4 A B C D 30 60 90 120 150 43 35 114 53
Apache HBase Rows: 10000 - Test: Async Random Write - Clients: 16 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Async Random Write - Clients: 16 A B C D 4K 8K 12K 16K 20K 20119 18551 20585 18969
Apache HBase Rows: 10000 - Test: Async Random Write - Clients: 16 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Async Random Write - Clients: 16 A B C D 200 400 600 800 1000 782 848 764 827
Apache HBase Rows: 1000000 - Test: Async Random Read - Clients: 1 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Read - Clients: 1 A B C D 3K 6K 9K 12K 15K 11895 11824 11841 11736
Apache HBase Rows: 1000000 - Test: Async Random Read - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Read - Clients: 1 A B C D 20 40 60 80 100 83 84 84 84
Apache HBase Rows: 1000000 - Test: Async Random Read - Clients: 4 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Read - Clients: 4 A B C D 6K 12K 18K 24K 30K 26303 25395 26295 24461
Apache HBase Rows: 1000000 - Test: Async Random Read - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Read - Clients: 4 A B C D 40 80 120 160 200 151 157 151 162
Apache HBase Rows: 1000000 - Test: Sequential Write - Clients: 16 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Write - Clients: 16 A B C D 30K 60K 90K 120K 150K 161846 90179 67238 63359
Apache HBase Rows: 1000000 - Test: Sequential Write - Clients: 16 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Write - Clients: 16 A B C D 50 100 150 200 250 103 174 237 251
Apache HBase Rows: 1000000 - Test: Async Random Read - Clients: 16 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Read - Clients: 16 A B C D 6K 12K 18K 24K 30K 24954 25928 25653 25778
Apache HBase Rows: 1000000 - Test: Async Random Read - Clients: 16 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Read - Clients: 16 A B C D 140 280 420 560 700 640 615 621 618
Apache HBase Rows: 1000000 - Test: Async Random Write - Clients: 1 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Write - Clients: 1 A B C D 1100 2200 3300 4400 5500 5172 4530 4695 5111
Apache HBase Rows: 1000000 - Test: Async Random Write - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Write - Clients: 1 A B C D 50 100 150 200 250 192 220 212 195
Apache HBase Rows: 1000000 - Test: Async Random Write - Clients: 4 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Write - Clients: 4 A B C D 3K 6K 9K 12K 15K 13568 13449 13814 13364
Apache HBase Rows: 1000000 - Test: Async Random Write - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Write - Clients: 4 A B C D 60 120 180 240 300 294 296 289 298
Apache HBase Rows: 1000000 - Test: Async Random Write - Clients: 16 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Write - Clients: 16 A B C D 5K 10K 15K 20K 25K 22078 22145 22110 21650
Apache HBase Rows: 1000000 - Test: Async Random Write - Clients: 16 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Write - Clients: 16 A B C D 160 320 480 640 800 722 721 722 737
Apache HBase Rows: 10000 - Test: Scan - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Scan - Clients: 1 B C D 12 24 36 48 60 55 50 46
Apache HBase Rows: 10000 - Test: Scan - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Scan - Clients: 4 B C D 16 32 48 64 80 72 53 48
Apache HBase Rows: 10000 - Test: Scan - Clients: 16 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Scan - Clients: 16 B C D 40 80 120 160 200 192 182 187
Apache HBase Rows: 1000000 - Test: Scan - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Scan - Clients: 1 B C D 3 6 9 12 15 9 9 10
Apache HBase Rows: 1000000 - Test: Scan - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Scan - Clients: 4 B C D 3 6 9 12 15 13 12 12
Apache HBase Rows: 1000000 - Test: Scan - Clients: 16 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Scan - Clients: 16 B C D 10 20 30 40 50 40 43 42
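One way to compare the four runs across many individual tests is a per-run geometric mean of the throughput results, the kind of aggregate Phoronix-style comparisons typically report. A minimal sketch in Python's standard library, using three of the 16-client, 1,000,000-row results from the throughput table above (the choice of subset is illustrative, not part of the original report):

```python
from statistics import geometric_mean

# Rows-per-second results (from the throughput table above, subset for illustration)
results = {
    "Random Read,  1M rows, 16 clients": {"A": 64454, "B": 16029, "C": 13864, "D": 14241},
    "Random Write, 1M rows, 16 clients": {"A": 66321, "B": 46538, "C": 35014, "D": 31231},
    "Seq. Write,   1M rows, 16 clients": {"A": 161846, "B": 90179, "C": 67238, "D": 63359},
}

# Geometric mean weights each test's ratio equally, so one huge
# absolute number (e.g. Sequential Write) cannot dominate the average.
for run in "ABCD":
    gm = geometric_mean([test[run] for test in results.values()])
    print(f"{run}: {gm:,.0f} rows/sec (geometric mean)")
```

On this subset, run A aggregates well ahead of B, C, and D, consistent with its outlier-high 16-client read and write numbers in the tables.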
Phoronix Test Suite v10.8.4