1280p october - tests for a future article. Intel Core i7-1280P testing with an MSI MS-14C6 (E14C6IMS.115 BIOS) and MSI Intel ADL GT2 14GB graphics on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2210276-NE-1280POCTO84&grw
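For the eventual article, the A/B deltas in the results below can be summarized as percent differences. A minimal Python sketch (not part of the exported result page), using two of the "Seconds, Fewer Is Better" results from this comparison; for such metrics a negative delta means run B finished sooner:

```python
# Percent change from run A to run B for two time-based results
# (FLAC 1.4 WAV To FLAC and the OpenRadioss Bumper Beam model).
results = {
    "encode-flac: WAV To FLAC": (16.649, 16.549),
    "openradioss: Bumper Beam": (350.53, 337.54),
}

for name, (a, b) in results.items():
    delta = (b - a) / a * 100  # negative: B was faster than A
    print(f"{name}: A={a} B={b} delta={delta:+.2f}%")
```

The same calculation flips sign in meaning for "More Is Better" metrics (H/s, MP/s, images/sec), where a positive delta favors run B.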
1280p october - System Details (runs A and B share the same configuration)
Processor: Intel Core i7-1280P @ 4.80GHz (14 Cores / 20 Threads)
Motherboard: MSI MS-14C6 (E14C6IMS.115 BIOS)
Chipset: Intel Alder Lake PCH
Memory: 16GB
Disk: 1024GB Micron_3400_MTFDKBA1T0TFH
Graphics: MSI Intel ADL GT2 14GB (1450MHz)
Audio: Realtek ALC274
Network: Intel Alder Lake-P PCH CNVi WiFi
OS: Ubuntu 22.04
Kernel: 5.15.0-43-generic (x86_64)
Desktop: KDE Plasma 5.24.4
Display Server: X Server 1.21.1.3
OpenGL: 4.6 Mesa 22.0.5
Vulkan: 1.3.204
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x41c; Thermald 2.4.9
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling; srbds: Not affected; tsx_async_abort: Not affected
1280p october - Results Overview
[Side-by-side A/B results table for every test in this comparison - FLAC audio encoding, JPEG XL (libjxl) encode and decode, Xmrig, OpenRadioss, TensorFlow, Neural Magic DeepSparse, Cpuminer-Opt, libavif avifenc, QuadRay, and Apache HBase. The same results are charted per test below.]
FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, Fewer Is Better): A: 16.65, B: 16.55
1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm
JPEG XL Decoding libjxl 0.7 (MP/s, More Is Better):
  CPU Threads: 1 - A: 47.73, B: 46.84
  CPU Threads: All - A: 230.27, B: 227.60
JPEG XL libjxl 0.7 (MP/s, More Is Better):
  Input: PNG - Quality: 80 - A: 8.24, B: 8.18
  Input: PNG - Quality: 90 - A: 8.13, B: 8.09
  Input: JPEG - Quality: 80 - A: 8.12, B: 8.08
  Input: JPEG - Quality: 90 - A: 7.95, B: 7.93
  Input: PNG - Quality: 100 - A: 0.71, B: 0.72
  Input: JPEG - Quality: 100 - A: 0.70, B: 0.70
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
Xmrig 6.18.1 (H/s, More Is Better):
  Variant: Monero - Hash Count: 1M - A: 3546.7, B: 3511.5
  Variant: Wownero - Hash Count: 1M - A: 4515.4, B: 4454.1
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenRadioss 2022.10.13 (Seconds, Fewer Is Better):
  Model: Bumper Beam - A: 350.53, B: 337.54
  Model: Cell Phone Drop Test - A: 268.34, B: 259.53
  Model: Bird Strike on Windshield - A: 582.79, B: 571.29
  Model: Rubber O-Ring Seal Installation - A: 455.47, B: 455.55
  Model: INIVOL and Fluid Structure Interaction Drop Container - A: 1171.56, B: 1177.52
TensorFlow 2.10 - Device: CPU (images/sec, More Is Better):
  Batch Size: 16 - Model: AlexNet - A: 74.37, B: 74.90
  Batch Size: 32 - Model: AlexNet - A: 81.75, B: 81.40
  Batch Size: 64 - Model: AlexNet - A: 86.94, B: 86.99
  Batch Size: 256 - Model: AlexNet - A: 93.65, B: 93.46
  Batch Size: 16 - Model: GoogLeNet - A: 37.73, B: 37.61
  Batch Size: 16 - Model: ResNet-50 - A: 12.84, B: 12.89
  Batch Size: 32 - Model: GoogLeNet - A: 37.14, B: 37.17
  Batch Size: 32 - Model: ResNet-50 - A: 13.34, B: 13.26
  Batch Size: 64 - Model: GoogLeNet - A: 38.30, B: 38.21
  Batch Size: 64 - Model: ResNet-50 - A: 13.60, B: 13.51
  Batch Size: 256 - Model: GoogLeNet - A: 39.81, B: 39.68
Neural Magic DeepSparse 1.1 (items/sec, More Is Better; ms/batch, Fewer Is Better):
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream: A: 4.0091, B: 4.0122 items/sec; A: 1662.03, B: 1649.63 ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream: A: 3.8591, B: 3.8836 items/sec; A: 259.12, B: 257.49 ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream: A: 17.59, B: 17.59 items/sec; A: 396.33, B: 396.88 ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream: A: 14.83, B: 14.66 items/sec; A: 67.42, B: 68.22 ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream: A: 25.34, B: 26.74 items/sec; A: 274.72, B: 259.38 ms/batch
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream: A: 23.36, B: 23.29 items/sec; A: 42.80, B: 42.93 ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream: A: 52.91, B: 52.84 items/sec; A: 131.68, B: 131.74 ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream: A: 43.88, B: 43.73 items/sec; A: 22.78, B: 22.86 ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream: A: 35.69, B: 37.16 items/sec; A: 194.74, B: 187.90 ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream: A: 31.58, B: 32.03 items/sec; A: 31.66, B: 31.22 ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream: A: 16.99, B: 17.05 items/sec; A: 408.20, B: 405.18 ms/batch
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream: A: 14.91, B: 15.06 items/sec; A: 67.07, B: 66.41 ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream: A: 3.9097, B: 3.9958 items/sec; A: 1753.05, B: 1714.86 ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream: A: 3.8307, B: 3.8664 items/sec; A: 261.04, B: 258.63 ms/batch
Cpuminer-Opt 3.20.3 (kH/s, More Is Better):
  Algorithm: Magi - A: 291.22, B: 293.87
  Algorithm: x25x - A: 303.96, B: 305.31
  Algorithm: scrypt - A: 102.29, B: 102.45
  Algorithm: Deepcoin - A: 5429.97, B: 5399.00
  Algorithm: Ringcoin - A: 1507.25, B: 1502.06
  Algorithm: Blake-2 S - A: 226780, B: 226410
  Algorithm: Garlicoin - A: 1280.74, B: 1273.47
  Algorithm: Skeincoin - A: 44430, B: 44480
  Algorithm: Myriad-Groestl - A: 7904.25, B: 7709.44
  Algorithm: LBC, LBRY Credits - A: 15260, B: 15260
  Algorithm: Quad SHA-256, Pyrite - A: 55050, B: 54960
  Algorithm: Triple SHA-256, Onecoin - A: 116170, B: 115560
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
libavif avifenc 0.11 (Seconds, Fewer Is Better):
  Encoder Speed: 0 - A: 209.81, B: 212.10
  Encoder Speed: 2 - A: 90.82, B: 90.74
  Encoder Speed: 6 - A: 7.843, B: 7.901
  Encoder Speed: 6, Lossless - A: 11.49, B: 11.53
  Encoder Speed: 10, Lossless - A: 5.852, B: 5.738
1. (CXX) g++ options: -O3 -fPIC -lm
QuadRay 2022.05.25 (FPS, More Is Better):
  Scene: 1 - Resolution: 4K - A: 4.42, B: 4.42
  Scene: 2 - Resolution: 4K - A: 1.23, B: 1.22
  Scene: 3 - Resolution: 4K - A: 1.01, B: 1.01
  Scene: 5 - Resolution: 4K - A: 0.30, B: 0.30
  Scene: 1 - Resolution: 1080p - A: 16.45, B: 16.32
  Scene: 2 - Resolution: 1080p - A: 4.68, B: 4.68
  Scene: 3 - Resolution: 1080p - A: 3.87, B: 3.88
  Scene: 5 - Resolution: 1080p - A: 1.18, B: 1.16
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
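One common way to condense a group of "More Is Better" results like the QuadRay FPS numbers above into a single A-vs-B figure is the geometric mean of the B/A ratios. A small sketch (the calculation is illustrative, not part of the exported result file):

```python
import math

# QuadRay 2022.05.25 FPS results (A, B) from the charts above.
quadray = [
    (4.42, 4.42), (1.23, 1.22), (1.01, 1.01), (0.30, 0.30),    # 4K scenes 1/2/3/5
    (16.45, 16.32), (4.68, 4.68), (3.87, 3.88), (1.18, 1.16),  # 1080p scenes 1/2/3/5
]

ratios = [b / a for a, b in quadray]
geomean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(f"B/A geometric mean: {geomean:.4f}")  # close to 1.0: effectively a tie
```

The geometric mean is preferred over the arithmetic mean here because ratios are multiplicative; a 2x gain and a 2x loss cancel out exactly.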
Apache HBase 2.5.0 (OpenBenchmarking.org)
Throughput in rows per second (more is better); average latency in microseconds (fewer is better). Scan results were only recorded for run B.

Rows     Test                Clients  Rows/s A  Rows/s B  Latency A  Latency B
10000    Scan                1        -         13459     -          66
10000    Scan                4        -         72964     -          46
1000000  Scan                1        -         57202     -          17
1000000  Scan                4        -         358633    -          11
10000    Increment           1        2650      2455      368        398
10000    Increment           4        12009     12067     324        322
1000000  Increment           1        13112     9296      75         107
1000000  Increment           4        26025     27633     152        143
10000    Random Read         1        4219      3704      229        263
10000    Random Read         4        20447     17289     188        224
1000000  Random Read         1        18161     13831     54         72
1000000  Random Read         4        57264     24555     69         162
10000    Random Write        1        25707     26455     36         35
10000    Random Write        4        59181     72706     64         51
1000000  Random Write        1        75552     75081     13         13
1000000  Random Write        4        80850     77312     35         32
10000    Sequential Read     1        4195      4831      232        201
10000    Sequential Read     4        17814     19048     217        203
1000000  Sequential Read     1        16965     16144     58         61
1000000  Sequential Read     4        50972     42859     78         92
10000    Sequential Write    1        24691     26667     36         34
10000    Sequential Write    4        69330     65691     54         56
1000000  Sequential Write    1        101153    103993    9          9
1000000  Sequential Write    4        63061     66265     59         59
10000    Async Random Read   1        4281      4141      225        234
10000    Async Random Read   4        19162     19243     198        200
1000000  Async Random Read   1        17087     16057     58         61
1000000  Async Random Read   4        27224     27312     146        146
10000    Async Random Write  1        1968      2961      500        330
10000    Async Random Write  4        11327     11599     344        336
1000000  Async Random Write  1        8110      8024      123        124
1000000  Async Random Write  4        16704     15905     239        250
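As a quick illustration of how the run-to-run spread above might be quantified (this sketch is not part of the PTS output; the `pct_change` helper and the `results` selection are hypothetical, with values copied from the results above):

```python
# Illustrative sketch: relative change of run B vs. run A for a few of
# the larger HBase throughput deltas in the results above.

def pct_change(a: float, b: float) -> float:
    """Percent change of run B relative to run A (negative = B slower)."""
    return (b - a) / a * 100.0

# (rows, test, clients) -> (A rows/s, B rows/s), taken from the table above
results = {
    (1000000, "Random Read", 4): (57264, 24555),
    (10000, "Random Write", 4): (59181, 72706),
    (1000000, "Increment", 1): (13112, 9296),
}

for key, (a, b) in results.items():
    print(key, f"{pct_change(a, b):+.1f}%")
```

Run-to-run swings of this size (e.g. roughly -57% for Random Read at 4 clients over 1,000,000 rows) suggest the HBase results carry substantial variance on this laptop-class hardware.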
Phoronix Test Suite v10.8.4