1280p october: Intel Core i7-1280P tests for a future article. Intel Core i7-1280P testing with an MSI MS-14C6 (E14C6IMS.115 BIOS) motherboard and MSI Intel ADL GT2 14GB graphics on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2210276-NE-1280POCTO84&rdt&grs.
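The `grs` flag in the exported URL appears to correspond to OpenBenchmarking's "greatest result spread" ordering, i.e. results are listed by how far apart runs A and B are. As a minimal sketch (not part of the original export), that spread can be computed from any A/B pair in the tables below; the sample values here are taken from the HBase 1000000-row Random Read, 4-client throughput result:

```python
def result_spread(a: float, b: float) -> float:
    """Spread between two runs: the larger value divided by the smaller."""
    return max(a, b) / min(a, b)

# A/B rows-per-second values from the HBase Random Read (4 clients) result below
a_rows_per_sec, b_rows_per_sec = 57264, 24555
print(f"spread: {result_spread(a_rows_per_sec, b_rows_per_sec):.2f}x")
```

For this pair the spread works out to roughly 2.33x, which is why the HBase random-read results lead the sorted listing.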
System details (identical for runs A and B):

Processor: Intel Core i7-1280P @ 4.80GHz (14 Cores / 20 Threads)
Motherboard: MSI MS-14C6 (E14C6IMS.115 BIOS)
Chipset: Intel Alder Lake PCH
Memory: 16GB
Disk: 1024GB Micron_3400_MTFDKBA1T0TFH
Graphics: MSI Intel ADL GT2 14GB (1450MHz)
Audio: Realtek ALC274
Network: Intel Alder Lake-P PCH CNVi WiFi
OS: Ubuntu 22.04
Kernel: 5.15.0-43-generic (x86_64)
Desktop: KDE Plasma 5.24.4
Display Server: X Server 1.21.1.3
OpenGL: 4.6 Mesa 22.0.5
Vulkan: 1.3.204
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x41c; Thermald 2.4.9
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling; srbds: Not affected; tsx_async_abort: Not affected
Results overview: the export's side-by-side summary table (every test identifier followed by its A and B values) was flattened by the HTML conversion and the test-to-value pairing is not reliably recoverable here; the same data appears in readable per-test form in the results below.
Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Read - Clients: 4 - Average Latency (Microseconds, Fewer Is Better): A: 69, B: 162
Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Read - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 57264, B: 24555
Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Write - Clients: 1 - Average Latency (Microseconds, Fewer Is Better): A: 500, B: 330
Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Write - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 1968, B: 2961
Apache HBase 2.5.0 - Rows: 1000000 - Test: Increment - Clients: 1 - Average Latency (Microseconds, Fewer Is Better): A: 75, B: 107
Apache HBase 2.5.0 - Rows: 1000000 - Test: Increment - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 13112, B: 9296
Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Read - Clients: 1 - Average Latency (Microseconds, Fewer Is Better): A: 54, B: 72
Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Read - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 18161, B: 13831
Apache HBase 2.5.0 - Rows: 10000 - Test: Random Write - Clients: 4 - Average Latency (Microseconds, Fewer Is Better): A: 64, B: 51
Apache HBase 2.5.0 - Rows: 10000 - Test: Random Write - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 59181, B: 72706
Apache HBase 2.5.0 - Rows: 10000 - Test: Random Read - Clients: 4 - Average Latency (Microseconds, Fewer Is Better): A: 188, B: 224
Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Read - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 50972, B: 42859
Apache HBase 2.5.0 - Rows: 10000 - Test: Random Read - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 20447, B: 17289
Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Read - Clients: 4 - Average Latency (Microseconds, Fewer Is Better): A: 78, B: 92
Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Read - Clients: 1 - Average Latency (Microseconds, Fewer Is Better): A: 232, B: 201
Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Read - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 4195, B: 4831
Apache HBase 2.5.0 - Rows: 10000 - Test: Random Read - Clients: 1 - Average Latency (Microseconds, Fewer Is Better): A: 229, B: 263
Apache HBase 2.5.0 - Rows: 10000 - Test: Random Read - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 4219, B: 3704
Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Write - Clients: 4 - Average Latency (Microseconds, Fewer Is Better): A: 35, B: 32
Apache HBase 2.5.0 - Rows: 10000 - Test: Increment - Clients: 1 - Average Latency (Microseconds, Fewer Is Better): A: 368, B: 398
Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Write - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 24691, B: 26667
Apache HBase 2.5.0 - Rows: 10000 - Test: Increment - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 2650, B: 2455
Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Read - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 17814, B: 19048
Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Read - Clients: 4 - Average Latency (Microseconds, Fewer Is Better): A: 217, B: 203
Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Read - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 17087, B: 16057
Apache HBase 2.5.0 - Rows: 1000000 - Test: Increment - Clients: 4 - Average Latency (Microseconds, Fewer Is Better): A: 152, B: 143
Apache HBase 2.5.0 - Rows: 1000000 - Test: Increment - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 26025, B: 27633
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - ms/batch (Fewer Is Better): A: 274.72, B: 259.38
Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Write - Clients: 1 - Average Latency (Microseconds, Fewer Is Better): A: 36, B: 34
Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Write - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 69330, B: 65691
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - items/sec (More Is Better): A: 25.34, B: 26.74
Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Read - Clients: 1 - Average Latency (Microseconds, Fewer Is Better): A: 58, B: 61
Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Read - Clients: 1 - Average Latency (Microseconds, Fewer Is Better): A: 58, B: 61
Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Read - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 16965, B: 16144
Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Write - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 63061, B: 66265
Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Write - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 16704, B: 15905
Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Write - Clients: 4 - Average Latency (Microseconds, Fewer Is Better): A: 239, B: 250
Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Write - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 80850, B: 77312
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - items/sec (More Is Better): A: 35.69, B: 37.16
Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Read - Clients: 1 - Average Latency (Microseconds, Fewer Is Better): A: 225, B: 234
OpenRadioss 2022.10.13 - Model: Bumper Beam - Seconds (Fewer Is Better): A: 350.53, B: 337.54
Apache HBase 2.5.0 - Rows: 10000 - Test: Sequential Write - Clients: 4 - Average Latency (Microseconds, Fewer Is Better): A: 54, B: 56
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - ms/batch (Fewer Is Better): A: 194.74, B: 187.90
OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test - Seconds (Fewer Is Better): A: 268.34, B: 259.53
Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Read - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 4281, B: 4141
Apache HBase 2.5.0 - Rows: 10000 - Test: Random Write - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 25707, B: 26455
Apache HBase 2.5.0 - Rows: 10000 - Test: Random Write - Clients: 1 - Average Latency (Microseconds, Fewer Is Better): A: 36, B: 35
Apache HBase 2.5.0 - Rows: 1000000 - Test: Sequential Write - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 101153, B: 103993
Cpuminer-Opt 3.20.3 - Algorithm: Myriad-Groestl - kH/s (More Is Better): A: 7904.25, B: 7709.44 (g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp)
Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Write - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 11327, B: 11599
Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Write - Clients: 4 - Average Latency (Microseconds, Fewer Is Better): A: 344, B: 336
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - ms/batch (Fewer Is Better): A: 1753.05, B: 1714.86
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - items/sec (More Is Better): A: 3.9097, B: 3.9958
OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield - Seconds (Fewer Is Better): A: 582.79, B: 571.29
libavif avifenc 0.11 - Encoder Speed: 10, Lossless - Seconds (Fewer Is Better): A: 5.852, B: 5.738 (g++ options: -O3 -fPIC -lm)
JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 - MP/s (More Is Better): A: 47.73, B: 46.84
QuadRay 2022.05.25 - Scene: 5 - Resolution: 1080p - FPS (More Is Better): A: 1.18, B: 1.16 (g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread)
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - ms/batch (Fewer Is Better): A: 31.66, B: 31.22
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - items/sec (More Is Better): A: 31.58, B: 32.03
JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 - MP/s (More Is Better): A: 0.71, B: 0.72 (g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic)
Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M - H/s (More Is Better): A: 4515.4, B: 4454.1 (g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc)
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream - items/sec (More Is Better): A: 14.83, B: 14.66
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream - ms/batch (Fewer Is Better): A: 67.42, B: 68.22
JPEG XL Decoding libjxl 0.7 - CPU Threads: All - MP/s (More Is Better): A: 230.27, B: 227.60
libavif avifenc 0.11 - Encoder Speed: 0 - Seconds (Fewer Is Better): A: 209.81, B: 212.10
Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Write - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 8110, B: 8024
Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Read - Clients: 4 - Average Latency (Microseconds, Fewer Is Better): A: 198, B: 200
Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M - H/s (More Is Better): A: 3546.7, B: 3511.5
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream - items/sec (More Is Better): A: 14.91, B: 15.06
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream - ms/batch (Fewer Is Better): A: 67.07, B: 66.41
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - ms/batch (Fewer Is Better): A: 261.04, B: 258.63
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - items/sec (More Is Better): A: 3.8307, B: 3.8664
Cpuminer-Opt 3.20.3 - Algorithm: Magi - kH/s (More Is Better): A: 291.22, B: 293.87
QuadRay 2022.05.25 - Scene: 2 - Resolution: 4K - FPS (More Is Better): A: 1.23, B: 1.22
Apache HBase 2.5.0 - Rows: 1000000 - Test: Async Random Write - Clients: 1 - Average Latency (Microseconds, Fewer Is Better): A: 123, B: 124
QuadRay 2022.05.25 - Scene: 1 - Resolution: 1080p - FPS (More Is Better): A: 16.45, B: 16.32
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - ms/batch (Fewer Is Better): A: 1662.03, B: 1649.63
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream - ms/batch (Fewer Is Better): A: 408.20, B: 405.18
libavif avifenc 0.11 - Encoder Speed: 6 - Seconds (Fewer Is Better): A: 7.843, B: 7.901
JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 - MP/s (More Is Better): A: 8.24, B: 8.18
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet - images/sec (More Is Better): A: 74.37, B: 74.90
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 - images/sec (More Is Better): A: 13.60, B: 13.51
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - ms/batch (Fewer Is Better): A: 259.12, B: 257.49
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - items/sec (More Is Better): A: 3.8591, B: 3.8836
Apache HBase 2.5.0 - Rows: 1000000 - Test: Random Write - Clients: 1 - Throughput (Rows Per Second, More Is Better): A: 75552, B: 75081
Apache HBase 2.5.0 - Rows: 10000 - Test: Increment - Clients: 4 - Average Latency (Microseconds, Fewer Is Better): A: 324, B: 322
FLAC Audio Encoding 1.4 - WAV To FLAC - Seconds (Fewer Is Better): A: 16.65, B: 16.55 (g++ options: -O3 -fvisibility=hidden -logg -lm)
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 - images/sec (More Is Better): A: 13.34, B: 13.26
Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin - kH/s (More Is Better): A: 5429.97, B: 5399.00
Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin - kH/s (More Is Better): A: 1280.74, B: 1273.47
Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin - kH/s (More Is Better): A: 116170, B: 115560
OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container - Seconds (Fewer Is Better): A: 1171.56, B: 1177.52
JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 - MP/s (More Is Better): A: 8.12, B: 8.08
JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 - MP/s (More Is Better): A: 8.13, B: 8.09
Apache HBase 2.5.0 - Rows: 10000 - Test: Increment - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 12009, B: 12067
Cpuminer-Opt 3.20.3 - Algorithm: x25x - kH/s (More Is Better): A: 303.96, B: 305.31
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet - images/sec (More Is Better): A: 81.75, B: 81.40
Apache HBase 2.5.0 - Rows: 10000 - Test: Async Random Read - Clients: 4 - Throughput (Rows Per Second, More Is Better): A: 19162, B: 19243
libavif avifenc 0.11 - Encoder Speed: 6, Lossless - Seconds (Fewer Is Better): A: 11.49, B: 11.53
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 - images/sec (More Is Better): A: 12.84, B: 12.89
Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin - kH/s (More Is Better): A: 1507.25, B: 1502.06
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream - items/sec (More Is Better): A: 43.88, B: 43.73
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream A B 5 10 15 20 25 22.78 22.86
TensorFlow Device: CPU - Batch Size: 256 - Model: GoogLeNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 256 - Model: GoogLeNet A B 9 18 27 36 45 39.81 39.68
Apache HBase Rows: 1000000 - Test: Async Random Read - Clients: 4 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Read - Clients: 4 A B 6K 12K 18K 24K 30K 27224 27312
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream A B 4 8 12 16 20 16.99 17.05
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream A B 10 20 30 40 50 42.80 42.93
TensorFlow Device: CPU - Batch Size: 16 - Model: GoogLeNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: GoogLeNet A B 9 18 27 36 45 37.73 37.61
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream A B 6 12 18 24 30 23.36 23.29
QuadRay Scene: 3 - Resolution: 1080p OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 3 - Resolution: 1080p A B 0.873 1.746 2.619 3.492 4.365 3.87 3.88 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
JPEG XL libjxl Input: JPEG - Quality: 90 OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: JPEG - Quality: 90 A B 2 4 6 8 10 7.95 7.93 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
TensorFlow Device: CPU - Batch Size: 64 - Model: GoogLeNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 64 - Model: GoogLeNet A B 9 18 27 36 45 38.30 38.21
TensorFlow Device: CPU - Batch Size: 256 - Model: AlexNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 256 - Model: AlexNet A B 20 40 60 80 100 93.65 93.46
Cpuminer-Opt Algorithm: Quad SHA-256, Pyrite OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Quad SHA-256, Pyrite A B 12K 24K 36K 48K 60K 55050 54960 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Cpuminer-Opt Algorithm: Blake-2 S OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Blake-2 S A B 50K 100K 150K 200K 250K 226780 226410 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Cpuminer-Opt Algorithm: scrypt OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: scrypt A B 20 40 60 80 100 102.29 102.45 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Neural Magic DeepSparse Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream A B 90 180 270 360 450 396.33 396.88
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream A B 12 24 36 48 60 52.91 52.84
Cpuminer-Opt Algorithm: Skeincoin OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Skeincoin A B 10K 20K 30K 40K 50K 44430 44480 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
libavif avifenc Encoder Speed: 2 OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 2 A B 20 40 60 80 100 90.82 90.74 1. (CXX) g++ options: -O3 -fPIC -lm
TensorFlow Device: CPU - Batch Size: 32 - Model: GoogLeNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 32 - Model: GoogLeNet A B 9 18 27 36 45 37.14 37.17
Neural Magic DeepSparse Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream A B 0.9027 1.8054 2.7081 3.6108 4.5135 4.0091 4.0122
TensorFlow Device: CPU - Batch Size: 64 - Model: AlexNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 64 - Model: AlexNet A B 20 40 60 80 100 86.94 86.99
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream A B 30 60 90 120 150 131.68 131.74
Neural Magic DeepSparse Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream A B 4 8 12 16 20 17.59 17.59
OpenRadioss Model: Rubber O-Ring Seal Installation OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Rubber O-Ring Seal Installation A B 100 200 300 400 500 455.47 455.55
Apache HBase Rows: 1000000 - Test: Scan - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Scan - Clients: 4 B 3 6 9 12 15 11
Apache HBase Rows: 1000000 - Test: Scan - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Scan - Clients: 1 B 4 8 12 16 20 17
Apache HBase Rows: 10000 - Test: Scan - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Scan - Clients: 4 B 10 20 30 40 50 46
Apache HBase Rows: 10000 - Test: Scan - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Scan - Clients: 1 B 15 30 45 60 75 66
Apache HBase Rows: 1000000 - Test: Async Random Read - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Async Random Read - Clients: 4 A B 30 60 90 120 150 146 146
Apache HBase Rows: 1000000 - Test: Sequential Write - Clients: 4 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Write - Clients: 4 A B 13 26 39 52 65 59 59
Apache HBase Rows: 1000000 - Test: Sequential Write - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Sequential Write - Clients: 1 A B 3 6 9 12 15 9 9
Apache HBase Rows: 1000000 - Test: Random Write - Clients: 1 OpenBenchmarking.org Microseconds - Average Latency, Fewer Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Random Write - Clients: 1 A B 3 6 9 12 15 13 13
Apache HBase Rows: 1000000 - Test: Scan - Clients: 4 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Scan - Clients: 4 B 80K 160K 240K 320K 400K 358633
Apache HBase Rows: 1000000 - Test: Scan - Clients: 1 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 1000000 - Test: Scan - Clients: 1 B 12K 24K 36K 48K 60K 57202
Apache HBase Rows: 10000 - Test: Scan - Clients: 4 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Scan - Clients: 4 B 16K 32K 48K 64K 80K 72964
Apache HBase Rows: 10000 - Test: Scan - Clients: 1 OpenBenchmarking.org Rows Per Second, More Is Better Apache HBase 2.5.0 Rows: 10000 - Test: Scan - Clients: 1 B 3K 6K 9K 12K 15K 13459
Cpuminer-Opt Algorithm: LBC, LBRY Credits OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: LBC, LBRY Credits A B 3K 6K 9K 12K 15K 15260 15260 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
QuadRay Scene: 2 - Resolution: 1080p OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 2 - Resolution: 1080p A B 1.053 2.106 3.159 4.212 5.265 4.68 4.68 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay Scene: 5 - Resolution: 4K OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 5 - Resolution: 4K A B 0.0675 0.135 0.2025 0.27 0.3375 0.3 0.3 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay Scene: 3 - Resolution: 4K OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 3 - Resolution: 4K A B 0.2273 0.4546 0.6819 0.9092 1.1365 1.01 1.01 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay Scene: 1 - Resolution: 4K OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 1 - Resolution: 4K A B 0.9945 1.989 2.9835 3.978 4.9725 4.42 4.42 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
JPEG XL libjxl Input: JPEG - Quality: 100 OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: JPEG - Quality: 100 A B 0.1575 0.315 0.4725 0.63 0.7875 0.7 0.7 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
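The A and B columns are two runs of the same system, so the interesting number is usually the run-to-run delta rather than the absolute values. A minimal sketch of that comparison in Python, using a few values taken from the results above (the `pct_change` helper is my own, not part of the Phoronix Test Suite):

```python
# Percent change from run A to run B; negative means B was lower than A.
def pct_change(a: float, b: float) -> float:
    """Relative difference of B versus A, in percent."""
    return (b - a) / a * 100.0

# A/B pairs copied from the result tables above.
results = {
    "HBase 1M Random Write, 1 client (rows/sec)": (75552, 75081),
    "DeepSparse oBERT IMDB sync single-stream (items/sec)": (3.8591, 3.8836),
    "TensorFlow ResNet-50, batch 32 (images/sec)": (13.34, 13.26),
}

for name, (a, b) in results.items():
    print(f"{name}: {pct_change(a, b):+.2f}%")
```

For these pairs the deltas land well under one percent, which is consistent with the A and B runs being ordinary run-to-run noise on identical hardware.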
Phoronix Test Suite v10.8.4