dfgg: tests for a future article. Intel Core i7-1065G7 testing with a Dell 06CDVY (1.0.9 BIOS) and Intel Iris Plus ICL GT2 16GB on Ubuntu 23.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2312172-NE-DFGG9428382&grs&sor
System details (identical hardware and software for runs a and b):

Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads)
Motherboard: Dell 06CDVY (1.0.9 BIOS)
Chipset: Intel Ice Lake-LP
Memory: DRAM 16GB
Disk: Toshiba KBG40ZPZ512G NVMe 512GB
Graphics: Intel Iris Plus ICL GT2 16GB (1100MHz)
Audio: Realtek ALC289
Network: Intel Ice Lake-LP PCH CNVi WiFi
OS: Ubuntu 23.04
Kernel: 6.2.0-36-generic (x86_64)
Desktop: GNOME Shell 44.3
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 23.0.4-0ubuntu1~23.04.1
OpenCL: OpenCL 3.0
Compiler: GCC 12.3.0
File-System: ext4
Screen Resolution: 1920x1200

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-DAPbBt/gcc-12-12.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-DAPbBt/gcc-12-12.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0xc2; Thermald 2.5.2
Java Details: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu123.04)
Python Details: Python 3.11.4
Security Details: gather_data_sampling: Mitigation of Microcode; itlb_multihit: KVM: Mitigation of VMX disabled; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Mitigation of Clear buffers, SMT vulnerable; retbleed: Mitigation of Enhanced IBRS; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: SW sequence; srbds: Mitigation of Microcode; tsx_async_abort: Not affected
[Flattened side-by-side summary table omitted: it listed the raw run a and run b values for every test (LeelaChessZero, SVT-AV1, ScyllaDB, Neural Magic DeepSparse, OpenSSL, WebP2, Xmrig, Apache Spark TPC-H including per-query times, and Java SciMark). The same results are presented per test below.]
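When comparing the raw a/b pairs from the results table above, a simple relative-difference calculation is usually more readable than the raw values. A minimal Python sketch (the helper name is my own; the sample values are the LeelaChessZero BLAS results, a = 32 and b = 45 nodes/sec):

```python
def percent_faster(new: float, old: float) -> float:
    """Percent by which `new` outperforms `old` on a higher-is-better metric."""
    return (new / old - 1.0) * 100.0

# LeelaChessZero, Backend: BLAS (nodes per second, more is better)
delta = percent_faster(45, 32)
print(f"run b is {delta:.1f}% faster than run a")  # 40.6%
```

The same helper works for lower-is-better metrics (ms/batch, seconds) by swapping the arguments.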
LeelaChessZero 0.30 - Backend: BLAS - Nodes Per Second, more is better: a: 32, b: 45. (CXX) g++ options: -flto -pthread
LeelaChessZero 0.30 - Backend: Eigen - Nodes Per Second, more is better: a: 20, b: 24.
SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 4K - Frames Per Second, more is better: a: 31.31, b: 28.23. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
ScyllaDB 5.2.9 - Test: Writes - Op/s, more is better: a: 27851, b: 25475.
Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: a: 4.7530, b: 5.1288.
Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream - items/sec, more is better: a: 209.81, b: 194.45.
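The two single-stream DeepSparse metrics are roughly reciprocal: assuming an effective batch of one, items/sec should be about 1000 divided by the ms/batch latency (only approximately, since the two metrics exclude overhead differently). A quick sanity check in Python using the ResNet-50 Sparse INT8 numbers from run a:

```python
# Reported ResNet-50 Sparse INT8, synchronous single-stream, run a
latency_ms = 4.7530   # ms/batch (fewer is better)
throughput = 209.81   # items/sec (more is better)

# With batch size 1, throughput ~= 1000 / latency_in_ms.
estimate = 1000.0 / latency_ms
print(f"estimated {estimate:.2f} items/sec vs reported {throughput}")
assert abs(estimate - throughput) / throughput < 0.01  # agree within 1%
```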
OpenSSL - Algorithm: ChaCha20-Poly1305 - byte/s, more is better: a: 10536056700, b: 9830306880. OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)
Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - items/sec, more is better: a: 2.1073, b: 1.9736.
Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - ms/batch, fewer is better: a: 946.79, b: 1010.44.
OpenSSL - Algorithm: AES-256-GCM - byte/s, more is better: a: 13364107950, b: 12548378350.
SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 4K - Frames Per Second, more is better: a: 33.86, b: 31.87.
WebP2 Image Encode 20220823 - Encode Settings: Default - MP/s, more is better: a: 2.60, b: 2.45. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
OpenSSL - Algorithm: RSA4096 - sign/s, more is better: a: 769.0, b: 727.2.
OpenSSL - Algorithm: SHA512 - byte/s, more is better: a: 856037790, b: 813051380.
OpenSSL - Algorithm: RSA4096 - verify/s, more is better: a: 45019.3, b: 43229.0.
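The two RSA4096 rows measure different operations: signing uses the private key and verification the public key, and with the usual small public exponent verification is far cheaper. The reported numbers put verification at roughly 58x the signing rate; as simple arithmetic on the run a values:

```python
sign_per_sec = 769.0      # RSA4096 sign/s, run a
verify_per_sec = 45019.3  # RSA4096 verify/s, run a

ratio = verify_per_sec / sign_per_sec
print(f"verification runs about {ratio:.1f}x faster than signing")
```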
SVT-AV1 1.8 - Encoder Mode: Preset 8 - Input: Bosphorus 4K - Frames Per Second, more is better: a: 7.044, b: 6.765.
SVT-AV1 1.8 - Encoder Mode: Preset 4 - Input: Bosphorus 4K - Frames Per Second, more is better: a: 0.892, b: 0.858.
OpenSSL - Algorithm: ChaCha20 - byte/s, more is better: a: 15375647270, b: 14830093930.
SVT-AV1 1.8 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p - Frames Per Second, more is better: a: 26.04, b: 26.97.
Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - ms/batch, fewer is better: a: 142.62, b: 147.35.
Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - items/sec, more is better: a: 14.02, b: 13.57.
Xmrig 6.21 - Variant: Wownero - Hash Count: 1M - H/s, more is better: a: 1573.6, b: 1524.2. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
Xmrig 6.21 - Variant: KawPow - Hash Count: 1M - H/s, more is better: a: 1296.4, b: 1333.9.
Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - items/sec, more is better: a: 34.28, b: 33.34.
Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream - ms/batch, fewer is better: a: 58.77, b: 60.41.
Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - ms/batch, fewer is better: a: 58.30, b: 59.93.
Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream - items/sec, more is better: a: 34.00, b: 33.08.
Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: a: 29.98, b: 30.75.
Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream - items/sec, more is better: a: 33.34, b: 32.51.
Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: a: 465.03, b: 476.67.
Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - items/sec, more is better: a: 2.1503, b: 2.0978.
Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec, more is better: a: 233.27, b: 227.73.
Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch, fewer is better: a: 8.5472, b: 8.7547.
SVT-AV1 1.8 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p - Frames Per Second, more is better: a: 3.684, b: 3.605.
Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: a: 229.59, b: 234.54.
Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream - items/sec, more is better: a: 4.3550, b: 4.2633.
Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream - items/sec, more is better: a: 2.7047, b: 2.6521.
Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream - ms/batch, fewer is better: a: 737.65, b: 752.24.
Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec, more is better: a: 14.16, b: 13.91.
Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: a: 13.09, b: 13.31.
Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream - items/sec, more is better: a: 76.32, b: 75.06.
OpenSSL - Algorithm: SHA256 - byte/s, more is better: a: 2139384230, b: 2104039050.
Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: a: 30.02, b: 30.51.
Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream - items/sec, more is better: a: 33.29, b: 32.75.
Apache Spark TPC-H 3.5 - Scale Factor: 1 - Geometric Mean Of All Queries - Seconds, fewer is better: a: 4.7325257 (MIN: 1.72 / MAX: 20.62), b: 4.8087436 (MIN: 2.02 / MAX: 20.04).
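The TPC-H composite is a geometric mean across the 22 queries, which keeps a single slow query from dominating the score the way an arithmetic mean would. A minimal sketch with hypothetical query times (illustration only, not the actual per-query results):

```python
import statistics

# Hypothetical per-query times in seconds, for illustration
times = [2.0, 8.0, 4.0]

geo = statistics.geometric_mean(times)  # (2 * 8 * 4) ** (1/3) = 4.0
arith = statistics.mean(times)          # 14 / 3, about 4.67
print(f"geometric mean: {geo:.2f}, arithmetic mean: {arith:.2f}")
```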
Xmrig 6.21 - Variant: CryptoNight-Femto UPX2 - Hash Count: 1M - H/s, more is better: a: 1302.3, b: 1281.7.
Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch, fewer is better: a: 141.23, b: 143.49.
Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec, more is better: a: 84.71, b: 83.38.
Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch, fewer is better: a: 23.57, b: 23.94.
Xmrig 6.21 - Variant: Monero - Hash Count: 1M - H/s, more is better: a: 1308.1, b: 1290.3.
Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: a: 373.77, b: 378.91.
Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream - items/sec, more is better: a: 2.6753, b: 2.6391.
Xmrig 6.21 - Variant: CryptoNight-Heavy - Hash Count: 1M - H/s, more is better: a: 1298.8, b: 1282.1.
Xmrig 6.21 - Variant: GhostRider - Hash Count: 1M - H/s, more is better: a: 187.4, b: 185.2.
Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream - items/sec, more is better: a: 12.85, b: 12.70.
Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: a: 77.81, b: 78.72.
Apache Spark TPC-H 3.5 - Scale Factor: 10 - Geometric Mean Of All Queries - Seconds, fewer is better: a: 41.82 (MIN: 10.6 / MAX: 196.26), b: 41.34 (MIN: 10.96 / MAX: 189.4).
Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: a: 54.87, b: 55.43.
Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - items/sec, more is better: a: 18.22, b: 18.04.
Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream - ms/batch, fewer is better: a: 475.16, b: 480.05.
Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream - items/sec, more is better: a: 4.2078, b: 4.1657.
Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch, fewer is better: a: 53.52, b: 54.04.
Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec, more is better: a: 37.34, b: 36.98.
Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: a: 29.60, b: 29.89.
Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream - items/sec, more is better: a: 33.76, b: 33.44.
OpenSSL - Algorithm: AES-128-GCM - byte/s, more is better: a: 15202890820, b: 15081739060.
Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: a: 465.05, b: 468.48.
Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - items/sec, more is better: a: 2.1502, b: 2.1345.
Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - ms/batch, fewer is better: a: 926.22, b: 931.34.
Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - items/sec, more is better: a: 2.153, b: 2.142.
Java SciMark 2.2 - Computational Test: Dense LU Matrix Factorization - Mflops, more is better: a: 8173.69, b: 8211.45.
SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p - Frames Per Second, more is better: a: 254.96, b: 256.08.
Java SciMark 2.2 - Computational Test: Fast Fourier Transform - Mflops, more is better: a: 518.57, b: 516.53.
Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - items/sec, more is better: a: 19.38, b: 19.30.
Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - ms/batch, fewer is better: a: 103.17, b: 103.50.
Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream - items/sec, more is better: a: 12.45, b: 12.42.
Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: a: 80.29, b: 80.51.
SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p - Frames Per Second, more is better: a: 192.19, b: 192.67.
Java SciMark 2.2 - Computational Test: Sparse Matrix Multiply - Mflops, more is better: a: 1999.02, b: 1994.16.
Java SciMark 2.2 - Computational Test: Composite - Mflops, more is better: a: 2605.95, b: 2611.98.
Java SciMark 2.2 - Computational Test: Jacobi Successive Over-Relaxation - Mflops, more is better: a: 1403.14, b: 1401.22.
Java SciMark 2.2 - Computational Test: Monte Carlo - Mflops, more is better: a: 935.32, b: 936.54.
WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Compression Effort 5 - MP/s, more is better: a: 0.95, b: 0.95.
WebP2 Image Encode Encode Settings: Quality 95, Compression Effort 7 OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Quality 95, Compression Effort 7 b a 0.0023 0.0046 0.0069 0.0092 0.0115 0.01 0.01 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
WebP2 Image Encode Encode Settings: Quality 75, Compression Effort 7 OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Quality 75, Compression Effort 7 b a 0.0068 0.0136 0.0204 0.0272 0.034 0.03 0.03 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
Apache Spark TPC-H 3.5 - Scale Factor: 10 (Seconds, fewer is better)

  Query   a        b
  Q01     41.61    44.68
  Q02     12.79    12.19
  Q03     55.94    55.58
  Q04     48.48    48.52
  Q05     62.50    66.50
  Q06     36.56    37.06
  Q07     54.83    55.61
  Q08     62.19    60.07
  Q09     73.76    73.32
  Q10     51.23    51.32
  Q11     10.60    10.96
  Q12     48.54    47.39
  Q13     20.44    21.21
  Q14     38.60    38.96
  Q15     37.03    35.77
  Q16     11.41    11.61
  Q17     100.78   94.20
  Q18     106.44   103.68
  Q19     39.19    38.37
  Q20     49.49    47.05
  Q21     196.26   189.40
  Q22     12.72    13.09
Apache Spark TPC-H 3.5 - Scale Factor: 1 (Seconds, fewer is better)

  Query   a            b
  Q01     7.15123463   7.08784056
  Q02     3.88890910   3.41797328
  Q03     6.39238787   7.07467794
  Q04     6.71648932   7.08578300
  Q05     6.38540220   7.11330652
  Q06     3.51144052   3.24464250
  Q07     6.48225832   5.84497738
  Q08     4.97744513   5.44389009
  Q09     7.87298393   8.69179630
  Q10     5.49699688   5.96171904
  Q11     1.93730366   2.02040172
  Q12     5.02531958   4.83623695
  Q13     3.23068547   2.99688220
  Q14     3.69340801   3.97235441
  Q15     3.91055250   3.84038305
  Q16     1.72381175   2.04605532
  Q17     8.40441418   8.57745552
  Q18     10.34        10.20
  Q19     4.04202414   3.57351255
  Q20     4.86529493   5.14292622
  Q21     20.62        20.04
  Q22     2.24996281   2.04710698
Phoronix Test Suite v10.8.5