dfgg

Tests for a future article. Intel Core i7-1065G7 testing with a Dell 06CDVY (1.0.9 BIOS) and Intel Iris Plus ICL GT2 16GB on Ubuntu 23.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2312172-NE-DFGG9428382
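A minimal sketch of that workflow on an Ubuntu system, assuming the phoronix-test-suite package is available from the distribution repositories (any install route that provides the phoronix-test-suite command behaves the same way):

    sudo apt install phoronix-test-suite
    # Fetches this result file, installs the same test profiles locally,
    # runs them, and compares your numbers against the a/b runs below.
    phoronix-test-suite benchmark 2312172-NE-DFGG9428382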

Statistics

Show Overall Harmonic Mean(s)
Show Overall Geometric Mean
Show Geometric Means Per-Suite/Category
Show Wins / Losses Counts (Pie Chart)
Normalize Results
Remove Outliers Before Calculating Averages

Graph Settings

Force Line Graphs Where Applicable
Convert To Scalar Where Applicable
Prefer Vertical Bar Graphs

Multi-Way Comparison

Condense Multi-Option Tests Into Single Result Graphs

Table

Show Detailed System Result Table

Run Management

  Result Identifier   Date Run             Test Duration
  a                   December 16 2023     14 Hours, 49 Minutes
  b                   December 17 2023     14 Hours, 49 Minutes


System configuration (identical for runs a and b):

  Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads)
  Motherboard: Dell 06CDVY (1.0.9 BIOS)
  Chipset: Intel Ice Lake-LP DRAM
  Memory: 16GB
  Disk: Toshiba KBG40ZPZ512G NVMe 512GB
  Graphics: Intel Iris Plus ICL GT2 16GB (1100MHz)
  Audio: Realtek ALC289
  Network: Intel Ice Lake-LP PCH CNVi WiFi
  OS: Ubuntu 23.04
  Kernel: 6.2.0-36-generic (x86_64)
  Desktop: GNOME Shell 44.3
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 23.0.4-0ubuntu1~23.04.1
  OpenCL: OpenCL 3.0
  Compiler: GCC 12.3.0
  File-System: ext4
  Screen Resolution: 1920x1200

Results

Java SciMark 2.2 - Computational Test - Mflops (higher is better)
  Composite                             a: 2605.95      b: 2611.98
  Monte Carlo                           a: 935.32       b: 936.54
  Fast Fourier Transform                a: 518.57       b: 516.53
  Sparse Matrix Multiply                a: 1999.02      b: 1994.16
  Dense LU Matrix Factorization         a: 8173.69      b: 8211.45
  Jacobi Successive Over-Relaxation     a: 1403.14      b: 1401.22

WebP2 Image Encode 20220823 - Encode Settings - MP/s (higher is better)
  Default                               a: 2.60         b: 2.45
  Quality 75, Compression Effort 7      a: 0.03         b: 0.03
  Quality 95, Compression Effort 7      a: 0.01         b: 0.01
  Quality 100, Compression Effort 5     a: 0.95         b: 0.95
  Quality 100, Lossless Compression     (no values recorded in this result file)

OpenSSL - Algorithm (higher is better)
  RSA4096 (sign/s)                      a: 769.0        b: 727.2
  RSA4096 (verify/s)                    a: 45019.3      b: 43229.0
  AES-128-GCM (byte/s)                  a: 15202890820  b: 15081739060
  AES-256-GCM (byte/s)                  a: 13364107950  b: 12548378350
  ChaCha20 (byte/s)                     a: 15375647270  b: 14830093930
  ChaCha20-Poly1305 (byte/s)            a: 10536056700  b: 9830306880
  SHA256 (byte/s)                       a: 2139384230   b: 2104039050
  SHA512 (byte/s)                       a: 856037790    b: 813051380

Xmrig 6.21 - Variant, Hash Count: 1M - H/s (higher is better)
  KawPow                                a: 1296.4       b: 1333.9
  Monero                                a: 1308.1       b: 1290.3
  Wownero                               a: 1573.6       b: 1524.2
  GhostRider                            a: 187.4        b: 185.2
  CryptoNight-Heavy                     a: 1298.8       b: 1282.1
  CryptoNight-Femto UPX2                a: 1302.3       b: 1281.7

LeelaChessZero 0.30 - Backend - Nodes Per Second (higher is better)
  BLAS                                  a: 32           b: 45
  Eigen                                 a: 20           b: 24

Neural Magic DeepSparse 1.6 (items/sec: higher is better; ms/batch: lower is better)
  Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
    items/sec   a: 2.1073     b: 1.9736
    ms/batch    a: 946.79     b: 1010.44
  Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
    items/sec   a: 2.1503     b: 2.0978
    ms/batch    a: 465.03     b: 476.67
  Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
    items/sec   a: 84.71      b: 83.38
    ms/batch    a: 23.57      b: 23.94
  Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
    items/sec   a: 76.32      b: 75.06
    ms/batch    a: 13.09      b: 13.31
  Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
    items/sec   a: 19.38      b: 19.30
    ms/batch    a: 103.17     b: 103.50
  Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
    items/sec   a: 18.22      b: 18.04
    ms/batch    a: 54.87      b: 55.43
  Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
    items/sec   a: 2.153      b: 2.142
    ms/batch    a: 926.22     b: 931.34
  Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
    items/sec   a: 2.1502     b: 2.1345
    ms/batch    a: 465.05     b: 468.48
  Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
    items/sec   a: 2.7047     b: 2.6521
    ms/batch    a: 737.65     b: 752.24
  Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream
    items/sec   a: 2.6753     b: 2.6391
    ms/batch    a: 373.77     b: 378.91
  Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
    items/sec   a: 37.34      b: 36.98
    ms/batch    a: 53.52      b: 54.04
  Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
    items/sec   a: 33.76      b: 33.44
    ms/batch    a: 29.60      b: 29.89
  Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
    items/sec   a: 34.00      b: 33.08
    ms/batch    a: 58.77      b: 60.41
  Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
    items/sec   a: 33.34      b: 32.51
    ms/batch    a: 29.98      b: 30.75
  Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
    items/sec   a: 233.27     b: 227.73
    ms/batch    a: 8.5472     b: 8.7547
  Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
    items/sec   a: 209.81     b: 194.45
    ms/batch    a: 4.7530     b: 5.1288
  Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
    items/sec   a: 34.28      b: 33.34
    ms/batch    a: 58.30      b: 59.93
  Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
    items/sec   a: 33.29      b: 32.75
    ms/batch    a: 30.02      b: 30.51
  Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
    items/sec   a: 14.02      b: 13.57
    ms/batch    a: 142.62     b: 147.35
  Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
    items/sec   a: 12.45      b: 12.42
    ms/batch    a: 80.29      b: 80.51
  Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
    items/sec   a: 14.16      b: 13.91
    ms/batch    a: 141.23     b: 143.49
  Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
    items/sec   a: 12.85      b: 12.70
    ms/batch    a: 77.81      b: 78.72
  Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
    items/sec   a: 4.2078     b: 4.1657
    ms/batch    a: 475.16     b: 480.05
  Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
    items/sec   a: 4.3550     b: 4.2633
    ms/batch    a: 229.59     b: 234.54

SVT-AV1 1.8 - Encoder Mode / Input - Frames Per Second (higher is better)
  Preset 4 - Bosphorus 4K               a: 0.892        b: 0.858
  Preset 8 - Bosphorus 4K               a: 7.044        b: 6.765
  Preset 12 - Bosphorus 4K              a: 31.31        b: 28.23
  Preset 13 - Bosphorus 4K              a: 33.86        b: 31.87
  Preset 4 - Bosphorus 1080p            a: 3.684        b: 3.605
  Preset 8 - Bosphorus 1080p            a: 26.04        b: 26.97
  Preset 12 - Bosphorus 1080p           a: 192.19       b: 192.67
  Preset 13 - Bosphorus 1080p           a: 254.96       b: 256.08

ScyllaDB 5.2.9 - Op/s (higher is better)
  Test: Writes                          a: 27851        b: 25475

Apache Spark TPC-H 3.5 - Seconds (lower is better)
  Scale Factor: 1 - Geometric Mean Of All Queries    a: 4.7325257    b: 4.8087436
  Scale Factor: 1 - Q01                 a: 7.15123463   b: 7.08784056
  Scale Factor: 1 - Q02                 a: 3.88890910   b: 3.41797328
  Scale Factor: 1 - Q03                 a: 6.39238787   b: 7.07467794
  Scale Factor: 1 - Q04                 a: 6.71648932   b: 7.08578300
  Scale Factor: 1 - Q05                 a: 6.38540220   b: 7.11330652
  Scale Factor: 1 - Q06                 a: 3.51144052   b: 3.24464250
  Scale Factor: 1 - Q07                 a: 6.48225832   b: 5.84497738
  Scale Factor: 1 - Q08                 a: 4.97744513   b: 5.44389009
  Scale Factor: 1 - Q09                 a: 7.87298393   b: 8.69179630
  Scale Factor: 1 - Q10                 a: 5.49699688   b: 5.96171904
  Scale Factor: 1 - Q11                 a: 1.93730366   b: 2.02040172
  Scale Factor: 1 - Q12                 a: 5.02531958   b: 4.83623695
  Scale Factor: 1 - Q13                 a: 3.23068547   b: 2.99688220
  Scale Factor: 1 - Q14                 a: 3.69340801   b: 3.97235441
  Scale Factor: 1 - Q15                 a: 3.91055250   b: 3.84038305
  Scale Factor: 1 - Q16                 a: 1.72381175   b: 2.04605532
  Scale Factor: 1 - Q17                 a: 8.40441418   b: 8.57745552
  Scale Factor: 1 - Q18                 a: 10.34        b: 10.20
  Scale Factor: 1 - Q19                 a: 4.04202414   b: 3.57351255
  Scale Factor: 1 - Q20                 a: 4.86529493   b: 5.14292622
  Scale Factor: 1 - Q21                 a: 20.62        b: 20.04
  Scale Factor: 1 - Q22                 a: 2.24996281   b: 2.04710698
  Scale Factor: 10 - Geometric Mean Of All Queries   a: 41.82        b: 41.34
  Scale Factor: 10 - Q01                a: 41.61        b: 44.68
  Scale Factor: 10 - Q02                a: 12.79        b: 12.19
  Scale Factor: 10 - Q03                a: 55.94        b: 55.58
  Scale Factor: 10 - Q04                a: 48.48        b: 48.52
  Scale Factor: 10 - Q05                a: 62.50        b: 66.50
  Scale Factor: 10 - Q06                a: 36.56        b: 37.06
  Scale Factor: 10 - Q07                a: 54.83        b: 55.61
  Scale Factor: 10 - Q08                a: 62.19        b: 60.07
  Scale Factor: 10 - Q09                a: 73.76        b: 73.32
  Scale Factor: 10 - Q10                a: 51.23        b: 51.32
  Scale Factor: 10 - Q11                a: 10.60        b: 10.96
  Scale Factor: 10 - Q12                a: 48.54        b: 47.39
  Scale Factor: 10 - Q13                a: 20.44        b: 21.21
  Scale Factor: 10 - Q14                a: 38.60        b: 38.96
  Scale Factor: 10 - Q15                a: 37.03        b: 35.77
  Scale Factor: 10 - Q16                a: 11.41        b: 11.61
  Scale Factor: 10 - Q17                a: 100.78       b: 94.20
  Scale Factor: 10 - Q18                a: 106.44       b: 103.68
  Scale Factor: 10 - Q19                a: 39.19        b: 38.37
  Scale Factor: 10 - Q20                a: 49.49        b: 47.05
  Scale Factor: 10 - Q21                a: 196.26       b: 189.40
  Scale Factor: 10 - Q22                a: 12.72        b: 13.09
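A note on the Apache Spark TPC-H composites: the geometric mean of n query times is (t_1 × t_2 × ... × t_n)^(1/n). Exactly which samples the harness feeds into its "Geometric Mean Of All Queries" figure is not shown in this export, so recomputing it from the printed per-query averages will not necessarily reproduce the composite to the last digit. Purely as an illustration, a geometric mean over one value per line can be computed with:

    # hypothetical input file: one query time in seconds per line
    awk '{ s += log($1) } END { printf "%.4f\n", exp(s / NR) }' query_times.txt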