adl feb
Intel Core i7-1280P testing with a MSI MS-14C6 (E14C6IMS.115 BIOS) and MSI Intel ADL GT2 15GB on Ubuntu 22.10 via the Phoronix Test Suite.

a, n, c (all three runs used an identical configuration):
  Processor: Intel Core i7-1280P @ 4.80GHz (14 Cores / 20 Threads), Motherboard: MSI MS-14C6 (E14C6IMS.115 BIOS), Chipset: Intel Alder Lake PCH, Memory: 16GB, Disk: 1024GB Micron_3400_MTFDKBA1T0TFH, Graphics: MSI Intel ADL GT2 15GB (1450MHz), Audio: Realtek ALC274, Network: Intel Alder Lake-P PCH CNVi WiFi
  OS: Ubuntu 22.10, Kernel: 5.19.0-29-generic (x86_64), Desktop: Xfce 4.16, Display Server: X Server 1.21.1.4, OpenGL: 4.6 Mesa 22.2.1, OpenCL: OpenCL 3.0, Vulkan: 1.3.224, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1920x1080

Apache Spark 3.3 (all results in Seconds; lower is better)

Row Count: 1000000 - Partitions: 100
  Test                                        a              n              c
  SHA-512 Benchmark Time                      2.93           3.03           3.17
  Calculate Pi Benchmark                      207.95         207.11         206.32
  Calculate Pi Benchmark Using Dataframe      11.91          11.90          11.99
  Group By Test Time                          3.60           3.65           3.72
  Repartition Test Time                       2.21           2.25           2.26
  Inner Join Test Time                        1.583333601    1.570000000    1.600000000
  Broadcast Inner Join Test Time              1.35           1.32           1.36

Row Count: 1000000 - Partitions: 500
  Test                                        a              n              c
  SHA-512 Benchmark Time                      4.120000000    4.155978363    4.240000000
  Calculate Pi Benchmark                      209.93         210.21         210.11
  Calculate Pi Benchmark Using Dataframe      11.94          12.10          12.04
  Group By Test Time                          3.95           3.74           3.89
  Repartition Test Time                       3.03           3.06           3.28
  Inner Join Test Time                        2.48           2.32           2.40
  Broadcast Inner Join Test Time              1.98           1.93           1.95

Row Count: 1000000 - Partitions: 1000
  Test                                        a              n              c
  SHA-512 Benchmark Time                      4.56           4.39           4.36
  Calculate Pi Benchmark                      207.48         208.49         208.24
  Calculate Pi Benchmark Using Dataframe      11.97          12.02          11.87
  Group By Test Time                          4.61           4.75           4.58
  Repartition Test Time                       3.32           3.39           3.36
  Inner Join Test Time                        2.82           2.87           2.81
  Broadcast Inner Join Test Time              2.37           2.24           2.22

Row Count: 1000000 - Partitions: 2000
  Test                                        a              n              c
  SHA-512 Benchmark Time                      4.79           4.99           4.96
  Calculate Pi Benchmark                      206.58         213.35         207.73
  Calculate Pi Benchmark Using Dataframe      12.00          12.70          11.89
  Group By Test Time                          5.17           5.29           5.07
  Repartition Test Time                       3.595425861    3.600000000    3.650000000
  Inner Join Test Time                        3.349109111    3.500000000    3.490000000
  Broadcast Inner Join Test Time              2.74           2.81           2.71

Row Count: 10000000 - Partitions: 100
  Test                                        a              n              c
  SHA-512 Benchmark Time                      15.51          15.41          15.16
  Calculate Pi Benchmark                      207.56         208.17         208.62
  Calculate Pi Benchmark Using Dataframe      11.84          12.02          11.80
  Group By Test Time                          8.01           8.39           8.48
  Repartition Test Time                       13.07          11.79          11.63
  Inner Join Test Time                        14.09          13.71          13.41
  Broadcast Inner Join Test Time              14.05          13.18          12.59
Row Count: 10000000 - Partitions: 500
  Test                                        a              n              c
  SHA-512 Benchmark Time                      16.49          16.51          16.58
  Calculate Pi Benchmark                      207.45         207.76         209.01
  Calculate Pi Benchmark Using Dataframe      11.87          12.03          11.88
  Group By Test Time                          8.93           9.02           8.88
  Repartition Test Time                       11.95          12.10          12.53
  Inner Join Test Time                        14.41          14.60          15.33
  Broadcast Inner Join Test Time              12.70          13.55          14.39

Row Count: 10000000 - Partitions: 1000
  Test                                        a              n              c
  SHA-512 Benchmark Time                      15.93          16.03          15.95
  Calculate Pi Benchmark                      207.52         207.47         207.56
  Calculate Pi Benchmark Using Dataframe      11.90          11.85          11.79
  Group By Test Time                          9.05           10.43          8.33
  Repartition Test Time                       12.18          11.11          12.07
  Inner Join Test Time                        13.44          14.46          13.77
  Broadcast Inner Join Test Time              13.10          12.98          13.54

Row Count: 10000000 - Partitions: 2000
  Test                                        a              n              c
  SHA-512 Benchmark Time                      16.77          16.69          16.87
  Calculate Pi Benchmark                      207.55         207.76         208.25
  Calculate Pi Benchmark Using Dataframe      11.85          11.74          11.84
  Group By Test Time                          9.21           9.19           9.34
  Repartition Test Time                       12.04          12.26          12.21
  Inner Join Test Time                        14.32          14.48          14.15
  Broadcast Inner Join Test Time              12.84          13.16          13.29
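The Spark results above time a fixed set of operations (SHA-512 hashing over the generated rows, a Pi calculation with and without the DataFrame API, a group-by aggregation, a repartition, and inner joins with and without broadcasting) at each row-count/partition combination. The snippet below is a minimal PySpark sketch of what such operations look like; it is not the script the Phoronix test profile actually runs, and the row count, partition count, column names, and timing loop are illustrative assumptions (the Pi calculations are omitted for brevity).

# Minimal PySpark sketch of the kinds of operations timed above.
# NOT the exact Phoronix test script; counts and column names are assumptions.
import time
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast, col, sha2

ROW_COUNT = 1_000_000   # matches the smaller row-count configuration above
PARTITIONS = 100

spark = SparkSession.builder.appName("spark-benchmark-sketch").getOrCreate()

# Generate a numeric dataframe with the requested partitioning and a join key.
df = spark.range(ROW_COUNT, numPartitions=PARTITIONS) \
          .withColumn("group_key", col("id") % 1000)

def timed(label, fn):
    start = time.time()
    fn()
    print(f"{label}: {time.time() - start:.2f} s")

# SHA-512: compute a digest per row; the filter forces the column to be evaluated.
timed("SHA-512", lambda: df.select(
    sha2(col("id").cast("string"), 512).alias("digest"))
    .filter(col("digest").isNotNull()).count())

# Group By: aggregate by key, then trigger the job with an action.
timed("Group By", lambda: df.groupBy("group_key").count().count())

# Repartition: shuffle the data into a new partition layout.
timed("Repartition", lambda: df.repartition(PARTITIONS * 2).count())

# Inner join and broadcast inner join against a small lookup table.
lookup = spark.range(1000).withColumnRenamed("id", "group_key")
timed("Inner Join", lambda: df.join(lookup, "group_key").count())
timed("Broadcast Inner Join",
      lambda: df.join(broadcast(lookup), "group_key").count())

spark.stop()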
Memcached 1.6.18 (Ops/sec; higher is better)

  Set To Get Ratio                            a              n              c
  1:1                                         1767830.19     1767278.13     1767103.07
  1:5                                         1869070.00     1781728.27     1823583.08
  1:10                                        1742709.67     1739460.62     1750912.91
  1:100                                       1665301.70     1686349.51     1662246.52
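The Memcached set-to-get ratio describes the mix of writes and reads the load generator issues against the server, so 1:10 means one set for every ten gets. The snippet below is only a single-threaded illustration of that mix using the pymemcache client; it is not the benchmark harness used for the numbers above, and the host, port, key names, and value size are assumptions.

# Illustrative sketch of a 1:10 set-to-get operation mix against Memcached.
# NOT the load generator used for the results above; assumes a local
# memcached on 127.0.0.1:11211 and the pymemcache client library.
import time
from pymemcache.client.base import Client

SET_TO_GET_RATIO = (1, 10)   # one write for every ten reads
TOTAL_OPS = 100_000

client = Client(("127.0.0.1", 11211))

sets, gets = SET_TO_GET_RATIO
ops_done = 0
start = time.time()
while ops_done < TOTAL_OPS:
    key = f"bench:{ops_done % 1000}"
    for _ in range(sets):        # the "1" in 1:10
        client.set(key, b"x" * 100)
        ops_done += 1
    for _ in range(gets):        # the "10" in 1:10
        client.get(key)
        ops_done += 1

elapsed = time.time() - start
print(f"{ops_done / elapsed:,.0f} ops/sec (single-threaded, illustration only)")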
Neural Magic DeepSparse 1.3.2

Model: NLP Document Classification, oBERT base uncased on IMDB
  Scenario                     Metric                          a            n            c
  Asynchronous Multi-Stream    items/sec (higher is better)    4.5104       4.5519       4.5475
  Asynchronous Multi-Stream    ms/batch (lower is better)      1534.45      1508.55      1520.54
  Synchronous Single-Stream    items/sec (higher is better)    4.2064       4.2403       4.1882
  Synchronous Single-Stream    ms/batch (lower is better)      237.73       235.82       238.76

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased
  Scenario                     Metric                          a            n            c
  Asynchronous Multi-Stream    items/sec (higher is better)    67.19        66.91        67.14
  Asynchronous Multi-Stream    ms/batch (lower is better)      104.10       104.53       104.02
  Synchronous Single-Stream    items/sec (higher is better)    44.09        44.31        44.24
  Synchronous Single-Stream    ms/batch (lower is better)      22.67        22.56        22.59

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90
  Scenario                     Metric                          a            n            c
  Asynchronous Multi-Stream    items/sec (higher is better)    19.78        18.60        18.83
  Asynchronous Multi-Stream    ms/batch (lower is better)      351.50       374.06       367.85
  Synchronous Single-Stream    items/sec (higher is better)    14.83        14.72        14.87
  Synchronous Single-Stream    ms/batch (lower is better)      67.41        67.93        67.25

Model: CV Detection, YOLOv5s COCO
  Scenario                     Metric                          a            n            c
  Asynchronous Multi-Stream    items/sec (higher is better)    28.03        28.42        28.17
  Asynchronous Multi-Stream    ms/batch (lower is better)      248.43       244.89       247.38
  Synchronous Single-Stream    items/sec (higher is better)    23.45        23.19        23.20
  Synchronous Single-Stream    ms/batch (lower is better)      42.62        43.11        43.08

Model: CV Classification, ResNet-50 ImageNet
  Scenario                     Metric                          a            n            c
  Asynchronous Multi-Stream    items/sec (higher is better)    67.19        60.93        62.17
  Asynchronous Multi-Stream    ms/batch (lower is better)      103.93       114.78       112.44
  Synchronous Single-Stream    items/sec (higher is better)    50.53        43.66        43.45
  Synchronous Single-Stream    ms/batch (lower is better)      19.78        22.90        23.01

Model: NLP Text Classification, DistilBERT mnli
  Scenario                     Metric                          a            n            c
  Asynchronous Multi-Stream    items/sec (higher is better)    43.45        41.00        40.82
  Asynchronous Multi-Stream    ms/batch (lower is better)      160.84       170.19       171.25
  Synchronous Single-Stream    items/sec (higher is better)    33.35        33.13        33.22
  Synchronous Single-Stream    ms/batch (lower is better)      29.98        30.17        30.10

Model: CV Segmentation, 90% Pruned YOLACT Pruned
  Scenario                     Metric                          a            n            c
  Asynchronous Multi-Stream    items/sec (higher is better)    6.3049       6.1199       5.7934
  Asynchronous Multi-Stream    ms/batch (lower is better)      1102.50      1137.90      1169.90
  Synchronous Single-Stream    items/sec (higher is better)    5.7188       5.6899       5.7275
  Synchronous Single-Stream    ms/batch (lower is better)      174.84       175.73       174.58

Model: NLP Text Classification, BERT base uncased SST2
  Scenario                     Metric                          a            n            c
  Asynchronous Multi-Stream    items/sec (higher is better)    19.02        19.52        19.18
  Asynchronous Multi-Stream    ms/batch (lower is better)      365.11       357.48       361.73
  Synchronous Single-Stream    items/sec (higher is better)    16.16        16.24        16.20
  Synchronous Single-Stream    ms/batch (lower is better)      61.89        61.59        61.74

Model: NLP Token Classification, BERT base uncased conll2003
  Scenario                     Metric                          a            n            c
  Asynchronous Multi-Stream    items/sec (higher is better)    4.5054       4.4135       4.4061
  Asynchronous Multi-Stream    ms/batch (lower is better)      1545.76      1523.22      1529.39
  Synchronous Single-Stream    items/sec (higher is better)    4.1644       4.1721       4.1810
  Synchronous Single-Stream    ms/batch (lower is better)      240.13       239.68       239.17
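Two reading notes on the DeepSparse tables: for the Synchronous Single-Stream scenario the two metrics are effectively reciprocals (run a of DistilBERT mnli: 1000 / 33.35 items/sec is roughly 30.0 ms/batch, matching the 29.98 ms reported), while the Asynchronous Multi-Stream scenario overlaps several concurrent request streams, which is why its per-batch latency is much higher even where aggregate throughput improves. The sketch below shows roughly how a single-stream measurement could be driven through DeepSparse's Python Pipeline API; the SparseZoo model stub, task name, and iteration counts are placeholders and assumptions rather than the configuration the Phoronix test profile uses.

# Rough sketch of timing a DeepSparse text-classification model in the
# synchronous single-stream style reported above. The model stub and the
# warmup/iteration counts are illustrative assumptions.
import time
from deepsparse import Pipeline

# Placeholder: substitute a real SparseZoo stub or a local ONNX model path.
MODEL_PATH = "zoo:some/text_classification/stub"

pipeline = Pipeline.create(
    task="text_classification",   # matches e.g. the DistilBERT mnli entries
    model_path=MODEL_PATH,
    batch_size=1,
)

sample = ["The benchmark results look consistent across runs."]

# Warm up, then time sequential (single-stream) inference.
for _ in range(10):
    pipeline(sequences=sample)

iterations = 200
start = time.time()
for _ in range(iterations):
    pipeline(sequences=sample)
elapsed = time.time() - start

items_per_sec = iterations / elapsed
print(f"{items_per_sec:.2f} items/sec, {1000 / items_per_sec:.2f} ms/batch")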