7763 2204
AMD EPYC 7763 64-Core testing with an AMD DAYTONA_X (RYM1009B BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

a, b, c:

  Processor: AMD EPYC 7763 64-Core @ 2.45GHz (64 Cores / 128 Threads), Motherboard: AMD DAYTONA_X (RYM1009B BIOS), Chipset: AMD Starship/Matisse, Memory: 256GB, Disk: 800GB INTEL SSDPF21Q800GB, Graphics: ASPEED, Monitor: VE228, Network: 2 x Mellanox MT27710

  OS: Ubuntu 22.04, Kernel: 6.2.0-phx (x86_64), Desktop: GNOME Shell 42.5, Display Server: X Server 1.21.1.3, Vulkan: 1.3.224, Compiler: GCC 11.3.0 + LLVM 14.0.0, File-System: ext4, Screen Resolution: 1920x1080

srsRAN Project 23.5
Test: Downlink Processor Benchmark
Mbps > Higher Is Better
a . 657.7 |====================================================================
b . 658.1 |====================================================================
c . 619.3 |================================================================

srsRAN Project 23.5
Test: PUSCH Processor Benchmark, Throughput Total
Mbps > Higher Is Better
a . 9682.1 |===================================================================
b . 9718.6 |===================================================================
c . 9727.1 |===================================================================

srsRAN Project 23.5
Test: PUSCH Processor Benchmark, Throughput Thread
Mbps > Higher Is Better
a . 211.1 |====================================================================
b . 208.2 |===================================================================
c . 210.8 |====================================================================

VVenC 1.9
Video Input: Bosphorus 4K - Video Preset: Fast
Frames Per Second > Higher Is Better
a . 5.991 |====================================================================
b . 5.993 |====================================================================
c . 5.976 |====================================================================

VVenC 1.9
Video Input: Bosphorus 4K - Video Preset: Faster
Frames Per Second > Higher Is Better
a . 10.65 |===================================================================
b . 10.82 |====================================================================
c . 10.82 |====================================================================

VVenC 1.9
Video Input: Bosphorus 1080p - Video Preset: Fast
Frames Per Second > Higher Is Better
a . 16.08 |====================================================================
b . 16.09 |====================================================================
c . 16.06 |====================================================================

VVenC 1.9
Video Input: Bosphorus 1080p - Video Preset: Faster
Frames Per Second > Higher Is Better
a . 29.35 |====================================================================
b . 29.39 |====================================================================
c . 29.47 |====================================================================

Timed GCC Compilation 13.2
Time To Compile
Seconds < Lower Is Better
a . 1020.13 |==================================================================
b . 1020.85 |==================================================================
c . 1020.22 |==================================================================

Apache CouchDB 3.3.2
Bulk Size: 100 - Inserts: 1000 - Rounds: 30
Seconds < Lower Is Better
a . 101.58 |===================================================================

Apache CouchDB 3.3.2
Bulk Size: 100 - Inserts: 3000 - Rounds: 30
Seconds < Lower Is Better
a . 346.09 |===================================================================

Apache CouchDB 3.3.2
Bulk Size: 300 - Inserts: 1000 - Rounds: 30
Seconds < Lower Is Better
a . 169.51 |===================================================================

Apache CouchDB 3.3.2
Bulk Size: 300 - Inserts: 3000 - Rounds: 30
Seconds < Lower Is Better
a . 572.13 |===================================================================

Apache CouchDB 3.3.2
Bulk Size: 500 - Inserts: 1000 - Rounds: 30
Seconds < Lower Is Better
a . 339.97 |===================================================================

Apache CouchDB 3.3.2
Bulk Size: 500 - Inserts: 3000 - Rounds: 30
Seconds < Lower Is Better
a . 2390.93 |==================================================================

Apache IoTDB 1.1.2
Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200
point/sec > Higher Is Better
a . 644019.72 |==============================================================
b . 648308.27 |==============================================================
c . 667880.96 |================================================================

Apache IoTDB 1.1.2
Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200
Average Latency < Lower Is Better
a . 17.45 |====================================================================
b . 17.28 |===================================================================
c . 16.35 |================================================================

Apache IoTDB 1.1.2
Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500
point/sec > Higher Is Better
a . 1038515.62 |=============================================================
b . 1069145.79 |===============================================================
c . 1044153.44 |==============================================================

Apache IoTDB 1.1.2
Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500
Average Latency < Lower Is Better
a . 34.36 |====================================================================
b . 32.92 |=================================================================
c . 34.08 |===================================================================

Apache IoTDB 1.1.2
Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200
point/sec > Higher Is Better
a . 898967.08 |===========================================================
b . 978176.76 |================================================================
c . 870795.92 |=========================================================

Apache IoTDB 1.1.2
Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200
Average Latency < Lower Is Better
a . 15.24 |================================================================
b . 13.60 |==========================================================
c . 16.07 |====================================================================

Apache IoTDB 1.1.2
Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500
point/sec > Higher Is Better
a . 1232509.19 |==============================================================
b . 1226219.88 |=============================================================
c . 1261385.89 |===============================================================

Apache IoTDB 1.1.2
Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500
Average Latency < Lower Is Better
a . 33.50 |===================================================================
b . 33.86 |====================================================================
c . 32.71 |==================================================================

Apache IoTDB 1.1.2
Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200
point/sec > Higher Is Better
a . 1182440.62 |======================================================
b . 1365831.50 |===============================================================
c . 1367763.49 |===============================================================

Apache IoTDB 1.1.2
Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200
Average Latency < Lower Is Better
a . 13.54 |====================================================================
b . 11.63 |==========================================================
c . 11.83 |===========================================================

Apache IoTDB 1.1.2
Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500
point/sec > Higher Is Better
a . 1636128.73 |=============================================================
b . 1686943.16 |===============================================================
c . 1446487.70 |======================================================

Apache IoTDB 1.1.2
Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500
Average Latency < Lower Is Better
a . 27.1 |============================================================
b . 26.0 |=========================================================
c . 31.4 |=====================================================================

Apache IoTDB 1.1.2
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200
point/sec > Higher Is Better
a . 39287432.92 |=============================================================
b . 38401769.17 |============================================================
c . 39945212.99 |==============================================================

Apache IoTDB 1.1.2
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200
Average Latency < Lower Is Better
a . 36.04 |===================================================================
b . 36.85 |====================================================================
c . 35.01 |=================================================================

Apache IoTDB 1.1.2
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500
point/sec > Higher Is Better
a . 51316464.44 |=============================================================
b . 50507747.12 |============================================================
c . 52464142.83 |==============================================================

Apache IoTDB 1.1.2
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500
Average Latency < Lower Is Better
a . 81.20 |===================================================================
b . 82.26 |====================================================================
c . 79.16 |=================================================================

Apache IoTDB 1.1.2
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200
point/sec > Higher Is Better
a . 46437377.67 |=============================================================
b . 47245476.78 |==============================================================
c . 46674344.69 |=============================================================

Apache IoTDB 1.1.2
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200
Average Latency < Lower Is Better
a . 35.09 |====================================================================
b . 33.84 |==================================================================
c . 34.79 |===================================================================

Apache IoTDB 1.1.2
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500
point/sec > Higher Is Better
a . 42048733.22 |============================================================
b . 41987111.39 |============================================================
c . 43363203.76 |==============================================================

Apache IoTDB 1.1.2
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500
Average Latency < Lower Is Better
a . 109.38 |==================================================================
b . 110.88 |===================================================================
c . 106.73 |================================================================

Apache IoTDB 1.1.2
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200
point/sec > Higher Is Better
a . 51341708.85 |==============================================================
b . 50045888.98 |============================================================
c . 49201448.81 |===========================================================

Apache IoTDB 1.1.2
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200
Average Latency < Lower Is Better
a . 35.05 |================================================================
b . 36.10 |==================================================================
c . 37.12 |====================================================================

Apache IoTDB 1.1.2
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500
point/sec > Higher Is Better
a . 56935634.55 |===========================================================
b . 59505306.55 |==============================================================
c . 56463717.54 |===========================================================

Apache IoTDB 1.1.2
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500
Average Latency < Lower Is Better
a . 81.81 |===================================================================
b . 79.83 |=================================================================
c . 83.14 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 37.61 |====================================================================
b . 37.53 |====================================================================
c . 37.58 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 840.54 |===================================================================
b . 840.95 |===================================================================
c . 840.12 |===================================================================

Neural Magic DeepSparse 1.5
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 19.95 |====================================================================
b . 20.03 |====================================================================
c . 20.02 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 50.13 |====================================================================
b . 49.92 |====================================================================
c . 49.93 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 1105.38 |==================================================================
b . 1104.04 |==================================================================
c . 1103.36 |==================================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 28.92 |====================================================================
b . 28.95 |====================================================================
c . 28.96 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 173.95 |===================================================================
b . 172.75 |===================================================================
c . 172.07 |==================================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 5.7470 |==================================================================
b . 5.7863 |===================================================================
c . 5.8092 |===================================================================

Neural Magic DeepSparse 1.5
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 489.77 |===================================================================
b . 486.17 |===================================================================
c . 482.13 |==================================================================

Neural Magic DeepSparse 1.5
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 65.28 |===================================================================
b . 65.74 |===================================================================
c . 66.28 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 86.61 |====================================================================
b . 86.39 |====================================================================
c . 86.32 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 11.54 |====================================================================
b . 11.57 |====================================================================
c . 11.58 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 143.07 |===================================================================
b . 143.21 |===================================================================
c . 143.57 |===================================================================

Neural Magic DeepSparse 1.5
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 223.47 |===================================================================
b . 223.37 |===================================================================
c . 222.82 |===================================================================

Neural Magic DeepSparse 1.5
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 39.42 |====================================================================
b . 39.56 |====================================================================
c . 39.51 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 25.36 |====================================================================
b . 25.27 |====================================================================
c . 25.30 |====================================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 468.14 |===================================================================
b . 467.97 |===================================================================
c . 468.33 |===================================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 68.29 |====================================================================
b . 68.28 |====================================================================
c . 68.25 |====================================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 159.85 |===================================================================
b . 159.75 |===================================================================
c . 160.59 |===================================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 6.2525 |===================================================================
b . 6.2566 |===================================================================
c . 6.2238 |===================================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 3814.52 |==================================================================
b . 3824.30 |==================================================================
c . 3823.08 |==================================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 8.3665 |===================================================================
b . 8.3441 |===================================================================
c . 8.3468 |===================================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 723.71 |==================================================================
b . 732.11 |===================================================================
c . 731.51 |===================================================================

Neural Magic DeepSparse 1.5
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 1.3784 |===================================================================
b . 1.3623 |==================================================================
c . 1.3634 |==================================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 225.31 |===================================================================
b . 225.46 |===================================================================
c . 225.80 |===================================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 141.70 |===================================================================
b . 141.62 |===================================================================
c . 141.48 |===================================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 119.80 |===================================================================
b . 119.88 |===================================================================
c . 119.98 |===================================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 8.3423 |===================================================================
b . 8.3368 |===================================================================
c . 8.3300 |===================================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 46.72 |====================================================================
b . 46.57 |====================================================================
c . 46.90 |====================================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 681.28 |===================================================================
b . 679.82 |===================================================================
c . 679.81 |===================================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 24.44 |====================================================================
b . 24.57 |====================================================================
c . 24.53 |====================================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 40.90 |====================================================================
b . 40.70 |====================================================================
c . 40.76 |====================================================================

Neural Magic DeepSparse 1.5
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 467.98 |===================================================================
b . 468.11 |===================================================================
c . 467.64 |===================================================================

Neural Magic DeepSparse 1.5
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 68.32 |====================================================================
b . 68.27 |====================================================================
c . 68.34 |====================================================================

Neural Magic DeepSparse 1.5
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 159.97 |===================================================================
b . 159.92 |===================================================================
c . 160.70 |===================================================================

Neural Magic DeepSparse 1.5
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 6.2481 |===================================================================
b . 6.2495 |===================================================================
c . 6.2195 |===================================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 227.42 |===================================================================
b . 227.64 |===================================================================
c . 227.57 |===================================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 140.40 |===================================================================
b . 140.26 |===================================================================
c . 140.30 |===================================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 120.36 |===================================================================
b . 120.18 |===================================================================
c . 120.58 |===================================================================

Neural Magic DeepSparse 1.5
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 8.3056 |===================================================================
b . 8.3182 |===================================================================
c . 8.2903 |===================================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 326.31 |===================================================================
b . 326.54 |===================================================================
c . 326.41 |===================================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 97.87 |====================================================================
b . 97.84 |====================================================================
c . 97.87 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 97.36 |====================================================================
b . 97.61 |====================================================================
c . 96.88 |===================================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 10.26 |====================================================================
b . 10.24 |====================================================================
c . 10.31 |====================================================================

Neural Magic DeepSparse 1.5
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 53.48 |====================================================================
b . 53.55 |====================================================================
c . 53.62 |====================================================================

Neural Magic DeepSparse 1.5
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 597.97 |===================================================================
b . 596.81 |===================================================================
c . 596.53 |===================================================================

Neural Magic DeepSparse 1.5
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 28.63 |====================================================================
b . 28.63 |====================================================================
c . 28.64 |====================================================================

Neural Magic DeepSparse 1.5
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 34.91 |====================================================================
b . 34.91 |====================================================================
c . 34.90 |====================================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 574.96 |===================================================================
b . 575.29 |===================================================================
c . 575.12 |===================================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 55.58 |====================================================================
b . 55.56 |====================================================================
c . 55.58 |====================================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 94.51 |====================================================================
b . 94.04 |====================================================================
c . 94.55 |====================================================================

Neural Magic DeepSparse 1.5
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 10.58 |====================================================================
b . 10.63 |====================================================================
c . 10.57 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 166.00 |===================================================================
b . 166.22 |===================================================================
c . 166.06 |===================================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 192.35 |===================================================================
b . 192.16 |===================================================================
c . 192.21 |===================================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 53.78 |====================================================================
b . 53.70 |====================================================================
c . 53.95 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 18.59 |====================================================================
b . 18.61 |====================================================================
c . 18.53 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
items/sec > Higher Is Better
a . 37.61 |====================================================================
b . 37.54 |====================================================================
c . 37.58 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
ms/batch < Lower Is Better
a . 841.48 |===================================================================
b . 840.26 |===================================================================
c . 840.44 |===================================================================

Neural Magic DeepSparse 1.5
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
items/sec > Higher Is Better
a . 20.07 |====================================================================
b . 20.03 |====================================================================
c . 20.05 |====================================================================

Neural Magic DeepSparse 1.5
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
ms/batch < Lower Is Better
a . 49.81 |====================================================================
b . 49.91 |====================================================================
c .
49.86 |==================================================================== NCNN 20230517 Target: CPU - Model: mobilenet ms < Lower Is Better a . 14.11 |==================================================================== b . 13.97 |=================================================================== c . 14.03 |==================================================================== NCNN 20230517 Target: CPU-v2-v2 - Model: mobilenet-v2 ms < Lower Is Better a . 6.35 |===================================================================== b . 6.26 |==================================================================== c . 6.17 |=================================================================== NCNN 20230517 Target: CPU-v3-v3 - Model: mobilenet-v3 ms < Lower Is Better a . 7.00 |===================================================================== b . 6.55 |================================================================= c . 6.34 |============================================================== NCNN 20230517 Target: CPU - Model: shufflenet-v2 ms < Lower Is Better a . 9.09 |===================================================================== b . 7.93 |============================================================ c . 7.60 |========================================================== NCNN 20230517 Target: CPU - Model: mnasnet ms < Lower Is Better a . 6.09 |===================================================================== b . 5.90 |=================================================================== c . 5.86 |================================================================== NCNN 20230517 Target: CPU - Model: efficientnet-b0 ms < Lower Is Better a . 9.98 |===================================================================== b . 9.78 |==================================================================== c . 9.75 |=================================================================== NCNN 20230517 Target: CPU - Model: blazeface ms < Lower Is Better a . 
3.97 |===================================================================== b . 3.43 |============================================================ c . 3.48 |============================================================ NCNN 20230517 Target: CPU - Model: googlenet ms < Lower Is Better a . 14.62 |==================================================================== b . 14.53 |==================================================================== c . 14.47 |=================================================================== NCNN 20230517 Target: CPU - Model: vgg16 ms < Lower Is Better a . 23.84 |==================================================================== b . 23.64 |=================================================================== c . 23.91 |==================================================================== NCNN 20230517 Target: CPU - Model: resnet18 ms < Lower Is Better a . 8.50 |===================================================================== b . 8.42 |==================================================================== c . 8.51 |===================================================================== NCNN 20230517 Target: CPU - Model: alexnet ms < Lower Is Better a . 5.23 |===================================================================== b . 5.22 |===================================================================== c . 5.22 |===================================================================== NCNN 20230517 Target: CPU - Model: resnet50 ms < Lower Is Better a . 15.49 |==================================================================== b . 15.34 |=================================================================== c . 15.54 |==================================================================== NCNN 20230517 Target: CPU - Model: yolov4-tiny ms < Lower Is Better a . 20.66 |==================================================================== b . 20.51 |=================================================================== c . 
20.80 |==================================================================== NCNN 20230517 Target: CPU - Model: squeezenet_ssd ms < Lower Is Better a . 14.17 |================================================================== b . 14.11 |================================================================== c . 14.59 |==================================================================== NCNN 20230517 Target: CPU - Model: regnety_400m ms < Lower Is Better a . 35.24 |==================================================================== b . 27.54 |===================================================== c . 27.59 |===================================================== NCNN 20230517 Target: CPU - Model: vision_transformer ms < Lower Is Better a . 48.79 |==================================================================== b . 48.49 |==================================================================== c . 48.43 |=================================================================== NCNN 20230517 Target: CPU - Model: FastestDet ms < Lower Is Better a . 10.25 |==================================================================== b . 9.04 |============================================================ c . 8.88 |=========================================================== Blender 3.6 Blend File: BMW27 - Compute: CPU-Only Seconds < Lower Is Better a . 27.27 |=================================================================== b . 27.50 |==================================================================== c . 27.24 |=================================================================== Blender 3.6 Blend File: Classroom - Compute: CPU-Only Seconds < Lower Is Better a . 68.80 |==================================================================== b . 68.50 |==================================================================== c . 68.70 |==================================================================== Blender 3.6 Blend File: Fishy Cat - Compute: CPU-Only Seconds < Lower Is Better a . 
33.70 |==================================================================== b . 33.76 |==================================================================== c . 33.72 |==================================================================== Blender 3.6 Blend File: Barbershop - Compute: CPU-Only Seconds < Lower Is Better a . 253.49 |=================================================================== b . 253.77 |=================================================================== c . 253.43 |=================================================================== Blender 3.6 Blend File: Pabellon Barcelona - Compute: CPU-Only Seconds < Lower Is Better a . 84.55 |==================================================================== b . 84.35 |==================================================================== c . 84.17 |==================================================================== Apache Cassandra 4.1.3 Test: Writes Op/s > Higher Is Better a . 236650 |=================================================================== b . 238161 |=================================================================== c . 234887 |================================================================== BRL-CAD 7.36 VGR Performance Metric VGR Performance Metric > Higher Is Better a . 734386 |=================================================================== b . 729876 |=================================================================== c . 730434 |===================================================================