epyc 9654 AMD March
2 x AMD EPYC 9654 96-Core testing with an AMD Titanite_4G (RTI1004D BIOS) and ASPEED on Ubuntu 23.04 via the Phoronix Test Suite.

a, b, c: Processor: AMD EPYC 9654 96-Core @ 3.71GHz (96 Cores / 192 Threads), Motherboard: AMD Titanite_4G (RTI1004D BIOS), Chipset: AMD Device 14a4, Memory: 768GB, Disk: 800GB INTEL SSDPF21Q800GB, Graphics: ASPEED, Monitor: VGA HDMI, Network: Broadcom NetXtreme BCM5720 PCIe
d, e: Processor: 2 x AMD EPYC 9654 96-Core @ 3.71GHz (192 Cores / 384 Threads), Motherboard: AMD Titanite_4G (RTI1004D BIOS), Chipset: AMD Device 14a4, Memory: 1520GB, Disk: 800GB INTEL SSDPF21Q800GB, Graphics: ASPEED, Monitor: VGA HDMI, Network: Broadcom NetXtreme BCM5720 PCIe
All configurations: OS: Ubuntu 23.04, Kernel: 5.19.0-21-generic (x86_64), Desktop: GNOME Shell 43.1, Display Server: X Server 1.21.1.4, Vulkan: 1.3.224, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1920x1080

OpenCV 4.7 - Test: Core - ms (lower is better): a: 65772, b: 65256, c: 68743, d: 267066, e: 182548
PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency - ms (lower is better): a: 18.46, b: 18.32, c: 18.48, d: 22.31, e: 71.16
PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - TPS (higher is better): a: 54169, b: 54579, c: 54120, d: 44818, e: 14053
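The read-write latency and TPS charts are two views of the same measurement: in a closed-loop pgbench run, average latency works out to roughly clients / TPS. A minimal Python sanity check of that relationship, using only the 1000-client read-write figures reported above (nothing new is measured here):

# Sanity check on the reported pgbench numbers: in a closed-loop benchmark,
# average latency ~= clients / TPS. Values are the Scaling Factor 100 /
# 1000-client read-write results for configs a and e as reported above.
clients = 1000
for config, tps, reported_ms in [("a", 54169, 18.46), ("e", 14053, 71.16)]:
    estimated_ms = clients / tps * 1000.0  # seconds -> milliseconds
    print(f"config {config}: estimated {estimated_ms:.2f} ms, reported {reported_ms:.2f} ms")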
PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write - TPS (higher is better): a: 58262, b: 61635, c: 65740, d: 46860, e: 17020
PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency - ms (lower is better): a: 13.73, b: 12.98, c: 12.17, d: 17.07, e: 47.00
OpenCV 4.7 - Test: Video - ms (lower is better): a: 41999, b: 38143, c: 37173, d: 126947, e: 122021
OpenCV 4.7 - Test: Object Detection - ms (lower is better): a: 24950, b: 24394, c: 23509, d: 71477, e: 33386
OpenCV 4.7 - Test: Image Processing - ms (lower is better): a: 119907, b: 119436, c: 122137, d: 333961, e: 312082
MariaDB 11.0.1 - Clients: 512 - Queries Per Second (higher is better): a: 915, b: 898, c: 894, d: 624, e: 334
MariaDB 11.0.1 - Clients: 1024 - Queries Per Second (higher is better): a: 912, b: 874, c: 873, d: 561, e: 336
MariaDB 11.0.1 - Clients: 2048 - Queries Per Second (higher is better): a: 860, b: 839, c: 852, d: 650, e: 327
RocksDB 8.0 - Test: Random Fill Sync - Op/s (higher is better): a: 445786, b: 446883, c: 451734, d: 357298, e: 174168
TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: GoogLeNet - images/sec (higher is better): a: 142.32, b: 157.65, c: 158.76, d: 67.80, e: 67.07
TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50 - images/sec (higher is better): a: 57.39, b: 57.85, c: 57.90, d: 25.41, e: 24.48
TensorFlow 2.12 - Device: CPU - Batch Size: 32 - Model: GoogLeNet - images/sec (higher is better): a: 241.43, b: 239.45, c: 239.48, d: 120.06, e: 106.10
OpenCV 4.7 - Test: DNN - Deep Neural Network - ms (lower is better): a: 22944, b: 23755, c: 23144, d: 34502, e: 47834
Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a: 43.02, b: 42.28, c: 42.11, d: 85.02, e: 84.40
Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a: 42.33, b: 42.10, c: 42.82, d: 84.55, e: 84.28
OpenSSL 3.1 - Algorithm: ChaCha20 - byte/s (higher is better): a: 510745602460, b: 506631603000, c: 510959296510, d: 1017352168790, e: 1017084218150
OpenSSL 3.1 - Algorithm: RSA4096 - verify/s (higher is better): a: 1462850.1, b: 1462987.4, c: 1462827.1, d: 2936562.9, e: 2937037.2
OpenSSL 3.1 - Algorithm: RSA4096 - sign/s (higher is better): a: 35951.3, b: 35946.3, c: 35968.3, d: 72050.2, e: 72086.6
OpenSSL 3.1 - Algorithm: AES-256-GCM - byte/s (higher is better): a: 780271471000, b: 776495266470, c: 779191711330, d: 1552708662150, e: 1551466680320
OpenSSL 3.1 - Algorithm: SHA256 - byte/s (higher is better): a: 129947980460, b: 129484883100, c: 130061831940, d: 258641794620, e: 258600679990
RocksDB 8.0 - Test: Random Read - Op/s (higher is better): a: 432927777, b: 435267657, c: 434781404, d: 863491650, e: 859555542
OpenSSL 3.1 - Algorithm: AES-128-GCM - byte/s (higher is better): a: 908982494280, b: 910814515750, c: 909186377320, d: 1810537338580, e: 1804869906900
OpenSSL 3.1 - Algorithm: ChaCha20-Poly1305 - byte/s (higher is better): a: 356999237630, b: 356991460690, c: 356961832960, d: 710753283550, e: 710739693380
OpenSSL 3.1 - Algorithm: SHA512 - byte/s (higher is better): a: 40028926290, b: 40018641540, c: 40002428390, d: 79615585770, e: 79537665830
MariaDB 11.0.1 - Clients: 4096 - Queries Per Second (higher is better): a: 654, b: 693, c: 678, d: 578, e: 351
Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a: 1003.57, b: 1001.98, c: 1005.72, d: 1953.93, e: 1964.26
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a: 316.61, b: 320.18, c: 317.49, d: 620.41, e: 617.58
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a: 624.96, b: 626.07, c: 625.26, d: 1209.92, e: 1204.66
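For the embarrassingly parallel throughput tests above (the OpenSSL algorithms, RocksDB random reads, and the DeepSparse asynchronous multi-stream models), the dual-socket configurations d and e land at roughly twice the single-socket results. A small Python sketch of that ratio, computed only from the values reported above (config d versus the mean of a, b, and c):

# Scaling summary computed from the reported values only: dual-socket result (d)
# divided by the single-socket average (a, b, c) for a few throughput tests above.
results = {
    "OpenSSL RSA4096 sign/s": ((35951.3, 35946.3, 35968.3), 72050.2),
    "OpenSSL SHA256 byte/s": ((129947980460, 129484883100, 130061831940), 258641794620),
    "RocksDB Random Read op/s": ((432927777, 435267657, 434781404), 863491650),
}
for name, (single_socket, dual_socket) in results.items():
    baseline = sum(single_socket) / len(single_socket)
    print(f"{name}: {dual_socket / baseline:.2f}x with the second EPYC 9654")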
John The Ripper 2023.03.14 - Test: bcrypt - Real C/S (higher is better): a: 163238, b: 163353, c: 163353, d: 315340, e: 314928
John The Ripper 2023.03.14 - Test: WPA PSK - Real C/S (higher is better): a: 653913, b: 654104, c: 653913, d: 1263000, e: 1255000
John The Ripper 2023.03.14 - Test: Blowfish - Real C/S (higher is better): a: 163353, b: 163241, c: 163299, d: 315110, e: 314188
TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: AlexNet - images/sec (higher is better): a: 355.72, b: 353.36, c: 354.96, d: 184.36, e: 184.99
ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel - Inferences Per Second (higher is better): a: 194.56, b: 194.06, c: 183.22, d: 101.31, e: 102.18
Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a: 440.46, b: 439.92, c: 439.02, d: 840.73, e: 838.03
OpenCV 4.7 - Test: Graph API - ms (lower is better): a: 230494, b: 204945, c: 207090, d: 390167, e: 382454
SPECFEM3D 4.0 - Model: Water-layered Halfspace - Seconds (lower is better): a: 20.43, b: 20.45, c: 19.90, d: 12.81, e: 10.76
Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a: 400.51, b: 400.39, c: 399.66, d: 754.44, e: 754.76
Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a: 149.05, b: 149.22, c: 149.20, d: 281.23, e: 279.96
TensorFlow 2.12 - Device: CPU - Batch Size: 32 - Model: AlexNet - images/sec (higher is better): a: 593.90, b: 594.27, c: 591.78, d: 330.97, e: 319.03
RocksDB 8.0 - Test: Read While Writing - Op/s (higher is better): a: 9939924, b: 9108882, c: 10000158, d: 16955493, e: 14344054
ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel - Inferences Per Second (higher is better): a: 24.15, b: 24.60, c: 24.65, d: 13.27, e: 13.31
ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel - Inferences Per Second (higher is better): a: 599.50, b: 563.30, c: 598.43, d: 326.98, e: 323.43
SPECFEM3D 4.0 - Model: Mount St. Helens - Seconds (lower is better): a: 8.549248083, b: 8.433500494, c: 8.266046040, d: 4.709617691, e: 4.677201807
Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a: 1567.75, b: 1567.61, c: 1569.26, d: 2845.82, e: 2819.96
TensorFlow 2.12 - Device: CPU - Batch Size: 32 - Model: ResNet-50 - images/sec (higher is better): a: 80.72, b: 81.80, c: 80.41, d: 45.40, e: 45.90
TensorFlow 2.12 - Device: CPU - Batch Size: 64 - Model: GoogLeNet - images/sec (higher is better): a: 316.04, b: 310.53, c: 295.75, d: 191.95, e: 177.73
John The Ripper 2023.03.14 - Test: MD5 - Real C/S (higher is better): a: 15556000, b: 15608000, c: 15608000, d: 27276000, e: 27169000
SPECFEM3D 4.0 - Model: Tomographic Model - Seconds (lower is better): a: 8.695806704, b: 8.699354709, c: 8.463161346, d: 5.078447644, e: 5.322492803
SPECFEM3D 4.0 - Model: Homogeneous Halfspace - Seconds (lower is better): a: 10.661133476, b: 10.386901707, c: 10.665913818, d: 6.276873254, e: 6.233892837
GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare - Ns Per Day (higher is better): a: 11.25, b: 11.24, c: 11.25, d: 18.41, e: 19.13
ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard - Inferences Per Second (higher is better): a: 207.16, b: 233.81, c: 207.35, d: 137.87, e: 139.61
ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard - Inferences Per Second (higher is better): a: 29.43, b: 32.46, c: 32.52, d: 19.77, e: 19.27
Embree 4.0.1 - Binary: Pathtracer - Model: Crown - Frames Per Second (higher is better): a: 104.24, b: 105.18, c: 105.12, d: 172.28, e: 173.58
SPECFEM3D 4.0 - Model: Layered Halfspace - Seconds (lower is better): a: 19.84, b: 19.46, c: 19.78, d: 11.92, e: 12.58
OpenCV 4.7 - Test: Features 2D - ms (lower is better): a: 71850, b: 73789, c: 75180, d: 110697, e: 119368
RocksDB 8.0 - Test: Read Random Write Random - Op/s (higher is better): a: 2792738, b: 2785663, c: 2802641, d: 1689260, e: 1713878
Embree 4.0.1 - Binary: Pathtracer - Model: Asian Dragon Obj - Frames Per Second (higher is better): a: 106.54, b: 107.11, c: 106.93, d: 174.26, e: 173.96
Embree 4.0.1 - Binary: Pathtracer ISPC - Model: Crown - Frames Per Second (higher is better): a: 110.74, b: 111.28, c: 111.30, d: 180.48, e: 180.84
Embree 4.0.1 - Binary: Pathtracer - Model: Asian Dragon - Frames Per Second (higher is better): a: 121.11, b: 121.27, c: 120.91, d: 194.90, e: 195.27
Embree 4.0.1 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj - Frames Per Second (higher is better): a: 113.17, b: 113.45, c: 113.32, d: 181.45, e: 182.35
Embree 4.0.1 - Binary: Pathtracer ISPC - Model: Asian Dragon - Frames Per Second (higher is better): a: 132.71, b: 132.71, c: 132.53, d: 211.98, e: 213.30
ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Parallel - Inferences Per Second (higher is better): a: 6.40254, b: 6.36316, c: 6.38620, d: 4.09797, e: 4.06505
TensorFlow 2.12 - Device: CPU - Batch Size: 64 - Model: ResNet-50 - images/sec (higher is better): a: 104.12, b: 105.04, c: 105.29, d: 68.59, e: 68.53
MariaDB 11.0.1 - Clients: 8192 - Queries Per Second (higher is better): a: 446, b: 438, c: 439, d: 384, e: 293
Memcached 1.6.19 - Set To Get Ratio: 1:5 - Ops/sec (higher is better): a: 3870015.60, b: 3833862.64, c: 3858723.95, d: 2575013.52, e: 2550554.27
RocksDB 8.0 - Test: Sequential Fill - Op/s (higher is better): a: 662613, b: 662046, c: 660783, d: 438833, e: 438667
Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 - Backend: OpenMP - Kernel: Points2Image - Test Cases Per Minute (higher is better): a: 18078.77, b: 17717.93, c: 17373.13, d: 13064.15, e: 11981.05
RocksDB 8.0 - Test: Random Fill - Op/s (higher is better): a: 644356, b: 640977, c: 641338, d: 438023, e: 431217
RocksDB 8.0 - Test: Update Random - Op/s (higher is better): a: 645530, b: 644787, c: 647442, d: 436861, e: 435840
ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard - Inferences Per Second (higher is better): a: 6.61035, b: 6.48661, c: 6.96155, d: 4.84593, e: 4.70693
Apache HTTP Server 2.4.56 - Concurrent Requests: 500 - Requests Per Second (higher is better): a: 173757.38, b: 208703.78, c: 185857.48, d: 141164.84, e: 142512.83
TensorFlow 2.12 - Device: CPU - Batch Size: 64 - Model: AlexNet - images/sec (higher is better): a: 856.98, b: 853.06, c: 857.87, d: 588.27, e: 597.46
ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Parallel - Inferences Per Second (higher is better): a: 159.11, b: 159.99, c: 159.60, d: 111.96, e: 112.51
ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel - Inferences Per Second (higher is better): a: 1.253310, b: 1.258000, c: 1.171210, d: 0.922595, e: 0.880761
OpenCV 4.7 - Test: Stitching - ms (lower is better): a: 190987, b: 190687, c: 191634, d: 268869, e: 241229
PostgreSQL 15 - Scaling Factor: 1 - Clients: 1000 - Mode: Read Write - Average Latency - ms (lower is better): a: 2619.51, b: 2099.01, c: 2357.91, d: 2198.90, e: 1871.56
PostgreSQL 15 - Scaling Factor: 1 - Clients: 1000 - Mode: Read Write - TPS (higher is better): a: 382, b: 476, c: 424, d: 455, e: 534
ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Parallel - Inferences Per Second (higher is better): a: 12.17420, b: 12.26920, c: 12.17350, d: 8.85423, e: 8.98441
PostgreSQL 15 - Scaling Factor: 1 - Clients: 800 - Mode: Read Write - TPS (higher is better): a: 676, b: 552, c: 711, d: 565, e: 523
PostgreSQL 15 - Scaling Factor: 1 - Clients: 800 - Mode: Read Write - Average Latency - ms (lower is better): a: 1184.08, b: 1449.39, c: 1125.10, d: 1415.16, e: 1529.11
ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard - Inferences Per Second (higher is better): a: 9.36520, b: 12.01250, c: 9.35231, d: 10.54650, e: 8.85844
TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: AlexNet - images/sec (higher is better): a: 1375.44, b: 1375.77, c: 1378.85, d: 1843.74, e: 1775.18
ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard - Inferences Per Second (higher is better): a: 524.04, b: 552.79, c: 536.34, d: 417.12, e: 482.69
Timed LLVM Compilation 16.0 - Build System: Ninja - Seconds (lower is better): a: 125.88, b: 126.29, c: 126.23, d: 97.85, e: 97.52
Timed Node.js Compilation 19.8.1 - Time To Compile - Seconds (lower is better): a: 133.70, b: 132.80, c: 133.11, d: 106.05, e: 104.36
ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Parallel - Inferences Per Second (higher is better): a: 111.70, b: 111.35, c: 112.03, d: 88.95, e: 89.83
Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 - Backend: OpenMP - Kernel: NDT Mapping - Test Cases Per Minute (higher is better): a: 954.82, b: 949.71, c: 937.41, d: 802.27, e: 760.15
ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard - Inferences Per Second (higher is better): a: 128.49, b: 128.98, c: 126.84, d: 126.74, e: 102.87
ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel - Inferences Per Second (higher is better): a: 30.78, b: 30.68, c: 29.03, d: 24.74, e: 26.62
nginx 1.23.2 - Connections: 500 - Requests Per Second (higher is better): a: 240111.29, b: 237868.20, c: 241662.73, d: 196034.90, e: 194753.92
TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: GoogLeNet - images/sec (higher is better): a: 459.97, b: 454.67, c: 455.45, d: 382.29, e: 377.56
Timed FFmpeg Compilation 6.0 - Time To Compile - Seconds (lower is better): a: 12.81, b: 13.01, c: 13.16, d: 10.86, e: 11.19
PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only - TPS (higher is better): a: 3833822, b: 3816666, c: 3785876, d: 3643496, e: 3274920
PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency - ms (lower is better): a: 0.209, b: 0.210, c: 0.211, d: 0.220, e: 0.244
Apache HTTP Server 2.4.56 - Concurrent Requests: 200 - Requests Per Second (higher is better): a: 143188.90, b: 164665.51, c: 165838.18
ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard - Inferences Per Second (higher is better): a: 5.11448, b: 4.89263, c: 5.10242, d: 4.78422, e: 4.46687
ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run - Queries Per Minute, Geo Mean (higher is better): a: 612.78, b: 606.58, c: 623.39, d: 551.92, e: 568.25
ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run - Queries Per Minute, Geo Mean (higher is better): a: 602.50, b: 603.93, c: 592.33, d: 536.53, e: 536.90
PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency - ms (lower is better): a: 0.267, b: 0.268, c: 0.265, d: 0.296, e: 0.289
PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - TPS (higher is better): a: 3741941, b: 3730123, c: 3776352, d: 3381479, e: 3461133
TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: ResNet-50 - images/sec (higher is better): a: 146.79, b: 146.90, c: 146.99, d: 131.65, e: 132.39
ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache - Queries Per Minute, Geo Mean (higher is better): a: 584.98, b: 582.44, c: 578.76, d: 525.86, e: 527.95
Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a: 30.57, b: 30.58, c: 30.54, d: 33.65, e: 33.94
Memcached 1.6.19 - Set To Get Ratio: 1:100 - Ops/sec (higher is better): a: 2851726.85, b: 2813977.52, c: 2821181.21, d: 2571874.26, e: 2595069.86
Timed Godot Game Engine Compilation 4.0 - Time To Compile - Seconds (lower is better): a: 107.66, b: 107.35, c: 107.08, d: 97.31, e: 98.01
ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard - Inferences Per Second (higher is better): a: 123.88, b: 113.02, c: 123.13, d: 123.93, e: 123.28
Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 - Backend: OpenMP - Kernel: Euclidean Cluster - Test Cases Per Minute (higher is better): a: 1637.36, b: 1637.00, c: 1636.09, d: 1506.75, e: 1494.17
Timed LLVM Compilation 16.0 - Build System: Unix Makefiles - Seconds (lower is better): a: 217.19, b: 217.45, c: 214.02, d: 199.26, e: 200.02
TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: AlexNet - images/sec (higher is better): a: 1276.22, b: 1272.13, c: 1276.62, d: 1347.52, e: 1386.72
Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Compression Speed - MB/s (higher is better): a: 903.2, b: 900.7, c: 893.2, d: 865.5, e: 943.2
John The Ripper 2023.03.14 - Test: HMAC-SHA512 - Real C/S (higher is better): a: 309175000, b: 308492000, c: 309621000, d: 286156000, e: 292569000
FFmpeg 6.0 - Encoder: libx265 - Scenario: Live - Seconds (lower is better): a: 37.07, b: 36.89, c: 36.98, d: 39.28, e: 37.18
FFmpeg 6.0 - Encoder: libx265 - Scenario: Live - FPS (higher is better): a: 136.22, b: 136.89, c: 136.56, d: 128.58, e: 135.83
Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a: 119.60, b: 119.64, c: 119.81, d: 126.94, e: 126.86
Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a: 321.47, b: 320.99, c: 321.15, d: 339.58, e: 340.47
ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard - Inferences Per Second (higher is better): a: 37.04, b: 37.39, c: 37.07, d: 36.24, e: 35.53
Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a: 28.02, b: 28.20, c: 28.09, d: 29.45, e: 29.09
Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - items/sec (higher is better): a: 35.68, b: 35.45, c: 35.59, d: 33.95, e: 34.37
Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a: 108.80, b: 108.80, c: 109.23, d: 113.91, e: 114.31
Build2 0.15 - Time To Compile - Seconds (lower is better): a: 63.17, b: 63.01, c: 63.22, d: 60.47, e: 60.35
PostgreSQL 15 - Scaling Factor: 1 - Clients: 800 - Mode: Read Only - Average Latency - ms (lower is better): a: 0.215, b: 0.210, c: 0.210, d: 0.220, e: 0.218
TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: ResNet-50 - images/sec (higher is better): a: 163.85, b: 163.76, c: 163.66, d: 166.89, e: 171.18
Memcached 1.6.19 - Set To Get Ratio: 1:10 - Ops/sec (higher is better): a: 3203112.60, b: 3163183.27, c: 3154822.34, d: 3068513.78, e: 3063463.55
PostgreSQL 15 - Scaling Factor: 1 - Clients: 800 - Mode: Read Only - TPS (higher is better): a: 3718995, b: 3803554, c: 3804637, d: 3638802, e: 3669550
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - items/sec (higher is better): a: 195.20, b: 195.77, c: 194.08, d: 188.21, e: 190.66
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a: 5.1198, b: 5.1045, c: 5.1492, d: 5.3096, e: 5.2412
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a: 76.71, b: 76.60, c: 76.66, d: 79.18, e: 79.54
Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream - items/sec (higher is better): a: 195.13, b: 193.40, c: 196.15, d: 191.64, e: 189.01
Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream - ms/batch < Lower Is Better
a . 5.1229    b . 5.1686    c . 5.0961    d . 5.2159    e . 5.2886

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - ms/batch < Lower Is Better
a . 28.22    b . 28.25    c . 28.19    d . 29.23    e . 28.95

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - items/sec > Higher Is Better
a . 35.43    b . 35.39    c . 35.47    d . 34.20    e . 34.53

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream - ms/batch < Lower Is Better
a . 151.37    b . 149.81    c . 150.76    d . 154.28    e . 155.03

FFmpeg 6.0 - Encoder: libx265 - Scenario: Platform - Seconds < Lower Is Better
a . 132.60    b . 132.95    c . 132.87    d . 134.38    e . 137.04

FFmpeg 6.0 - Encoder: libx265 - Scenario: Platform - FPS > Higher Is Better
a . 57.13    b . 56.98    c . 57.01    d . 56.37    e . 55.28

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream - ms/batch < Lower Is Better
a . 11.27    b . 11.28    c . 11.25    d . 11.60    e . 11.58
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream - items/sec > Higher Is Better
a . 88.70    b . 88.60    c . 88.83    d . 86.18    e . 86.29

TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: GoogLeNet - images/sec > Higher Is Better
a . 516.87    b . 515.11    c . 518.52    d . 530.51    e . 521.87

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - ms/batch < Lower Is Better
a . 47.77    b . 47.87    c . 47.69    d . 49.03    e . 48.81

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Compression Speed - MB/s > Higher Is Better
a . 8.50    b . 8.56    c . 8.38    d . 8.34    e . 8.46

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - ms/batch < Lower Is Better
a . 1109.11    b . 1110.67    c . 1108.42    d . 1133.26    e . 1136.30

Google Draco 1.5.6 - Model: Church Facade - ms < Lower Is Better
a . 6872    b . 6788    c . 6888    d . 6721    e . 6784
Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - ms/batch < Lower Is Better
a . 1108.70    b . 1108.98    c . 1112.33    d . 1127.58    e . 1135.45

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream - items/sec > Higher Is Better
a . 194.54    b . 199.04    c . 197.09    d . 196.62    e . 196.32

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream - ms/batch < Lower Is Better
a . 5.1385    b . 5.0222    c . 5.0721    d . 5.0842    e . 5.0918

FFmpeg 6.0 - Encoder: libx264 - Scenario: Upload - FPS > Higher Is Better
a . 12.45    b . 12.42    c . 12.45    d . 12.70    e . 12.66

FFmpeg 6.0 - Encoder: libx264 - Scenario: Upload - Seconds < Lower Is Better
a . 202.83    b . 203.29    c . 202.88    d . 198.82    e . 199.47

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream - items/sec > Higher Is Better
a . 206.94    b . 206.62    c . 206.67    d . 203.70    e . 202.46

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream - ms/batch < Lower Is Better
a . 4.8287    b . 4.8365    c . 4.8353    d . 4.9053    e . 4.9353
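In the DeepSparse synchronous single-stream scenario only one batch is in flight at a time, so the items/sec and ms/batch rows for the same model are approximately reciprocal. A quick check in Python against the YOLOv5s COCO figures for configuration a above:

    ms_per_batch = 4.8287
    print(1000.0 / ms_per_batch)   # ~207.1 items/sec, close to the reported 206.94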
Zstd Compression 1.5.4 - Compression Level: 8 - Decompression Speed - MB/s > Higher Is Better
a . 1580.9    b . 1593.0    c . 1577.1    d . 1611.3    e . 1599.2

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream - items/sec > Higher Is Better
a . 101.79    b . 101.40    c . 100.93    d . 99.65    e . 100.24

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream - ms/batch < Lower Is Better
a . 9.8202    b . 9.8577    c . 9.9034    d . 10.0304    e . 9.9712

Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream - items/sec > Higher Is Better
a . 62.35    b . 61.93    c . 62.29    d . 61.04    e . 61.70

Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream - ms/batch < Lower Is Better
a . 16.02    b . 16.13    c . 16.04    d . 16.36    e . 16.19

Google Draco 1.5.6 - Model: Lion - ms < Lower Is Better
a . 5321    b . 5296    c . 5270    d . 5218    e . 5300

Zstd Compression 1.5.4 - Compression Level: 12 - Decompression Speed - MB/s > Higher Is Better
a . 1633.7    b . 1629.0    c . 1641.7    d . 1636.0    e . 1611.2
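The Zstd entries vary the compression level (8, 12, 19) and optionally enable long mode (long-distance matching). A minimal sketch of exercising such a configuration through the zstd command-line tool's built-in benchmark mode; the file name is a placeholder and the exact invocation used by the test profile may differ:

    import subprocess

    SAMPLE = "testfile.bin"   # placeholder input file

    # Level 19 with long-distance matching, i.e. the "Compression Level: 19, Long Mode"
    # configuration; zstd's -b benchmark mode reports compression and decompression
    # speed in MB/s, the same units used in the results above.
    subprocess.run(["zstd", "-b19", "--long", SAMPLE], check=True)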
PostgreSQL 15 - Scaling Factor: 1 - Clients: 1000 - Mode: Read Only - Average Latency - ms < Lower Is Better
a . 0.267    b . 0.270    c . 0.272    d . 0.270    e . 0.269

FFmpeg 6.0 - Encoder: libx265 - Scenario: Video On Demand - FPS > Higher Is Better
a . 57.02    b . 57.18    c . 57.12    d . 57.42    e . 56.39

FFmpeg 6.0 - Encoder: libx265 - Scenario: Video On Demand - Seconds < Lower Is Better
a . 132.84    b . 132.48    c . 132.63    d . 131.93    e . 134.34

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1000 - Mode: Read Only - TPS > Higher Is Better
a . 3738972    b . 3707315    c . 3672471    d . 3700926    e . 3716270

Zstd Compression 1.5.4 - Compression Level: 19 - Decompression Speed - MB/s > Higher Is Better
a . 1395.1    b . 1399.2    c . 1393.8    d . 1406.1    e . 1385.0

FFmpeg 6.0 - Encoder: libx264 - Scenario: Video On Demand - FPS > Higher Is Better
a . 48.08    b . 48.19    c . 48.27    d . 48.76    e . 48.60

FFmpeg 6.0 - Encoder: libx264 - Scenario: Video On Demand - Seconds < Lower Is Better
a . 157.53    b . 157.20    c . 156.93    d . 155.36    e . 155.85
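The FPS and Seconds rows for a given FFmpeg scenario describe the same encode from opposite directions, so multiplying them recovers the roughly constant number of frames in the source clip. A quick check in Python using configuration a of the Video On Demand results above:

    frames_x264 = 48.08 * 157.53   # ~7574 frames
    frames_x265 = 57.02 * 132.84   # ~7575 frames
    print(round(frames_x264), round(frames_x265))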
Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Decompression Speed - MB/s > Higher Is Better
a . 1619.8    b . 1625.7    c . 1613.8    d . 1615.1    e . 1606.4

FFmpeg 6.0 - Encoder: libx264 - Scenario: Platform - Seconds < Lower Is Better
a . 156.98    b . 156.98    c . 157.00    d . 155.28    e . 155.68

FFmpeg 6.0 - Encoder: libx264 - Scenario: Platform - FPS > Higher Is Better
a . 48.25    b . 48.26    c . 48.25    d . 48.78    e . 48.66

dav1d 1.1 - Video Input: Summer Nature 4K - FPS > Higher Is Better
a . 379.84    b . 381.16    c . 383.95

nginx 1.23.2 - Connections: 200 - Requests Per Second > Higher Is Better
a . 257954.01    b . 258099.68    c . 255419.28

Zstd Compression 1.5.4 - Compression Level: 8 - Compression Speed - MB/s > Higher Is Better
a . 1225.5    b . 1217.1    c . 1220.5    d . 1217.7    e . 1213.4

Zstd Compression 1.5.4 - Compression Level: 12 - Compression Speed - MB/s > Higher Is Better
a . 316.8    b . 317.4    c . 314.8    d . 317.8    e . 315.2

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Decompression Speed - MB/s > Higher Is Better
a . 1329.8    b . 1336.1    c . 1338.0    d . 1334.7    e . 1330.7
Zstd Compression 1.5.4 - Compression Level: 19 - Compression Speed - MB/s > Higher Is Better
a . 17.4    b . 17.4    c . 17.4    d . 17.3    e . 17.3

FFmpeg 6.0 - Encoder: libx264 - Scenario: Live - Seconds < Lower Is Better
a . 23.17    b . 23.15    c . 23.20    d . 23.09    e . 23.09

FFmpeg 6.0 - Encoder: libx264 - Scenario: Live - FPS > Higher Is Better
a . 217.98    b . 218.14    c . 217.64    d . 218.73    e . 218.71

dav1d 1.1 - Video Input: Summer Nature 1080p - FPS > Higher Is Better
a . 807.16    b . 806.08    c . 809.86

FFmpeg 6.0 - Encoder: libx265 - Scenario: Upload - Seconds < Lower Is Better
a . 89.61    b . 89.73    c . 89.78    d . 89.69    e . 89.59

dav1d 1.1 - Video Input: Chimera 1080p - FPS > Higher Is Better
a . 657.50    b . 656.31    c . 657.22

FFmpeg 6.0 - Encoder: libx265 - Scenario: Upload - FPS > Higher Is Better
a . 28.18    b . 28.14    c . 28.13    d . 28.15    e . 28.18

dav1d 1.1 - Video Input: Chimera 1080p 10-bit - FPS > Higher Is Better
a . 602.64    b . 603.05    c . 603.51
Apache HTTP Server 2.4.56 - Concurrent Requests: 1000 - Requests Per Second > Higher Is Better
(no result values reported)

Apache HTTP Server 2.4.56 - Concurrent Requests: 100 - Requests Per Second > Higher Is Better
(no result values reported)

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard - Inference Time Cost (ms) < Lower Is Better
a . 27.00    b . 26.74    c . 26.98    d . 27.59    e . 28.14

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel - Inference Time Cost (ms) < Lower Is Better
a . 32.49    b . 32.59    c . 34.45    d . 40.41    e . 37.56

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard - Inference Time Cost (ms) < Lower Is Better
a . 8.07185    b . 8.84768    c . 8.12097    d . 8.06837    e . 8.11091

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Parallel - Inference Time Cost (ms) < Lower Is Better
a . 8.95095    b . 8.97883    c . 8.92427    d . 11.23890    e . 11.12940

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard - Inference Time Cost (ms) < Lower Is Better
a . 4.82662    b . 4.27622    c . 4.82211    d . 7.25262    e . 7.16208

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel - Inference Time Cost (ms) < Lower Is Better
a . 5.13811    b . 5.15191    c . 5.45677    d . 9.86863    e . 9.78363

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard - Inference Time Cost (ms) < Lower Is Better
a . 33.98    b . 30.81    c . 30.75    d . 50.59    e . 51.89
ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel - Inference Time Cost (ms) < Lower Is Better
a . 41.40    b . 40.65    c . 40.57    d . 75.36    e . 75.14

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard - Inference Time Cost (ms) < Lower Is Better
a . 195.52    b . 204.39    c . 195.98    d . 209.02    e . 223.87

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel - Inference Time Cost (ms) < Lower Is Better
a . 797.88    b . 794.91    c . 853.82    d . 1083.89    e . 1135.38

ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard - Inference Time Cost (ms) < Lower Is Better
a . 1.90737    b . 1.80855    c . 1.86383    d . 2.39687    e . 2.07112

ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel - Inference Time Cost (ms) < Lower Is Better
a . 1.66637    b . 1.77352    c . 1.66912    d . 3.05530    e . 3.08913

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard - Inference Time Cost (ms) < Lower Is Better
a . 106.78    b . 83.24    c . 106.92    d . 94.82    e . 112.88

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Parallel - Inference Time Cost (ms) < Lower Is Better
a . 82.14    b . 81.50    c . 82.14    d . 112.94    e . 111.30

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard - Inference Time Cost (ms) < Lower Is Better
a . 151.27    b . 154.16    c . 143.64    d . 206.36    e . 212.45
ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Parallel - Inference Time Cost (ms) < Lower Is Better
a . 156.18    b . 157.15    c . 156.58    d . 244.02    e . 245.99

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard - Inference Time Cost (ms) < Lower Is Better
a . 7.78072    b . 7.75078    c . 7.88188    d . 7.88812    e . 9.71824

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Parallel - Inference Time Cost (ms) < Lower Is Better
a . 6.27808    b . 6.24337    c . 6.25916    d . 8.92113    e . 8.87820

nginx 1.23.2 - Connections: 1000 - Requests Per Second > Higher Is Better
(no result values reported)

nginx 1.23.2 - Connections: 100 - Requests Per Second > Higher Is Better
(no result values reported)
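The ONNX Runtime entries above are split between "Executor: Standard" and "Executor: Parallel", which correspond to the runtime's sequential and parallel execution modes. A minimal Python sketch of how that option is set and how a single inference can be timed; the mapping of the benchmark's labels onto these modes is an assumption, and the model path and input shape are placeholders:

    import time
    import numpy as np
    import onnxruntime as ort

    opts = ort.SessionOptions()
    opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL   # ORT_SEQUENTIAL for "Standard"
    sess = ort.InferenceSession("model.onnx", sess_options=opts,
                                providers=["CPUExecutionProvider"])

    x = np.zeros((1, 3, 224, 224), dtype=np.float32)       # placeholder input tensor
    name = sess.get_inputs()[0].name
    t0 = time.perf_counter()
    sess.run(None, {name: x})
    print(f"{(time.perf_counter() - t0) * 1e3:.2f} ms")    # cf. "Inference Time Cost (ms)"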