zstd onnx runtime raptor lake

Intel Core i9-13900K testing with an ASUS PRIME Z790-P WIFI (0602 BIOS) and AMD Radeon RX 6800/6800 XT / 6900 on Ubuntu 23.04 via the Phoronix Test Suite.

a:

  Processor: Intel Core i9-13900K @ 4.00GHz (24 Cores / 32 Threads), Motherboard: ASUS PRIME Z790-P WIFI (0602 BIOS), Chipset: Intel Device 7a27, Memory: 32GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0, Graphics: AMD Radeon RX 6800/6800 XT / 6900 (2475/1000MHz), Audio: Realtek ALC897, Monitor: ASUS VP28U, Network: Intel Device 7a70

  OS: Ubuntu 23.04, Kernel: 5.19.0-21-generic (x86_64), Desktop: GNOME Shell 43.2, Display Server: X Server + Wayland, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160

b, c, d:

  Processor: Intel Core i9-13900K @ 4.00GHz (24 Cores / 32 Threads), Motherboard: ASUS PRIME Z790-P WIFI (0602 BIOS), Chipset: Intel Device 7a27, Memory: 32GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0, Graphics: AMD Radeon RX 6800/6800 XT / 6900 (2475/1000MHz), Audio: Realtek ALC897, Monitor: ASUS VP28U, Network: Intel Device 7a70

  OS: Ubuntu 23.04, Kernel: 5.19.0-21-generic (x86_64), Desktop: GNOME Shell 43.2, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.5 (LLVM 15.0.6 DRM 3.47), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160

Zstd Compression 1.5.4
Compression Level: 3 - Compression Speed
MB/s > Higher Is Better
a . 3282.8 |==================================================================
b . 3337.1 |===================================================================
c . 3351.9 |===================================================================
d . 3303.8 |==================================================================

Zstd Compression 1.5.4
Compression Level: 3 - Decompression Speed
MB/s > Higher Is Better
a . 2320.9 |===================================================================
b . 2319.0 |===================================================================
c . 2316.5 |===================================================================
d . 2320.3 |===================================================================

Zstd Compression 1.5.4
Compression Level: 8 - Compression Speed
MB/s > Higher Is Better
a . 768.5 |====================================================================
b . 768.1 |====================================================================
c . 770.7 |====================================================================
d . 770.6 |====================================================================

Zstd Compression 1.5.4
Compression Level: 8 - Decompression Speed
MB/s > Higher Is Better
a . 2544.0 |===================================================================
b . 2544.5 |===================================================================
c . 2544.5 |===================================================================
d . 2538.4 |===================================================================

Zstd Compression 1.5.4
Compression Level: 12 - Compression Speed
MB/s > Higher Is Better
a . 257.6 |====================================================================
b . 255.2 |===================================================================
c . 254.9 |===================================================================
d . 254.3 |===================================================================

Zstd Compression 1.5.4
Compression Level: 12 - Decompression Speed
MB/s > Higher Is Better
a . 2527.1 |===================================================================
b . 2534.8 |===================================================================
c . 2533.2 |===================================================================
d . 2524.2 |===================================================================

Zstd Compression 1.5.4
Compression Level: 19 - Compression Speed
MB/s > Higher Is Better
a . 20.6 |=====================================================================
b . 20.6 |=====================================================================
c . 20.6 |=====================================================================
d . 20.6 |=====================================================================

Zstd Compression 1.5.4
Compression Level: 19 - Decompression Speed
MB/s > Higher Is Better
a . 2177.4 |===================================================================
b . 2173.8 |===================================================================
c . 2173.5 |===================================================================
d . 2174.9 |===================================================================

Zstd Compression 1.5.4
Compression Level: 3, Long Mode - Compression Speed
MB/s > Higher Is Better
a . 1208.6 |===================================================================
b . 1180.7 |=================================================================
c . 1173.2 |=================================================================
d . 1166.8 |=================================================================

Zstd Compression 1.5.4
Compression Level: 3, Long Mode - Decompression Speed
MB/s > Higher Is Better
a . 2355.6 |===================================================================
b . 2354.4 |===================================================================
c . 2354.1 |===================================================================
d . 2351.0 |===================================================================

Zstd Compression 1.5.4
Compression Level: 8, Long Mode - Compression Speed
MB/s > Higher Is Better
a . 716.6 |===================================================================
b . 716.3 |===================================================================
c . 729.0 |====================================================================
d . 721.1 |===================================================================

Zstd Compression 1.5.4
Compression Level: 8, Long Mode - Decompression Speed
MB/s > Higher Is Better
a . 2550.3 |===================================================================
b . 2549.1 |===================================================================
c . 2547.5 |===================================================================
d . 2545.3 |===================================================================

Zstd Compression 1.5.4
Compression Level: 19, Long Mode - Compression Speed
MB/s > Higher Is Better
a . 11.5 |=====================================================================
b . 11.5 |=====================================================================
c . 11.5 |=====================================================================
d . 11.5 |=====================================================================

Zstd Compression 1.5.4
Compression Level: 19, Long Mode - Decompression Speed
MB/s > Higher Is Better
a . 2089.3 |===================================================================
b . 2087.4 |===================================================================
c . 2093.5 |===================================================================
d . 2091.5 |===================================================================
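The Zstd charts above pair a compression-speed and a decompression-speed result for each level, with "Long Mode" denoting long-distance matching. As a rough illustration only (the exact Phoronix Test Suite invocation and input corpus are not shown in these results, so the file name below is a placeholder), the tested combinations map onto zstd's built-in benchmark mode along these lines:

    import subprocess

    INPUT = "testfile"  # placeholder; PTS supplies its own test corpus

    # (level, long_mode) pairs matching the chart groupings above
    configs = [(3, False), (8, False), (12, False), (19, False),
               (3, True), (8, True), (19, True)]

    for level, long_mode in configs:
        cmd = ["zstd", f"-b{level}"]
        if long_mode:
            cmd.append("--long")  # long-distance matching, i.e. "Long Mode"
        cmd.append(INPUT)
        # -b prints compression and decompression throughput in MB/s,
        # the same two figures each chart pair above reports.
        subprocess.run(cmd, check=True)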
ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 136.27 |================================================================
b . 140.15 |=================================================================
c . 138.43 |=================================================================
d . 143.70 |===================================================================

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 7.33381 |==================================================================
b . 7.13149 |================================================================
c . 7.21976 |=================================================================
d . 6.95506 |===============================================================

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 179.35 |==================================================================
b . 181.73 |===================================================================
c . 179.11 |==================================================================
d . 178.59 |==================================================================

ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 5.57297 |==================================================================
b . 5.50076 |=================================================================
c . 5.58038 |==================================================================
d . 5.59669 |==================================================================
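In the ONNX Runtime charts, "Executor: Parallel" and "Executor: Standard" refer to the runtime's parallel and sequential graph execution modes. A minimal sketch of that distinction, and of how the two reported metrics can be measured, using the onnxruntime Python API (the model path, input shapes, and dtypes below are placeholders, and the benchmark's own harness may differ):

    import time
    import numpy as np
    import onnxruntime as ort

    def make_session(model_path, parallel):
        opts = ort.SessionOptions()
        # "Executor: Parallel" ~ ORT_PARALLEL; "Executor: Standard" ~ ORT_SEQUENTIAL
        opts.execution_mode = (ort.ExecutionMode.ORT_PARALLEL if parallel
                               else ort.ExecutionMode.ORT_SEQUENTIAL)
        return ort.InferenceSession(model_path, sess_options=opts,
                                    providers=["CPUExecutionProvider"])

    sess = make_session("gpt2.onnx", parallel=True)  # placeholder model path

    # Build a dummy feed; dynamic dimensions are pinned to 1 for illustration.
    feed = {}
    for inp in sess.get_inputs():
        shape = [d if isinstance(d, int) else 1 for d in inp.shape]
        dtype = np.int64 if "int64" in inp.type else np.float32
        feed[inp.name] = np.zeros(shape, dtype=dtype)

    # Timed loop yielding the two metrics reported above:
    # inferences per second and inference time cost in ms.
    n = 100
    t0 = time.perf_counter()
    for _ in range(n):
        sess.run(None, feed)
    elapsed = time.perf_counter() - t0
    print(f"{n / elapsed:.2f} inferences/sec, {1000 * elapsed / n:.5f} ms each")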
ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 15.15 |==================================================================
b . 15.35 |===================================================================
c . 15.54 |===================================================================
d . 15.66 |====================================================================

ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 65.99 |====================================================================
b . 65.15 |===================================================================
c . 64.34 |==================================================================
d . 63.83 |==================================================================

ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 16.37 |================================================================
b . 16.18 |===============================================================
c . 17.47 |====================================================================
d . 16.02 |==============================================================

ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 61.09 |===================================================================
b . 61.88 |===================================================================
c . 57.25 |==============================================================
d . 62.42 |====================================================================

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 20.25 |====================================================================
b . 20.16 |===================================================================
c . 20.16 |===================================================================
d . 20.32 |====================================================================

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 49.38 |====================================================================
b . 49.61 |====================================================================
c . 49.60 |====================================================================
d . 49.22 |===================================================================

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 20.98 |===================================================================
b . 21.22 |====================================================================
c . 21.00 |===================================================================
d . 21.11 |====================================================================

ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 47.65 |====================================================================
b . 47.13 |===================================================================
c . 47.61 |====================================================================
d . 47.37 |====================================================================
ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 307.25 |===================================================================
b . 306.76 |===================================================================
c . 303.33 |==================================================================
d . 306.92 |===================================================================

ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 3.25366 |=================================================================
b . 3.25890 |=================================================================
c . 3.29575 |==================================================================
d . 3.25716 |=================================================================

ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 968.27 |================================================================
b . 962.23 |===============================================================
c . 958.03 |===============================================================
d . 1003.24 |==================================================================

ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 1.031940 |================================================================
b . 1.038460 |=================================================================
c . 1.042950 |=================================================================
d . 0.996063 |==============================================================

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 2.33535 |==================================================================
b . 2.35087 |==================================================================
c . 2.27124 |================================================================
d . 2.31074 |=================================================================

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 428.20 |=================================================================
b . 425.42 |=================================================================
c . 440.29 |===================================================================
d . 432.76 |==================================================================

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 3.03777 |==================================================================
b . 3.03000 |==================================================================
c . 3.03244 |==================================================================
d . 3.02577 |==================================================================

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 329.19 |===================================================================
b . 330.03 |===================================================================
c . 329.77 |===================================================================
d . 330.49 |===================================================================
ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 9.60869 |===========================================================
b . 10.16660 |===============================================================
c . 10.32210 |================================================================
d . 10.55530 |=================================================================

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 104.07 |===================================================================
b . 98.39 |===============================================================
c . 96.88 |==============================================================
d . 94.74 |=============================================================

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 9.42226 |==================================================================
b . 9.39511 |==================================================================
c . 9.42361 |==================================================================
d . 9.38318 |==================================================================

ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 106.13 |===================================================================
b . 106.44 |===================================================================
c . 106.12 |===================================================================
d . 106.57 |===================================================================

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 225.13 |==================================================================
b . 223.44 |==================================================================
c . 228.08 |===================================================================
d . 225.86 |==================================================================

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 4.44106 |==================================================================
b . 4.47496 |==================================================================
c . 4.38369 |=================================================================
d . 4.42664 |=================================================================

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 307.67 |================================================================
b . 314.39 |=================================================================
c . 322.41 |===================================================================
d . 320.30 |===================================================================

ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 3.24947 |==================================================================
b . 3.18126 |=================================================================
c . 3.10117 |===============================================================
d . 3.12147 |===============================================================
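CaffeNet 12-int8, ResNet50 v1-12-int8, and the Faster R-CNN model further below are pre-quantized int8 builds from the ONNX Model Zoo. For context, here is a minimal sketch of producing an int8 model with ONNX Runtime's dynamic-quantization tooling; this is only one possible route, with hypothetical file names, and not necessarily how the Zoo models above were prepared:

    from onnxruntime.quantization import quantize_dynamic, QuantType

    # Hypothetical file names; the fp32 source model is not part of these results.
    quantize_dynamic(
        model_input="resnet50-v1-12.onnx",
        model_output="resnet50-v1-12-int8.onnx",
        weight_type=QuantType.QInt8,  # signed 8-bit integer weights
    )

Quantized weights reduce compute and memory traffic on CPU, which is consistent with the int8 models posting some of the highest inferences-per-second figures in this comparison.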
ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 91.82 |=================================================================
b . 91.01 |=================================================================
c . 91.06 |=================================================================
d . 95.62 |====================================================================

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 10.89 |===================================================================
b . 10.99 |====================================================================
c . 10.98 |====================================================================
d . 10.46 |=================================================================

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 93.28 |====================================================================
b . 93.13 |====================================================================
c . 93.28 |====================================================================
d . 93.29 |====================================================================

ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 10.72 |====================================================================
b . 10.74 |====================================================================
c . 10.72 |====================================================================
d . 10.72 |====================================================================

ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 48.76 |===================================================================
b . 49.24 |====================================================================
c . 49.04 |====================================================================
d . 48.91 |====================================================================

ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 20.51 |====================================================================
b . 20.31 |===================================================================
c . 20.39 |====================================================================
d . 20.45 |====================================================================

ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 60.14 |=================================================================
b . 63.36 |====================================================================
c . 63.12 |====================================================================
d . 59.27 |================================================================

ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 16.63 |===================================================================
b . 15.78 |================================================================
c . 15.84 |================================================================
d . 16.87 |====================================================================
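Note that the two ONNX Runtime metrics reported per test are near-reciprocals of each other (ms per inference is roughly 1000 divided by inferences per second); the small residual gaps arise because the two metrics are averaged over runs independently. A quick check against the reported values:

    # (inferences/sec, reported ms) pairs taken from the GPT-2 and yolov4 charts
    pairs = [
        (179.35, 5.57297),  # GPT-2, Standard, run a
        (136.27, 7.33381),  # GPT-2, Parallel, run a
        (15.15, 65.99),     # yolov4, Parallel, run a
    ]
    for ips, ms in pairs:
        print(f"1000/{ips:g} = {1000 / ips:.5f} ms (reported: {ms:g} ms)")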