onnx new
AMD Ryzen 7 7840HS testing with a Framework Laptop 16 (AMD Ryzen 7040) FRANMZCP07 (03.01 BIOS) and AMD Radeon RX 7700S/7600/7600S/7600M XT/PRO W7600 512MB on Ubuntu 23.10 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2402031-NE-ONNXNEW3518
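The comparison can also be scripted for unattended runs; a minimal sketch in Python, assuming phoronix-test-suite is installed and on PATH:

    import subprocess

    # Re-run this comparison locally against the published result file.
    # The result ID is taken from the command in the header above.
    RESULT_ID = "2402031-NE-ONNXNEW3518"
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)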
a, b, c:
Processor: AMD Ryzen 7 7840HS @ 5.29GHz (8 Cores / 16 Threads), Motherboard: Framework Laptop 16 (AMD Ryzen 7040) FRANMZCP07 (03.01 BIOS), Chipset: AMD Device 14e8, Memory: 2 x 8GB DRAM-5600MT/s A-DATA AD5S56008G-B, Disk: 512GB Western Digital PC SN810 SDCPNRY-512G, Graphics: AMD Radeon RX 7700S/7600/7600S/7600M XT/PRO W7600 512MB (2208/1124MHz), Audio: AMD Navi 31 HDMI/DP, Network: MEDIATEK MT7922 802.11ax PCI
OS: Ubuntu 23.10, Kernel: 6.7.0-060700-generic (x86_64), Desktop: GNOME Shell 45.2, Display Server: X Server 1.21.1.7 + Wayland, OpenGL: 4.6 Mesa 24.1~git2401210600.c3a64f~oibaf~m (git-c3a64f8 2024-01-21 mantic-oibaf-ppa) (LLVM 16.0.6 DRM 3.56), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 2560x1600
ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 99.64 |====================================================================
b . 99.30 |====================================================================
c . 99.08 |====================================================================
ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 10.03 |====================================================================
b . 10.07 |====================================================================
c . 10.09 |====================================================================
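The Executor column in these charts corresponds to ONNX Runtime's execution mode: "Parallel" maps to ORT_PARALLEL and "Standard" to the default sequential mode. A minimal timing sketch with the onnxruntime Python API follows; the model path, dummy inputs, and iteration count are illustrative assumptions, not the harness's actual settings:

    import time
    import numpy as np
    import onnxruntime as ort

    MODEL = "gpt2-10.onnx"  # hypothetical local path to the model under test

    so = ort.SessionOptions()
    # "Executor: Parallel" maps to ORT_PARALLEL; "Executor: Standard"
    # maps to ORT_SEQUENTIAL, which is the onnxruntime default.
    so.execution_mode = ort.ExecutionMode.ORT_PARALLEL
    sess = ort.InferenceSession(MODEL, sess_options=so,
                                providers=["CPUExecutionProvider"])

    # Build a dummy feed, substituting 1 for any dynamic dimension.
    feed = {}
    for inp in sess.get_inputs():
        shape = [d if isinstance(d, int) else 1 for d in inp.shape]
        dtype = np.int64 if "int64" in inp.type else np.float32
        feed[inp.name] = np.zeros(shape, dtype=dtype)

    # Time N inferences; the two chart metrics are reciprocal views:
    # inferences/sec = N / elapsed, and ms/inference = 1000 / that.
    N = 100
    start = time.perf_counter()
    for _ in range(N):
        sess.run(None, feed)
    elapsed = time.perf_counter() - start
    print(f"{N / elapsed:.2f} inf/sec, {1000 * elapsed / N:.2f} ms/inference")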
ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 117.22 |===================================================================
b . 106.36 |=============================================================
c . 115.97 |==================================================================
ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 8.52586 |============================================================
b . 9.39668 |==================================================================
c . 8.61681 |=============================================================
ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 5.90248 |================================================================
b . 5.85229 |================================================================
c . 6.04911 |==================================================================
ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 169.42 |==================================================================
b . 170.95 |===================================================================
c . 165.31 |=================================================================
ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 8.74175 |==================================================================
b . 8.75588 |==================================================================
c . 5.58049 |==========================================
ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 114.39 |===========================================
b . 114.21 |===========================================
c . 179.19 |===================================================================
ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 132.00 |===================================================================
b . 131.48 |===================================================================
c . 130.23 |==================================================================
ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 7.57486 |=================================================================
b . 7.60445 |=================================================================
c . 7.67771 |==================================================================
ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 141.30 |===================================================================
b . 141.49 |===================================================================
c . 140.23 |==================================================================
ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 7.07527 |=================================================================
b . 7.06618 |=================================================================
c . 7.12971 |==================================================================
ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 8.34425 |===============================================================
b . 8.34469 |===============================================================
c . 8.68310 |==================================================================
ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 119.84 |===================================================================
b . 119.84 |===================================================================
c . 115.16 |================================================================
ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 8.02122 |====================================
b . 8.03961 |====================================
c . 14.37190 |=================================================================
ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 124.67 |===================================================================
b . 124.38 |===================================================================
c . 69.58 |=====================================
ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 442.91 |===================================================================
b . 440.31 |===================================================================
c . 432.13 |=================================================================
ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 2.25633 |================================================================
b . 2.26957 |=================================================================
c . 2.31263 |==================================================================
ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 482.95 |===================================================================
b . 471.11 |=================================================================
c . 479.71 |===================================================================
ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 2.06948 |================================================================
b . 2.12402 |==================================================================
c . 2.08341 |=================================================================
ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 0.835404 |=============================================================
b . 0.797356 |==========================================================
c . 0.892070 |=================================================================
ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 1197.02 |===============================================================
b . 1254.40 |==================================================================
c . 1120.99 |===========================================================
ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 1.448550 |=================================================================
b . 1.395417 |==============================================================
c . 1.453520 |=================================================================
ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 690.34 |===============================================================
b . 734.45 |===================================================================
c . 687.98 |===============================================================
ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 18.97 |====================================================================
b . 18.67 |===================================================================
c . 19.03 |====================================================================
ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 52.70 |===================================================================
b . 53.57 |====================================================================
c . 52.54 |===================================================================
ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 18.47 |===================================================
b . 24.75 |====================================================================
c . 18.51 |===================================================
ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 54.13 |====================================================================
b . 42.88 |======================================================
c . 54.02 |====================================================================
ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 213.28 |==================================================================
b . 213.16 |==================================================================
c . 215.86 |===================================================================
ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 4.68787 |==================================================================
b . 4.69052 |==================================================================
c . 4.63202 |=================================================================
ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 231.53 |===================================================================
b . 225.30 |=================================================================
c . 223.60 |=================================================================
ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 4.31816 |================================================================
b . 4.43760 |==================================================================
c . 4.47122 |==================================================================
ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 63.18 |====================================================================
b . 62.97 |====================================================================
c . 63.28 |====================================================================
ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 15.83 |====================================================================
b . 15.88 |====================================================================
c . 15.80 |====================================================================
ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 100.63 |===================================================================
b . 87.19 |==========================================================
c . 61.33 |=========================================
ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 9.93546 |========================================
b . 12.10370 |================================================
c . 16.30420 |=================================================================
ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 35.93 |====================================================================
b . 35.89 |====================================================================
c . 35.61 |===================================================================
ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 27.83 |===================================================================
b . 27.86 |===================================================================
c . 28.08 |====================================================================
ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 47.70 |====================================================================
b . 40.26 |=========================================================
c . 36.25 |====================================================
ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 20.96 |====================================================
b . 25.20 |==============================================================
c . 27.59 |====================================================================
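Since each pair of charts reports the same measurement two ways, inference time in ms should equal 1000 divided by inferences per second, up to rounding of the published figures. A quick consistency check over a few values transcribed from the charts above:

    # Each (inferences/sec, reported ms) pair is transcribed from the charts.
    results = {
        "GPT-2 / Parallel / a": (99.64, 10.03),
        "yolov4 / Standard / c": (5.58049, 179.19),
        "bertsquad-12 / Standard / c": (14.37190, 69.58),
    }
    for name, (ips, ms) in results.items():
        print(f"{name}: 1000/{ips} = {1000 / ips:.2f} ms (reported {ms} ms)")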