onnx new

AMD Ryzen 7 7840U testing with a PHX Swift SFE16-43 Ray_PEU (V1.04 BIOS) and AMD Phoenix1 512MB on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2402036-NE-ONNXNEW5418
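
For context when reading the tables below: "Executor: Standard" and "Executor: Parallel" refer to ONNX Runtime's sequential and parallel execution modes. A minimal sketch for reproducing a single data point outside the Phoronix Test Suite follows, assuming onnxruntime is installed and "bertsquad-12.onnx" is a local copy of one of the benchmarked models (the file name, run count, and zero-filled dummy inputs are illustrative assumptions, not the test profile's exact configuration):

    # Minimal sketch: measure inferences/sec and mean latency with ONNX Runtime's
    # two executors. The model path and inputs below are placeholders.
    import time
    import numpy as np
    import onnxruntime as ort

    def measure(model_path, parallel, runs=100):
        opts = ort.SessionOptions()
        # "Executor: Parallel" vs "Executor: Standard" in the results tables
        # maps to ORT_PARALLEL vs ORT_SEQUENTIAL execution modes.
        opts.execution_mode = (ort.ExecutionMode.ORT_PARALLEL if parallel
                               else ort.ExecutionMode.ORT_SEQUENTIAL)
        sess = ort.InferenceSession(model_path, sess_options=opts,
                                    providers=["CPUExecutionProvider"])
        # Build zero-filled dummy inputs from the model's declared shapes,
        # substituting 1 for any symbolic (dynamic) dimensions.
        feeds = {}
        for inp in sess.get_inputs():
            shape = [d if isinstance(d, int) else 1 for d in inp.shape]
            dtype = np.float32 if "float" in inp.type else np.int64
            feeds[inp.name] = np.zeros(shape, dtype=dtype)
        start = time.perf_counter()
        for _ in range(runs):
            sess.run(None, feeds)
        elapsed = time.perf_counter() - start
        print(f"parallel={parallel}: {runs / elapsed:.2f} inferences/sec, "
              f"{1000 * elapsed / runs:.2f} ms mean")

    measure("bertsquad-12.onnx", parallel=False)  # Executor: Standard
    measure("bertsquad-12.onnx", parallel=True)   # Executor: Parallel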
Result Identifier   Date Run      Test Duration
a                   February 03   41 Minutes
b                   February 03   5 Hours, 32 Minutes
c                   February 03   41 Minutes
Average                           2 Hours, 18 Minutes

System configuration (identical for runs a, b, and c):

Processor: AMD Ryzen 7 7840U @ 5.29GHz (8 Cores / 16 Threads), Motherboard: PHX Swift SFE16-43 Ray_PEU (V1.04 BIOS), Chipset: AMD Device 14e8, Memory: 4 x 4GB DRAM-6400MT/s K3LKBKB0BM-MGCP, Disk: 1024GB Micron_3400_MTFDKBA1T0TFH, Graphics: AMD Phoenix1 512MB (2700/800MHz), Audio: AMD Rembrandt Radeon HD Audio, Network: MEDIATEK MT7922 802.11ax PCI

OS: Ubuntu 23.10, Kernel: 6.7.0-060700-generic (x86_64), Desktop: GNOME Shell 45.2, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 24.1~git2401230600.42fc83~oibaf~m (git-42fc83a 2024-01-23 mantic-oibaf-ppa) (LLVM 16.0.6 DRM 3.56), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 3200x2000

ONNX Runtime 1.17 - Device: CPU

Inferences Per Second (higher is better):

Model                        Executor   a          b          c
bertsquad-12                 Standard   8.06563    9.85290    8.10071
bertsquad-12                 Parallel   8.30376    7.38226    8.34866
fcn-resnet101-11             Standard   1.30753    1.17670    1.32231
fcn-resnet101-11             Parallel   0.767664   0.733399   0.801406
ResNet50 v1-12-int8          Standard   208.12     191.25     210.21
ResNet50 v1-12-int8          Parallel   199.72     182.77     198.33
super-resolution-10          Standard   89.27      70.41      88.94
super-resolution-10          Parallel   61.10      55.96      59.74
yolov4                       Standard   5.54675    6.84051    5.56792
yolov4                       Parallel   5.61840    5.35577    5.81626
ArcFace ResNet-100           Standard   27.50      20.56      18.14
ArcFace ResNet-100           Parallel   18.45      17.01      18.18
CaffeNet 12-int8             Standard   438.11     407.24     430.54
CaffeNet 12-int8             Parallel   420.38     394.10     424.32
GPT-2                        Standard   99.38      100.28     106.85
GPT-2                        Parallel   91.75      91.27      92.97
Faster R-CNN R-50-FPN-int8   Standard   42.60      44.05      38.94
Faster R-CNN R-50-FPN-int8   Parallel   36.00      34.48      36.29
T5 Encoder                   Standard   125.55     123.39     122.08
T5 Encoder                   Parallel   116.58     115.71     117.37

Inference Time Cost in ms (lower is better):

Model                        Executor   a          b          c
bertsquad-12                 Standard   123.98     106.17     123.44
bertsquad-12                 Parallel   120.43     135.50     119.78
fcn-resnet101-11             Standard   764.80     849.86     756.25
fcn-resnet101-11             Parallel   1302.65    1368.27    1247.80
ResNet50 v1-12-int8          Standard   4.80345    5.22787    4.75585
ResNet50 v1-12-int8          Parallel   5.00581    5.47098    5.04136
super-resolution-10          Standard   11.20      14.68      11.24
super-resolution-10          Parallel   16.37      17.87      16.74
yolov4                       Standard   180.28     148.74     179.60
yolov4                       Parallel   177.98     186.85     171.93
ArcFace ResNet-100           Standard   36.37      50.39      55.12
ArcFace ResNet-100           Parallel   54.20      58.78      55.00
CaffeNet 12-int8             Standard   2.28112    2.45823    2.32098
CaffeNet 12-int8             Parallel   2.37678    2.53589    2.35443
GPT-2                        Standard   10.05490   9.99957    9.35371
GPT-2                        Parallel   10.90      10.95      10.75
Faster R-CNN R-50-FPN-int8   Standard   23.47      22.80      25.68
Faster R-CNN R-50-FPN-int8   Parallel   27.77      29.00      27.56
T5 Encoder                   Standard   7.96284    8.10241    8.18943
T5 Encoder                   Parallel   8.57597    8.64193    8.51836
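
Note that the two tables are reciprocal views of the same measurements: inferences per second is roughly 1000 divided by the mean inference time in milliseconds. For example, bertsquad-12 under the Parallel executor on run a shows 120.43 ms, and 1000 / 120.43 ≈ 8.30, matching the reported 8.30376 inferences per second. Where the two tables disagree slightly (e.g., ArcFace ResNet-100 under the Standard executor), the difference presumably comes from the two metrics being averaged over separate sampling windows.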