AMD Ryzen 7 PRO 6850U testing with a LENOVO 21CM0001US (R22ET51W 1.21 BIOS) and AMD Radeon 680M 1GB on Ubuntu 22.10 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2302110-NE-ONNXRUNTI70
ONNX Runtime + Zstd
a, b, c, d:
Processor: AMD Ryzen 7 PRO 6850U @ 4.77GHz (8 Cores / 16 Threads), Motherboard: LENOVO 21CM0001US (R22ET51W 1.21 BIOS), Chipset: AMD Device 14b5, Memory: 16GB, Disk: 512GB Micron MTFDKBA512TFK, Graphics: AMD Radeon 680M 1GB (2200/400MHz), Audio: AMD Rembrandt Radeon HD Audio, Network: Qualcomm QCNFA765
OS: Ubuntu 22.10, Kernel: 6.1.0-060100rc2daily20221028-generic (x86_64), Desktop: GNOME Shell 43.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.2.1 (LLVM 15.0.2 DRM 3.49), Vulkan: 1.3.224, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1920x1200
ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 936.68 |===========================================
b . 1445.07 |==================================================================
c . 1450.01 |==================================================================
d . 933.15 |==========================================
ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 1.067590 |=================================================================
b . 0.692005 |==========================================
c . 0.689648 |==========================================
d . 1.071630 |=================================================================
ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 1531.61 |==================================================================
b . 1513.83 |=================================================================
c . 1453.35 |===============================================================
d . 1453.61 |===============================================================
ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 0.652903 |==============================================================
b . 0.660573 |==============================================================
c . 0.688063 |=================================================================
d . 0.687937 |=================================================================
ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 12.16 |===================================================================
b . 11.17 |==============================================================
c . 10.82 |============================================================
d . 12.34 |====================================================================
ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 82.18 |=============================================================
b . 89.42 |==================================================================
c . 92.34 |====================================================================
d . 80.96 |============================================================
ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 12.51 |===================================================================
b . 12.48 |===================================================================
c . 12.51 |===================================================================
d . 12.62 |====================================================================
ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 79.90 |====================================================================
b . 80.06 |====================================================================
c . 79.90 |====================================================================
d . 79.17 |===================================================================
ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 207.82 |==================================================================
b . 204.47 |=================================================================
c . 205.91 |=================================================================
d . 212.06 |===================================================================
ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 4.81183 |=================================================================
b . 4.89053 |==================================================================
c . 4.85635 |==================================================================
d . 4.71553 |================================================================
ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 91.80 |=============================================
b . 90.72 |============================================
c . 91.99 |=============================================
d . 137.35 |===================================================================
ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 10.89260 |================================================================
b . 11.02160 |=================================================================
c . 10.86950 |================================================================
d . 7.28019 |===========================================
ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 149.98 |===============================================
b . 157.87 |=================================================
c . 215.62 |===================================================================
d . 215.03 |===================================================================
ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 6.66706 |==================================================================
b . 6.33386 |===============================================================
c . 4.63770 |==============================================
d . 4.65036 |==============================================
ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 136.83 |===================================================================
b . 130.11 |================================================================
c . 134.02 |==================================================================
d . 134.36 |==================================================================
ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 7.30784 |===============================================================
b . 7.68535 |==================================================================
c . 7.46141 |================================================================
d . 7.44275 |================================================================
ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 67.99 |===================================================================
b . 68.86 |====================================================================
c . 65.45 |=================================================================
d . 67.07 |==================================================================
ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 14.71 |=================================================================
b . 14.52 |=================================================================
c . 15.28 |====================================================================
d . 14.91 |==================================================================
ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 45.77 |====================================================================
b . 45.52 |====================================================================
c . 45.57 |====================================================================
d . 45.62 |====================================================================
ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 21.85 |====================================================================
b . 21.97 |====================================================================
c . 21.94 |====================================================================
d . 21.92 |====================================================================
ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 33.14 |============================================================
b . 37.82 |====================================================================
c . 33.38 |============================================================
d . 35.96 |=================================================================
ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 30.17 |====================================================================
b . 26.44 |============================================================
c . 29.95 |====================================================================
d . 27.81 |===============================================================
ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 39.60 |====================================================================
b . 39.57 |====================================================================
c . 38.95 |===================================================================
d . 38.98 |===================================================================
ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 25.25 |===================================================================
b . 25.27 |===================================================================
c . 25.67 |====================================================================
d . 25.65 |====================================================================
ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 2.96616 |==========================================================
b . 3.35715 |==================================================================
c . 2.98143 |===========================================================
d . 2.96729 |==========================================================
ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 336.84 |===================================================================
b . 297.64 |===========================================================
c . 335.14 |===================================================================
d . 336.71 |===================================================================
ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 3.52574 |==================================================================
b . 3.43070 |================================================================
c . 3.45450 |=================================================================
d . 3.45622 |=================================================================
ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 283.47 |=================================================================
b . 291.30 |===================================================================
c . 289.29 |===================================================================
d . 289.15 |===================================================================
ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 8.31409 |==============================================================
b . 8.90396 |==================================================================
c . 8.90587 |==================================================================
d . 8.91846 |==================================================================
ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 120.23 |===================================================================
b . 112.27 |===============================================================
c . 112.24 |===============================================================
d . 112.08 |==============================================================
ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 9.07349 |==================================================================
b . 9.02988 |==================================================================
c . 9.04234 |==================================================================
d . 9.04455 |==================================================================
ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 110.18 |===================================================================
b . 110.72 |===================================================================
c . 110.56 |===================================================================
d . 110.54 |===================================================================
ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 13.77 |================================================
b . 13.72 |===============================================
c . 13.74 |================================================
d . 19.66 |====================================================================
ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 72.60 |====================================================================
b . 72.87 |====================================================================
c . 72.76 |====================================================================
d . 50.84 |===============================================
ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 19.27 |====================================================================
b . 19.11 |===================================================================
c . 19.33 |====================================================================
d . 19.21 |====================================================================
ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 51.89 |===================================================================
b . 52.33 |====================================================================
c . 51.74 |===================================================================
d . 52.05 |====================================================================
Zstd Compression 1.5.4
Compression Level: 19 - Decompression Speed
MB/s > Higher Is Better
a . 1503.9 |===================================================================
b . 1501.7 |===================================================================
c . 1503.7 |===================================================================
d . 1510.8 |===================================================================
Zstd Compression 1.5.4
Compression Level: 19 - Compression Speed
MB/s > Higher Is Better
a . 10.3 |====================================================================
b . 10.4 |=====================================================================
c . 10.4 |=====================================================================
d . 10.2 |====================================================================
Zstd Compression 1.5.4
Compression Level: 19, Long Mode - Decompression Speed
MB/s > Higher Is Better
a . 1439.0 |===================================================================
b . 1438.8 |===================================================================
c . 1433.9 |===================================================================
d . 1436.0 |===================================================================
Zstd Compression 1.5.4
Compression Level: 19, Long Mode - Compression Speed
MB/s > Higher Is Better
a . 6.05 |=====================================================================
b . 5.98 |====================================================================
c . 5.96 |====================================================================
d . 6.06 |=====================================================================
Zstd Compression 1.5.4
Compression Level: 8, Long Mode - Decompression Speed
MB/s > Higher Is Better
a . 1780.2 |===================================================================
b . 1778.5 |===================================================================
c . 1754.7 |==================================================================
d . 1764.0 |==================================================================
Zstd Compression 1.5.4
Compression Level: 8, Long Mode - Compression Speed
MB/s > Higher Is Better
a . 281.3 |====================================================================
b . 282.3 |====================================================================
c . 282.9 |====================================================================
d . 281.9 |====================================================================
Zstd Compression 1.5.4
Compression Level: 12 - Decompression Speed
MB/s > Higher Is Better
a . 1782.6 |===================================================================
b . 1790.2 |===================================================================
c . 1763.8 |==================================================================
d . 1753.1 |==================================================================
Zstd Compression 1.5.4
Compression Level: 12 - Compression Speed
MB/s > Higher Is Better
a . 99.0 |=====================================================================
b . 99.6 |=====================================================================
c . 98.8 |====================================================================
d . 99.3 |=====================================================================
Zstd Compression 1.5.4
Compression Level: 3, Long Mode - Decompression Speed
MB/s > Higher Is Better
a . 1629.9 |==================================================================
b . 1645.9 |==================================================================
c . 1658.4 |===================================================================
d . 1648.9 |===================================================================
Zstd Compression 1.5.4
Compression Level: 3, Long Mode - Compression Speed
MB/s > Higher Is Better
a . 448.2 |===========================================================
b . 448.6 |===========================================================
c . 515.3 |====================================================================
d . 455.3 |============================================================
Zstd Compression 1.5.4
Compression Level: 3 - Decompression Speed
MB/s > Higher Is Better
a . 1627.1 |==================================================================
b . 1631.2 |===================================================================
c . 1632.0 |===================================================================
d . 1640.9 |===================================================================
Zstd Compression 1.5.4
Compression Level: 3 - Compression Speed
MB/s > Higher Is Better
a . 1003.7 |===================================================================
b . 1006.6 |===================================================================
c . 1008.4 |===================================================================
d . 999.8 |==================================================================
Zstd Compression 1.5.4
Compression Level: 8 - Decompression Speed
MB/s > Higher Is Better
a . 1723.7 |==================================================================
b . 1738.6 |==================================================================
c . 1749.9 |===================================================================
d . 1754.5 |===================================================================
Zstd Compression 1.5.4
Compression Level: 8 - Compression Speed
MB/s > Higher Is Better
a . 275.6 |====================================================================
b . 276.1 |====================================================================
c . 276.8 |====================================================================
d . 275.0 |====================================================================
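The Zstd charts above show the usual level-vs-speed tradeoff: compression speed drops from roughly 1000 MB/s at level 3 to about 10 MB/s at level 19, while decompression speed stays roughly flat across levels. Zstd itself is not in the Python standard library, so as a rough stdlib illustration of the same knob (not the zstd codec the benchmark measured), here is the analogous level parameter in zlib:

```python
import zlib

# Repetitive payload: the kind of compressible data these throughput
# benchmarks exercise.
data = b"the quick brown fox jumps over the lazy dog " * 2000

# zlib's level knob (1..9) mirrors zstd's levels 3..19 above:
# a higher level spends more CPU searching for matches, producing
# smaller output at a lower compression speed.
fast = zlib.compress(data, level=1)
best = zlib.compress(data, level=9)

print(len(fast), len(best))            # level 9 output is smaller
assert zlib.decompress(fast) == data   # decompression is lossless
assert zlib.decompress(best) == data   # regardless of the level used
```

The same pattern explains why the level-19 decompression results (~1500 MB/s) sit close to the level-3 ones (~1630 MB/s) even though compression at level 19 is two orders of magnitude slower: the extra effort is spent only on the encode side.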