AMD Ryzen 9 7900 12-Core testing with a Gigabyte B650M DS3H (F4h BIOS) and Gigabyte AMD Raphael 512MB on Ubuntu 22.10 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2302113-NE-ONNXRUNTI86
ONNX Runtime 1.14 AMD Ryzen 7900
,,"a","b","c"
Processor,,AMD Ryzen 9 7900 12-Core @ 3.70GHz (12 Cores / 24 Threads),AMD Ryzen 9 7900 12-Core @ 3.70GHz (12 Cores / 24 Threads),AMD Ryzen 9 7900 12-Core @ 3.70GHz (12 Cores / 24 Threads)
Motherboard,,Gigabyte B650M DS3H (F4h BIOS),Gigabyte B650M DS3H (F4h BIOS),Gigabyte B650M DS3H (F4h BIOS)
Chipset,,AMD Device 14d8,AMD Device 14d8,AMD Device 14d8
Memory,,32GB,32GB,32GB
Disk,,1000GB Sabrent Rocket 4.0 Plus,1000GB Sabrent Rocket 4.0 Plus,1000GB Sabrent Rocket 4.0 Plus
Graphics,,Gigabyte AMD Raphael 512MB (2200/2400MHz),Gigabyte AMD Raphael 512MB (2200/2400MHz),Gigabyte AMD Raphael 512MB (2200/2400MHz)
Audio,,AMD Rembrandt Radeon HD Audio,AMD Rembrandt Radeon HD Audio,AMD Rembrandt Radeon HD Audio
Monitor,,ASUS VP28U,ASUS VP28U,ASUS VP28U
Network,,Realtek RTL8125 2.5GbE,Realtek RTL8125 2.5GbE,Realtek RTL8125 2.5GbE
OS,,Ubuntu 22.10,Ubuntu 22.10,Ubuntu 22.10
Kernel,,6.2.0-060200rc5daily20230129-generic (x86_64),6.2.0-060200rc5daily20230129-generic (x86_64),6.2.0-060200rc5daily20230129-generic (x86_64)
Desktop,,GNOME Shell 43.0,GNOME Shell 43.0,GNOME Shell 43.0
Display Server,,X Server 1.21.1.4 + Wayland,X Server 1.21.1.4 + Wayland,X Server 1.21.1.4 + Wayland
OpenGL,,4.6 Mesa 23.0.0-devel (git-e20564c 2022-12-12 kinetic-oibaf-ppa) (LLVM 15.0.5 DRM 3.49),4.6 Mesa 23.0.0-devel (git-e20564c 2022-12-12 kinetic-oibaf-ppa) (LLVM 15.0.5 DRM 3.49),4.6 Mesa 23.0.0-devel (git-e20564c 2022-12-12 kinetic-oibaf-ppa) (LLVM 15.0.5 DRM 3.49)
Vulkan,,1.3.235,1.3.235,1.3.235
Compiler,,GCC 12.2.0,GCC 12.2.0,GCC 12.2.0
File-System,,ext4,ext4,ext4
Screen Resolution,,3840x2160,3840x2160,3840x2160
,,"a","b","c"
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,46.1584,48.7839,45.5455
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,1.44749,1.42497,1.46501
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,28.5525,28.0126,28.4802
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,117.620,119.305,117.987
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,12.7244,12.6537,12.5677
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,747.703,756.209,747.942
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,104.954,103.984,105.142
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,342.109,343.334,339.558
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,139.851,140.464,140.869
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,7.89231,7.85662,7.88522
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,17.9581,20.6953,20.7296
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,56.4768,48.3156,48.2353
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,21.6724,20.4971,21.9541
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,7.57906,6.14844,8.53572
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,136.489,162.636,117.149
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,9.52804,9.61619,9.51022
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,2.58475,2.57769,2.78527
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,388.919,387.843,358.939
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,2.92224,2.91195,2.94418
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,26.0143,25.1227,28.2701
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,39.1002,39.802,35.3716
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,35.0229,35.6974,35.1112
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,417.423,393.062,382.71
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,2.44468,2.54412,2.61293
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,690.920,701.765,682.589
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,1.072977,0.98564,0.984378
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,939.153,1014.19,1015.49
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,1.33643,1.32121,1.33605
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,62.7802,54.7368,55.3673
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,16.3127,18.2687,18.0605
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,78.6000,79.0267,79.5671
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,103.9429,93.408,96.4742
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,9.78495,10.7055,10.3652
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,126.714,127.279,126.818
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,7.14936,7.11735,7.09665
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,8.49794,8.37831,8.47144
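Each result row above carries a direction flag: HIB means higher is better (Inferences/sec) and LIB means lower is better (Inference Time Cost in ms). A minimal sketch, using only the Python standard library, of how rows in this format could be parsed to pick the best of the three runs (a, b, c) per test; the two sample rows are copied verbatim from this file:

```python
import csv
import io

# Two result rows copied from this result file: quoted test name,
# direction flag (HIB = higher is better, LIB = lower is better),
# then the values for runs a, b, c.
SAMPLE = '''\
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,139.851,140.464,140.869
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,103.9429,93.408,96.4742
'''

RUNS = ("a", "b", "c")

def best_run(row):
    """Return (test name, best run label, best value) for one result row."""
    name, direction, *raw = row
    values = [float(v) for v in raw]
    # HIB rows want the maximum; LIB rows want the minimum.
    pick = max if direction == "HIB" else min
    best = pick(range(len(values)), key=values.__getitem__)
    return name, RUNS[best], values[best]

for row in csv.reader(io.StringIO(SAMPLE)):
    print(best_run(row))
```

For these two sample rows the sketch reports run c as fastest on GPT-2 Standard throughput and run b as fastest on yolov4 Standard latency, matching a by-eye reading of the table.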