ONNX Runtime 1.14 AMD Ryzen 7900

AMD Ryzen 9 7900 12-Core testing with a Gigabyte B650M DS3H (F4h BIOS) and Gigabyte AMD Raphael 512MB on Ubuntu 22.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2302113-NE-ONNXRUNTI86
Test Runs

Result Identifier    Date Run            Test Duration
a                    February 11 2023    7 Hours, 15 Minutes
b                    February 11 2023    55 Minutes
c                    February 11 2023    55 Minutes
Average                                  3 Hours, 1 Minute



ONNX Runtime 1.14 AMD Ryzen 7900 Benchmarks (OpenBenchmarking.org / Phoronix Test Suite)

System Details:
  Processor: AMD Ryzen 9 7900 12-Core @ 3.70GHz (12 Cores / 24 Threads)
  Motherboard: Gigabyte B650M DS3H (F4h BIOS)
  Chipset: AMD Device 14d8
  Memory: 32GB
  Disk: 1000GB Sabrent Rocket 4.0 Plus
  Graphics: Gigabyte AMD Raphael 512MB (2200/2400MHz)
  Audio: AMD Rembrandt Radeon HD Audio
  Monitor: ASUS VP28U
  Network: Realtek RTL8125 2.5GbE
  OS: Ubuntu 22.10
  Kernel: 6.2.0-060200rc5daily20230129-generic (x86_64)
  Desktop: GNOME Shell 43.0
  Display Server: X Server 1.21.1.4 + Wayland
  OpenGL: 4.6 Mesa 23.0.0-devel (git-e20564c 2022-12-12 kinetic-oibaf-ppa) (LLVM 15.0.5 DRM 3.49)
  Vulkan: 1.3.235
  Compiler: GCC 12.2.0
  File-System: ext4
  Screen Resolution: 3840x2160

System Logs / Notes:
  - Transparent Huge Pages: madvise
  - GCC configure: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
  - CPU Microcode: 0xa601203
  - Python 3.10.7
  - Security mitigations: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): relative performance of runs a, b, and c across all 36 ONNX Runtime test configurations, on a 100% to 139% scale. Chart omitted; the detailed table and per-test results follow below.

ONNX Runtime 1.14 AMD Ryzen 7900 - Detailed Result Table (OpenBenchmarking.org)

Model - Device - Executor                   | Metric         | a        | b        | c
Faster R-CNN R-50-FPN-int8 - CPU - Parallel | Inferences/sec | 46.158   | 48.783   | 45.5455
fcn-resnet101-11 - CPU - Parallel           | Inferences/sec | 1.44749  | 1.42497  | 1.46501
ArcFace ResNet-100 - CPU - Parallel         | Inferences/sec | 28.5525  | 28.0126  | 28.4802
GPT-2 - CPU - Parallel                      | Inferences/sec | 117.620  | 119.305  | 117.987
bertsquad-12 - CPU - Parallel               | Inferences/sec | 12.7244  | 12.6537  | 12.5677
CaffeNet 12-int8 - CPU - Parallel           | Inferences/sec | 747.703  | 756.209  | 747.942
super-resolution-10 - CPU - Parallel        | Inferences/sec | 104.954  | 103.984  | 105.142
ResNet50 v1-12-int8 - CPU - Parallel        | Inferences/sec | 342.109  | 343.334  | 339.558
GPT-2 - CPU - Standard                      | Inferences/sec | 139.851  | 140.464  | 140.869
yolov4 - CPU - Parallel                     | Inferences/sec | 7.89231  | 7.85662  | 7.88522
Faster R-CNN R-50-FPN-int8 - CPU - Standard | Time cost (ms) | 17.9581  | 20.6953  | 20.7296
Faster R-CNN R-50-FPN-int8 - CPU - Standard | Inferences/sec | 56.4768  | 48.3156  | 48.2353
Faster R-CNN R-50-FPN-int8 - CPU - Parallel | Time cost (ms) | 21.6724  | 20.4971  | 21.9541
super-resolution-10 - CPU - Standard        | Time cost (ms) | 7.57906  | 6.14844  | 8.53572
super-resolution-10 - CPU - Standard        | Inferences/sec | 136.489  | 162.636  | 117.149
super-resolution-10 - CPU - Parallel        | Time cost (ms) | 9.52804  | 9.61619  | 9.51022
ResNet50 v1-12-int8 - CPU - Standard        | Time cost (ms) | 2.58475  | 2.57769  | 2.78527
ResNet50 v1-12-int8 - CPU - Standard        | Inferences/sec | 388.919  | 387.843  | 358.939
ResNet50 v1-12-int8 - CPU - Parallel        | Time cost (ms) | 2.92224  | 2.91195  | 2.94418
ArcFace ResNet-100 - CPU - Standard         | Time cost (ms) | 26.0143  | 25.1227  | 28.2701
ArcFace ResNet-100 - CPU - Standard         | Inferences/sec | 39.1002  | 39.802   | 35.3716
ArcFace ResNet-100 - CPU - Parallel         | Time cost (ms) | 35.0229  | 35.6974  | 35.1112
fcn-resnet101-11 - CPU - Standard           | Time cost (ms) | 417.423  | 393.062  | 382.71
fcn-resnet101-11 - CPU - Standard           | Inferences/sec | 2.44468  | 2.54412  | 2.61293
fcn-resnet101-11 - CPU - Parallel           | Time cost (ms) | 690.920  | 701.765  | 682.589
CaffeNet 12-int8 - CPU - Standard           | Time cost (ms) | 1.072977 | 0.985640 | 0.984378
CaffeNet 12-int8 - CPU - Standard           | Inferences/sec | 939.153  | 1014.19  | 1015.49
CaffeNet 12-int8 - CPU - Parallel           | Time cost (ms) | 1.33643  | 1.32121  | 1.33605
bertsquad-12 - CPU - Standard               | Time cost (ms) | 62.7802  | 54.7368  | 55.3673
bertsquad-12 - CPU - Standard               | Inferences/sec | 16.3127  | 18.2687  | 18.0605
bertsquad-12 - CPU - Parallel               | Time cost (ms) | 78.6000  | 79.0267  | 79.5671
yolov4 - CPU - Standard                     | Time cost (ms) | 103.942  | 93.408   | 96.4742
yolov4 - CPU - Standard                     | Inferences/sec | 9.78495  | 10.7055  | 10.3652
yolov4 - CPU - Parallel                     | Time cost (ms) | 126.714  | 127.279  | 126.818
GPT-2 - CPU - Standard                      | Time cost (ms) | 7.14936  | 7.11735  | 7.09665
GPT-2 - CPU - Parallel                      | Time cost (ms) | 8.49794  | 8.37831  | 8.47144

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs ONNX Runtime against various models from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
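As a minimal illustration of what each benchmark iteration is doing, the sketch below loads an ONNX model and runs a single CPU inference through onnxruntime's Python API. The model filename is a hypothetical placeholder, not a file from this result page; any ONNX Model Zoo export works the same way, though some models require specific input dtypes beyond the float32 assumed here.

    import numpy as np
    import onnxruntime as ort

    # Open an inference session on the CPU execution provider.
    # "super_resolution.onnx" is a placeholder path.
    sess = ort.InferenceSession("super_resolution.onnx",
                                providers=["CPUExecutionProvider"])

    # Build a random tensor matching the model's declared input,
    # treating any symbolic/dynamic dimensions as batch size 1.
    inp = sess.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    x = np.random.rand(*shape).astype(np.float32)

    # One inference; the benchmark score is how many of these complete per second.
    outputs = sess.run(None, {inp.name: x})
    print(inp.name, shape, "->", [o.shape for o in outputs])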

ONNX Runtime 1.14 (OpenBenchmarking.org)
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second (more is better): a: 46.16 | b: 48.78 | c: 45.55 (SE +/- 0.57, N = 4)
Note: this and all results below were built with (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt
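The Standard and Parallel executors compared throughout this page map to ONNX Runtime's sequential and parallel execution modes, which are selected through SessionOptions in the Python API. A minimal sketch of that switch follows; the thread counts are illustrative values for this 12-core/24-thread CPU, not the test profile's exact settings, and the model path is hypothetical.

    import onnxruntime as ort

    opts = ort.SessionOptions()
    # "Standard" runs the graph sequentially; "Parallel" lets independent
    # subgraphs of the model execute concurrently.
    opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL   # or ORT_SEQUENTIAL
    opts.intra_op_num_threads = 24   # threads inside a single operator
    opts.inter_op_num_threads = 2    # threads across independent operators
    sess = ort.InferenceSession("model.onnx", sess_options=opts,
                                providers=["CPUExecutionProvider"])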

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second (more is better): a: 1.44749 | b: 1.42497 | c: 1.46501 (SE +/- 0.01034, N = 3)

Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second (more is better): a: 28.55 | b: 28.01 | c: 28.48 (SE +/- 0.08, N = 3)

Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second (more is better): a: 117.62 | b: 119.31 | c: 117.99 (SE +/- 0.23, N = 3)

Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second (more is better): a: 12.72 | b: 12.65 | c: 12.57 (SE +/- 0.11, N = 3)

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second (more is better): a: 747.70 | b: 756.21 | c: 747.94 (SE +/- 0.37, N = 3)

Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second (more is better): a: 104.95 | b: 103.98 | c: 105.14 (SE +/- 0.64, N = 3)

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second (more is better): a: 342.11 | b: 343.33 | c: 339.56 (SE +/- 1.02, N = 3)

Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second (more is better): a: 139.85 | b: 140.46 | c: 140.87 (SE +/- 1.09, N = 3)

Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second (more is better): a: 7.89231 | b: 7.85662 | c: 7.88522 (SE +/- 0.05112, N = 3)

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost in ms (fewer is better): a: 17.96 | b: 20.70 | c: 20.73 (SE +/- 0.58, N = 15)

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second (more is better): a: 56.48 | b: 48.32 | c: 48.24 (SE +/- 1.75, N = 15)
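Each configuration reports two views of the same workload: mean Inference Time Cost in milliseconds and Inferences Per Second. The two are roughly reciprocal, but the Phoronix Test Suite records them as separate samples, so they rarely match exactly. A quick check against run a's Standard-executor numbers above:

    # Run a, Faster R-CNN R-50-FPN-int8, Standard executor (values from this page):
    time_cost_ms = 17.96            # reported mean latency
    ips = 56.48                     # reported throughput
    print(1000.0 / time_cost_ms)    # ~55.68 inferences/sec, near the 56.48 reported
    print(1000.0 / ips)             # ~17.71 ms, near the 17.96 ms reported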

Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost in ms (fewer is better): a: 7.57906 | b: 6.14844 | c: 8.53572 (SE +/- 0.36564, N = 15)

Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second (more is better): a: 136.49 | b: 162.64 | c: 117.15 (SE +/- 6.76, N = 15)

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost in ms (fewer is better): a: 2.58475 | b: 2.57769 | c: 2.78527 (SE +/- 0.05054, N = 15)

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second (more is better): a: 388.92 | b: 387.84 | c: 358.94 (SE +/- 7.79, N = 15)

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost in ms (fewer is better): a: 26.01 | b: 25.12 | c: 28.27 (SE +/- 1.04, N = 15)

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second (more is better): a: 39.10 | b: 39.80 | c: 35.37 (SE +/- 1.18, N = 15)

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost in ms (fewer is better): a: 417.42 | b: 393.06 | c: 382.71 (SE +/- 19.24, N = 14)

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second (more is better): a: 2.44468 | b: 2.54412 | c: 2.61293 (SE +/- 0.08226, N = 14)

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost in ms (fewer is better): a: 1.072977 | b: 0.985640 | c: 0.984378 (SE +/- 0.029300, N = 12)

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second (more is better): a: 939.15 | b: 1014.19 | c: 1015.49 (SE +/- 25.25, N = 12)

Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost in ms (fewer is better): a: 62.78 | b: 54.74 | c: 55.37 (SE +/- 3.13, N = 12)

Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second (more is better): a: 16.31 | b: 18.27 | c: 18.06 (SE +/- 0.70, N = 12)

Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost in ms (fewer is better): a: 103.94 | b: 93.41 | c: 96.47 (SE +/- 3.84, N = 15)

Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second (more is better): a: 9.78495 | b: 10.70550 | c: 10.36520 (SE +/- 0.31791, N = 15)
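Each entry above carries a standard error and sample count (e.g. SE +/- 0.58, N = 15). The sketch below shows how such statistics can be derived from repeated timed runs; it is a simplified illustration, as the actual Phoronix Test Suite harness adds warm-up passes and dynamic run-count logic not shown here.

    import time
    import statistics

    def bench(sess, feed, n=15):
        """Time n inferences; return mean latency (ms) and its standard error."""
        samples = []
        for _ in range(n):
            t0 = time.perf_counter()
            sess.run(None, feed)                       # one ONNX Runtime inference
            samples.append((time.perf_counter() - t0) * 1000.0)
        mean = statistics.mean(samples)
        se = statistics.stdev(samples) / n ** 0.5      # SE = stdev / sqrt(N)
        return mean, se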