OONX Sapphire Rapids

2 x Intel Xeon Platinum 8490H testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 23.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2302115-NE-OONXSAPPH22&grw&sor&rro.

Processor: 2 x Intel Xeon Platinum 8490H @ 3.50GHz (120 Cores / 240 Threads)
Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS)
Chipset: Intel Device 1bce
Memory: 1008GB
Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007 + 960GB INTEL SSDSC2KG96
Graphics: ASPEED
Monitor: VGA HDMI
Network: 4 x Intel E810-C for QSFP + 2 x Intel X710 for 10GBASE-T
OS: Ubuntu 23.04
Kernel: 5.19.0-21-generic (x86_64)
Desktop: GNOME Shell 43.2
Display Server: X Server 1.21.1.6
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

Tested configurations: ONNX Runtime 1.14; ONNX Runtime 1.14 - No AMX; ONNX Runtime 1.14 use_dnnl

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x2b0000c0
Python Details: Python 3.11.1
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

[Results overview table: inferences-per-second and inference-time-cost figures for each model / executor combination under ONNX Runtime 1.14, ONNX Runtime 1.14 - No AMX, and ONNX Runtime 1.14 use_dnnl; the individual results are broken out below.]

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better:
ONNX Runtime 1.14 use_dnnl: 11.20 (SE +/- 0.11, N = 15)
ONNX Runtime 1.14 - No AMX: 11.27 (SE +/- 0.13, N = 15)
ONNX Runtime 1.14: 11.54 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt (these options apply to all ONNX Runtime results in this file)
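The SE +/- figures above are standard errors over N repeated runs of each test. As a rough sketch of how such a summary line is produced (assuming a plain sample standard error of the mean and hypothetical run values, not PTS's actual implementation):

```python
import math

def summarize(samples):
    """Return (mean, standard error of the mean) for a list of run results."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance with Bessel's correction, then SEM = s / sqrt(n)
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var) / math.sqrt(n)

# Hypothetical three-run sample around the yolov4 Standard result
mean, se = summarize([11.52, 11.54, 11.56])
print(f"{mean:.2f} (SE +/- {se:.2f}, N = 3)")  # prints "11.54 (SE +/- 0.01, N = 3)"
```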

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better:
ONNX Runtime 1.14: 9.36451 (SE +/- 0.08393, N = 3)
ONNX Runtime 1.14 - No AMX: 9.47631 (SE +/- 0.06211, N = 3)
ONNX Runtime 1.14 use_dnnl: 9.56208 (SE +/- 0.05377, N = 3)

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better:
ONNX Runtime 1.14 use_dnnl: 9.93450 (SE +/- 0.15346, N = 15)
ONNX Runtime 1.14: 10.00997 (SE +/- 0.15984, N = 15)
ONNX Runtime 1.14 - No AMX: 10.46180 (SE +/- 0.10214, N = 3)

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better:
ONNX Runtime 1.14 - No AMX: 2.95924 (SE +/- 0.00278, N = 3)
ONNX Runtime 1.14 use_dnnl: 3.00654 (SE +/- 0.03052, N = 15)
ONNX Runtime 1.14: 3.03253 (SE +/- 0.04052, N = 3)

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better:
ONNX Runtime 1.14: 209.66 (SE +/- 0.73, N = 3)
ONNX Runtime 1.14 - No AMX: 209.70 (SE +/- 0.42, N = 3)
ONNX Runtime 1.14 use_dnnl: 211.42 (SE +/- 0.60, N = 3)

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better:
ONNX Runtime 1.14 use_dnnl: 178.53 (SE +/- 0.94, N = 3)
ONNX Runtime 1.14: 179.90 (SE +/- 0.28, N = 3)
ONNX Runtime 1.14 - No AMX: 180.39 (SE +/- 1.01, N = 3)

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better:
ONNX Runtime 1.14: 14.94 (SE +/- 0.17, N = 3)
ONNX Runtime 1.14 - No AMX: 15.17 (SE +/- 0.04, N = 3)
ONNX Runtime 1.14 use_dnnl: 15.19 (SE +/- 0.12, N = 9)

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better:
ONNX Runtime 1.14 - No AMX: 15.00 (SE +/- 0.09, N = 3)
ONNX Runtime 1.14 use_dnnl: 15.00 (SE +/- 0.14, N = 3)
ONNX Runtime 1.14: 15.03 (SE +/- 0.08, N = 3)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better:
ONNX Runtime 1.14: 190.02 (SE +/- 0.21, N = 3)
ONNX Runtime 1.14 use_dnnl: 194.52 (SE +/- 3.35, N = 13)
ONNX Runtime 1.14 - No AMX: 195.15 (SE +/- 3.06, N = 14)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better:
ONNX Runtime 1.14 use_dnnl: 150.86 (SE +/- 1.24, N = 3)
ONNX Runtime 1.14: 153.60 (SE +/- 1.30, N = 3)
ONNX Runtime 1.14 - No AMX: 158.31 (SE +/- 1.03, N = 3)

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better:
ONNX Runtime 1.14 use_dnnl: 32.61 (SE +/- 0.46, N = 3)
ONNX Runtime 1.14: 33.97 (SE +/- 0.22, N = 3)

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better:
ONNX Runtime 1.14: 27.21 (SE +/- 0.06, N = 3)
ONNX Runtime 1.14 use_dnnl: 27.43 (SE +/- 0.26, N = 3)

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better:
ONNX Runtime 1.14 use_dnnl: 175.67 (SE +/- 0.56, N = 3)
ONNX Runtime 1.14: 177.93 (SE +/- 0.57, N = 3)

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better:
ONNX Runtime 1.14: 162.48 (SE +/- 1.61, N = 5)
ONNX Runtime 1.14 use_dnnl: 164.90 (SE +/- 1.24, N = 3)

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better:
ONNX Runtime 1.14: 684.15 (SE +/- 9.35, N = 3)
ONNX Runtime 1.14 use_dnnl: 695.78 (SE +/- 9.80, N = 3)

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better:
ONNX Runtime 1.14: 589.20 (SE +/- 3.23, N = 3)
ONNX Runtime 1.14 use_dnnl: 590.80 (SE +/- 4.15, N = 3)

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better:
ONNX Runtime 1.14: 39.77 (SE +/- 0.12, N = 3)
ONNX Runtime 1.14 use_dnnl: 40.01 (SE +/- 0.24, N = 3)

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better:
ONNX Runtime 1.14 use_dnnl: 31.78 (SE +/- 0.27, N = 3)
ONNX Runtime 1.14: 32.90 (SE +/- 0.46, N = 3)

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 1900 / 2905.29 / 3511
ONNX Runtime 1.14 use_dnnl: 1900 / 2910.83 / 3514
ONNX Runtime 1.14: 1900 / 2919.08 / 3512

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 201.41 / 613.94 / 639.3
ONNX Runtime 1.14 use_dnnl: 196.94 / 613.29 / 638.63
ONNX Runtime 1.14: 202.94 / 610.95 / 637.6

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 33 / 48.39 / 51
ONNX Runtime 1.14 - No AMX: 29 / 48.24 / 51
ONNX Runtime 1.14: 28 / 46.25 / 50

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14: 1900 / 2904 / 3509
ONNX Runtime 1.14 - No AMX: 1900 / 2917 / 3506
ONNX Runtime 1.14 use_dnnl: 1900 / 2921 / 3515
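Each monitor table collapses a stream of sensor samples into the minimum, average, and maximum seen during the run. A minimal sketch of that reduction, using hypothetical CPU-frequency samples in MHz:

```python
def min_avg_max(samples):
    """Collapse a sensor sample stream into the (min, avg, max) triple shown in the tables."""
    return min(samples), sum(samples) / len(samples), max(samples)

# Hypothetical frequency readings polled over one test run
print(min_avg_max([1900, 3400, 3509, 2800]))  # (1900, 2902.25, 3509)
```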

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 210 / 612 / 639
ONNX Runtime 1.14 use_dnnl: 203 / 612 / 638
ONNX Runtime 1.14: 198 / 611 / 637

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 37.0 / 47.7 / 50.0
ONNX Runtime 1.14 use_dnnl: 36.0 / 47.5 / 50.0
ONNX Runtime 1.14: 36.0 / 47.2 / 50.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 1900 / 2751 / 3515
ONNX Runtime 1.14: 1900 / 2753 / 3514
ONNX Runtime 1.14 use_dnnl: 1900 / 2771 / 3511

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 203 / 674 / 707
ONNX Runtime 1.14 use_dnnl: 195 / 669 / 709
ONNX Runtime 1.14: 122 / 669 / 709

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14: 36 / 51.5 / 54
ONNX Runtime 1.14 use_dnnl: 36 / 51.5 / 54
ONNX Runtime 1.14 - No AMX: 36 / 50.98 / 54

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 1900 / 2905 / 3508
ONNX Runtime 1.14 use_dnnl: 1900 / 2916 / 3677
ONNX Runtime 1.14: 1900 / 2922 / 3551

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 198.53 / 612.94 / 641.31
ONNX Runtime 1.14: 194.95 / 611.31 / 637.9
ONNX Runtime 1.14 - No AMX: 209.54 / 611.05 / 638.4

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 39.0 / 48.5 / 50.0
ONNX Runtime 1.14 use_dnnl: 38.0 / 48.5 / 51.0
ONNX Runtime 1.14: 39.0 / 48.2 / 50.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 1900 / 3450 / 3516
ONNX Runtime 1.14: 1900 / 3457 / 3530
ONNX Runtime 1.14 use_dnnl: 1900 / 3457 / 3517

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 198.9 / 440.5 / 454.7
ONNX Runtime 1.14 - No AMX: 198.6 / 438.8 / 453.5
ONNX Runtime 1.14: 209.4 / 437.4 / 451.4

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 37.0 / 53.2 / 55.0
ONNX Runtime 1.14 use_dnnl: 36.0 / 53.2 / 55.0
ONNX Runtime 1.14: 37.0 / 53.0 / 55.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 1900 / 3478 / 3517
ONNX Runtime 1.14 use_dnnl: 1900 / 3489 / 3514
ONNX Runtime 1.14: 1900 / 3495 / 3515

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14: 191.9 / 446.6 / 462.1
ONNX Runtime 1.14 use_dnnl: 204.6 / 446.4 / 461.0
ONNX Runtime 1.14 - No AMX: 197.3 / 445.5 / 460.4

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 35.0 / 40.9 / 45.0
ONNX Runtime 1.14 use_dnnl: 35.0 / 39.1 / 45.0
ONNX Runtime 1.14: 34.0 / 37.5 / 40.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14: 1900 / 3460 / 3512
ONNX Runtime 1.14 use_dnnl: 1900 / 3466 / 3516
ONNX Runtime 1.14 - No AMX: 1900 / 3470 / 3512

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 199.2 / 565.5 / 588.4
ONNX Runtime 1.14 - No AMX: 195.6 / 564.4 / 586.3
ONNX Runtime 1.14: 201.2 / 561.6 / 584.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 32.0 / 49.2 / 51.0
ONNX Runtime 1.14 - No AMX: 33.0 / 47.9 / 51.0
ONNX Runtime 1.14: 32.0 / 47.6 / 50.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 1900 / 3179 / 3515
ONNX Runtime 1.14: 1900 / 3179 / 3512
ONNX Runtime 1.14 - No AMX: 1900 / 3181 / 3510

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 212.9 / 563.2 / 593.5
ONNX Runtime 1.14: 202.7 / 562.2 / 591.1
ONNX Runtime 1.14 - No AMX: 194.9 / 560.2 / 590.7

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 37.0 / 45.8 / 49.0
ONNX Runtime 1.14: 36.0 / 45.3 / 47.0
ONNX Runtime 1.14 - No AMX: 36.0 / 45.1 / 48.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14: 1900 / 3482 / 3515
ONNX Runtime 1.14 - No AMX: 1900 / 3486 / 3524
ONNX Runtime 1.14 use_dnnl: 1900 / 3494 / 3520

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 104.64 / 436.21 / 453.4
ONNX Runtime 1.14 - No AMX: 193.29 / 435.51 / 452.19
ONNX Runtime 1.14: 194.32 / 434.53 / 450.05

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 35 / 49.32 / 52
ONNX Runtime 1.14 - No AMX: 36 / 48.93 / 51
ONNX Runtime 1.14: 35 / 48.1 / 50

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 1900 / 3475 / 3510
ONNX Runtime 1.14 use_dnnl: 1900 / 3478 / 3511
ONNX Runtime 1.14: 1900 / 3494 / 3516

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 - No AMX: 201.2 / 439.1 / 454.6
ONNX Runtime 1.14: 202.4 / 438.8 / 455.7
ONNX Runtime 1.14 use_dnnl: 203.6 / 438.6 / 458.8

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 34.0 / 38.1 / 42.0
ONNX Runtime 1.14 - No AMX: 34.0 / 37.5 / 40.0
ONNX Runtime 1.14: 34.0 / 37.4 / 42.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 1900 / 2932 / 3512
ONNX Runtime 1.14: 2900 / 2935 / 3514

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14: 202 / 622 / 649
ONNX Runtime 1.14 use_dnnl: 199 / 621 / 652

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 32.0 / 48.9 / 52.0
ONNX Runtime 1.14: 33.0 / 48.4 / 52.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14: 1900 / 2911 / 3514
ONNX Runtime 1.14 use_dnnl: 1900 / 2929 / 3510

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 201 / 625 / 651
ONNX Runtime 1.14: 207 / 623 / 650

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 37.0 / 48.4 / 51.0
ONNX Runtime 1.14: 37.0 / 47.9 / 50.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14: 1900 / 2904 / 3506
ONNX Runtime 1.14 use_dnnl: 1900 / 2905 / 3503

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 208 / 617 / 639
ONNX Runtime 1.14: 202 / 616 / 637

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 36.0 / 50.7 / 53.0
ONNX Runtime 1.14: 36.0 / 50.1 / 52.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 1900 / 2889 / 3507
ONNX Runtime 1.14: 1900 / 2890 / 3666

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 289 / 638 / 660
ONNX Runtime 1.14: 290 / 633 / 659

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14: 38.0 / 50.7 / 53.0
ONNX Runtime 1.14 use_dnnl: 40.0 / 48.9 / 51.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14: 1900 / 2908 / 3400
ONNX Runtime 1.14 use_dnnl: 1900 / 2914 / 3501

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 283 / 607 / 628
ONNX Runtime 1.14: 289 / 602 / 623

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 40.0 / 49.7 / 51.0
ONNX Runtime 1.14: 40.0 / 48.8 / 51.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 1900 / 2903 / 3501
ONNX Runtime 1.14: 1900 / 2908 / 3500

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 272 / 617 / 637
ONNX Runtime 1.14: 278 / 615 / 633

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 40.0 / 48.1 / 51.0
ONNX Runtime 1.14: 40.0 / 46.9 / 49.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14: 1900 / 2982 / 3510
ONNX Runtime 1.14 use_dnnl: 1900 / 2997 / 3509

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 257.9 / 583.0 / 603.0
ONNX Runtime 1.14: 239.7 / 582.2 / 599.4

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 40.0 / 47.9 / 50.0
ONNX Runtime 1.14: 37.0 / 46.3 / 48.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz, More Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 1900 / 3034 / 3508
ONNX Runtime 1.14: 1900 / 3070 / 3525

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14: 236.5 / 561.9 / 589.2
ONNX Runtime 1.14 use_dnnl: 267.7 / 561.7 / 588.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 40.0 / 46.0 / 51.0
ONNX Runtime 1.14: 37.0 / 45.1 / 48.0

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Phoronix Test Suite System Monitoring

Megahertz (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 1900 / 3073.43 / 3677
ONNX Runtime 1.14: 1900 / 3026.38 / 3666

CPU Power Consumption Monitor

Phoronix Test Suite System Monitoring

Watts (Min / Avg / Max):
ONNX Runtime 1.14: 122.34 / 577.11 / 709.46
ONNX Runtime 1.14 use_dnnl: 104.64 / 571.11 / 709.06

CPU Temperature Monitor

Phoronix Test Suite System Monitoring

Celsius (Min / Avg / Max):
ONNX Runtime 1.14 use_dnnl: 32 / 48.36 / 55
ONNX Runtime 1.14: 28 / 47.63 / 55

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better:
ONNX Runtime 1.14 use_dnnl: 89.39 (SE +/- 0.92, N = 15)
ONNX Runtime 1.14 - No AMX: 88.93 (SE +/- 1.09, N = 15)
ONNX Runtime 1.14: 86.67 (SE +/- 0.12, N = 3)
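The inference-time figures in this section are, to rounding, the reciprocals of the inferences-per-second results reported earlier (e.g. 1000 / 11.54 ≈ 86.66 ms against the 86.67 ms shown here, which is derived from the unrounded throughput). A quick sketch of that reciprocal relation:

```python
def ips_to_ms(inferences_per_second):
    # Per-inference latency in milliseconds implied by a throughput figure.
    return 1000.0 / inferences_per_second

# yolov4 / CPU / Standard reported 11.54 inferences per second above
print(round(ips_to_ms(11.54), 2))  # 86.66 -- close to the 86.67 ms in this table
```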

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better:
ONNX Runtime 1.14: 106.80 (SE +/- 0.95, N = 3)
ONNX Runtime 1.14 - No AMX: 105.53 (SE +/- 0.70, N = 3)
ONNX Runtime 1.14 use_dnnl: 104.58 (SE +/- 0.59, N = 3)

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better:
ONNX Runtime 1.14 use_dnnl: 101.02 (SE +/- 1.67, N = 15)
ONNX Runtime 1.14: 100.29 (SE +/- 1.75, N = 15)
ONNX Runtime 1.14 - No AMX: 95.60 (SE +/- 0.93, N = 3)

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better:
ONNX Runtime 1.14 - No AMX: 337.92 (SE +/- 0.32, N = 3)
ONNX Runtime 1.14 use_dnnl: 333.10 (SE +/- 3.47, N = 15)
ONNX Runtime 1.14: 329.87 (SE +/- 4.47, N = 3)

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better:
ONNX Runtime 1.14: 4.76912 (SE +/- 0.01656, N = 3)
ONNX Runtime 1.14 - No AMX: 4.76804 (SE +/- 0.00962, N = 3)
ONNX Runtime 1.14 use_dnnl: 4.72937 (SE +/- 0.01348, N = 3)

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better:
ONNX Runtime 1.14 use_dnnl: 5.60021 (SE +/- 0.02952, N = 3)
ONNX Runtime 1.14: 5.55742 (SE +/- 0.00850, N = 3)
ONNX Runtime 1.14 - No AMX: 5.54240 (SE +/- 0.03108, N = 3)

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better:
ONNX Runtime 1.14: 66.97 (SE +/- 0.75, N = 3)
ONNX Runtime 1.14 - No AMX: 65.92 (SE +/- 0.16, N = 3)
ONNX Runtime 1.14 use_dnnl: 65.88 (SE +/- 0.49, N = 9)

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better:
ONNX Runtime 1.14 - No AMX: 66.69 (SE +/- 0.38, N = 3)
ONNX Runtime 1.14 use_dnnl: 66.69 (SE +/- 0.65, N = 3)
ONNX Runtime 1.14: 66.55 (SE +/- 0.34, N = 3)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

Inference Time Cost (ms) - Fewer Is Better
  ONNX Runtime 1.14             5.26053   (SE +/- 0.00587, N = 3)
  ONNX Runtime 1.14 use_dnnl    5.15414   (SE +/- 0.07549, N = 13)
  ONNX Runtime 1.14 - No AMX    5.13582   (SE +/- 0.06843, N = 14)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Parallel

Inference Time Cost (ms) - Fewer Is Better
  ONNX Runtime 1.14 use_dnnl    6.62309   (SE +/- 0.05441, N = 3)
  ONNX Runtime 1.14             6.50527   (SE +/- 0.05508, N = 3)
  ONNX Runtime 1.14 - No AMX    6.31109   (SE +/- 0.04118, N = 3)

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

Inference Time Cost (ms) - Fewer Is Better
  ONNX Runtime 1.14 use_dnnl    30.68   (SE +/- 0.44, N = 3)
  ONNX Runtime 1.14             29.44   (SE +/- 0.19, N = 3)

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel

Inference Time Cost (ms) - Fewer Is Better
  ONNX Runtime 1.14             36.74   (SE +/- 0.08, N = 3)
  ONNX Runtime 1.14 use_dnnl    36.47   (SE +/- 0.35, N = 3)

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

Inference Time Cost (ms) - Fewer Is Better
  ONNX Runtime 1.14 use_dnnl    5.69175   (SE +/- 0.01821, N = 3)
  ONNX Runtime 1.14             5.61969   (SE +/- 0.01809, N = 3)

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel

Inference Time Cost (ms) - Fewer Is Better
  ONNX Runtime 1.14             6.15564   (SE +/- 0.06169, N = 5)
  ONNX Runtime 1.14 use_dnnl    6.06317   (SE +/- 0.04559, N = 3)

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard

Inference Time Cost (ms) - Fewer Is Better
  ONNX Runtime 1.14             1.46149   (SE +/- 0.02000, N = 3)
  ONNX Runtime 1.14 use_dnnl    1.43703   (SE +/- 0.02005, N = 3)

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel

Inference Time Cost (ms) - Fewer Is Better
  ONNX Runtime 1.14             1.69517   (SE +/- 0.00918, N = 3)
  ONNX Runtime 1.14 use_dnnl    1.69067   (SE +/- 0.01190, N = 3)

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard

Inference Time Cost (ms) - Fewer Is Better
  ONNX Runtime 1.14             25.14   (SE +/- 0.08, N = 3)
  ONNX Runtime 1.14 use_dnnl    25.00   (SE +/- 0.15, N = 3)

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel

Inference Time Cost (ms) - Fewer Is Better
  ONNX Runtime 1.14 use_dnnl    31.47   (SE +/- 0.26, N = 3)
  ONNX Runtime 1.14             30.40   (SE +/- 0.42, N = 3)


Phoronix Test Suite v10.8.5