OONX Sapphire Rapids

2 x Intel Xeon Platinum 8490H testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 23.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2302115-NE-OONXSAPPH22&grw&rdt&rro.

System configuration (identical for all three runs: ONNX Runtime 1.14, ONNX Runtime 1.14 - No AMX, ONNX Runtime 1.14 use_dnnl):

Processor: 2 x Intel Xeon Platinum 8490H @ 3.50GHz (120 Cores / 240 Threads)
Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS)
Chipset: Intel Device 1bce
Memory: 1008GB
Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007 + 960GB INTEL SSDSC2KG96
Graphics: ASPEED
Monitor: VGA HDMI
Network: 4 x Intel E810-C for QSFP + 2 x Intel X710 for 10GBASE-T
OS: Ubuntu 23.04
Kernel: 5.19.0-21-generic (x86_64)
Desktop: GNOME Shell 43.2
Display Server: X Server 1.21.1.6
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate performance (EPP: performance); CPU Microcode: 0x2b0000c0
Python Details: Python 3.11.1
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence; srbds: Not affected; tsx_async_abort: Not affected
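The "No AMX" configuration below refers to running with the Xeon's Advanced Matrix Extensions disabled. Whether a CPU exposes AMX can be checked from its feature flags; a small sketch, parsing a flags string of the kind found in /proc/cpuinfo on Linux (the sample string here is illustrative, not taken from this system):

```python
def amx_features(cpuinfo_flags: str) -> list[str]:
    # Collect the AMX-related feature flags (amx_bf16, amx_tile, amx_int8).
    return sorted(f for f in cpuinfo_flags.split() if f.startswith("amx"))

# Illustrative flags excerpt; on Linux the real string comes from /proc/cpuinfo.
sample = "avx512f avx512_bf16 amx_bf16 amx_tile amx_int8"
print(amx_features(sample))  # ['amx_bf16', 'amx_int8', 'amx_tile']
```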

OONX Sapphire Rapids - Result Overview

Inferences Per Second (more is better):

Model - Executor                          ONNX Runtime 1.14   No AMX    use_dnnl
yolov4 - Standard                         11.54               11.27     11.20
yolov4 - Parallel                         9.36                9.48      9.56
fcn-resnet101-11 - Standard               10.01               10.46     9.93
fcn-resnet101-11 - Parallel               3.03                2.96      3.01
super-resolution-10 - Standard            209.66              209.70    211.42
super-resolution-10 - Parallel            179.90              180.39    178.53
bertsquad-12 - Standard                   14.94               15.17     15.19
bertsquad-12 - Parallel                   15.03               15.00     15.00
GPT-2 - Standard                          190.02              195.15    194.52
GPT-2 - Parallel                          153.60              158.31    150.86
ArcFace ResNet-100 - Standard             33.97               -         32.61
ArcFace ResNet-100 - Parallel             27.21               -         27.43
ResNet50 v1-12-int8 - Standard            177.93              -         175.67
ResNet50 v1-12-int8 - Parallel            162.48              -         164.90
CaffeNet 12-int8 - Standard               684.15              -         695.78
CaffeNet 12-int8 - Parallel               589.20              -         590.80
Faster R-CNN R-50-FPN-int8 - Standard     39.77               -         40.01
Faster R-CNN R-50-FPN-int8 - Parallel     32.90               -         31.78

Inference Time Cost in ms (fewer is better):

Model - Executor                          ONNX Runtime 1.14   No AMX    use_dnnl
yolov4 - Standard                         86.67               88.93     89.39
yolov4 - Parallel                         106.80              105.53    104.58
fcn-resnet101-11 - Standard               100.29              95.60     101.02
fcn-resnet101-11 - Parallel               329.87              337.92    333.10
super-resolution-10 - Standard            4.77                4.77      4.73
super-resolution-10 - Parallel            5.56                5.54      5.60
bertsquad-12 - Standard                   66.97               65.92     65.88
bertsquad-12 - Parallel                   66.55               66.69     66.69
GPT-2 - Standard                          5.26                5.14      5.15
GPT-2 - Parallel                          6.51                6.31      6.62
ArcFace ResNet-100 - Standard             29.44               -         30.68
ArcFace ResNet-100 - Parallel             36.74               -         36.47
ResNet50 v1-12-int8 - Standard            5.62                -         5.69
ResNet50 v1-12-int8 - Parallel            6.16                -         6.06
CaffeNet 12-int8 - Standard               1.46                -         1.44
CaffeNet 12-int8 - Parallel               1.70                -         1.69
Faster R-CNN R-50-FPN-int8 - Standard     25.14               -         25.00
Faster R-CNN R-50-FPN-int8 - Parallel     30.40               -         31.47
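The two metrics reported throughout are reciprocals of one another: inference time cost in milliseconds is simply 1000 divided by inferences per second. A quick sketch, using the yolov4 Standard throughput of the ONNX Runtime 1.14 run:

```python
def ips_to_ms(inferences_per_second: float) -> float:
    # Inference time cost (ms) is the reciprocal of throughput
    # (inferences/sec), scaled to milliseconds.
    return 1000.0 / inferences_per_second

# yolov4 - Standard, ONNX Runtime 1.14: 11.5378 inferences/sec
print(round(ips_to_ms(11.5378), 2))  # 86.67, matching the reported ms figure
```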

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Standard

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 11.20 (SE +/- 0.11, N = 15)
ONNX Runtime 1.14 - No AMX: 11.27 (SE +/- 0.13, N = 15)
ONNX Runtime 1.14: 11.54 (SE +/- 0.02, N = 3)
(CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt (all ONNX Runtime results in this report were built with the same flags)
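Each result above carries a standard error over N runs; the standard error is commonly computed as the sample standard deviation divided by the square root of the run count. A minimal sketch (the three run values below are made up for illustration, not taken from this result file):

```python
import statistics
from math import sqrt

def standard_error(runs: list[float]) -> float:
    # Sample standard deviation over sqrt(run count), the usual
    # definition behind an "SE +/-" figure.
    return statistics.stdev(runs) / sqrt(len(runs))

# Hypothetical three runs of a throughput benchmark
runs = [11.52, 11.54, 11.56]
print(f"Avg: {statistics.mean(runs):.2f}, SE +/- {standard_error(runs):.3f}, N = {len(runs)}")
```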

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Parallel

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 9.56208 (SE +/- 0.05377, N = 3)
ONNX Runtime 1.14 - No AMX: 9.47631 (SE +/- 0.06211, N = 3)
ONNX Runtime 1.14: 9.36451 (SE +/- 0.08393, N = 3)

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 9.93450 (SE +/- 0.15346, N = 15)
ONNX Runtime 1.14 - No AMX: 10.46180 (SE +/- 0.10214, N = 3)
ONNX Runtime 1.14: 10.00997 (SE +/- 0.15984, N = 15)

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 3.00654 (SE +/- 0.03052, N = 15)
ONNX Runtime 1.14 - No AMX: 2.95924 (SE +/- 0.00278, N = 3)
ONNX Runtime 1.14: 3.03253 (SE +/- 0.04052, N = 3)

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 211.42 (SE +/- 0.60, N = 3)
ONNX Runtime 1.14 - No AMX: 209.70 (SE +/- 0.42, N = 3)
ONNX Runtime 1.14: 209.66 (SE +/- 0.73, N = 3)

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Parallel

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 178.53 (SE +/- 0.94, N = 3)
ONNX Runtime 1.14 - No AMX: 180.39 (SE +/- 1.01, N = 3)
ONNX Runtime 1.14: 179.90 (SE +/- 0.28, N = 3)

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Standard

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 15.19 (SE +/- 0.12, N = 9)
ONNX Runtime 1.14 - No AMX: 15.17 (SE +/- 0.04, N = 3)
ONNX Runtime 1.14: 14.94 (SE +/- 0.17, N = 3)

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Parallel

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 15.00 (SE +/- 0.14, N = 3)
ONNX Runtime 1.14 - No AMX: 15.00 (SE +/- 0.09, N = 3)
ONNX Runtime 1.14: 15.03 (SE +/- 0.08, N = 3)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 194.52 (SE +/- 3.35, N = 13)
ONNX Runtime 1.14 - No AMX: 195.15 (SE +/- 3.06, N = 14)
ONNX Runtime 1.14: 190.02 (SE +/- 0.21, N = 3)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Parallel

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 150.86 (SE +/- 1.24, N = 3)
ONNX Runtime 1.14 - No AMX: 158.31 (SE +/- 1.03, N = 3)
ONNX Runtime 1.14: 153.60 (SE +/- 1.30, N = 3)
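For comparisons like the GPT-2 Parallel result above, the relative difference between configurations follows directly from the throughput figures; a small sketch using the No AMX (158.31) and default (153.60) inferences-per-second values:

```python
def percent_faster(a: float, b: float) -> float:
    # How much faster configuration a is than configuration b, in percent.
    return (a - b) / b * 100.0

# GPT-2 - Parallel: ONNX Runtime 1.14 - No AMX vs. default ONNX Runtime 1.14
print(round(percent_faster(158.31, 153.60), 1))  # 3.1, i.e. ~3.1% higher throughput
```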

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 32.61 (SE +/- 0.46, N = 3)
ONNX Runtime 1.14: 33.97 (SE +/- 0.22, N = 3)

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 27.43 (SE +/- 0.26, N = 3)
ONNX Runtime 1.14: 27.21 (SE +/- 0.06, N = 3)

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 175.67 (SE +/- 0.56, N = 3)
ONNX Runtime 1.14: 177.93 (SE +/- 0.57, N = 3)

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 164.90 (SE +/- 1.24, N = 3)
ONNX Runtime 1.14: 162.48 (SE +/- 1.61, N = 5)

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 695.78 (SE +/- 9.80, N = 3)
ONNX Runtime 1.14: 684.15 (SE +/- 9.35, N = 3)

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 590.80 (SE +/- 4.15, N = 3)
ONNX Runtime 1.14: 589.20 (SE +/- 3.23, N = 3)

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 40.01 (SE +/- 0.24, N = 3)
ONNX Runtime 1.14: 39.77 (SE +/- 0.12, N = 3)

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel

Inferences Per Second, more is better:
ONNX Runtime 1.14 use_dnnl: 31.78 (SE +/- 0.27, N = 3)
ONNX Runtime 1.14: 32.90 (SE +/- 0.46, N = 3)

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 2910.83 / Max 3514
ONNX Runtime 1.14 - No AMX: Min 1900 / Avg 2905.29 / Max 3511
ONNX Runtime 1.14: Min 1900 / Avg 2919.08 / Max 3512

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 196.94 / Avg 613.29 / Max 638.63
ONNX Runtime 1.14 - No AMX: Min 201.41 / Avg 613.94 / Max 639.3
ONNX Runtime 1.14: Min 202.94 / Avg 610.95 / Max 637.6

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 33 / Avg 48.39 / Max 51
ONNX Runtime 1.14 - No AMX: Min 29 / Avg 48.24 / Max 51
ONNX Runtime 1.14: Min 28 / Avg 46.25 / Max 50

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 2921 / Max 3515
ONNX Runtime 1.14 - No AMX: Min 1900 / Avg 2917 / Max 3506
ONNX Runtime 1.14: Min 1900 / Avg 2904 / Max 3509

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 203 / Avg 612 / Max 638
ONNX Runtime 1.14 - No AMX: Min 210 / Avg 612 / Max 639
ONNX Runtime 1.14: Min 198 / Avg 611 / Max 637

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 36.0 / Avg 47.5 / Max 50.0
ONNX Runtime 1.14 - No AMX: Min 37.0 / Avg 47.7 / Max 50.0
ONNX Runtime 1.14: Min 36.0 / Avg 47.2 / Max 50.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 2770.59 / Max 3511
ONNX Runtime 1.14 - No AMX: Min 1900 / Avg 2751.37 / Max 3515
ONNX Runtime 1.14: Min 1900 / Avg 2752.96 / Max 3514

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 195.07 / Avg 668.74 / Max 709.06
ONNX Runtime 1.14 - No AMX: Min 202.65 / Avg 673.64 / Max 706.82
ONNX Runtime 1.14: Min 122.34 / Avg 668.72 / Max 709.46

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 36 / Avg 51.5 / Max 54
ONNX Runtime 1.14 - No AMX: Min 36 / Avg 50.98 / Max 54
ONNX Runtime 1.14: Min 36 / Avg 51.5 / Max 54

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 2916.07 / Max 3677
ONNX Runtime 1.14 - No AMX: Min 1900 / Avg 2905.18 / Max 3508
ONNX Runtime 1.14: Min 1900 / Avg 2921.95 / Max 3551

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 198.53 / Avg 612.94 / Max 641.31
ONNX Runtime 1.14 - No AMX: Min 209.54 / Avg 611.05 / Max 638.4
ONNX Runtime 1.14: Min 194.95 / Avg 611.31 / Max 637.9

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 38 / Avg 48.5 / Max 51
ONNX Runtime 1.14 - No AMX: Min 39 / Avg 48.53 / Max 50
ONNX Runtime 1.14: Min 39 / Avg 48.15 / Max 50

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 3457 / Max 3517
ONNX Runtime 1.14 - No AMX: Min 1900 / Avg 3450 / Max 3516
ONNX Runtime 1.14: Min 1900 / Avg 3457 / Max 3530

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 198.9 / Avg 440.5 / Max 454.7
ONNX Runtime 1.14 - No AMX: Min 198.6 / Avg 438.8 / Max 453.5
ONNX Runtime 1.14: Min 209.4 / Avg 437.4 / Max 451.4

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 36.0 / Avg 53.2 / Max 55.0
ONNX Runtime 1.14 - No AMX: Min 37.0 / Avg 53.2 / Max 55.0
ONNX Runtime 1.14: Min 37.0 / Avg 53.0 / Max 55.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 3489 / Max 3514
ONNX Runtime 1.14 - No AMX: Min 1900 / Avg 3478 / Max 3517
ONNX Runtime 1.14: Min 1900 / Avg 3495 / Max 3515

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 204.6 / Avg 446.4 / Max 461.0
ONNX Runtime 1.14 - No AMX: Min 197.3 / Avg 445.5 / Max 460.4
ONNX Runtime 1.14: Min 191.9 / Avg 446.6 / Max 462.1

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 35.0 / Avg 39.1 / Max 45.0
ONNX Runtime 1.14 - No AMX: Min 35.0 / Avg 40.9 / Max 45.0
ONNX Runtime 1.14: Min 34.0 / Avg 37.5 / Max 40.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 3466 / Max 3516
ONNX Runtime 1.14 - No AMX: Min 1900 / Avg 3470 / Max 3512
ONNX Runtime 1.14: Min 1900 / Avg 3460 / Max 3512

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 199.2 / Avg 565.5 / Max 588.4
ONNX Runtime 1.14 - No AMX: Min 195.6 / Avg 564.4 / Max 586.3
ONNX Runtime 1.14: Min 201.2 / Avg 561.6 / Max 584.0

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 32.0 / Avg 49.2 / Max 51.0
ONNX Runtime 1.14 - No AMX: Min 33.0 / Avg 47.9 / Max 51.0
ONNX Runtime 1.14: Min 32.0 / Avg 47.6 / Max 50.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 3179 / Max 3515
ONNX Runtime 1.14 - No AMX: Min 1900 / Avg 3181 / Max 3510
ONNX Runtime 1.14: Min 1900 / Avg 3179 / Max 3512

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 212.9 / Avg 563.2 / Max 593.5
ONNX Runtime 1.14 - No AMX: Min 194.9 / Avg 560.2 / Max 590.7
ONNX Runtime 1.14: Min 202.7 / Avg 562.2 / Max 591.1

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 37.0 / Avg 45.8 / Max 49.0
ONNX Runtime 1.14 - No AMX: Min 36.0 / Avg 45.1 / Max 48.0
ONNX Runtime 1.14: Min 36.0 / Avg 45.3 / Max 47.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 3493.87 / Max 3520
ONNX Runtime 1.14 - No AMX: Min 1900 / Avg 3486.47 / Max 3524
ONNX Runtime 1.14: Min 1900 / Avg 3481.53 / Max 3515

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 104.64 / Avg 436.21 / Max 453.4
ONNX Runtime 1.14 - No AMX: Min 193.29 / Avg 435.51 / Max 452.19
ONNX Runtime 1.14: Min 194.32 / Avg 434.53 / Max 450.05

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 35 / Avg 49.32 / Max 52
ONNX Runtime 1.14 - No AMX: Min 36 / Avg 48.93 / Max 51
ONNX Runtime 1.14: Min 35 / Avg 48.1 / Max 50

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 3478 / Max 3511
ONNX Runtime 1.14 - No AMX: Min 1900 / Avg 3475 / Max 3510
ONNX Runtime 1.14: Min 1900 / Avg 3494 / Max 3516

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 203.6 / Avg 438.6 / Max 458.8
ONNX Runtime 1.14 - No AMX: Min 201.2 / Avg 439.1 / Max 454.6
ONNX Runtime 1.14: Min 202.4 / Avg 438.8 / Max 455.7

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 34.0 / Avg 38.1 / Max 42.0
ONNX Runtime 1.14 - No AMX: Min 34.0 / Avg 37.5 / Max 40.0
ONNX Runtime 1.14: Min 34.0 / Avg 37.4 / Max 42.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 2932 / Max 3512
ONNX Runtime 1.14: Min 2900 / Avg 2935 / Max 3514

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 199 / Avg 621 / Max 652
ONNX Runtime 1.14: Min 202 / Avg 622 / Max 649

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 32.0 / Avg 48.9 / Max 52.0
ONNX Runtime 1.14: Min 33.0 / Avg 48.4 / Max 52.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 2929 / Max 3510
ONNX Runtime 1.14: Min 1900 / Avg 2911 / Max 3514

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 201 / Avg 625 / Max 651
ONNX Runtime 1.14: Min 207 / Avg 623 / Max 650

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 37.0 / Avg 48.4 / Max 51.0
ONNX Runtime 1.14: Min 37.0 / Avg 47.9 / Max 50.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 2905 / Max 3503
ONNX Runtime 1.14: Min 1900 / Avg 2904 / Max 3506

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 208 / Avg 617 / Max 639
ONNX Runtime 1.14: Min 202 / Avg 616 / Max 637

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 36.0 / Avg 50.7 / Max 53.0
ONNX Runtime 1.14: Min 36.0 / Avg 50.1 / Max 52.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 2889 / Max 3507
ONNX Runtime 1.14: Min 1900 / Avg 2890 / Max 3666

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 289 / Avg 638 / Max 660
ONNX Runtime 1.14: Min 290 / Avg 633 / Max 659

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 40.0 / Avg 48.9 / Max 51.0
ONNX Runtime 1.14: Min 38.0 / Avg 50.7 / Max 53.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 2914 / Max 3501
ONNX Runtime 1.14: Min 1900 / Avg 2908 / Max 3400

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 283 / Avg 607 / Max 628
ONNX Runtime 1.14: Min 289 / Avg 602 / Max 623

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 40.0 / Avg 49.7 / Max 51.0
ONNX Runtime 1.14: Min 40.0 / Avg 48.8 / Max 51.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 2903 / Max 3501
ONNX Runtime 1.14: Min 1900 / Avg 2908 / Max 3500

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 272 / Avg 617 / Max 637
ONNX Runtime 1.14: Min 278 / Avg 615 / Max 633

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 40.0 / Avg 48.1 / Max 51.0
ONNX Runtime 1.14: Min 40.0 / Avg 46.9 / Max 49.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 2997 / Max 3509
ONNX Runtime 1.14: Min 1900 / Avg 2982 / Max 3510

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 257.9 / Avg 583.0 / Max 603.0
ONNX Runtime 1.14: Min 239.7 / Avg 582.2 / Max 599.4

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 40.0 / Avg 47.9 / Max 50.0
ONNX Runtime 1.14: Min 37.0 / Avg 46.3 / Max 48.0

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 3034 / Max 3508
ONNX Runtime 1.14: Min 1900 / Avg 3070 / Max 3525

ONNX Runtime

CPU Power Consumption Monitor

Watts:
ONNX Runtime 1.14 use_dnnl: Min 267.7 / Avg 561.7 / Max 588.0
ONNX Runtime 1.14: Min 236.5 / Avg 561.9 / Max 589.2

ONNX Runtime

CPU Temperature Monitor

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 40.0 / Avg 46.0 / Max 51.0
ONNX Runtime 1.14: Min 37.0 / Avg 45.1 / Max 48.0

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Phoronix Test Suite System Monitoring

Megahertz:
ONNX Runtime 1.14 use_dnnl: Min 1900 / Avg 3073.43 / Max 3677
ONNX Runtime 1.14: Min 1900 / Avg 3026.38 / Max 3666

CPU Power Consumption Monitor

Phoronix Test Suite System Monitoring

Watts:
ONNX Runtime 1.14 use_dnnl: Min 104.64 / Avg 571.11 / Max 709.06
ONNX Runtime 1.14: Min 122.34 / Avg 577.11 / Max 709.46

CPU Temperature Monitor

Phoronix Test Suite System Monitoring

Celsius:
ONNX Runtime 1.14 use_dnnl: Min 32 / Avg 48.36 / Max 55
ONNX Runtime 1.14: Min 28 / Avg 47.63 / Max 55

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Standard

Inference Time Cost (ms), fewer is better:
ONNX Runtime 1.14 use_dnnl: 89.39 (SE +/- 0.92, N = 15)
ONNX Runtime 1.14 - No AMX: 88.93 (SE +/- 1.09, N = 15)
ONNX Runtime 1.14: 86.67 (SE +/- 0.12, N = 3)

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), fewer is better:
ONNX Runtime 1.14 use_dnnl: 104.58 (SE +/- 0.59, N = 3)
ONNX Runtime 1.14 - No AMX: 105.53 (SE +/- 0.70, N = 3)
ONNX Runtime 1.14: 106.80 (SE +/- 0.95, N = 3)

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

Inference Time Cost (ms), fewer is better:
ONNX Runtime 1.14 use_dnnl: 101.02 (SE +/- 1.67, N = 15)
ONNX Runtime 1.14 - No AMX: 95.60 (SE +/- 0.93, N = 3)
ONNX Runtime 1.14: 100.29 (SE +/- 1.75, N = 15)

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), fewer is better:
ONNX Runtime 1.14 use_dnnl: 333.10 (SE +/- 3.47, N = 15)
ONNX Runtime 1.14 - No AMX: 337.92 (SE +/- 0.32, N = 3)
ONNX Runtime 1.14: 329.87 (SE +/- 4.47, N = 3)

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

Inference Time Cost (ms), fewer is better:
ONNX Runtime 1.14 use_dnnl: 4.72937 (SE +/- 0.01348, N = 3)
ONNX Runtime 1.14 - No AMX: 4.76804 (SE +/- 0.00962, N = 3)
ONNX Runtime 1.14: 4.76912 (SE +/- 0.01656, N = 3)

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), fewer is better:
ONNX Runtime 1.14 use_dnnl: 5.60021 (SE +/- 0.02952, N = 3)
ONNX Runtime 1.14 - No AMX: 5.54240 (SE +/- 0.03108, N = 3)
ONNX Runtime 1.14: 5.55742 (SE +/- 0.00850, N = 3)

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Standard

Inference Time Cost (ms), fewer is better:
ONNX Runtime 1.14 use_dnnl: 65.88 (SE +/- 0.49, N = 9)
ONNX Runtime 1.14 - No AMX: 65.92 (SE +/- 0.16, N = 3)
ONNX Runtime 1.14: 66.97 (SE +/- 0.75, N = 3)

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Parallel

Inference Time Cost (ms) - Fewer Is Better (OpenBenchmarking.org)
  ONNX Runtime 1.14 use_dnnl : 66.69 ms (SE +/- 0.65, N = 3)
  ONNX Runtime 1.14 - No AMX : 66.69 ms (SE +/- 0.38, N = 3)
  ONNX Runtime 1.14          : 66.55 ms (SE +/- 0.34, N = 3)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

Inference Time Cost (ms) - Fewer Is Better (OpenBenchmarking.org)
  ONNX Runtime 1.14 use_dnnl : 5.15414 ms (SE +/- 0.07549, N = 13)
  ONNX Runtime 1.14 - No AMX : 5.13582 ms (SE +/- 0.06843, N = 14)
  ONNX Runtime 1.14          : 5.26053 ms (SE +/- 0.00587, N = 3)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Parallel

Inference Time Cost (ms) - Fewer Is Better (OpenBenchmarking.org)
  ONNX Runtime 1.14 use_dnnl : 6.62309 ms (SE +/- 0.05441, N = 3)
  ONNX Runtime 1.14 - No AMX : 6.31109 ms (SE +/- 0.04118, N = 3)
  ONNX Runtime 1.14          : 6.50527 ms (SE +/- 0.05508, N = 3)

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

Inference Time Cost (ms) - Fewer Is Better (OpenBenchmarking.org)
  ONNX Runtime 1.14 use_dnnl : 30.68 ms (SE +/- 0.44, N = 3)
  ONNX Runtime 1.14          : 29.44 ms (SE +/- 0.19, N = 3)

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel

Inference Time Cost (ms) - Fewer Is Better (OpenBenchmarking.org)
  ONNX Runtime 1.14 use_dnnl : 36.47 ms (SE +/- 0.35, N = 3)
  ONNX Runtime 1.14          : 36.74 ms (SE +/- 0.08, N = 3)

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

Inference Time Cost (ms) - Fewer Is Better (OpenBenchmarking.org)
  ONNX Runtime 1.14 use_dnnl : 5.69175 ms (SE +/- 0.01821, N = 3)
  ONNX Runtime 1.14          : 5.61969 ms (SE +/- 0.01809, N = 3)

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel

Inference Time Cost (ms) - Fewer Is Better (OpenBenchmarking.org)
  ONNX Runtime 1.14 use_dnnl : 6.06317 ms (SE +/- 0.04559, N = 3)
  ONNX Runtime 1.14          : 6.15564 ms (SE +/- 0.06169, N = 5)

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard

Inference Time Cost (ms) - Fewer Is Better (OpenBenchmarking.org)
  ONNX Runtime 1.14 use_dnnl : 1.43703 ms (SE +/- 0.02005, N = 3)
  ONNX Runtime 1.14          : 1.46149 ms (SE +/- 0.02000, N = 3)

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel

Inference Time Cost (ms) - Fewer Is Better (OpenBenchmarking.org)
  ONNX Runtime 1.14 use_dnnl : 1.69067 ms (SE +/- 0.01190, N = 3)
  ONNX Runtime 1.14          : 1.69517 ms (SE +/- 0.00918, N = 3)

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard

Inference Time Cost (ms) - Fewer Is Better (OpenBenchmarking.org)
  ONNX Runtime 1.14 use_dnnl : 25.00 ms (SE +/- 0.15, N = 3)
  ONNX Runtime 1.14          : 25.14 ms (SE +/- 0.08, N = 3)

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel

Inference Time Cost (ms) - Fewer Is Better (OpenBenchmarking.org)
  ONNX Runtime 1.14 use_dnnl : 31.47 ms (SE +/- 0.26, N = 3)
  ONNX Runtime 1.14          : 30.40 ms (SE +/- 0.42, N = 3)
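Every result above is reported as a mean with "SE +/- x, N = y": the standard error of the mean over N repeated runs. As a minimal sketch of how such a figure is computed from raw timing samples (the sample list here is hypothetical, not data from this result file):

```python
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    n = len(samples)
    return statistics.stdev(samples) / n ** 0.5

# Hypothetical per-run inference times in ms (N = 3 runs).
runs = [30.1, 30.7, 29.9]
print(f"mean = {statistics.mean(runs):.2f} ms, "
      f"SE +/- {standard_error(runs):.2f}, N = {len(runs)}")
```

A small SE relative to the gap between two configurations suggests a real difference; where the SE intervals overlap (as in several of the graphs above), the configurations are effectively tied.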


Phoronix Test Suite v10.8.5