ONNX Sapphire Rapids

2 x Intel Xeon Platinum 8490H testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 23.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2302115-NE-OONXSAPPH22&gru&sor.

Test system configuration (common to all three runs: ONNX Runtime 1.14, ONNX Runtime 1.14 - No AMX, ONNX Runtime 1.14 use_dnnl)

Processor: 2 x Intel Xeon Platinum 8490H @ 3.50GHz (120 Cores / 240 Threads)
Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS)
Chipset: Intel Device 1bce
Memory: 1008GB
Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007 + 960GB INTEL SSDSC2KG96
Graphics: ASPEED
Monitor: VGA HDMI
Network: 4 x Intel E810-C for QSFP + 2 x Intel X710 for 10GBASE-T
OS: Ubuntu 23.04
Kernel: 5.19.0-21-generic (x86_64)
Desktop: GNOME Shell 43.2
Display Server: X Server 1.21.1.6
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x2b0000c0
Python Details: Python 3.11.1
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

ONNX Sapphire Rapids - Result Summary

Inferences Per Second (More Is Better) | ONNX Runtime 1.14 | ONNX Runtime 1.14 - No AMX | ONNX Runtime 1.14 use_dnnl
yolov4 - CPU - Standard | 11.54 | 11.27 | 11.20
yolov4 - CPU - Parallel | 9.36 | 9.48 | 9.56
fcn-resnet101-11 - CPU - Standard | 10.01 | 10.46 | 9.93
fcn-resnet101-11 - CPU - Parallel | 3.03 | 2.96 | 3.01
super-resolution-10 - CPU - Standard | 209.66 | 209.70 | 211.42
super-resolution-10 - CPU - Parallel | 179.90 | 180.39 | 178.53
bertsquad-12 - CPU - Standard | 14.94 | 15.17 | 15.19
bertsquad-12 - CPU - Parallel | 15.03 | 15.00 | 15.00
GPT-2 - CPU - Standard | 190.02 | 195.15 | 194.52
GPT-2 - CPU - Parallel | 153.60 | 158.31 | 150.86
ArcFace ResNet-100 - CPU - Standard | 33.97 | - | 32.61
ArcFace ResNet-100 - CPU - Parallel | 27.21 | - | 27.43
ResNet50 v1-12-int8 - CPU - Standard | 177.93 | - | 175.67
ResNet50 v1-12-int8 - CPU - Parallel | 162.48 | - | 164.90
CaffeNet 12-int8 - CPU - Standard | 684.15 | - | 695.78
CaffeNet 12-int8 - CPU - Parallel | 589.20 | - | 590.80
Faster R-CNN R-50-FPN-int8 - CPU - Standard | 39.77 | - | 40.01
Faster R-CNN R-50-FPN-int8 - CPU - Parallel | 32.90 | - | 31.78

Inference Time Cost in ms (Fewer Is Better) | ONNX Runtime 1.14 | ONNX Runtime 1.14 - No AMX | ONNX Runtime 1.14 use_dnnl
yolov4 - CPU - Standard | 86.67 | 88.93 | 89.39
yolov4 - CPU - Parallel | 106.80 | 105.53 | 104.58
fcn-resnet101-11 - CPU - Standard | 100.29 | 95.60 | 101.02
fcn-resnet101-11 - CPU - Parallel | 329.87 | 337.92 | 333.10
super-resolution-10 - CPU - Standard | 4.77 | 4.77 | 4.73
super-resolution-10 - CPU - Parallel | 5.56 | 5.54 | 5.60
bertsquad-12 - CPU - Standard | 66.97 | 65.92 | 65.88
bertsquad-12 - CPU - Parallel | 66.55 | 66.69 | 66.69
GPT-2 - CPU - Standard | 5.26 | 5.14 | 5.15
GPT-2 - CPU - Parallel | 6.51 | 6.31 | 6.62
ArcFace ResNet-100 - CPU - Standard | 29.44 | - | 30.68
ArcFace ResNet-100 - CPU - Parallel | 36.74 | - | 36.47
ResNet50 v1-12-int8 - CPU - Standard | 5.62 | - | 5.69
ResNet50 v1-12-int8 - CPU - Parallel | 6.16 | - | 6.06
CaffeNet 12-int8 - CPU - Standard | 1.46 | - | 1.44
CaffeNet 12-int8 - CPU - Parallel | 1.70 | - | 1.69
Faster R-CNN R-50-FPN-int8 - CPU - Standard | 25.14 | - | 25.00
Faster R-CNN R-50-FPN-int8 - CPU - Parallel | 30.40 | - | 31.47

(The "No AMX" configuration was only run against the first five models.)
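The throughput and inference-time-cost figures are reciprocal views of the same measurement: the reported time in milliseconds is 1000 divided by inferences per second. A quick stdlib Python sanity check against one pair of values from this result file (yolov4, Standard executor, stock ONNX Runtime 1.14):

```python
# Throughput (inferences/sec) and latency (ms) from the same yolov4 run.
yolov4_standard_ips = 11.5378   # ONNX Runtime 1.14, yolov4 - CPU - Standard
yolov4_standard_ms = 86.6683    # same run, reported as inference time cost

# latency_ms = 1000 / throughput
derived_ms = 1000.0 / yolov4_standard_ips
print(f"derived latency: {derived_ms:.2f} ms")  # ~86.67 ms

# Agrees with the reported value to within rounding.
assert abs(derived_ms - yolov4_standard_ms) < 0.1
```

The same check holds for every model/executor pair in the summary above.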

CPU Temperature Monitor

Phoronix Test Suite System Monitoring

Celsius (Min / Avg / Max)
ONNX Runtime 1.14: 28 / 47.63 / 55
ONNX Runtime 1.14 use_dnnl: 32 / 48.36 / 55

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Phoronix Test Suite System Monitoring

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14: 1900 / 3026.38 / 3666
ONNX Runtime 1.14 use_dnnl: 1900 / 3073.43 / 3677

CPU Power Consumption Monitor

Phoronix Test Suite System Monitoring

Watts (Min / Avg / Max)
ONNX Runtime 1.14 use_dnnl: 104.64 / 571.11 / 709.06
ONNX Runtime 1.14: 122.34 / 577.11 / 709.46
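The run-wide power monitor shows the two monitored configurations drawing nearly identical average CPU package power, so throughput differences below translate almost directly into efficiency differences. A small stdlib sketch of the spread (values from the monitor above; these are averages over the whole benchmark run, not per test):

```python
# Run-wide average CPU power consumption (Watts) from the monitor above.
avg_watts = {
    "ONNX Runtime 1.14 use_dnnl": 571.11,
    "ONNX Runtime 1.14": 577.11,
}

lo, hi = min(avg_watts.values()), max(avg_watts.values())
spread_pct = (hi - lo) / lo * 100
print(f"average power spread between configurations: {spread_pct:.2f}%")  # ~1.05%
```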

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better
ONNX Runtime 1.14: 11.54 (SE +/- 0.02, N = 3)
ONNX Runtime 1.14 - No AMX: 11.27 (SE +/- 0.13, N = 15)
ONNX Runtime 1.14 use_dnnl: 11.20 (SE +/- 0.11, N = 15)
(CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt (these compiler options are common to all ONNX Runtime results in this file)
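Each result above is reported as a mean with "SE +/- x, N = y": the standard error of the mean over y benchmark runs, i.e. the sample standard deviation divided by the square root of N. A stdlib sketch with hypothetical per-run samples (the individual run values are not published, only their mean and SE):

```python
import statistics

# Hypothetical per-run throughput samples (inferences/sec) for illustration.
samples = [11.52, 11.56, 11.54]

mean = statistics.mean(samples)
# Standard error of the mean: sample stdev divided by sqrt(N).
se = statistics.stdev(samples) / len(samples) ** 0.5
print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(samples)})")  # 11.54 (SE +/- 0.01, N = 3)
```

A small SE relative to the mean (as in nearly every chart here) means the run-to-run scatter is far smaller than the differences being compared.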

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better
ONNX Runtime 1.14 use_dnnl: 9.56208 (SE +/- 0.05377, N = 3)
ONNX Runtime 1.14 - No AMX: 9.47631 (SE +/- 0.06211, N = 3)
ONNX Runtime 1.14: 9.36451 (SE +/- 0.08393, N = 3)

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better
ONNX Runtime 1.14 - No AMX: 10.46180 (SE +/- 0.10214, N = 3)
ONNX Runtime 1.14: 10.00997 (SE +/- 0.15984, N = 15)
ONNX Runtime 1.14 use_dnnl: 9.93450 (SE +/- 0.15346, N = 15)

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better
ONNX Runtime 1.14: 3.03253 (SE +/- 0.04052, N = 3)
ONNX Runtime 1.14 use_dnnl: 3.00654 (SE +/- 0.03052, N = 15)
ONNX Runtime 1.14 - No AMX: 2.95924 (SE +/- 0.00278, N = 3)

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better
ONNX Runtime 1.14 use_dnnl: 211.42 (SE +/- 0.60, N = 3)
ONNX Runtime 1.14 - No AMX: 209.70 (SE +/- 0.42, N = 3)
ONNX Runtime 1.14: 209.66 (SE +/- 0.73, N = 3)

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better
ONNX Runtime 1.14 - No AMX: 180.39 (SE +/- 1.01, N = 3)
ONNX Runtime 1.14: 179.90 (SE +/- 0.28, N = 3)
ONNX Runtime 1.14 use_dnnl: 178.53 (SE +/- 0.94, N = 3)

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better
ONNX Runtime 1.14 use_dnnl: 15.19 (SE +/- 0.12, N = 9)
ONNX Runtime 1.14 - No AMX: 15.17 (SE +/- 0.04, N = 3)
ONNX Runtime 1.14: 14.94 (SE +/- 0.17, N = 3)

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better
ONNX Runtime 1.14: 15.03 (SE +/- 0.08, N = 3)
ONNX Runtime 1.14 use_dnnl: 15.00 (SE +/- 0.14, N = 3)
ONNX Runtime 1.14 - No AMX: 15.00 (SE +/- 0.09, N = 3)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better
ONNX Runtime 1.14 - No AMX: 195.15 (SE +/- 3.06, N = 14)
ONNX Runtime 1.14 use_dnnl: 194.52 (SE +/- 3.35, N = 13)
ONNX Runtime 1.14: 190.02 (SE +/- 0.21, N = 3)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better
ONNX Runtime 1.14 - No AMX: 158.31 (SE +/- 1.03, N = 3)
ONNX Runtime 1.14: 153.60 (SE +/- 1.30, N = 3)
ONNX Runtime 1.14 use_dnnl: 150.86 (SE +/- 1.24, N = 3)
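GPT-2 with the Parallel executor shows one of the larger spreads between the three builds, so it is a convenient place to express the results as percent deltas relative to the stock ONNX Runtime 1.14 run (values from the chart above):

```python
# GPT-2, CPU, Parallel executor: inferences/sec from this result file.
baseline = 153.60  # ONNX Runtime 1.14 (stock)
variants = {
    "No AMX": 158.31,
    "use_dnnl": 150.86,
}

for name, ips in variants.items():
    delta = (ips - baseline) / baseline * 100
    print(f"{name}: {delta:+.1f}% vs stock 1.14")
# No AMX: +3.1% vs stock 1.14
# use_dnnl: -1.8% vs stock 1.14
```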

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better
ONNX Runtime 1.14: 33.97 (SE +/- 0.22, N = 3)
ONNX Runtime 1.14 use_dnnl: 32.61 (SE +/- 0.46, N = 3)

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better
ONNX Runtime 1.14 use_dnnl: 27.43 (SE +/- 0.26, N = 3)
ONNX Runtime 1.14: 27.21 (SE +/- 0.06, N = 3)

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better
ONNX Runtime 1.14: 177.93 (SE +/- 0.57, N = 3)
ONNX Runtime 1.14 use_dnnl: 175.67 (SE +/- 0.56, N = 3)

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better
ONNX Runtime 1.14 use_dnnl: 164.90 (SE +/- 1.24, N = 3)
ONNX Runtime 1.14: 162.48 (SE +/- 1.61, N = 5)

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better
ONNX Runtime 1.14 use_dnnl: 695.78 (SE +/- 9.80, N = 3)
ONNX Runtime 1.14: 684.15 (SE +/- 9.35, N = 3)

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better
ONNX Runtime 1.14 use_dnnl: 590.80 (SE +/- 4.15, N = 3)
ONNX Runtime 1.14: 589.20 (SE +/- 3.23, N = 3)

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better
ONNX Runtime 1.14 use_dnnl: 40.01 (SE +/- 0.24, N = 3)
ONNX Runtime 1.14: 39.77 (SE +/- 0.12, N = 3)

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel

Inferences Per Second, More Is Better
ONNX Runtime 1.14: 32.90 (SE +/- 0.46, N = 3)
ONNX Runtime 1.14 use_dnnl: 31.78 (SE +/- 0.27, N = 3)
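With the per-test throughput results above complete, an overall comparison across the ten tests all three builds finished (five models, Standard and Parallel executors) can be sketched as a geometric mean, the summary statistic the Phoronix Test Suite itself uses for multi-test comparisons. Values are taken from this result file; stdlib Python:

```python
from statistics import geometric_mean

# Inferences/sec for the ten tests all three configurations completed,
# ordered: yolov4 Std/Par, fcn-resnet101-11 Std/Par,
# super-resolution-10 Std/Par, bertsquad-12 Std/Par, GPT-2 Std/Par.
results = {
    "ONNX Runtime 1.14":          [11.5378, 9.36451, 10.00997, 3.03253, 209.662,
                                   179.895, 14.9354, 15.0267, 190.019, 153.596],
    "ONNX Runtime 1.14 - No AMX": [11.2662, 9.47631, 10.4618, 2.95924, 209.703,
                                   180.391, 15.1688, 14.9954, 195.149, 158.305],
    "ONNX Runtime 1.14 use_dnnl": [11.2020, 9.56208, 9.93450, 3.00654, 211.423,
                                   178.529, 15.1855, 14.9979, 194.522, 150.860],
}

for name, vals in results.items():
    print(f"{name}: geomean {geometric_mean(vals):.2f} inferences/sec")
```

All three geometric means land within roughly one percent of each other, with the No AMX build marginally ahead on this subset.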

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14: 1900 / 2919 / 3512
ONNX Runtime 1.14 use_dnnl: 1900 / 2911 / 3514
ONNX Runtime 1.14 - No AMX: 1900 / 2905 / 3511

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14 use_dnnl: 1900 / 2921 / 3515
ONNX Runtime 1.14 - No AMX: 1900 / 2917 / 3506
ONNX Runtime 1.14: 1900 / 2904 / 3509

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14 use_dnnl: 1900 / 2770.59 / 3511
ONNX Runtime 1.14: 1900 / 2752.96 / 3514
ONNX Runtime 1.14 - No AMX: 1900 / 2751.37 / 3515

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14: 1900 / 2922 / 3551
ONNX Runtime 1.14 use_dnnl: 1900 / 2916 / 3677
ONNX Runtime 1.14 - No AMX: 1900 / 2905 / 3508

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14 use_dnnl: 1900 / 3457 / 3517
ONNX Runtime 1.14: 1900 / 3457 / 3530
ONNX Runtime 1.14 - No AMX: 1900 / 3450 / 3516

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14: 1900 / 3495 / 3515
ONNX Runtime 1.14 use_dnnl: 1900 / 3489 / 3514
ONNX Runtime 1.14 - No AMX: 1900 / 3478 / 3517

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14 - No AMX: 1900 / 3470 / 3512
ONNX Runtime 1.14 use_dnnl: 1900 / 3466 / 3516
ONNX Runtime 1.14: 1900 / 3460 / 3512

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14 - No AMX: 1900 / 3181 / 3510
ONNX Runtime 1.14: 1900 / 3179 / 3512
ONNX Runtime 1.14 use_dnnl: 1900 / 3179 / 3515

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14 use_dnnl: 1900 / 3493.87 / 3520
ONNX Runtime 1.14 - No AMX: 1900 / 3486.47 / 3524
ONNX Runtime 1.14: 1900 / 3481.53 / 3515

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14: 1900 / 3494 / 3516
ONNX Runtime 1.14 use_dnnl: 1900 / 3478 / 3511
ONNX Runtime 1.14 - No AMX: 1900 / 3475 / 3510

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14: 2900 / 2935 / 3514
ONNX Runtime 1.14 use_dnnl: 1900 / 2932 / 3512

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14 use_dnnl: 1900 / 2929 / 3510
ONNX Runtime 1.14: 1900 / 2911 / 3514

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14 use_dnnl: 1900 / 2905 / 3503
ONNX Runtime 1.14: 1900 / 2904 / 3506

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14: 1900 / 2890 / 3666
ONNX Runtime 1.14 use_dnnl: 1900 / 2889 / 3507

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14 use_dnnl: 1900 / 2914 / 3501
ONNX Runtime 1.14: 1900 / 2908 / 3400

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14: 1900 / 2908 / 3500
ONNX Runtime 1.14 use_dnnl: 1900 / 2903 / 3501

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14 use_dnnl: 1900 / 2997 / 3509
ONNX Runtime 1.14: 1900 / 2982 / 3510

ONNX Runtime

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Min / Avg / Max)
ONNX Runtime 1.14: 1900 / 3070 / 3525
ONNX Runtime 1.14 use_dnnl: 1900 / 3034 / 3508

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 28.0 / 46.2 / 50.0
ONNX Runtime 1.14 - No AMX: 29.0 / 48.2 / 51.0
ONNX Runtime 1.14 use_dnnl: 33.0 / 48.4 / 51.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 36.0 / 47.2 / 50.0
ONNX Runtime 1.14 use_dnnl: 36.0 / 47.5 / 50.0
ONNX Runtime 1.14 - No AMX: 37.0 / 47.7 / 50.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14 - No AMX: 36.0 / 51.0 / 54.0
ONNX Runtime 1.14 use_dnnl: 36.0 / 51.5 / 54.0
ONNX Runtime 1.14: 36.0 / 51.5 / 54.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 39.0 / 48.2 / 50.0
ONNX Runtime 1.14 use_dnnl: 38.0 / 48.5 / 51.0
ONNX Runtime 1.14 - No AMX: 39.0 / 48.5 / 50.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 37.0 / 53.0 / 55.0
ONNX Runtime 1.14 use_dnnl: 36.0 / 53.2 / 55.0
ONNX Runtime 1.14 - No AMX: 37.0 / 53.2 / 55.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 34.0 / 37.5 / 40.0
ONNX Runtime 1.14 use_dnnl: 35.0 / 39.1 / 45.0
ONNX Runtime 1.14 - No AMX: 35.0 / 40.9 / 45.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 32.0 / 47.6 / 50.0
ONNX Runtime 1.14 - No AMX: 33.0 / 47.9 / 51.0
ONNX Runtime 1.14 use_dnnl: 32.0 / 49.2 / 51.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14 - No AMX: 36.0 / 45.1 / 48.0
ONNX Runtime 1.14: 36.0 / 45.3 / 47.0
ONNX Runtime 1.14 use_dnnl: 37.0 / 45.8 / 49.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 35.0 / 48.1 / 50.0
ONNX Runtime 1.14 - No AMX: 36.0 / 48.9 / 51.0
ONNX Runtime 1.14 use_dnnl: 35.0 / 49.3 / 52.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 34.0 / 37.4 / 42.0
ONNX Runtime 1.14 - No AMX: 34.0 / 37.5 / 40.0
ONNX Runtime 1.14 use_dnnl: 34.0 / 38.1 / 42.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 33.0 / 48.4 / 52.0
ONNX Runtime 1.14 use_dnnl: 32.0 / 48.9 / 52.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 37.0 / 47.9 / 50.0
ONNX Runtime 1.14 use_dnnl: 37.0 / 48.4 / 51.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 36.0 / 50.1 / 52.0
ONNX Runtime 1.14 use_dnnl: 36.0 / 50.7 / 53.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14 use_dnnl: 40.0 / 48.9 / 51.0
ONNX Runtime 1.14: 38.0 / 50.7 / 53.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 40.0 / 48.8 / 51.0
ONNX Runtime 1.14 use_dnnl: 40.0 / 49.7 / 51.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 40.0 / 46.9 / 49.0
ONNX Runtime 1.14 use_dnnl: 40.0 / 48.1 / 51.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 37.0 / 46.3 / 48.0
ONNX Runtime 1.14 use_dnnl: 40.0 / 47.9 / 50.0

ONNX Runtime

CPU Temperature Monitor

Celsius, Fewer Is Better (Min / Avg / Max)
ONNX Runtime 1.14: 37.0 / 45.1 / 48.0
ONNX Runtime 1.14 use_dnnl: 40.0 / 46.0 / 51.0

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14: 86.67 (SE +/- 0.12, N = 3)
ONNX Runtime 1.14 - No AMX: 88.93 (SE +/- 1.09, N = 15)
ONNX Runtime 1.14 use_dnnl: 89.39 (SE +/- 0.92, N = 15)

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14 use_dnnl: 104.58 (SE +/- 0.59, N = 3)
ONNX Runtime 1.14 - No AMX: 105.53 (SE +/- 0.70, N = 3)
ONNX Runtime 1.14: 106.80 (SE +/- 0.95, N = 3)

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14 - No AMX: 95.60 (SE +/- 0.93, N = 3)
ONNX Runtime 1.14: 100.29 (SE +/- 1.75, N = 15)
ONNX Runtime 1.14 use_dnnl: 101.02 (SE +/- 1.67, N = 15)

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14: 329.87 (SE +/- 4.47, N = 3)
ONNX Runtime 1.14 use_dnnl: 333.10 (SE +/- 3.47, N = 15)
ONNX Runtime 1.14 - No AMX: 337.92 (SE +/- 0.32, N = 3)

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14 use_dnnl: 4.72937 (SE +/- 0.01348, N = 3)
ONNX Runtime 1.14 - No AMX: 4.76804 (SE +/- 0.00962, N = 3)
ONNX Runtime 1.14: 4.76912 (SE +/- 0.01656, N = 3)

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14 - No AMX: 5.54240 (SE +/- 0.03108, N = 3)
ONNX Runtime 1.14: 5.55742 (SE +/- 0.00850, N = 3)
ONNX Runtime 1.14 use_dnnl: 5.60021 (SE +/- 0.02952, N = 3)

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14 use_dnnl: 65.88 (SE +/- 0.49, N = 9)
ONNX Runtime 1.14 - No AMX: 65.92 (SE +/- 0.16, N = 3)
ONNX Runtime 1.14: 66.97 (SE +/- 0.75, N = 3)

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14: 66.55 (SE +/- 0.34, N = 3)
ONNX Runtime 1.14 use_dnnl: 66.69 (SE +/- 0.65, N = 3)
ONNX Runtime 1.14 - No AMX: 66.69 (SE +/- 0.38, N = 3)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14 - No AMX: 5.13582 (SE +/- 0.06843, N = 14)
ONNX Runtime 1.14 use_dnnl: 5.15414 (SE +/- 0.07549, N = 13)
ONNX Runtime 1.14: 5.26053 (SE +/- 0.00587, N = 3)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14 - No AMX: 6.31109 (SE +/- 0.04118, N = 3)
ONNX Runtime 1.14: 6.50527 (SE +/- 0.05508, N = 3)
ONNX Runtime 1.14 use_dnnl: 6.62309 (SE +/- 0.05441, N = 3)

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14: 29.44 (SE +/- 0.19, N = 3)
ONNX Runtime 1.14 use_dnnl: 30.68 (SE +/- 0.44, N = 3)

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14 use_dnnl: 36.47 (SE +/- 0.35, N = 3)
ONNX Runtime 1.14: 36.74 (SE +/- 0.08, N = 3)

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14: 5.61969 (SE +/- 0.01809, N = 3)
ONNX Runtime 1.14 use_dnnl: 5.69175 (SE +/- 0.01821, N = 3)

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14 use_dnnl: 6.06317 (SE +/- 0.04559, N = 3)
ONNX Runtime 1.14: 6.15564 (SE +/- 0.06169, N = 5)

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14 use_dnnl: 1.43703 (SE +/- 0.02005, N = 3)
ONNX Runtime 1.14: 1.46149 (SE +/- 0.02000, N = 3)

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14 use_dnnl: 1.69067 (SE +/- 0.01190, N = 3)
ONNX Runtime 1.14: 1.69517 (SE +/- 0.00918, N = 3)

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14 use_dnnl: 25.00 (SE +/- 0.15, N = 3)
ONNX Runtime 1.14: 25.14 (SE +/- 0.08, N = 3)

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel

Inference Time Cost (ms), Fewer Is Better
ONNX Runtime 1.14: 30.40 (SE +/- 0.42, N = 3)
ONNX Runtime 1.14 use_dnnl: 31.47 (SE +/- 0.26, N = 3)

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min    Avg    Max
  ONNX Runtime 1.14              203    611    638
  ONNX Runtime 1.14 use_dnnl     197    613    639
  ONNX Runtime 1.14 - No AMX     201    614    639

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min    Avg    Max
  ONNX Runtime 1.14              198    611    637
  ONNX Runtime 1.14 use_dnnl     203    612    638
  ONNX Runtime 1.14 - No AMX     210    612    639

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min       Avg       Max
  ONNX Runtime 1.14              122.34    668.72    709.46
  ONNX Runtime 1.14 use_dnnl     195.07    668.74    709.06
  ONNX Runtime 1.14 - No AMX     202.65    673.64    706.82

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min    Avg    Max
  ONNX Runtime 1.14 - No AMX     210    611    638
  ONNX Runtime 1.14              195    611    638
  ONNX Runtime 1.14 use_dnnl     199    613    641

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min      Avg      Max
  ONNX Runtime 1.14              209.4    437.4    451.4
  ONNX Runtime 1.14 - No AMX     198.6    438.8    453.5
  ONNX Runtime 1.14 use_dnnl     198.9    440.5    454.7

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min      Avg      Max
  ONNX Runtime 1.14 - No AMX     197.3    445.5    460.4
  ONNX Runtime 1.14 use_dnnl     204.6    446.4    461.0
  ONNX Runtime 1.14              191.9    446.6    462.1

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min      Avg      Max
  ONNX Runtime 1.14              201.2    561.6    584.0
  ONNX Runtime 1.14 - No AMX     195.6    564.4    586.3
  ONNX Runtime 1.14 use_dnnl     199.2    565.5    588.4

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min      Avg      Max
  ONNX Runtime 1.14 - No AMX     194.9    560.2    590.7
  ONNX Runtime 1.14              202.7    562.2    591.1
  ONNX Runtime 1.14 use_dnnl     212.9    563.2    593.5

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min      Avg      Max
  ONNX Runtime 1.14              194.3    434.5    450.1
  ONNX Runtime 1.14 - No AMX     193.3    435.5    452.2
  ONNX Runtime 1.14 use_dnnl     104.6    436.2    453.4

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min      Avg      Max
  ONNX Runtime 1.14 use_dnnl     203.6    438.6    458.8
  ONNX Runtime 1.14              202.4    438.8    455.7
  ONNX Runtime 1.14 - No AMX     201.2    439.1    454.6

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min    Avg    Max
  ONNX Runtime 1.14 use_dnnl     199    621    652
  ONNX Runtime 1.14              202    622    649

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min    Avg    Max
  ONNX Runtime 1.14              207    623    650
  ONNX Runtime 1.14 use_dnnl     201    625    651

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min    Avg    Max
  ONNX Runtime 1.14              202    616    637
  ONNX Runtime 1.14 use_dnnl     208    617    639

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min    Avg    Max
  ONNX Runtime 1.14              290    633    659
  ONNX Runtime 1.14 use_dnnl     289    638    660

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min    Avg    Max
  ONNX Runtime 1.14              289    602    623
  ONNX Runtime 1.14 use_dnnl     283    607    628

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min    Avg    Max
  ONNX Runtime 1.14              278    615    633
  ONNX Runtime 1.14 use_dnnl     272    617    637

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min      Avg      Max
  ONNX Runtime 1.14              239.7    582.2    599.4
  ONNX Runtime 1.14 use_dnnl     257.9    583.0    603.0

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better
                                 Min      Avg      Max
  ONNX Runtime 1.14 use_dnnl     267.7    561.7    588.0
  ONNX Runtime 1.14              236.5    561.9    589.2


Phoronix Test Suite v10.8.5