ONNX Sapphire Rapids

2 x Intel Xeon Platinum 8490H testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 23.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2302115-NE-OONXSAPPH22
Run                          Date Run            Test Duration
ONNX Runtime 1.14            February 11 2023    3 Hours, 26 Minutes
ONNX Runtime 1.14 - No AMX   February 11 2023    2 Hours, 41 Minutes
ONNX Runtime 1.14 use_dnnl   February 11 2023    5 Hours, 22 Minutes



ONNX Sapphire Rapids - OpenBenchmarking.org - Phoronix Test Suite

Processor:         2 x Intel Xeon Platinum 8490H @ 3.50GHz (120 Cores / 240 Threads)
Motherboard:       Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS)
Chipset:           Intel Device 1bce
Memory:            1008GB
Disk:              2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007 + 960GB INTEL SSDSC2KG96
Graphics:          ASPEED
Monitor:           VGA HDMI
Network:           4 x Intel E810-C for QSFP + 2 x Intel X710 for 10GBASE-T
OS:                Ubuntu 23.04
Kernel:            5.19.0-21-generic (x86_64)
Desktop:           GNOME Shell 43.2
Display Server:    X Server 1.21.1.6
Compiler:          GCC 12.2.0
File-System:       ext4
Screen Resolution: 1920x1080

ONNX Sapphire Rapids Performance System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate performance (EPP: performance)
- CPU Microcode: 0x2b0000c0
- Python 3.11.1
- Security details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): relative performance of the ONNX Runtime 1.14, ONNX Runtime 1.14 - No AMX, and ONNX Runtime 1.14 use_dnnl runs across the fcn-resnet101-11 - CPU - Standard, GPT-2 - CPU - Parallel, and yolov4 - CPU - Standard results, spanning roughly 100% to 104%.

ONNX Sapphire Rapids consolidated results table (OpenBenchmarking.org): Inferences Per Second and Inference Time Cost (ms) figures for every ONNX Runtime 1.14 model/executor combination (GPT-2, ArcFace ResNet-100, Faster R-CNN R-50-FPN-int8, yolov4, fcn-resnet101-11, CaffeNet 12-int8, bertsquad-12, ResNet50 v1-12-int8, and super-resolution-10 on CPU, Standard and Parallel executors) across the ONNX Runtime 1.14, ONNX Runtime 1.14 - No AMX, and ONNX Runtime 1.14 use_dnnl runs; the same figures are broken out per test below.
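OpenBenchmarking.org-style comparisons summarize multi-test results with an overall geometric mean of normalized scores. As a minimal sketch (not part of the result file itself), the three configurations can be ranked by normalizing each test to the stock ONNX Runtime 1.14 run and taking the geometric mean of the ratios; the inputs below are the reported averages for three of the tests (GPT-2 Parallel, yolov4 Standard, fcn-resnet101-11 Standard, in inferences per second):

```python
from statistics import geometric_mean

# Reported averages (inferences per second) from this result file for
# GPT-2/Parallel, yolov4/Standard, and fcn-resnet101-11/Standard.
results = {
    "ONNX Runtime 1.14":          [153.60, 11.54, 10.01],
    "ONNX Runtime 1.14 - No AMX": [158.31, 11.27, 10.46],
    "ONNX Runtime 1.14 use_dnnl": [150.86, 11.20, 9.93],
}

baseline = results["ONNX Runtime 1.14"]

# Normalize each test to the baseline run, then take the geometric mean of
# the per-test ratios, mirroring how an overall score would be aggregated.
overall = {
    name: round(geometric_mean(v / b for v, b in zip(vals, baseline)), 3)
    for name, vals in results.items()
}
print(overall)
```

On these three tests the No AMX run comes out roughly 1.7% ahead overall, driven mostly by its fcn-resnet101-11 Standard advantage; the full 36-metric picture in the per-test results below is considerably flatter.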

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 results (OpenBenchmarking.org):

Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          153.60 (SE +/- 1.30, N = 3; Min: 151.26 / Max: 155.75)
  ONNX Runtime 1.14 - No AMX: 158.31 (SE +/- 1.03, N = 3; Min: 156.58 / Max: 160.15)
  ONNX Runtime 1.14 use_dnnl: 150.86 (SE +/- 1.24, N = 3; Min: 148.98 / Max: 153.21)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt (the same build options apply to all ONNX Runtime results on this page)
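As a quick way to read the spread in a result like the one above, the run-to-run differences can be expressed as percent deltas against the stock configuration. A minimal sketch using the GPT-2 / CPU / Parallel averages reported here:

```python
# Reported averages (inferences per second) for GPT-2 / CPU / Parallel.
results = {
    "ONNX Runtime 1.14":          153.60,
    "ONNX Runtime 1.14 - No AMX": 158.31,
    "ONNX Runtime 1.14 use_dnnl": 150.86,
}

baseline = results["ONNX Runtime 1.14"]

def pct_delta(value: float, base: float) -> float:
    """Percent difference of `value` relative to `base`."""
    return (value - base) / base * 100.0

deltas = {name: round(pct_delta(v, baseline), 2) for name, v in results.items()}
print(deltas)
```

This puts the No AMX run about 3% ahead of stock and the use_dnnl run just under 2% behind it on this particular test, deltas that are comfortably larger than the reported standard errors.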

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          33.97 (SE +/- 0.22, N = 3; Min: 33.57 / Max: 34.32)
  ONNX Runtime 1.14 use_dnnl: 32.61 (SE +/- 0.46, N = 3; Min: 31.69 / Max: 33.11)

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          32.90 (SE +/- 0.46, N = 3; Min: 32.31 / Max: 33.80)
  ONNX Runtime 1.14 use_dnnl: 31.78 (SE +/- 0.27, N = 3; Min: 31.44 / Max: 32.30)

Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          11.54 (SE +/- 0.02, N = 3; Min: 11.51 / Max: 11.56)
  ONNX Runtime 1.14 - No AMX: 11.27 (SE +/- 0.13, N = 15; Min: 9.92 / Max: 11.67)
  ONNX Runtime 1.14 use_dnnl: 11.20 (SE +/- 0.11, N = 15; Min: 10.18 / Max: 11.58)

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          3.03253 (SE +/- 0.04052, N = 3; Min: 2.95 / Max: 3.08)
  ONNX Runtime 1.14 - No AMX: 2.95924 (SE +/- 0.00278, N = 3; Min: 2.95 / Max: 2.96)
  ONNX Runtime 1.14 use_dnnl: 3.00654 (SE +/- 0.03052, N = 15; Min: 2.77 / Max: 3.15)

Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          9.36451 (SE +/- 0.08393, N = 3; Min: 9.27 / Max: 9.53)
  ONNX Runtime 1.14 - No AMX: 9.47631 (SE +/- 0.06211, N = 3; Min: 9.35 / Max: 9.55)
  ONNX Runtime 1.14 use_dnnl: 9.56208 (SE +/- 0.05377, N = 3; Min: 9.47 / Max: 9.66)

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          684.15 (SE +/- 9.35, N = 3; Min: 667.75 / Max: 700.13)
  ONNX Runtime 1.14 use_dnnl: 695.78 (SE +/- 9.80, N = 3; Min: 681.95 / Max: 714.73)

Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          14.94 (SE +/- 0.17, N = 3; Min: 14.63 / Max: 15.20)
  ONNX Runtime 1.14 - No AMX: 15.17 (SE +/- 0.04, N = 3; Min: 15.10 / Max: 15.22)
  ONNX Runtime 1.14 use_dnnl: 15.19 (SE +/- 0.12, N = 9; Min: 14.97 / Max: 16.11)

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          162.48 (SE +/- 1.61, N = 5; Min: 156.89 / Max: 166.69)
  ONNX Runtime 1.14 use_dnnl: 164.90 (SE +/- 1.24, N = 3; Min: 162.57 / Max: 166.77)

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          177.93 (SE +/- 0.57, N = 3; Min: 176.94 / Max: 178.92)
  ONNX Runtime 1.14 use_dnnl: 175.67 (SE +/- 0.56, N = 3; Min: 174.96 / Max: 176.78)

Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          179.90 (SE +/- 0.28, N = 3; Min: 179.59 / Max: 180.44)
  ONNX Runtime 1.14 - No AMX: 180.39 (SE +/- 1.01, N = 3; Min: 178.67 / Max: 182.17)
  ONNX Runtime 1.14 use_dnnl: 178.53 (SE +/- 0.94, N = 3; Min: 176.81 / Max: 180.06)

Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          209.66 (SE +/- 0.73, N = 3; Min: 208.78 / Max: 211.11)
  ONNX Runtime 1.14 - No AMX: 209.70 (SE +/- 0.42, N = 3; Min: 208.90 / Max: 210.34)
  ONNX Runtime 1.14 use_dnnl: 211.42 (SE +/- 0.60, N = 3; Min: 210.25 / Max: 212.22)

Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          27.21 (SE +/- 0.06, N = 3; Min: 27.10 / Max: 27.30)
  ONNX Runtime 1.14 use_dnnl: 27.43 (SE +/- 0.26, N = 3; Min: 26.91 / Max: 27.78)

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          39.77 (SE +/- 0.12, N = 3; Min: 39.53 / Max: 39.94)
  ONNX Runtime 1.14 use_dnnl: 40.01 (SE +/- 0.24, N = 3; Min: 39.52 / Max: 40.31)

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          589.20 (SE +/- 3.23, N = 3; Min: 584.53 / Max: 595.39)
  ONNX Runtime 1.14 use_dnnl: 590.80 (SE +/- 4.15, N = 3; Min: 583.38 / Max: 597.74)

Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          15.03 (SE +/- 0.08, N = 3; Min: 14.89 / Max: 15.15)
  ONNX Runtime 1.14 - No AMX: 15.00 (SE +/- 0.09, N = 3; Min: 14.85 / Max: 15.15)
  ONNX Runtime 1.14 use_dnnl: 15.00 (SE +/- 0.14, N = 3; Min: 14.71 / Max: 15.16)

CPU Temperature Monitor

Celsius (Phoronix Test Suite System Monitoring):
  ONNX Runtime 1.14:          Min: 28 / Avg: 47.63 / Max: 55
  ONNX Runtime 1.14 use_dnnl: Min: 32 / Avg: 48.36 / Max: 55

CPU Power Consumption Monitor

Watts (Phoronix Test Suite System Monitoring):
  ONNX Runtime 1.14:          Min: 122.34 / Avg: 577.11 / Max: 709.46
  ONNX Runtime 1.14 use_dnnl: Min: 104.64 / Avg: 571.11 / Max: 709.06

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Megahertz (Phoronix Test Suite System Monitoring):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 3026.38 / Max: 3666
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 3073.43 / Max: 3677

ONNX Runtime

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 37.0 / Avg: 45.1 / Max: 48.0
  ONNX Runtime 1.14 use_dnnl: Min: 40.0 / Avg: 46.0 / Max: 51.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 236.5 / Avg: 561.9 / Max: 589.2
  ONNX Runtime 1.14 use_dnnl: Min: 267.7 / Avg: 561.7 / Max: 588.0

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 3070 / Max: 3525
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 3034 / Max: 3508

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 37.0 / Avg: 46.3 / Max: 48.0
  ONNX Runtime 1.14 use_dnnl: Min: 40.0 / Avg: 47.9 / Max: 50.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 239.7 / Avg: 582.2 / Max: 599.4
  ONNX Runtime 1.14 use_dnnl: Min: 257.9 / Avg: 583.0 / Max: 603.0

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 2982 / Max: 3510
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 2997 / Max: 3509

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 40.0 / Avg: 46.9 / Max: 49.0
  ONNX Runtime 1.14 use_dnnl: Min: 40.0 / Avg: 48.1 / Max: 51.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 278 / Avg: 615 / Max: 633
  ONNX Runtime 1.14 use_dnnl: Min: 272 / Avg: 617 / Max: 637

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 2908 / Max: 3500
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 2903 / Max: 3501

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 40.0 / Avg: 48.8 / Max: 51.0
  ONNX Runtime 1.14 use_dnnl: Min: 40.0 / Avg: 49.7 / Max: 51.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 289 / Avg: 602 / Max: 623
  ONNX Runtime 1.14 use_dnnl: Min: 283 / Avg: 607 / Max: 628

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 2908 / Max: 3400
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 2914 / Max: 3501

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 38.0 / Avg: 50.7 / Max: 53.0
  ONNX Runtime 1.14 use_dnnl: Min: 40.0 / Avg: 48.9 / Max: 51.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 290 / Avg: 633 / Max: 659
  ONNX Runtime 1.14 use_dnnl: Min: 289 / Avg: 638 / Max: 660

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 2890 / Max: 3666
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 2889 / Max: 3507

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 36.0 / Avg: 50.1 / Max: 52.0
  ONNX Runtime 1.14 use_dnnl: Min: 36.0 / Avg: 50.7 / Max: 53.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 202 / Avg: 616 / Max: 637
  ONNX Runtime 1.14 use_dnnl: Min: 208 / Avg: 617 / Max: 639

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 2904 / Max: 3506
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 2905 / Max: 3503

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 37.0 / Avg: 47.9 / Max: 50.0
  ONNX Runtime 1.14 use_dnnl: Min: 37.0 / Avg: 48.4 / Max: 51.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 207 / Avg: 623 / Max: 650
  ONNX Runtime 1.14 use_dnnl: Min: 201 / Avg: 625 / Max: 651

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 2911 / Max: 3514
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 2929 / Max: 3510

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 33.0 / Avg: 48.4 / Max: 52.0
  ONNX Runtime 1.14 use_dnnl: Min: 32.0 / Avg: 48.9 / Max: 52.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 202 / Avg: 622 / Max: 649
  ONNX Runtime 1.14 use_dnnl: Min: 199 / Avg: 621 / Max: 652

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 2900 / Avg: 2935 / Max: 3514
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 2932 / Max: 3512

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 34.0 / Avg: 37.4 / Max: 42.0
  ONNX Runtime 1.14 - No AMX: Min: 34.0 / Avg: 37.5 / Max: 40.0
  ONNX Runtime 1.14 use_dnnl: Min: 34.0 / Avg: 38.1 / Max: 42.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 202.4 / Avg: 438.8 / Max: 455.7
  ONNX Runtime 1.14 - No AMX: Min: 201.2 / Avg: 439.1 / Max: 454.6
  ONNX Runtime 1.14 use_dnnl: Min: 203.6 / Avg: 438.6 / Max: 458.8

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 3494 / Max: 3516
  ONNX Runtime 1.14 - No AMX: Min: 1900 / Avg: 3475 / Max: 3510
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 3478 / Max: 3511

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 35.0 / Avg: 48.1 / Max: 50.0
  ONNX Runtime 1.14 - No AMX: Min: 36.0 / Avg: 48.9 / Max: 51.0
  ONNX Runtime 1.14 use_dnnl: Min: 35.0 / Avg: 49.3 / Max: 52.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 194.3 / Avg: 434.5 / Max: 450.1
  ONNX Runtime 1.14 - No AMX: Min: 193.3 / Avg: 435.5 / Max: 452.2
  ONNX Runtime 1.14 use_dnnl: Min: 104.6 / Avg: 436.2 / Max: 453.4

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 3482 / Max: 3515
  ONNX Runtime 1.14 - No AMX: Min: 1900 / Avg: 3486 / Max: 3524
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 3494 / Max: 3520

Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms), Fewer Is Better:
  ONNX Runtime 1.14:          5.26053 (SE +/- 0.00587, N = 3; Min: 5.25 / Max: 5.27)
  ONNX Runtime 1.14 - No AMX: 5.13582 (SE +/- 0.06843, N = 14; Min: 4.26 / Max: 5.26)
  ONNX Runtime 1.14 use_dnnl: 5.15414 (SE +/- 0.07549, N = 13; Min: 4.27 / Max: 5.28)

Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second, More Is Better:
  ONNX Runtime 1.14:          190.02 (SE +/- 0.21, N = 3; Min: 189.73 / Max: 190.43)
  ONNX Runtime 1.14 - No AMX: 195.15 (SE +/- 3.06, N = 14; Min: 190.10 / Max: 234.52)
  ONNX Runtime 1.14 use_dnnl: 194.52 (SE +/- 3.35, N = 13; Min: 189.20 / Max: 234.23)

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 36.0 / Avg: 45.3 / Max: 47.0
  ONNX Runtime 1.14 - No AMX: Min: 36.0 / Avg: 45.1 / Max: 48.0
  ONNX Runtime 1.14 use_dnnl: Min: 37.0 / Avg: 45.8 / Max: 49.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 202.7 / Avg: 562.2 / Max: 591.1
  ONNX Runtime 1.14 - No AMX: Min: 194.9 / Avg: 560.2 / Max: 590.7
  ONNX Runtime 1.14 use_dnnl: Min: 212.9 / Avg: 563.2 / Max: 593.5

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 3179 / Max: 3512
  ONNX Runtime 1.14 - No AMX: Min: 1900 / Avg: 3181 / Max: 3510
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 3179 / Max: 3515

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 32.0 / Avg: 47.6 / Max: 50.0
  ONNX Runtime 1.14 - No AMX: Min: 33.0 / Avg: 47.9 / Max: 51.0
  ONNX Runtime 1.14 use_dnnl: Min: 32.0 / Avg: 49.2 / Max: 51.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 201.2 / Avg: 561.6 / Max: 584.0
  ONNX Runtime 1.14 - No AMX: Min: 195.6 / Avg: 564.4 / Max: 586.3
  ONNX Runtime 1.14 use_dnnl: Min: 199.2 / Avg: 565.5 / Max: 588.4

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 3460 / Max: 3512
  ONNX Runtime 1.14 - No AMX: Min: 1900 / Avg: 3470 / Max: 3512
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 3466 / Max: 3516

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 34.0 / Avg: 37.5 / Max: 40.0
  ONNX Runtime 1.14 - No AMX: Min: 35.0 / Avg: 40.9 / Max: 45.0
  ONNX Runtime 1.14 use_dnnl: Min: 35.0 / Avg: 39.1 / Max: 45.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 191.9 / Avg: 446.6 / Max: 462.1
  ONNX Runtime 1.14 - No AMX: Min: 197.3 / Avg: 445.5 / Max: 460.4
  ONNX Runtime 1.14 use_dnnl: Min: 204.6 / Avg: 446.4 / Max: 461.0

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 3495 / Max: 3515
  ONNX Runtime 1.14 - No AMX: Min: 1900 / Avg: 3478 / Max: 3517
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 3489 / Max: 3514

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 37.0 / Avg: 53.0 / Max: 55.0
  ONNX Runtime 1.14 - No AMX: Min: 37.0 / Avg: 53.2 / Max: 55.0
  ONNX Runtime 1.14 use_dnnl: Min: 36.0 / Avg: 53.2 / Max: 55.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 209.4 / Avg: 437.4 / Max: 451.4
  ONNX Runtime 1.14 - No AMX: Min: 198.6 / Avg: 438.8 / Max: 453.5
  ONNX Runtime 1.14 use_dnnl: Min: 198.9 / Avg: 440.5 / Max: 454.7

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 3457 / Max: 3530
  ONNX Runtime 1.14 - No AMX: Min: 1900 / Avg: 3450 / Max: 3516
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 3457 / Max: 3517

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 39.0 / Avg: 48.2 / Max: 50.0
  ONNX Runtime 1.14 - No AMX: Min: 39.0 / Avg: 48.5 / Max: 50.0
  ONNX Runtime 1.14 use_dnnl: Min: 38.0 / Avg: 48.5 / Max: 51.0

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 195 / Avg: 611 / Max: 638
  ONNX Runtime 1.14 - No AMX: Min: 210 / Avg: 611 / Max: 638
  ONNX Runtime 1.14 use_dnnl: Min: 199 / Avg: 613 / Max: 641

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 2922 / Max: 3551
  ONNX Runtime 1.14 - No AMX: Min: 1900 / Avg: 2905 / Max: 3508
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 2916 / Max: 3677

CPU Temperature Monitor (Celsius, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 36 / Avg: 51.5 / Max: 54
  ONNX Runtime 1.14 - No AMX: Min: 36 / Avg: 50.98 / Max: 54
  ONNX Runtime 1.14 use_dnnl: Min: 36 / Avg: 51.5 / Max: 54

CPU Power Consumption Monitor (Watts, Fewer Is Better):
  ONNX Runtime 1.14:          Min: 122.34 / Avg: 668.72 / Max: 709.46
  ONNX Runtime 1.14 - No AMX: Min: 202.65 / Avg: 673.64 / Max: 706.82
  ONNX Runtime 1.14 use_dnnl: Min: 195.07 / Avg: 668.74 / Max: 709.06

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better):
  ONNX Runtime 1.14:          Min: 1900 / Avg: 2752.96 / Max: 3514
  ONNX Runtime 1.14 - No AMX: Min: 1900 / Avg: 2751.37 / Max: 3515
  ONNX Runtime 1.14 use_dnnl: Min: 1900 / Avg: 2770.59 / Max: 3511

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms), Fewer Is Better

                             Result   SE +/-    N     Min      Max
ONNX Runtime 1.14            100.29     1.75   15   95.63   115.38
ONNX Runtime 1.14 - No AMX    95.60     0.93    3   93.88    97.07
ONNX Runtime 1.14 use_dnnl   101.02     1.67   15   94.77   118.16

1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second, More Is Better

                               Result    SE +/-    N     Min     Max
ONNX Runtime 1.14            10.00997   0.15984   15    8.67   10.46
ONNX Runtime 1.14 - No AMX   10.46180   0.10214    3   10.30   10.65
ONNX Runtime 1.14 use_dnnl    9.93450   0.15346   15    8.46   10.55

1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt
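The two metrics reported for each model are near-reciprocals: inferences per second is roughly 1000 divided by the mean inference time in milliseconds. A minimal sketch of that conversion, using the mean fcn-resnet101-11 Standard-executor times from the tables above (small mismatches against the published throughput figures are expected, since the suite averages per-run samples rather than inverting the overall mean):

```python
# Approximate relation between the two ONNX Runtime metrics:
# throughput (inferences/sec) ~= 1000 / mean inference time (ms).

def ms_to_ips(mean_ms: float) -> float:
    """Convert a mean inference time in milliseconds to inferences per second."""
    return 1000.0 / mean_ms

# Mean inference times (ms) from the fcn-resnet101-11 Standard table.
results_ms = {
    "ONNX Runtime 1.14": 100.29,
    "ONNX Runtime 1.14 - No AMX": 95.60,
    "ONNX Runtime 1.14 use_dnnl": 101.02,
}

for label, ms in results_ms.items():
    print(f"{label}: {ms_to_ips(ms):.2f} inferences/sec")
```

For example, the No AMX run's 95.60 ms mean works out to about 10.46 inferences per second, matching the published 10.46180 closely.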

CPU Temperature Monitor (Celsius, Fewer Is Better)

                              Min     Avg     Max
ONNX Runtime 1.14            36.0    47.2    50.0
ONNX Runtime 1.14 - No AMX   37.0    47.7    50.0
ONNX Runtime 1.14 use_dnnl   36.0    47.5    50.0

CPU Power Consumption Monitor (Watts, Fewer Is Better)

                              Min     Avg     Max
ONNX Runtime 1.14             198     611     637
ONNX Runtime 1.14 - No AMX    210     612     639
ONNX Runtime 1.14 use_dnnl    203     612     638

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better)

                              Min     Avg     Max
ONNX Runtime 1.14            1900    2904    3509
ONNX Runtime 1.14 - No AMX   1900    2917    3506
ONNX Runtime 1.14 use_dnnl   1900    2921    3515

CPU Temperature Monitor (Celsius, Fewer Is Better)

                              Min     Avg     Max
ONNX Runtime 1.14            28.0    46.2    50.0
ONNX Runtime 1.14 - No AMX   29.0    48.2    51.0
ONNX Runtime 1.14 use_dnnl   33.0    48.4    51.0

CPU Power Consumption Monitor (Watts, Fewer Is Better)

                              Min     Avg     Max
ONNX Runtime 1.14             203     611     638
ONNX Runtime 1.14 - No AMX    201     614     639
ONNX Runtime 1.14 use_dnnl    197     613     639

CPU Peak Freq (Highest CPU Core Frequency) Monitor (Megahertz, More Is Better)

                              Min     Avg     Max
ONNX Runtime 1.14            1900    2919    3512
ONNX Runtime 1.14 - No AMX   1900    2905    3511
ONNX Runtime 1.14 use_dnnl   1900    2911    3514

77 Results Shown

ONNX Runtime:
  GPT-2 - CPU - Parallel
  ArcFace ResNet-100 - CPU - Standard
  Faster R-CNN R-50-FPN-int8 - CPU - Parallel
  yolov4 - CPU - Standard
  fcn-resnet101-11 - CPU - Parallel
  yolov4 - CPU - Parallel
  CaffeNet 12-int8 - CPU - Standard
  bertsquad-12 - CPU - Standard
  ResNet50 v1-12-int8 - CPU - Parallel
  ResNet50 v1-12-int8 - CPU - Standard
  super-resolution-10 - CPU - Parallel
  super-resolution-10 - CPU - Standard
  ArcFace ResNet-100 - CPU - Parallel
  Faster R-CNN R-50-FPN-int8 - CPU - Standard
  CaffeNet 12-int8 - CPU - Parallel
  bertsquad-12 - CPU - Parallel
CPU Temperature Monitor:
  Phoronix Test Suite System Monitoring:
    Celsius
    Watts
    Megahertz
  CPU Temp Monitor:
    Celsius
  CPU Power Consumption Monitor:
    Watts
  CPU Peak Freq (Highest CPU Core Frequency) Monitor:
    Megahertz
  CPU Temp Monitor:
    Celsius
  CPU Power Consumption Monitor:
    Watts
  CPU Peak Freq (Highest CPU Core Frequency) Monitor:
    Megahertz
  CPU Temp Monitor:
    Celsius
  CPU Power Consumption Monitor:
    Watts
  CPU Peak Freq (Highest CPU Core Frequency) Monitor:
    Megahertz
  CPU Temp Monitor:
    Celsius
  CPU Power Consumption Monitor:
    Watts
  CPU Peak Freq (Highest CPU Core Frequency) Monitor:
    Megahertz
  CPU Temp Monitor:
    Celsius
  CPU Power Consumption Monitor:
    Watts
  CPU Peak Freq (Highest CPU Core Frequency) Monitor:
    Megahertz
  CPU Temp Monitor:
    Celsius
  CPU Power Consumption Monitor:
    Watts
  CPU Peak Freq (Highest CPU Core Frequency) Monitor:
    Megahertz
  CPU Temp Monitor:
    Celsius
  CPU Power Consumption Monitor:
    Watts
  CPU Peak Freq (Highest CPU Core Frequency) Monitor:
    Megahertz
  CPU Temp Monitor:
    Celsius
  CPU Power Consumption Monitor:
    Watts
  CPU Peak Freq (Highest CPU Core Frequency) Monitor:
    Megahertz
  CPU Temp Monitor:
    Celsius
  CPU Power Consumption Monitor:
    Watts
  CPU Peak Freq (Highest CPU Core Frequency) Monitor:
    Megahertz
  CPU Temp Monitor:
    Celsius
  CPU Power Consumption Monitor:
    Watts
  CPU Peak Freq (Highest CPU Core Frequency) Monitor:
    Megahertz
ONNX Runtime:
  GPT-2 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
ONNX Runtime:
  CPU Temp Monitor
  CPU Power Consumption Monitor
  CPU Peak Freq (Highest CPU Core Frequency) Monitor
  CPU Temp Monitor
  CPU Power Consumption Monitor
  CPU Peak Freq (Highest CPU Core Frequency) Monitor
  CPU Temp Monitor
  CPU Power Consumption Monitor
  CPU Peak Freq (Highest CPU Core Frequency) Monitor
  CPU Temp Monitor
  CPU Power Consumption Monitor
  CPU Peak Freq (Highest CPU Core Frequency) Monitor
  CPU Temp Monitor
  CPU Power Consumption Monitor
  CPU Peak Freq (Highest CPU Core Frequency) Monitor
  CPU Temp Monitor
  CPU Power Consumption Monitor
  CPU Peak Freq (Highest CPU Core Frequency) Monitor
ONNX Runtime:
  fcn-resnet101-11 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
ONNX Runtime:
  CPU Temp Monitor
  CPU Power Consumption Monitor
  CPU Peak Freq (Highest CPU Core Fre