AMD EPYC Turin AI/ML Tuning Guide

AMD EPYC 9655P benchmarked following the AMD tuning guide for AI/ML workloads (https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/tuning-guides/58467_amd-epyc-9005-tg-bios-and-workload.pdf). Benchmarks by Michael Larabel for a future article.

HTML result view exported from: https://openbenchmarking.org/result/2411286-NE-AMDEPYCTU24&grr.

System configuration (shared by the Stock and AI/ML Tuning Recommendations runs):

Processor: AMD EPYC 9655P 96-Core @ 2.60GHz (96 Cores / 192 Threads)
Motherboard: Supermicro Super Server H13SSL-N v1.01 (3.0 BIOS)
Chipset: AMD 1Ah
Memory: 12 x 64GB DDR5-6000MT/s Micron MTC40F2046S1RC64BDY QSFF
Disk: 3201GB Micron_7450_MTFDKCB3T2TFS
Graphics: ASPEED
Network: 2 x Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 24.10
Kernel: 6.12.0-rc7-linux-pm-next-phx (x86_64)
Desktop: GNOME Shell 47.0
Display Server: X Server
Compiler: GCC 14.2.0
File-System: ext4
Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2,rust --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xb002116

Python Details: Python 3.12.7

Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; reg_file_data_sampling: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected; srbds: Not affected; tsx_async_abort: Not affected
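The OS-level settings called out above (performance scaling governor, transparent huge pages set to madvise) can be applied from userspace via sysfs. Below is a minimal Python sketch of doing so, assuming a Linux system with cpufreq support and root privileges; it is an illustration, not the AMD guide's own tooling.

    #!/usr/bin/env python3
    # Illustrative helper (not from the AMD guide): apply the OS-level
    # settings shown in the system details above. Must run as root.
    import glob

    def write_sysfs(path: str, value: str) -> None:
        with open(path, "w") as f:
            f.write(value)

    # "performance" governor on every logical CPU (192 threads here).
    for gov in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
        write_sysfs(gov, "performance")

    # Transparent Huge Pages: madvise, matching the Kernel Details note.
    write_sysfs("/sys/kernel/mm/transparent_hugepage/enabled", "madvise")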


Whisper.cpp

Model: ggml-medium.en - Input: 2016 State of the Union

Seconds, Fewer Is Better - Whisper.cpp 1.6.2
Stock: 454.29 (SE +/- 1.45, N = 3)
AI/ML Tuning Recommendations: 449.70 (SE +/- 1.24, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni
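Expressed as a percent change, figures like these compare as follows; a quick sketch using the Whisper.cpp means above (for a "fewer is better" metric, a lower tuned value is the improvement):

    def percent_change(stock: float, tuned: float, lower_is_better: bool = True) -> float:
        # Improvement of the tuned run relative to stock, in percent.
        delta = (stock - tuned) if lower_is_better else (tuned - stock)
        return 100.0 * delta / stock

    # Whisper.cpp ggml-medium.en means from the chart above, in seconds.
    print(f"{percent_change(454.29, 449.70):.2f}% faster with tuning")  # ~1.01%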

ONNX Runtime

Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better - ONNX Runtime 1.19
Stock: 165.09 (SE +/- 3.66, N = 15)
AI/ML Tuning Recommendations: 158.53 (SE +/- 3.81, N = 15)
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better - ONNX Runtime 1.19
Stock: 6.09788 (SE +/- 0.13191, N = 15)
AI/ML Tuning Recommendations: 6.35626 (SE +/- 0.14383, N = 15)
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

images/sec, More Is Better - TensorFlow 2.16.1
Stock: 231.18 (SE +/- 0.28, N = 3)
AI/ML Tuning Recommendations: 235.35 (SE +/- 0.20, N = 3)

Llama.cpp

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 2048

Tokens Per Second, More Is Better - Llama.cpp b4154
Stock: 149.00 (SE +/- 2.06, N = 3)
AI/ML Tuning Recommendations: 150.29 (SE +/- 1.36, N = 15)
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

LiteRT

Model: NASNet Mobile

Microseconds, Fewer Is Better - LiteRT 2024-10-15
Stock: 733737 (SE +/- 17324.48, N = 15)
AI/ML Tuning Recommendations: 689396 (SE +/- 22050.37, N = 12)

Whisper.cpp

Model: ggml-small.en - Input: 2016 State of the Union

Seconds, Fewer Is Better - Whisper.cpp 1.6.2
Stock: 221.63 (SE +/- 2.33, N = 3)
AI/ML Tuning Recommendations: 214.63 (SE +/- 0.68, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

XNNPACK

Model: QS8MobileNetV2

us, Fewer Is Better - XNNPACK b7b048
Stock: 10042 (SE +/- 154.48, N = 3)
AI/ML Tuning Recommendations: 9813 (SE +/- 87.21, N = 3)
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP16MobileNetV3Small

us, Fewer Is Better - XNNPACK b7b048
Stock: 10966 (SE +/- 410.82, N = 3)
AI/ML Tuning Recommendations: 10323 (SE +/- 21.33, N = 3)
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP16MobileNetV2

us, Fewer Is Better - XNNPACK b7b048
Stock: 9092 (SE +/- 89.20, N = 3)
AI/ML Tuning Recommendations: 9028 (SE +/- 94.57, N = 3)
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP16MobileNetV1

us, Fewer Is Better - XNNPACK b7b048
Stock: 4634 (SE +/- 15.14, N = 3)
AI/ML Tuning Recommendations: 4513 (SE +/- 10.73, N = 3)
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP32MobileNetV3Small

us, Fewer Is Better - XNNPACK b7b048
Stock: 10488 (SE +/- 11.35, N = 3)
AI/ML Tuning Recommendations: 10387 (SE +/- 28.22, N = 3)
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP32MobileNetV2

us, Fewer Is Better - XNNPACK b7b048
Stock: 9203 (SE +/- 32.13, N = 3)
AI/ML Tuning Recommendations: 9062 (SE +/- 25.31, N = 3)
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP32MobileNetV1

us, Fewer Is Better - XNNPACK b7b048
Stock: 4539 (SE +/- 42.62, N = 3)
AI/ML Tuning Recommendations: 4471 (SE +/- 54.67, N = 3)
1. (CXX) g++ options: -O3 -lrt -lm

Whisperfile

Model Size: Medium

Seconds, Fewer Is Better - Whisperfile 20Aug24
Stock: 200.48 (SE +/- 0.88, N = 3)
AI/ML Tuning Recommendations: 197.24 (SE +/- 0.71, N = 3)

Llama.cpp

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 1024

Tokens Per Second, More Is Better - Llama.cpp b4154
Stock: 97.19 (SE +/- 1.13, N = 4)
AI/ML Tuning Recommendations: 101.71 (SE +/- 0.67, N = 15)
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

images/sec, More Is Better - TensorFlow 2.16.1
Stock: 204.33 (SE +/- 0.57, N = 3)
AI/ML Tuning Recommendations: 207.38 (SE +/- 0.72, N = 3)

Llama.cpp

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 2048

Tokens Per Second, More Is Better - Llama.cpp b4154
Stock: 306.51 (SE +/- 2.62, N = 3)
AI/ML Tuning Recommendations: 308.32 (SE +/- 2.23, N = 15)
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

Inference Time Cost (ms), Fewer Is Better - ONNX Runtime 1.19
Stock: 3.62341 (SE +/- 0.03395, N = 7)
AI/ML Tuning Recommendations: 3.50697 (SE +/- 0.01794, N = 3)
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

Inferences Per Second, More Is Better - ONNX Runtime 1.19
Stock: 276.08 (SE +/- 2.53, N = 7)
AI/ML Tuning Recommendations: 285.10 (SE +/- 1.46, N = 3)
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-152

batches/sec, More Is Better - PyTorch 2.2.1
Stock: 20.60 (SE +/- 0.10, N = 3; MIN: 19.3 / MAX: 21.02)
AI/ML Tuning Recommendations: 21.71 (SE +/- 0.18, N = 3; MIN: 20.13 / MAX: 22.23)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

batches/sec, More Is Better - PyTorch 2.2.1
Stock: 20.78 (SE +/- 0.04, N = 3; MIN: 19.72 / MAX: 21.04)
AI/ML Tuning Recommendations: 21.79 (SE +/- 0.27, N = 3; MIN: 20.38 / MAX: 22.54)

Numpy Benchmark

Score, More Is Better - Numpy Benchmark
Stock: 885.50 (SE +/- 1.94, N = 3)
AI/ML Tuning Recommendations: 887.75 (SE +/- 0.72, N = 3)

Whisperfile

Model Size: Small

Seconds, Fewer Is Better - Whisperfile 20Aug24
Stock: 90.67 (SE +/- 0.71, N = 3)
AI/ML Tuning Recommendations: 88.04 (SE +/- 0.57, N = 3)

Llama.cpp

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 512

Tokens Per Second, More Is Better - Llama.cpp b4154
Stock: 154.60 (SE +/- 2.60, N = 12)
AI/ML Tuning Recommendations: 155.73 (SE +/- 3.23, N = 12)
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llama.cpp

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 2048

Tokens Per Second, More Is Better - Llama.cpp b4154
Stock: 144.53 (SE +/- 0.97, N = 3)
AI/ML Tuning Recommendations: 152.92 (SE +/- 1.38, N = 3)
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

oneDNN

Harness: Recurrent Neural Network Training - Engine: CPU

ms, Fewer Is Better - oneDNN 3.6
Stock: 425.72 (SE +/- 0.52, N = 3; MIN: 419.47)
AI/ML Tuning Recommendations: 406.45 (SE +/- 0.37, N = 3; MIN: 400.13)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Engine: CPU

ms, Fewer Is Better - oneDNN 3.6
Stock: 276.38 (SE +/- 0.68, N = 3; MIN: 269.7)
AI/ML Tuning Recommendations: 262.52 (SE +/- 0.27, N = 3; MIN: 257.81)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

OpenVINO

Model: Noise Suppression Poconet-Like FP16 - Device: CPU

ms, Fewer Is Better - OpenVINO 2024.5
Stock: 13.68 (SE +/- 0.02, N = 3; MIN: 7.09 / MAX: 36.01)
AI/ML Tuning Recommendations: 13.38 (SE +/- 0.02, N = 3; MIN: 6.98 / MAX: 34.62)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Noise Suppression Poconet-Like FP16 - Device: CPU

FPS, More Is Better - OpenVINO 2024.5
Stock: 6747.48 (SE +/- 11.12, N = 3)
AI/ML Tuning Recommendations: 6891.64 (SE +/- 9.90, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

ms, Fewer Is Better - OpenVINO 2024.5
Stock: 56.24 (SE +/- 0.02, N = 3; MIN: 29.3 / MAX: 94.69)
AI/ML Tuning Recommendations: 54.43 (SE +/- 0.03, N = 3; MIN: 28.33 / MAX: 92.29)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

FPS, More Is Better - OpenVINO 2024.5
Stock: 852.83 (SE +/- 0.22, N = 3)
AI/ML Tuning Recommendations: 881.22 (SE +/- 0.40, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Person Detection FP16 - Device: CPU

ms, Fewer Is Better - OpenVINO 2024.5
Stock: 66.89 (SE +/- 0.06, N = 3; MIN: 34.58 / MAX: 130)
AI/ML Tuning Recommendations: 65.56 (SE +/- 0.07, N = 3; MIN: 32.6 / MAX: 131.97)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Person Detection FP16 - Device: CPU

FPS, More Is Better - OpenVINO 2024.5
Stock: 716.20 (SE +/- 0.66, N = 3)
AI/ML Tuning Recommendations: 730.73 (SE +/- 0.74, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

Llama.cpp

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 1024

Tokens Per Second, More Is Better - Llama.cpp b4154
Stock: 96.99 (SE +/- 0.34, N = 3)
AI/ML Tuning Recommendations: 101.77 (SE +/- 1.16, N = 3)
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

ms, Fewer Is Better - OpenVINO 2024.5
Stock: 7.09 (SE +/- 0.01, N = 3; MIN: 4.15 / MAX: 20.16)
AI/ML Tuning Recommendations: 6.66 (SE +/- 0.01, N = 3; MIN: 3.65 / MAX: 22.09)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

FPS, More Is Better - OpenVINO 2024.5
Stock: 6720.98 (SE +/- 13.33, N = 3)
AI/ML Tuning Recommendations: 7144.84 (SE +/- 8.11, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Person Re-Identification Retail FP16 - Device: CPU

ms, Fewer Is Better - OpenVINO 2024.5
Stock: 4.53 (SE +/- 0.01, N = 3; MIN: 1.95 / MAX: 23.94)
AI/ML Tuning Recommendations: 4.41 (SE +/- 0.00, N = 3; MIN: 2.45 / MAX: 17.46)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Person Re-Identification Retail FP16 - Device: CPU

FPS, More Is Better - OpenVINO 2024.5
Stock: 10525.86 (SE +/- 12.77, N = 3)
AI/ML Tuning Recommendations: 10800.12 (SE +/- 6.08, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

ms, Fewer Is Better - OpenVINO 2024.5
Stock: 18.99 (SE +/- 0.06, N = 3; MIN: 9.19 / MAX: 39)
AI/ML Tuning Recommendations: 18.16 (SE +/- 0.02, N = 3; MIN: 7.68 / MAX: 40.38)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

FPS, More Is Better - OpenVINO 2024.5
Stock: 2517.89 (SE +/- 7.45, N = 3)
AI/ML Tuning Recommendations: 2630.42 (SE +/- 2.42, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

ms, Fewer Is Better - OpenVINO 2024.5
Stock: 0.45 (SE +/- 0.01, N = 3; MIN: 0.16 / MAX: 25.14)
AI/ML Tuning Recommendations: 0.43 (SE +/- 0.00, N = 3; MIN: 0.15 / MAX: 23.94)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

FPS, More Is Better - OpenVINO 2024.5
Stock: 140497.34 (SE +/- 313.81, N = 3)
AI/ML Tuning Recommendations: 146329.80 (SE +/- 528.15, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

ms, Fewer Is Better - OpenVINO 2024.5
Stock: 3.94 (SE +/- 0.01, N = 3; MIN: 1.76 / MAX: 17.65)
AI/ML Tuning Recommendations: 3.71 (SE +/- 0.00, N = 3; MIN: 1.71 / MAX: 19.33)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

FPS, More Is Better - OpenVINO 2024.5
Stock: 23587.02 (SE +/- 17.66, N = 3)
AI/ML Tuning Recommendations: 25050.47 (SE +/- 17.84, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

ms, Fewer Is Better - OpenVINO 2024.5
Stock: 5.77 (SE +/- 0.00, N = 3; MIN: 2.28 / MAX: 21.36)
AI/ML Tuning Recommendations: 5.49 (SE +/- 0.01, N = 3; MIN: 2.47 / MAX: 19.56)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

FPS, More Is Better - OpenVINO 2024.5
Stock: 8270.99 (SE +/- 3.96, N = 3)
AI/ML Tuning Recommendations: 8691.00 (SE +/- 7.22, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

ms, Fewer Is Better - OpenVINO 2024.5
Stock: 26.94 (SE +/- 0.02, N = 3; MIN: 15.65 / MAX: 45.15)
AI/ML Tuning Recommendations: 26.28 (SE +/- 0.02, N = 3; MIN: 15.86 / MAX: 40.89)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

FPS, More Is Better - OpenVINO 2024.5
Stock: 3559.87 (SE +/- 2.96, N = 3)
AI/ML Tuning Recommendations: 3649.63 (SE +/- 3.35, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

ms, Fewer Is Better - OpenVINO 2024.5
Stock: 6.72 (SE +/- 0.00, N = 3; MIN: 2.22 / MAX: 22.18)
AI/ML Tuning Recommendations: 6.09 (SE +/- 0.00, N = 3; MIN: 2.21 / MAX: 23.86)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

FPS, More Is Better - OpenVINO 2024.5
Stock: 14035.76 (SE +/- 9.10, N = 3)
AI/ML Tuning Recommendations: 15437.79 (SE +/- 14.35, N = 3)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl -lstdc++fs

Llama.cpp

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 512

Tokens Per Second, More Is Better - Llama.cpp b4154
Stock: 72.40 (SE +/- 0.83, N = 3)
AI/ML Tuning Recommendations: 77.05 (SE +/- 0.77, N = 5)
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

LiteRT

Model: Inception V4

Microseconds, Fewer Is Better - LiteRT 2024-10-15
Stock: 43898.9 (SE +/- 47.06, N = 3)
AI/ML Tuning Recommendations: 43824.9 (SE +/- 159.42, N = 3)

LiteRT

Model: Mobilenet Float

Microseconds, Fewer Is Better - LiteRT 2024-10-15
Stock: 4438.52 (SE +/- 11.59, N = 3)
AI/ML Tuning Recommendations: 4335.67 (SE +/- 7.66, N = 3)

LiteRT

Model: SqueezeNet

Microseconds, Fewer Is Better - LiteRT 2024-10-15
Stock: 7035.74 (SE +/- 31.31, N = 3)
AI/ML Tuning Recommendations: 6926.88 (SE +/- 31.55, N = 3)

Llama.cpp

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 512

Tokens Per Second, More Is Better - Llama.cpp b4154
Stock: 72.99 (SE +/- 0.98, N = 3)
AI/ML Tuning Recommendations: 76.41 (SE +/- 0.99, N = 3)
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

batches/sec, More Is Better - PyTorch 2.2.1
Stock: 51.61 (SE +/- 0.15, N = 3; MIN: 45.56 / MAX: 52.57)
AI/ML Tuning Recommendations: 53.13 (SE +/- 0.28, N = 3; MIN: 46.57 / MAX: 54.35)

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-50

batches/sec, More Is Better - PyTorch 2.2.1
Stock: 51.48 (SE +/- 0.15, N = 3; MIN: 46.04 / MAX: 52.41)
AI/ML Tuning Recommendations: 53.34 (SE +/- 0.26, N = 3; MIN: 49 / MAX: 54.59)

OpenVINO GenAI

Model: Gemma-7b-int4-ov - Device: CPU - Time Per Output Token

ms, Fewer Is Better - OpenVINO GenAI 2024.5
Stock: 26.43 (SE +/- 0.09, N = 3)
AI/ML Tuning Recommendations: 26.31 (SE +/- 0.06, N = 3)

OpenVINO GenAI

Model: Gemma-7b-int4-ov - Device: CPU - Time To First Token

ms, Fewer Is Better - OpenVINO GenAI 2024.5
Stock: 36.13 (SE +/- 0.20, N = 3)
AI/ML Tuning Recommendations: 35.43 (SE +/- 0.08, N = 3)

OpenVINO GenAI

Model: Gemma-7b-int4-ov - Device: CPU

tokens/s, More Is Better - OpenVINO GenAI 2024.5
Stock: 37.84 (SE +/- 0.13, N = 3)
AI/ML Tuning Recommendations: 38.00 (SE +/- 0.09, N = 3)

Whisperfile

Model Size: Tiny

Seconds, Fewer Is Better - Whisperfile 20Aug24
Stock: 31.89 (SE +/- 0.26, N = 3)
AI/ML Tuning Recommendations: 31.44 (SE +/- 0.25, N = 3)

OpenVINO GenAI

Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time Per Output Token

ms, Fewer Is Better - OpenVINO GenAI 2024.5
Stock: 19.60 (SE +/- 0.04, N = 3)
AI/ML Tuning Recommendations: 19.56 (SE +/- 0.07, N = 3)

OpenVINO GenAI

Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time To First Token

ms, Fewer Is Better - OpenVINO GenAI 2024.5
Stock: 29.07 (SE +/- 0.03, N = 3)
AI/ML Tuning Recommendations: 30.70 (SE +/- 0.18, N = 3)

OpenVINO GenAI

Model: Falcon-7b-instruct-int4-ov - Device: CPU

tokens/s, More Is Better - OpenVINO GenAI 2024.5
Stock: 51.01 (SE +/- 0.09, N = 3)
AI/ML Tuning Recommendations: 51.12 (SE +/- 0.19, N = 3)

oneDNN

Harness: Deconvolution Batch shapes_1d - Engine: CPU

ms, Fewer Is Better - oneDNN 3.6
Stock: 6.70897 (SE +/- 0.03430, N = 3; MIN: 6.07)
AI/ML Tuning Recommendations: 6.65482 (SE +/- 0.01789, N = 3; MIN: 3.91)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Llama.cpp

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Text Generation 128

Tokens Per Second, More Is Better - Llama.cpp b4154
Stock: 45.84 (SE +/- 0.05, N = 4)
AI/ML Tuning Recommendations: 46.45 (SE +/- 0.09, N = 4)
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

oneDNN

Harness: IP Shapes 1D - Engine: CPU

ms, Fewer Is Better - oneDNN 3.6
Stock: 0.535874 (SE +/- 0.001151, N = 4; MIN: 0.49)
AI/ML Tuning Recommendations: 0.507355 (SE +/- 0.001097, N = 4; MIN: 0.46)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Llama.cpp

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Text Generation 128

Tokens Per Second, More Is Better - Llama.cpp b4154
Stock: 48.07 (SE +/- 0.07, N = 4)
AI/ML Tuning Recommendations: 48.73 (SE +/- 0.05, N = 4)
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

OpenVINO GenAI

Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time Per Output Token

ms, Fewer Is Better - OpenVINO GenAI 2024.5
Stock: 17.98 (SE +/- 0.05, N = 4)
AI/ML Tuning Recommendations: 17.71 (SE +/- 0.04, N = 4)

OpenVINO GenAI

Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time To First Token

ms, Fewer Is Better - OpenVINO GenAI 2024.5
Stock: 24.17 (SE +/- 0.14, N = 4)
AI/ML Tuning Recommendations: 25.66 (SE +/- 0.07, N = 4)

OpenVINO GenAI

Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU

tokens/s, More Is Better - OpenVINO GenAI 2024.5
Stock: 55.63 (SE +/- 0.16, N = 4)
AI/ML Tuning Recommendations: 56.46 (SE +/- 0.14, N = 4)
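The throughput and time-per-output-token figures are consistent with each other: tokens/s is roughly 1000 divided by the per-token latency in milliseconds. A quick check against the Phi-3 numbers above:

    # tokens/s should approximate 1000 / (ms per output token).
    for label, ms_per_token, reported in [
        ("Stock", 17.98, 55.63),
        ("AI/ML Tuning Recommendations", 17.71, 56.46),
    ]:
        print(f"{label}: {1000.0 / ms_per_token:.2f} tokens/s (reported {reported})")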

oneDNN

Harness: IP Shapes 3D - Engine: CPU

ms, Fewer Is Better - oneDNN 3.6
Stock: 0.265564 (SE +/- 0.000944, N = 5; MIN: 0.24)
AI/ML Tuning Recommendations: 0.254123 (SE +/- 0.000491, N = 5; MIN: 0.24)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Llama.cpp

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Text Generation 128

Tokens Per Second, More Is Better - Llama.cpp b4154
Stock: 92.82 (SE +/- 0.49, N = 6)
AI/ML Tuning Recommendations: 95.42 (SE +/- 0.43, N = 6)
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

oneDNN

Harness: Convolution Batch Shapes Auto - Engine: CPU

ms, Fewer Is Better - oneDNN 3.6
Stock: 0.341475 (SE +/- 0.000295, N = 7; MIN: 0.32)
AI/ML Tuning Recommendations: 0.321833 (SE +/- 0.001024, N = 7; MIN: 0.31)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Engine: CPU

ms, Fewer Is Better - oneDNN 3.6
Stock: 0.718484 (SE +/- 0.001205, N = 9; MIN: 0.62)
AI/ML Tuning Recommendations: 0.677050 (SE +/- 0.000482, N = 9; MIN: 0.58)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

XNNPACK

System Power Consumption Monitor

Watts, Fewer Is Better - XNNPACK b7b048
Stock: Min 99.6 / Avg 377.9 / Max 403.8
AI/ML Tuning Recommendations: Min 100.4 / Avg 424.2 / Max 458.8

XNNPACK

CPU Power Consumption Monitor

Watts, Fewer Is Better - XNNPACK b7b048
Stock: Min 1.9 / Avg 241.3 / Max 263.5
AI/ML Tuning Recommendations: Min 0.0 / Avg 279.8 / Max 305.1

ONNX Runtime

System Power Consumption Monitor

Watts, Fewer Is Better - ONNX Runtime 1.19
Stock: Min 98.9 / Avg 429.9 / Max 477.0
AI/ML Tuning Recommendations: Min 98.5 / Avg 460.1 / Max 509.9

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better - ONNX Runtime 1.19
Stock: Min 3.0 / Avg 248.9 / Max 279.9
AI/ML Tuning Recommendations: Min 77.0 / Avg 270.8 / Max 300.9

ONNX Runtime

System Power Consumption Monitor

Watts, Fewer Is Better - ONNX Runtime 1.19
Stock: Min 98.4 / Avg 328.4 / Max 350.0
AI/ML Tuning Recommendations: Min 99.3 / Avg 375.5 / Max 395.9

ONNX Runtime

CPU Power Consumption Monitor

Watts, Fewer Is Better - ONNX Runtime 1.19
Stock: Min 1.5 / Avg 203.6 / Max 221.2
AI/ML Tuning Recommendations: Min 82.2 / Avg 237.3 / Max 256.8

Numpy Benchmark

System Power Consumption Monitor

Watts, Fewer Is Better - Numpy Benchmark
Stock: Min 97.9 / Avg 171.2 / Max 178.1
AI/ML Tuning Recommendations: Min 98.9 / Avg 176.8 / Max 188.2

Numpy Benchmark

CPU Power Consumption Monitor

Watts, Fewer Is Better - Numpy Benchmark
Stock: Min 3.0 / Avg 78.8 / Max 85.7
AI/ML Tuning Recommendations: Min 47.9 / Avg 80.7 / Max 85.3

oneDNN

System Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 98.9 / Avg 329.4 / Max 466.9
AI/ML Tuning Recommendations: Min 98.9 / Avg 361.1 / Max 498.3

oneDNN

CPU Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 0.1 / Avg 197.3 / Max 289.9
AI/ML Tuning Recommendations: Min 127.7 / Avg 222.2 / Max 267.0

oneDNN

System Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 97.3 / Avg 337.3 / Max 448.7
AI/ML Tuning Recommendations: Min 97.8 / Avg 365.3 / Max 470.9

oneDNN

CPU Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 1.1 / Avg 203.6 / Max 264.7
AI/ML Tuning Recommendations: Min 103.9 / Avg 226.6 / Max 275.0

oneDNN

System Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 97.5 / Avg 274.5 / Max 366.1
AI/ML Tuning Recommendations: Min 97.2 / Avg 303.0 / Max 419.6

oneDNN

CPU Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 0.3 / Avg 155.2 / Max 226.2
AI/ML Tuning Recommendations: Min 85.8 / Avg 188.4 / Max 252.5

oneDNN

System Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 97.4 / Avg 255.0 / Max 348.1
AI/ML Tuning Recommendations: Min 97.5 / Avg 268.5 / Max 401.0

oneDNN

CPU Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 3.8 / Avg 146.3 / Max 204.5
AI/ML Tuning Recommendations: Min 72.3 / Avg 169.6 / Max 230.8

oneDNN

System Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 97.3 / Avg 220.2 / Max 359.1
AI/ML Tuning Recommendations: Min 97.1 / Avg 186.6 / Max 377.1

oneDNN

CPU Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 3.3 / Avg 107.2 / Max 148.5
AI/ML Tuning Recommendations: Min 103.2 / Avg 116.5 / Max 135.9

oneDNN

System Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 96.9 / Avg 310.1 / Max 409.7
AI/ML Tuning Recommendations: Min 98.1 / Avg 331.7 / Max 435.9

oneDNN

CPU Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 2.2 / Avg 174.0 / Max 251.2
AI/ML Tuning Recommendations: Min 100.5 / Avg 202.2 / Max 263.1

oneDNN

System Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 97.9 / Avg 288.6 / Max 451.9
AI/ML Tuning Recommendations: Min 98.8 / Avg 244.9 / Max 408.8

oneDNN

CPU Power Consumption Monitor

Watts, Fewer Is Better - oneDNN 3.6
Stock: Min 1.4 / Avg 147.2 / Max 241.7
AI/ML Tuning Recommendations: Min 79.0 / Avg 177.2 / Max 273.9

PyTorch

System Power Consumption Monitor

Watts, Fewer Is Better - PyTorch 2.2.1
Stock: Min 98.2 / Avg 427.3 / Max 460.6
AI/ML Tuning Recommendations: Min 98.8 / Avg 480.2 / Max 527.5

PyTorch

CPU Power Consumption Monitor

Watts, Fewer Is Better - PyTorch 2.2.1
Stock: Min 52.1 / Avg 269.5 / Max 294.9
AI/ML Tuning Recommendations: Min 58.3 / Avg 313.0 / Max 344.4

PyTorch

System Power Consumption Monitor

Watts, Fewer Is Better - PyTorch 2.2.1
Stock: Min 98.8 / Avg 395.8 / Max 456.8
AI/ML Tuning Recommendations: Min 99.0 / Avg 441.7 / Max 516.7

PyTorch

CPU Power Consumption Monitor

Watts, Fewer Is Better - PyTorch 2.2.1
Stock: Min 1.9 / Avg 237.5 / Max 291.1
AI/ML Tuning Recommendations: Min 53.6 / Avg 285.5 / Max 336.3

PyTorch

System Power Consumption Monitor

Watts, Fewer Is Better - PyTorch 2.2.1
Stock: Min 96.8 / Avg 426.1 / Max 462.5
AI/ML Tuning Recommendations: Min 95.7 / Avg 486.4 / Max 526.2

PyTorch

CPU Power Consumption Monitor

Watts, Fewer Is Better - PyTorch 2.2.1
Stock: Min 0.7 / Avg 264.7 / Max 294.6
AI/ML Tuning Recommendations: Min 52.9 / Avg 311.9 / Max 342.9

PyTorch

System Power Consumption Monitor

Watts, Fewer Is Better - PyTorch 2.2.1
Stock: Min 98.0 / Avg 393.2 / Max 455.2
AI/ML Tuning Recommendations: Min 98.4 / Avg 440.0 / Max 515.9

PyTorch

CPU Power Consumption Monitor

Watts, Fewer Is Better - PyTorch 2.2.1
Stock: Min 2.9 / Avg 234.6 / Max 290.6
AI/ML Tuning Recommendations: Min 55.1 / Avg 281.1 / Max 335.8

LiteRT

System Power Consumption Monitor

Watts, Fewer Is Better - LiteRT 2024-10-15
Stock: Min 98.4 / Avg 400.2 / Max 424.0
AI/ML Tuning Recommendations: Min 98.1 / Avg 454.4 / Max 495.8

LiteRT

CPU Power Consumption Monitor

Watts, Fewer Is Better - LiteRT 2024-10-15
Stock: Min 1.4 / Avg 243.5 / Max 272.8
AI/ML Tuning Recommendations: Min 101.0 / Avg 303.8 / Max 326.5

LiteRT

System Power Consumption Monitor

Watts, Fewer Is Better - LiteRT 2024-10-15
Stock: Min 99.2 / Avg 373.3 / Max 408.2
AI/ML Tuning Recommendations: Min 99.6 / Avg 442.1 / Max 460.5

LiteRT

CPU Power Consumption Monitor

Watts, Fewer Is Better - LiteRT 2024-10-15
Stock: Min 3.6 / Avg 237.5 / Max 264.4
AI/ML Tuning Recommendations: Min 87.0 / Avg 279.8 / Max 303.9

LiteRT

System Power Consumption Monitor

Watts, Fewer Is Better - LiteRT 2024-10-15
Stock: Min 97.1 / Avg 339.2 / Max 364.8
AI/ML Tuning Recommendations: Min 97.7 / Avg 390.9 / Max 414.6

LiteRT

CPU Power Consumption Monitor

Watts, Fewer Is Better - LiteRT 2024-10-15
Stock: Min 3.7 / Avg 221.3 / Max 238.5
AI/ML Tuning Recommendations: Min 99.2 / Avg 261.1 / Max 279.6

LiteRT

System Power Consumption Monitor

Watts, Fewer Is Better - LiteRT 2024-10-15
Stock: Min 101.7 / Avg 395.1 / Max 407.3
AI/ML Tuning Recommendations: Min 100.3 / Avg 420.8 / Max 461.0

LiteRT

CPU Power Consumption Monitor

Watts, Fewer Is Better - LiteRT 2024-10-15
Stock: Min 1.1 / Avg 236.1 / Max 263.3
AI/ML Tuning Recommendations: Min 124.4 / Avg 282.6 / Max 304.2

TensorFlow

System Power Consumption Monitor

Watts, Fewer Is Better - TensorFlow 2.16.1
Stock: Min 100.5 / Avg 472.2 / Max 518.6
AI/ML Tuning Recommendations: Min 101.6 / Avg 487.8 / Max 535.3

TensorFlow

CPU Power Consumption Monitor

Watts, Fewer Is Better - TensorFlow 2.16.1
Stock: Min 1.3 / Avg 254.8 / Max 268.0
AI/ML Tuning Recommendations: Min 53.7 / Avg 264.7 / Max 276.0
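Combining throughput with average CPU power gives a rough efficiency figure. A sketch, on the assumption (not stated in the export) that this power monitor corresponds to the batch-512 ResNet-50 run reported earlier; under that assumption the tuned configuration is faster in absolute terms but slightly less efficient per watt:

    # Rough perf-per-watt estimate: images/sec divided by average CPU watts.
    # Assumes this monitor covers the batch-512 ResNet-50 run shown earlier.
    results = {
        "Stock": (231.18, 254.8),
        "AI/ML Tuning Recommendations": (235.35, 264.7),
    }
    for label, (images_per_sec, avg_watts) in results.items():
        print(f"{label}: {images_per_sec / avg_watts:.3f} images/sec per watt")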

TensorFlow

System Power Consumption Monitor

Watts, Fewer Is Better - TensorFlow 2.16.1
Stock: Min 99.1 / Avg 442.9 / Max 475.7
AI/ML Tuning Recommendations: Min 98.6 / Avg 460.6 / Max 494.4

TensorFlow

CPU Power Consumption Monitor

Watts, Fewer Is Better - TensorFlow 2.16.1
Stock: Min 4.1 / Avg 240.6 / Max 255.8
AI/ML Tuning Recommendations: Min 53.0 / Avg 253.1 / Max 264.6

Whisper.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Whisper.cpp 1.6.2
Stock: Min 98.2 / Avg 389.5 / Max 469.4
AI/ML Tuning Recommendations: Min 98.7 / Avg 440.8 / Max 504.3

Whisper.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Whisper.cpp 1.6.2
Stock: Min 3.4 / Avg 244.1 / Max 261.6
AI/ML Tuning Recommendations: Min 46.7 / Avg 283.0 / Max 299.0

Whisper.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Whisper.cpp 1.6.2
Stock: Min 98.5 / Avg 357.9 / Max 406.6
AI/ML Tuning Recommendations: Min 98.5 / Avg 391.4 / Max 464.3

Whisper.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Whisper.cpp 1.6.2
Stock: Min 0.8 / Avg 220.1 / Max 235.8
AI/ML Tuning Recommendations: Min 57.9 / Avg 253.6 / Max 273.4

Whisperfile

System Power Consumption Monitor

Watts, Fewer Is Better - Whisperfile 20Aug24
Stock: Min 98.2 / Avg 340.8 / Max 387.8
AI/ML Tuning Recommendations: Min 98.4 / Avg 361.7 / Max 409.6

Whisperfile

CPU Power Consumption Monitor

Watts, Fewer Is Better - Whisperfile 20Aug24
Stock: Min 3.1 / Avg 188.6 / Max 206.7
AI/ML Tuning Recommendations: Min 52.4 / Avg 204.4 / Max 221.7

Whisperfile

System Power Consumption Monitor

Watts, Fewer Is Better - Whisperfile 20Aug24
Stock: Min 99.1 / Avg 327.7 / Max 371.6
AI/ML Tuning Recommendations: Min 98.0 / Avg 351.0 / Max 405.4

Whisperfile

CPU Power Consumption Monitor

Watts, Fewer Is Better - Whisperfile 20Aug24
Stock: Min 5.5 / Avg 180.6 / Max 208.1
AI/ML Tuning Recommendations: Min 46.8 / Avg 200.1 / Max 231.1

Whisperfile

System Power Consumption Monitor

Watts, Fewer Is Better - Whisperfile 20Aug24
Stock: Min 99.3 / Avg 256.2 / Max 339.0
AI/ML Tuning Recommendations: Min 98.6 / Avg 285.0 / Max 376.0

Whisperfile

CPU Power Consumption Monitor

Watts, Fewer Is Better - Whisperfile 20Aug24
Stock: Min 2.7 / Avg 137.0 / Max 193.3
AI/ML Tuning Recommendations: Min 0.4 / Avg 155.0 / Max 217.0

Llama.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 99.2 / Avg 415.9 / Max 506.6
AI/ML Tuning Recommendations: Min 98.9 / Avg 463.4 / Max 547.0

Llama.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 2.7 / Avg 248.2 / Max 291.5
AI/ML Tuning Recommendations: Min 86.4 / Avg 288.4 / Max 323.2

Llama.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 98.9 / Avg 375.5 / Max 446.8
AI/ML Tuning Recommendations: Min 98.9 / Avg 444.6 / Max 524.3

Llama.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 3.1 / Avg 232.5 / Max 266.0
AI/ML Tuning Recommendations: Min 92.1 / Avg 285.4 / Max 337.6

Llama.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 98.9 / Avg 372.0 / Max 414.5
AI/ML Tuning Recommendations: Min 100.3 / Avg 442.0 / Max 512.9

Llama.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 9.9 / Avg 224.4 / Max 269.8
AI/ML Tuning Recommendations: Min 94.7 / Avg 289.4 / Max 335.4

Llama.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 98 / Avg 428 / Max 552
AI/ML Tuning Recommendations: Min 99 / Avg 462 / Max 644

Llama.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 6.0 / Avg 238.9 / Max 331.9
AI/ML Tuning Recommendations: Min 56.8 / Avg 293.0 / Max 399.7

Llama.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 98.1 / Avg 411.1 / Max 499.8
AI/ML Tuning Recommendations: Min 98.2 / Avg 467.8 / Max 533.6

Llama.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 12.7 / Avg 254.5 / Max 302.2
AI/ML Tuning Recommendations: Min 95.6 / Avg 307.1 / Max 351.1

Llama.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 97.9 / Avg 371.5 / Max 450.0
AI/ML Tuning Recommendations: Min 97.3 / Avg 433.6 / Max 515.3

Llama.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 14.5 / Avg 244.4 / Max 300.1
AI/ML Tuning Recommendations: Min 97.3 / Avg 292.2 / Max 349.7

Llama.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 96.7 / Avg 256.1 / Max 410.9
AI/ML Tuning Recommendations: Min 99.9 / Avg 323.5 / Max 464.8

Llama.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 11.3 / Avg 165.3 / Max 269.9
AI/ML Tuning Recommendations: Min 86.4 / Avg 203.0 / Max 305.1

Llama.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 97.9 / Avg 410.1 / Max 502.2
AI/ML Tuning Recommendations: Min 100.3 / Avg 454.2 / Max 543.8

Llama.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 11.3 / Avg 247.5 / Max 289.5
AI/ML Tuning Recommendations: Min 114.5 / Avg 287.8 / Max 315.6

Llama.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 98.5 / Avg 382.8 / Max 438.4
AI/ML Tuning Recommendations: Min 98.4 / Avg 450.9 / Max 513.6

Llama.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 17.5 / Avg 231.5 / Max 260.6
AI/ML Tuning Recommendations: Min 123.4 / Avg 289.4 / Max 326.4

Llama.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 98.4 / Avg 365.8 / Max 443.9
AI/ML Tuning Recommendations: Min 100.4 / Avg 456.3 / Max 525.6

Llama.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 7.3 / Avg 221.9 / Max 263.5
AI/ML Tuning Recommendations: Min 126.7 / Avg 294.6 / Max 332.7

Llama.cpp

System Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 98 / Avg 390 / Max 558
AI/ML Tuning Recommendations: Min 99 / Avg 536 / Max 650

Llama.cpp

CPU Power Consumption Monitor

Watts, Fewer Is Better - Llama.cpp b4154
Stock: Min 0.6 / Avg 236.1 / Max 334.7
AI/ML Tuning Recommendations: Min 57.2 / Avg 321.1 / Max 401.3

OpenVINO GenAI

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO GenAI 2024.5
Stock: Min 99.9 / Avg 420.1 / Max 486.7
AI/ML Tuning Recommendations: Min 97.9 / Avg 483.3 / Max 551.8

OpenVINO GenAI

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO GenAI 2024.5
Stock: Min 14.0 / Avg 235.6 / Max 293.0
AI/ML Tuning Recommendations: Min 46.1 / Avg 291.1 / Max 341.4

OpenVINO GenAI

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO GenAI 2024.5
Stock: Min 98.2 / Avg 426.7 / Max 509.8
AI/ML Tuning Recommendations: Min 98.1 / Avg 490.1 / Max 585.6

OpenVINO GenAI

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO GenAI 2024.5
Stock: Min 6.1 / Avg 239.0 / Max 309.8
AI/ML Tuning Recommendations: Min 52.7 / Avg 293.6 / Max 364.5

OpenVINO GenAI

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO GenAI 2024.5
Stock: Min 99.1 / Avg 331.8 / Max 453.9
AI/ML Tuning Recommendations: Min 98.1 / Avg 380.9 / Max 543.2

OpenVINO GenAI

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO GenAI 2024.5
Stock: Min 0.8 / Avg 190.8 / Max 280.5
AI/ML Tuning Recommendations: Min 69.4 / Avg 248.5 / Max 348.6

OpenVINO

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 98.8 / Avg 523.1 / Max 566.9
AI/ML Tuning Recommendations: Min 99.0 / Avg 545.2 / Max 588.7

OpenVINO

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 26.6 / Avg 283.4 / Max 308.8
AI/ML Tuning Recommendations: Min 106.9 / Avg 299.0 / Max 322.1

OpenVINO

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 100.6 / Avg 506.3 / Max 535.9
AI/ML Tuning Recommendations: Min 100.0 / Avg 523.2 / Max 551.8

OpenVINO

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 0.2 / Avg 301.5 / Max 333.3
AI/ML Tuning Recommendations: Min 81.7 / Avg 320.5 / Max 341.9

OpenVINO

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 98.2 / Avg 485.5 / Max 529.0
AI/ML Tuning Recommendations: Min 101.1 / Avg 512.1 / Max 561.7

OpenVINO

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 5.3 / Avg 278.3 / Max 306.1
AI/ML Tuning Recommendations: Min 81.9 / Avg 300.1 / Max 323.0

OpenVINO

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 99.3 / Avg 549.0 / Max 578.2
AI/ML Tuning Recommendations: Min 100.2 / Avg 560.5 / Max 599.6

OpenVINO

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 6.9 / Avg 317.3 / Max 351.6
AI/ML Tuning Recommendations: Min 117.3 / Avg 335.4 / Max 361.8

OpenVINO

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 101.1 / Avg 472.8 / Max 504.1
AI/ML Tuning Recommendations: Min 101.2 / Avg 511.1 / Max 536.6

OpenVINO

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 13.5 / Avg 282.0 / Max 312.1
AI/ML Tuning Recommendations: Min 90.1 / Avg 309.0 / Max 331.5

OpenVINO

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 100 / Avg 556 / Max 612
AI/ML Tuning Recommendations: Min 99 / Avg 584 / Max 636

OpenVINO

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 16.6 / Avg 313.1 / Max 351.4
AI/ML Tuning Recommendations: Min 79.4 / Avg 334.8 / Max 365.0

OpenVINO

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 99.3 / Avg 441.8 / Max 469.2
AI/ML Tuning Recommendations: Min 98.7 / Avg 461.9 / Max 496.8

OpenVINO

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 32.7 / Avg 255.4 / Max 281.9
AI/ML Tuning Recommendations: Min 79.1 / Avg 276.2 / Max 299.8

OpenVINO

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 100.0 / Avg 485.3 / Max 521.8
AI/ML Tuning Recommendations: Min 99.2 / Avg 521.4 / Max 553.7

OpenVINO

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 49.3 / Avg 290.5 / Max 319.5
AI/ML Tuning Recommendations: Min 132.1 / Avg 314.2 / Max 336.5

OpenVINO

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 102.3 / Avg 440.1 / Max 472.4
AI/ML Tuning Recommendations: Min 105.1 / Avg 501.8 / Max 525.4

OpenVINO

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 83.1 / Avg 271.0 / Max 295.4
AI/ML Tuning Recommendations: Min 94.6 / Avg 304.8 / Max 328.6

OpenVINO

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 100 / Avg 594 / Max 647
AI/ML Tuning Recommendations: Min 100 / Avg 615 / Max 672

OpenVINO

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 62.9 / Avg 324.7 / Max 356.2
AI/ML Tuning Recommendations: Min 78.6 / Avg 339.7 / Max 368.6

OpenVINO

System Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 99.1 / Avg 436.2 / Max 470.9
AI/ML Tuning Recommendations: Min 100.1 / Avg 457.0 / Max 491.2

OpenVINO

CPU Power Consumption Monitor

Watts, Fewer Is Better - OpenVINO 2024.5
Stock: Min 1.5 / Avg 262.5 / Max 291.6
AI/ML Tuning Recommendations: Min 3.7 / Avg 275.9 / Max 306.1


Phoronix Test Suite v10.8.5