AMD EPYC Turin AI/ML Tuning Guide

AMD EPYC 9655P benchmarked stock versus the AI/ML workload recommendations from the AMD EPYC 9005 BIOS and workload tuning guide: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/tuning-guides/58467_amd-epyc-9005-tg-bios-and-workload.pdf. Benchmarks by Michael Larabel for a future article.

HTML result view exported from: https://openbenchmarking.org/result/2411286-NE-AMDEPYCTU24&sor&grs.

Stock and AI/ML Tuning Recommendations (both configurations ran on identical hardware and software):

Processor: AMD EPYC 9655P 96-Core @ 2.60GHz (96 Cores / 192 Threads)
Motherboard: Supermicro Super Server H13SSL-N v1.01 (3.0 BIOS)
Chipset: AMD 1Ah
Memory: 12 x 64GB DDR5-6000MT/s Micron MTC40F2046S1RC64BDY QSFF
Disk: 3201GB Micron_7450_MTFDKCB3T2TFS
Graphics: ASPEED
Network: 2 x Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 24.10
Kernel: 6.12.0-rc7-linux-pm-next-phx (x86_64)
Desktop: GNOME Shell 47.0
Display Server: X Server
Compiler: GCC 14.2.0
File-System: ext4
Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2,rust --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xb002116
Python Details: Python 3.12.7
Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; reg_file_data_sampling: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected, BHI: Not affected; srbds: Not affected; tsx_async_abort: Not affected
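
The software-side settings recorded above can be sanity-checked from userspace before a run. A minimal sketch in Python, assuming only the standard Linux sysfs interfaces (the paths are stock kernel ABI, not anything specific to this result file):

    # check_tuning.py: verify the governor, boost, and THP settings listed above.
    from pathlib import Path

    def read(path: str) -> str:
        return Path(path).read_text().strip()

    governor = read("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor")
    boost = read("/sys/devices/system/cpu/cpufreq/boost")  # acpi-cpufreq: "1" = enabled
    # THP lists every mode with the active one in brackets, e.g. "always [madvise] never"
    thp = read("/sys/kernel/mm/transparent_hugepage/enabled")

    print(f"scaling governor: {governor}  (this result: performance)")
    print(f"boost:            {boost}  (this result: 1)")
    print(f"THP:              {thp}  (this result: madvise active)")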

Results overview: the interactive side-by-side summary table does not survive this HTML export intact; every data point from it is repeated in the per-test results below.
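
With mixed higher-is-better and lower-is-better metrics, the two runs are easiest to compare by folding each pair into a "tuned over stock" speedup ratio and summarizing with a geometric mean. A small illustrative Python sketch using three pairs from the per-test results below (the aggregation is illustrative, not part of the export):

    # Fold mixed-direction results into tuned-vs-stock speedups, then geomean them.
    from math import prod

    # (name, tuned, stock, higher_is_better) -- values copied from the tables below
    results = [
        ("Llama.cpp Mistral-7B PP512, tokens/s",  77.05,  72.40, True),
        ("OpenVINO Weld Porosity FP16-INT8, ms",   6.09,   6.72, False),
        ("oneDNN RNN Inference, ms",             262.52, 276.38, False),
    ]

    ratios = []
    for name, tuned, stock, higher in results:
        r = tuned / stock if higher else stock / tuned  # > 1.0 means tuning helped
        ratios.append(r)
        print(f"{name}: {r:.3f}x")

    print(f"geometric mean speedup: {prod(ratios) ** (1 / len(ratios)):.3f}x")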

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 6.09 (SE +/- 0.00, N = 3; MIN 2.21 / MAX 23.86). Stock: 6.72 (SE +/- 0.00, N = 3; MIN 2.22 / MAX 22.18).

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2024.5, FPS (more is better). AI/ML Tuning Recommendations: 15437.79 (SE +/- 14.35, N = 3). Stock: 14035.76 (SE +/- 9.10, N = 3).

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 6.66 (SE +/- 0.01, N = 3; MIN 3.65 / MAX 22.09). Stock: 7.09 (SE +/- 0.01, N = 3; MIN 4.15 / MAX 20.16).

Llama.cpp

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 512

Llama.cpp b4154, Tokens Per Second (more is better). AI/ML Tuning Recommendations: 77.05 (SE +/- 0.77, N = 5). Stock: 72.40 (SE +/- 0.83, N = 3).

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2024.5, FPS (more is better). AI/ML Tuning Recommendations: 7144.84 (SE +/- 8.11, N = 3). Stock: 6720.98 (SE +/- 13.33, N = 3).

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenVINO 2024.5, FPS (more is better). AI/ML Tuning Recommendations: 25050.47 (SE +/- 17.84, N = 3). Stock: 23587.02 (SE +/- 17.66, N = 3).

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenVINO 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 3.71 (SE +/- 0.00, N = 3; MIN 1.71 / MAX 19.33). Stock: 3.94 (SE +/- 0.01, N = 3; MIN 1.76 / MAX 17.65).

oneDNN

Harness: Deconvolution Batch shapes_3d - Engine: CPU

oneDNN 3.6, ms (fewer is better). AI/ML Tuning Recommendations: 0.677050 (SE +/- 0.000482, N = 9; MIN 0.58). Stock: 0.718484 (SE +/- 0.001205, N = 9; MIN 0.62).

oneDNN

Harness: Convolution Batch Shapes Auto - Engine: CPU

oneDNN 3.6, ms (fewer is better). AI/ML Tuning Recommendations: 0.321833 (SE +/- 0.001024, N = 7; MIN 0.31). Stock: 0.341475 (SE +/- 0.000295, N = 7; MIN 0.32).

Llama.cpp

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 2048

Llama.cpp b4154, Tokens Per Second (more is better). AI/ML Tuning Recommendations: 152.92 (SE +/- 1.38, N = 3). Stock: 144.53 (SE +/- 0.97, N = 3).

oneDNN

Harness: IP Shapes 1D - Engine: CPU

oneDNN 3.6, ms (fewer is better). AI/ML Tuning Recommendations: 0.507355 (SE +/- 0.001097, N = 4; MIN 0.46). Stock: 0.535874 (SE +/- 0.001151, N = 4; MIN 0.49).

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-152

PyTorch 2.2.1, batches/sec (more is better). AI/ML Tuning Recommendations: 21.71 (SE +/- 0.18, N = 3; MIN 20.13 / MAX 22.23). Stock: 20.60 (SE +/- 0.10, N = 3; MIN 19.3 / MAX 21.02).

oneDNN

Harness: Recurrent Neural Network Inference - Engine: CPU

oneDNN 3.6, ms (fewer is better). AI/ML Tuning Recommendations: 262.52 (SE +/- 0.27, N = 3; MIN 257.81). Stock: 276.38 (SE +/- 0.68, N = 3; MIN 269.7).

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 5.49 (SE +/- 0.01, N = 3; MIN 2.47 / MAX 19.56). Stock: 5.77 (SE +/- 0.00, N = 3; MIN 2.28 / MAX 21.36).

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2024.5, FPS (more is better). AI/ML Tuning Recommendations: 8691.00 (SE +/- 7.22, N = 3). Stock: 8270.99 (SE +/- 3.96, N = 3).

Llama.cpp

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 1024

Llama.cpp b4154, Tokens Per Second (more is better). AI/ML Tuning Recommendations: 101.77 (SE +/- 1.16, N = 3). Stock: 96.99 (SE +/- 0.34, N = 3).

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

PyTorch 2.2.1, batches/sec (more is better). AI/ML Tuning Recommendations: 21.79 (SE +/- 0.27, N = 3; MIN 20.38 / MAX 22.54). Stock: 20.78 (SE +/- 0.04, N = 3; MIN 19.72 / MAX 21.04).

oneDNN

Harness: Recurrent Neural Network Training - Engine: CPU

oneDNN 3.6, ms (fewer is better). AI/ML Tuning Recommendations: 406.45 (SE +/- 0.37, N = 3; MIN 400.13). Stock: 425.72 (SE +/- 0.52, N = 3; MIN 419.47).

Llama.cpp

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 512

Llama.cpp b4154, Tokens Per Second (more is better). AI/ML Tuning Recommendations: 76.41 (SE +/- 0.99, N = 3). Stock: 72.99 (SE +/- 0.98, N = 3).

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 0.43 (SE +/- 0.00, N = 3; MIN 0.15 / MAX 23.94). Stock: 0.45 (SE +/- 0.01, N = 3; MIN 0.16 / MAX 25.14).

Llama.cpp

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 1024

Llama.cpp b4154, Tokens Per Second (more is better). AI/ML Tuning Recommendations: 101.71 (SE +/- 0.67, N = 15). Stock: 97.19 (SE +/- 1.13, N = 4).

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenVINO 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 18.16 (SE +/- 0.02, N = 3; MIN 7.68 / MAX 40.38). Stock: 18.99 (SE +/- 0.06, N = 3; MIN 9.19 / MAX 39).

oneDNN

Harness: IP Shapes 3D - Engine: CPU

oneDNN 3.6, ms (fewer is better). AI/ML Tuning Recommendations: 0.254123 (SE +/- 0.000491, N = 5; MIN 0.24). Stock: 0.265564 (SE +/- 0.000944, N = 5; MIN 0.24).

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenVINO 2024.5, FPS (more is better). AI/ML Tuning Recommendations: 2630.42 (SE +/- 2.42, N = 3). Stock: 2517.89 (SE +/- 7.45, N = 3).

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2024.5, FPS (more is better). AI/ML Tuning Recommendations: 146329.80 (SE +/- 528.15, N = 3). Stock: 140497.34 (SE +/- 313.81, N = 3).

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-50

PyTorch 2.2.1, batches/sec (more is better). AI/ML Tuning Recommendations: 53.34 (SE +/- 0.26, N = 3; MIN 49 / MAX 54.59). Stock: 51.48 (SE +/- 0.15, N = 3; MIN 46.04 / MAX 52.41).

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2024.5, FPS (more is better). AI/ML Tuning Recommendations: 881.22 (SE +/- 0.40, N = 3). Stock: 852.83 (SE +/- 0.22, N = 3).

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 54.43 (SE +/- 0.03, N = 3; MIN 28.33 / MAX 92.29). Stock: 56.24 (SE +/- 0.02, N = 3; MIN 29.3 / MAX 94.69).

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.19, Inferences Per Second (more is better). AI/ML Tuning Recommendations: 285.10 (SE +/- 1.46, N = 3). Stock: 276.08 (SE +/- 2.53, N = 7).

Whisper.cpp

Model: ggml-small.en - Input: 2016 State of the Union

Whisper.cpp 1.6.2, Seconds (fewer is better). AI/ML Tuning Recommendations: 214.63 (SE +/- 0.68, N = 3). Stock: 221.63 (SE +/- 2.33, N = 3).

Whisperfile

Model Size: Small

Whisperfile 20Aug24, Seconds (fewer is better). AI/ML Tuning Recommendations: 88.04 (SE +/- 0.57, N = 3). Stock: 90.67 (SE +/- 0.71, N = 3).

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

PyTorch 2.2.1, batches/sec (more is better). AI/ML Tuning Recommendations: 53.13 (SE +/- 0.28, N = 3; MIN 46.57 / MAX 54.35). Stock: 51.61 (SE +/- 0.15, N = 3; MIN 45.56 / MAX 52.57).

Llama.cpp

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Text Generation 128

Llama.cpp b4154, Tokens Per Second (more is better). AI/ML Tuning Recommendations: 95.42 (SE +/- 0.43, N = 6). Stock: 92.82 (SE +/- 0.49, N = 6).

OpenVINO

Model: Person Re-Identification Retail FP16 - Device: CPU

OpenVINO 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 4.41 (SE +/- 0.00, N = 3; MIN 2.45 / MAX 17.46). Stock: 4.53 (SE +/- 0.01, N = 3; MIN 1.95 / MAX 23.94).

XNNPACK

Model: FP16MobileNetV1

XNNPACK b7b048, us (fewer is better). AI/ML Tuning Recommendations: 4513 (SE +/- 10.73, N = 3). Stock: 4634 (SE +/- 15.14, N = 3).

OpenVINO

Model: Person Re-Identification Retail FP16 - Device: CPU

OpenVINO 2024.5, FPS (more is better). AI/ML Tuning Recommendations: 10800.12 (SE +/- 6.08, N = 3). Stock: 10525.86 (SE +/- 12.77, N = 3).

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenVINO 2024.5, FPS (more is better). AI/ML Tuning Recommendations: 3649.63 (SE +/- 3.35, N = 3). Stock: 3559.87 (SE +/- 2.96, N = 3).

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenVINO 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 26.28 (SE +/- 0.02, N = 3; MIN 15.86 / MAX 40.89). Stock: 26.94 (SE +/- 0.02, N = 3; MIN 15.65 / MAX 45.15).

LiteRT

Model: Mobilenet Float

LiteRT 2024-10-15, Microseconds (fewer is better). AI/ML Tuning Recommendations: 4335.67 (SE +/- 7.66, N = 3). Stock: 4438.52 (SE +/- 11.59, N = 3).

XNNPACK

Model: QS8MobileNetV2

XNNPACK b7b048, us (fewer is better). AI/ML Tuning Recommendations: 9813 (SE +/- 87.21, N = 3). Stock: 10042 (SE +/- 154.48, N = 3).

OpenVINO

Model: Noise Suppression Poconet-Like FP16 - Device: CPU

OpenVINO 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 13.38 (SE +/- 0.02, N = 3; MIN 6.98 / MAX 34.62). Stock: 13.68 (SE +/- 0.02, N = 3; MIN 7.09 / MAX 36.01).

OpenVINO

Model: Noise Suppression Poconet-Like FP16 - Device: CPU

OpenVINO 2024.5, FPS (more is better). AI/ML Tuning Recommendations: 6891.64 (SE +/- 9.90, N = 3). Stock: 6747.48 (SE +/- 11.12, N = 3).

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2024.5, FPS (more is better). AI/ML Tuning Recommendations: 730.73 (SE +/- 0.74, N = 3). Stock: 716.20 (SE +/- 0.66, N = 3).

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 65.56 (SE +/- 0.07, N = 3; MIN 32.6 / MAX 131.97). Stock: 66.89 (SE +/- 0.06, N = 3; MIN 34.58 / MAX 130).

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

TensorFlow 2.16.1, images/sec (more is better). AI/ML Tuning Recommendations: 235.35 (SE +/- 0.20, N = 3). Stock: 231.18 (SE +/- 0.28, N = 3).

Whisperfile

Model Size: Medium

Whisperfile 20Aug24, Seconds (fewer is better). AI/ML Tuning Recommendations: 197.24 (SE +/- 0.71, N = 3). Stock: 200.48 (SE +/- 0.88, N = 3).

LiteRT

Model: SqueezeNet

LiteRT 2024-10-15, Microseconds (fewer is better). AI/ML Tuning Recommendations: 6926.88 (SE +/- 31.55, N = 3). Stock: 7035.74 (SE +/- 31.31, N = 3).

XNNPACK

Model: FP32MobileNetV2

XNNPACK b7b048, us (fewer is better). AI/ML Tuning Recommendations: 9062 (SE +/- 25.31, N = 3). Stock: 9203 (SE +/- 32.13, N = 3).

XNNPACK

Model: FP32MobileNetV1

XNNPACK b7b048, us (fewer is better). AI/ML Tuning Recommendations: 4471 (SE +/- 54.67, N = 3). Stock: 4539 (SE +/- 42.62, N = 3).

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

TensorFlow 2.16.1, images/sec (more is better). AI/ML Tuning Recommendations: 207.38 (SE +/- 0.72, N = 3). Stock: 204.33 (SE +/- 0.57, N = 3).

OpenVINO GenAI

Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU

OpenVINO GenAI 2024.5, tokens/s (more is better). AI/ML Tuning Recommendations: 56.46 (SE +/- 0.14, N = 4). Stock: 55.63 (SE +/- 0.16, N = 4).

Whisperfile

Model Size: Tiny

Whisperfile 20Aug24, Seconds (fewer is better). AI/ML Tuning Recommendations: 31.44 (SE +/- 0.25, N = 3). Stock: 31.89 (SE +/- 0.26, N = 3).

Llama.cpp

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Text Generation 128

Llama.cpp b4154, Tokens Per Second (more is better). AI/ML Tuning Recommendations: 48.73 (SE +/- 0.05, N = 4). Stock: 48.07 (SE +/- 0.07, N = 4).

Llama.cpp

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Text Generation 128

Llama.cpp b4154, Tokens Per Second (more is better). AI/ML Tuning Recommendations: 46.45 (SE +/- 0.09, N = 4). Stock: 45.84 (SE +/- 0.05, N = 4).

Whisper.cpp

Model: ggml-medium.en - Input: 2016 State of the Union

Whisper.cpp 1.6.2, Seconds (fewer is better). AI/ML Tuning Recommendations: 449.70 (SE +/- 1.24, N = 3). Stock: 454.29 (SE +/- 1.45, N = 3).

XNNPACK

Model: FP32MobileNetV3Small

XNNPACK b7b048, us (fewer is better). AI/ML Tuning Recommendations: 10387 (SE +/- 28.22, N = 3). Stock: 10488 (SE +/- 11.35, N = 3).

Llama.cpp

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 2048

Llama.cpp b4154, Tokens Per Second (more is better). AI/ML Tuning Recommendations: 150.29 (SE +/- 1.36, N = 15). Stock: 149.00 (SE +/- 2.06, N = 3).

oneDNN

Harness: Deconvolution Batch shapes_1d - Engine: CPU

oneDNN 3.6, ms (fewer is better). AI/ML Tuning Recommendations: 6.65482 (SE +/- 0.01789, N = 3; MIN 3.91). Stock: 6.70897 (SE +/- 0.03430, N = 3; MIN 6.07).

XNNPACK

Model: FP16MobileNetV2

XNNPACK b7b048, us (fewer is better). AI/ML Tuning Recommendations: 9028 (SE +/- 94.57, N = 3). Stock: 9092 (SE +/- 89.20, N = 3).

Llama.cpp

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 2048

Llama.cpp b4154, Tokens Per Second (more is better). AI/ML Tuning Recommendations: 308.32 (SE +/- 2.23, N = 15). Stock: 306.51 (SE +/- 2.62, N = 3).

OpenVINO GenAI

Model: Gemma-7b-int4-ov - Device: CPU

OpenVINO GenAI 2024.5, tokens/s (more is better). AI/ML Tuning Recommendations: 38.00 (SE +/- 0.09, N = 3). Stock: 37.84 (SE +/- 0.13, N = 3).

Numpy Benchmark

Numpy Benchmark, Score (more is better). AI/ML Tuning Recommendations: 887.75 (SE +/- 0.72, N = 3). Stock: 885.50 (SE +/- 1.94, N = 3).

OpenVINO GenAI

Model: Falcon-7b-instruct-int4-ov - Device: CPU

OpenVINO GenAI 2024.5, tokens/s (more is better). AI/ML Tuning Recommendations: 51.12 (SE +/- 0.19, N = 3). Stock: 51.01 (SE +/- 0.09, N = 3).

LiteRT

Model: Inception V4

LiteRT 2024-10-15, Microseconds (fewer is better). AI/ML Tuning Recommendations: 43824.9 (SE +/- 159.42, N = 3). Stock: 43898.9 (SE +/- 47.06, N = 3).

XNNPACK

System / CPU Power Consumption Monitor

XNNPACK b7b048, Watts (min / avg / max):
System: Stock 99.6 / 377.9 / 403.8; AI/ML Tuning Recommendations 100.4 / 424.2 / 458.8
CPU: Stock 1.9 / 241.3 / 263.5; AI/ML Tuning Recommendations 0.0 / 279.8 / 305.1

XNNPACK

Model: FP16MobileNetV3Small

XNNPACK b7b048, us (fewer is better). AI/ML Tuning Recommendations: 10323 (SE +/- 21.33, N = 3). Stock: 10966 (SE +/- 410.82, N = 3).
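
Pairing this result with the XNNPACK power monitor above gives a rough efficiency view. An illustrative Python calculation, with the caveat (an assumption, not a monitored value) that the average CPU power over the whole XNNPACK run is representative of this one sub-test:

    # Rough perf-per-Watt: FP16MobileNetV3Small latency (us) -> inferences/s,
    # divided by the average CPU power from the monitor table above.
    cases = {
        "Stock":                        (10966.0, 241.3),  # (latency_us, avg_cpu_W)
        "AI/ML Tuning Recommendations": (10323.0, 279.8),
    }
    for name, (lat_us, watts) in cases.items():
        ips = 1e6 / lat_us
        print(f"{name}: {ips:.1f} inf/s, {ips / watts:.3f} inf/s per Watt")

On these numbers the tuned configuration finishes the sub-test faster but draws enough extra CPU power that its inferences per Watt come out lower than stock.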

ONNX Runtime

System / CPU Power Consumption Monitor

ONNX Runtime 1.19, Watts (min / avg / max):
System: Stock 98.9 / 429.9 / 477.0; AI/ML Tuning Recommendations 98.5 / 460.1 / 509.9
CPU: Stock 3.0 / 248.9 / 279.9; AI/ML Tuning Recommendations 77.0 / 270.8 / 300.9

ONNX Runtime

Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard

ONNX Runtime 1.19, Inference Time Cost in ms (fewer is better). AI/ML Tuning Recommendations: 158.53 (SE +/- 3.81, N = 15). Stock: 165.09 (SE +/- 3.66, N = 15).

ONNX Runtime

Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard

ONNX Runtime 1.19, Inferences Per Second (more is better). AI/ML Tuning Recommendations: 6.35626 (SE +/- 0.14383, N = 15). Stock: 6.09788 (SE +/- 0.13191, N = 15).

ONNX Runtime

System / CPU Power Consumption Monitor

ONNX Runtime 1.19, Watts (min / avg / max):
System: Stock 98.4 / 328.4 / 350.0; AI/ML Tuning Recommendations 99.3 / 375.5 / 395.9
CPU: Stock 1.5 / 203.6 / 221.2; AI/ML Tuning Recommendations 82.2 / 237.3 / 256.8

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.19, Inference Time Cost in ms (fewer is better). AI/ML Tuning Recommendations: 3.50697 (SE +/- 0.01794, N = 3). Stock: 3.62341 (SE +/- 0.03395, N = 7).

Numpy Benchmark

System / CPU Power Consumption Monitor

Numpy Benchmark, Watts (min / avg / max):
System: Stock 97.9 / 171.2 / 178.1; AI/ML Tuning Recommendations 98.9 / 176.8 / 188.2
CPU: Stock 3.0 / 78.8 / 85.7; AI/ML Tuning Recommendations 47.9 / 80.7 / 85.3

oneDNN

System / CPU Power Consumption Monitors

oneDNN 3.6, Watts (min / avg / max), one System/CPU monitor pair per harness run; Tuned = AI/ML Tuning Recommendations:

Run   System, Stock          System, Tuned          CPU, Stock            CPU, Tuned
1     98.9 / 329.4 / 466.9   98.9 / 361.1 / 498.3   0.1 / 197.3 / 289.9   127.7 / 222.2 / 267.0
2     97.3 / 337.3 / 448.7   97.8 / 365.3 / 470.9   1.1 / 203.6 / 264.7   103.9 / 226.6 / 275.0
3     97.5 / 274.5 / 366.1   97.2 / 303.0 / 419.6   0.3 / 155.2 / 226.2   85.8 / 188.4 / 252.5
4     97.4 / 255.0 / 348.1   97.5 / 268.5 / 401.0   3.8 / 146.3 / 204.5   72.3 / 169.6 / 230.8
5     97.3 / 220.2 / 359.1   97.1 / 186.6 / 377.1   3.3 / 107.2 / 148.5   103.2 / 116.5 / 135.9
6     96.9 / 310.1 / 409.7   98.1 / 331.7 / 435.9   2.2 / 174.0 / 251.2   100.5 / 202.2 / 263.1
7     97.9 / 288.6 / 451.9   98.8 / 244.9 / 408.8   1.4 / 147.2 / 241.7   79.0 / 177.2 / 273.9

PyTorch

System / CPU Power Consumption Monitors

PyTorch 2.2.1, Watts (min / avg / max), one System/CPU monitor pair per run; Tuned = AI/ML Tuning Recommendations:

Run   System, Stock          System, Tuned          CPU, Stock            CPU, Tuned
1     98.2 / 427.3 / 460.6   98.8 / 480.2 / 527.5   52.1 / 269.5 / 294.9  58.3 / 313.0 / 344.4
2     98.8 / 395.8 / 456.8   99.0 / 441.7 / 516.7   1.9 / 237.5 / 291.1   53.6 / 285.5 / 336.3
3     96.8 / 426.1 / 462.5   95.7 / 486.4 / 526.2   0.7 / 264.7 / 294.6   52.9 / 311.9 / 342.9
4     98.0 / 393.2 / 455.2   98.4 / 440.0 / 515.9   2.9 / 234.6 / 290.6   55.1 / 281.1 / 335.8

LiteRT

System / CPU Power Consumption Monitors

LiteRT 2024-10-15, Watts (min / avg / max), one System/CPU monitor pair per run; Tuned = AI/ML Tuning Recommendations:

Run   System, Stock          System, Tuned          CPU, Stock            CPU, Tuned
1     98.4 / 400.2 / 424.0   98.1 / 454.4 / 495.8   1.4 / 243.5 / 272.8   101.0 / 303.8 / 326.5
2     99.2 / 373.3 / 408.2   99.6 / 442.1 / 460.5   3.6 / 237.5 / 264.4   87.0 / 279.8 / 303.9
3     97.1 / 339.2 / 364.8   97.7 / 390.9 / 414.6   3.7 / 221.3 / 238.5   99.2 / 261.1 / 279.6

LiteRT

Model: NASNet Mobile

LiteRT 2024-10-15, Microseconds (fewer is better). AI/ML Tuning Recommendations: 689396 (SE +/- 22050.37, N = 12). Stock: 733737 (SE +/- 17324.48, N = 15).

LiteRT

System / CPU Power Consumption Monitor

LiteRT 2024-10-15, Watts (min / avg / max):
System: Stock 101.7 / 395.1 / 407.3; AI/ML Tuning Recommendations 100.3 / 420.8 / 461.0
CPU: Stock 1.1 / 236.1 / 263.3; AI/ML Tuning Recommendations 124.4 / 282.6 / 304.2

TensorFlow

System / CPU Power Consumption Monitors

TensorFlow 2.16.1, Watts (min / avg / max), one System/CPU monitor pair per run; Tuned = AI/ML Tuning Recommendations:

Run   System, Stock           System, Tuned           CPU, Stock           CPU, Tuned
1     100.5 / 472.2 / 518.6   101.6 / 487.8 / 535.3   1.3 / 254.8 / 268.0  53.7 / 264.7 / 276.0
2     99.1 / 442.9 / 475.7    98.6 / 460.6 / 494.4    4.1 / 240.6 / 255.8  53.0 / 253.1 / 264.6

Whisper.cpp

System / CPU Power Consumption Monitors

Whisper.cpp 1.6.2, Watts (min / avg / max), one System/CPU monitor pair per run; Tuned = AI/ML Tuning Recommendations:

Run   System, Stock          System, Tuned          CPU, Stock            CPU, Tuned
1     98.2 / 389.5 / 469.4   98.7 / 440.8 / 504.3   3.4 / 244.1 / 261.6   46.7 / 283.0 / 299.0
2     98.5 / 357.9 / 406.6   98.5 / 391.4 / 464.3   0.8 / 220.1 / 235.8   57.9 / 253.6 / 273.4

Whisperfile

System / CPU Power Consumption Monitors

Whisperfile 20Aug24, Watts (min / avg / max), one System/CPU monitor pair per run; Tuned = AI/ML Tuning Recommendations:

Run   System, Stock          System, Tuned          CPU, Stock            CPU, Tuned
1     98.2 / 340.8 / 387.8   98.4 / 361.7 / 409.6   3.1 / 188.6 / 206.7   52.4 / 204.4 / 221.7
2     99.1 / 327.7 / 371.6   98.0 / 351.0 / 405.4   5.5 / 180.6 / 208.1   46.8 / 200.1 / 231.1
3     99.3 / 256.2 / 339.0   98.6 / 285.0 / 376.0   2.7 / 137.0 / 193.3   0.4 / 155.0 / 217.0

Llama.cpp

System / CPU Power Consumption Monitors

Llama.cpp b4154, Watts (min / avg / max), one System/CPU monitor pair per run; Tuned = AI/ML Tuning Recommendations:

Run   System, Stock          System, Tuned          CPU, Stock            CPU, Tuned
1     99.2 / 415.9 / 506.6   98.9 / 463.4 / 547.0   2.7 / 248.2 / 291.5   86.4 / 288.4 / 323.2
2     98.9 / 375.5 / 446.8   98.9 / 444.6 / 524.3   3.1 / 232.5 / 266.0   92.1 / 285.4 / 337.6
3     98.9 / 372.0 / 414.5   100.3 / 442.0 / 512.9  9.9 / 224.4 / 269.8   94.7 / 289.4 / 335.4
4     98 / 428 / 552         99 / 462 / 644         6.0 / 238.9 / 331.9   56.8 / 293.0 / 399.7
5     98.1 / 411.1 / 499.8   98.2 / 467.8 / 533.6   12.7 / 254.5 / 302.2  95.6 / 307.1 / 351.1
6     97.9 / 371.5 / 450.0   97.3 / 433.6 / 515.3   14.5 / 244.4 / 300.1  97.3 / 292.2 / 349.7

Llama.cpp

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 512

Llama.cpp b4154, Tokens Per Second (more is better). AI/ML Tuning Recommendations: 155.73 (SE +/- 3.23, N = 12). Stock: 154.60 (SE +/- 2.60, N = 12).

Llama.cpp

System / CPU Power Consumption Monitor

Llama.cpp b4154, Watts (min / avg / max):
System: Stock 96.7 / 256.1 / 410.9; AI/ML Tuning Recommendations 99.9 / 323.5 / 464.8
CPU: Stock 11.3 / 165.3 / 269.9; AI/ML Tuning Recommendations 86.4 / 203.0 / 305.1

Llama.cpp

System / CPU Power Consumption Monitors

Llama.cpp b4154, Watts (min / avg / max), one System/CPU monitor pair per run; Tuned = AI/ML Tuning Recommendations:

Run   System, Stock          System, Tuned           CPU, Stock            CPU, Tuned
1     97.9 / 410.1 / 502.2   100.3 / 454.2 / 543.8   11.3 / 247.5 / 289.5  114.5 / 287.8 / 315.6
2     98.5 / 382.8 / 438.4   98.4 / 450.9 / 513.6    17.5 / 231.5 / 260.6  123.4 / 289.4 / 326.4
3     98.4 / 365.8 / 443.9   100.4 / 456.3 / 525.6   7.3 / 221.9 / 263.5   126.7 / 294.6 / 332.7
4     98 / 390 / 558         99 / 536 / 650          0.6 / 236.1 / 334.7   57.2 / 321.1 / 401.3

OpenVINO GenAI

System / CPU Power Consumption Monitor

OpenVINO GenAI 2024.5, Watts (min / avg / max):
System: Stock 99.9 / 420.1 / 486.7; AI/ML Tuning Recommendations 97.9 / 483.3 / 551.8
CPU: Stock 14.0 / 235.6 / 293.0; AI/ML Tuning Recommendations 46.1 / 291.1 / 341.4

OpenVINO GenAI

Model: Gemma-7b-int4-ov - Device: CPU - Time Per Output Token

OpenVINO GenAI 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 26.31 (SE +/- 0.06, N = 3). Stock: 26.43 (SE +/- 0.09, N = 3).

OpenVINO GenAI

Model: Gemma-7b-int4-ov - Device: CPU - Time To First Token

OpenVINO GenAI 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 35.43 (SE +/- 0.08, N = 3). Stock: 36.13 (SE +/- 0.20, N = 3).

OpenVINO GenAI

System / CPU Power Consumption Monitor

OpenVINO GenAI 2024.5, Watts (min / avg / max):
System: Stock 98.2 / 426.7 / 509.8; AI/ML Tuning Recommendations 98.1 / 490.1 / 585.6
CPU: Stock 6.1 / 239.0 / 309.8; AI/ML Tuning Recommendations 52.7 / 293.6 / 364.5

OpenVINO GenAI

Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time Per Output Token

OpenVINO GenAI 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 19.56 (SE +/- 0.07, N = 3). Stock: 19.60 (SE +/- 0.04, N = 3).

OpenVINO GenAI

Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time To First Token

OpenVINO GenAI 2024.5, ms (fewer is better). Stock: 29.07 (SE +/- 0.03, N = 3). AI/ML Tuning Recommendations: 30.70 (SE +/- 0.18, N = 3).

OpenVINO GenAI

System / CPU Power Consumption Monitor

OpenVINO GenAI 2024.5, Watts (min / avg / max):
System: Stock 99.1 / 331.8 / 453.9; AI/ML Tuning Recommendations 98.1 / 380.9 / 543.2
CPU: Stock 0.8 / 190.8 / 280.5; AI/ML Tuning Recommendations 69.4 / 248.5 / 348.6

OpenVINO GenAI

Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time Per Output Token

OpenVINO GenAI 2024.5, ms (fewer is better). AI/ML Tuning Recommendations: 17.71 (SE +/- 0.04, N = 4). Stock: 17.98 (SE +/- 0.05, N = 4).

OpenVINO GenAI

Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time To First Token

OpenVINO GenAI 2024.5, ms (fewer is better). Stock: 24.17 (SE +/- 0.14, N = 4). AI/ML Tuning Recommendations: 25.66 (SE +/- 0.07, N = 4).
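
The GenAI throughput and time-per-output-token figures are two views of the same decode measurement: the latency in milliseconds is the reciprocal of tokens per second. A quick Python check against the Phi-3 numbers above:

    # TPOT (ms) should equal 1000 / decode throughput (tokens/s).
    for name, tok_s, reported_ms in [
        ("AI/ML Tuning Recommendations", 56.46, 17.71),
        ("Stock",                        55.63, 17.98),
    ]:
        print(f"{name}: 1000 / {tok_s} = {1000 / tok_s:.2f} ms (reported {reported_ms} ms)")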

OpenVINO

System / CPU Power Consumption Monitors

OpenVINO 2024.5, Watts (min / avg / max), one System/CPU monitor pair per run; Tuned = AI/ML Tuning Recommendations:

Run   System, Stock           System, Tuned           CPU, Stock             CPU, Tuned
1     98.8 / 523.1 / 566.9    99.0 / 545.2 / 588.7    26.6 / 283.4 / 308.8   106.9 / 299.0 / 322.1
2     100.6 / 506.3 / 535.9   100.0 / 523.2 / 551.8   0.2 / 301.5 / 333.3    81.7 / 320.5 / 341.9
3     98.2 / 485.5 / 529.0    101.1 / 512.1 / 561.7   5.3 / 278.3 / 306.1    81.9 / 300.1 / 323.0
4     99.3 / 549.0 / 578.2    100.2 / 560.5 / 599.6   6.9 / 317.3 / 351.6    117.3 / 335.4 / 361.8
5     101.1 / 472.8 / 504.1   101.2 / 511.1 / 536.6   13.5 / 282.0 / 312.1   90.1 / 309.0 / 331.5
6     100 / 556 / 612         99 / 584 / 636          16.6 / 313.1 / 351.4   79.4 / 334.8 / 365.0
7     99.3 / 441.8 / 469.2    98.7 / 461.9 / 496.8    32.7 / 255.4 / 281.9   79.1 / 276.2 / 299.8
8     100.0 / 485.3 / 521.8   99.2 / 521.4 / 553.7    49.3 / 290.5 / 319.5   132.1 / 314.2 / 336.5
9     102.3 / 440.1 / 472.4   105.1 / 501.8 / 525.4   83.1 / 271.0 / 295.4   94.6 / 304.8 / 328.6
10    100 / 594 / 647         100 / 615 / 672         62.9 / 324.7 / 356.2   78.6 / 339.7 / 368.6
11    99.1 / 436.2 / 470.9    100.1 / 457.0 / 491.2   1.5 / 262.5 / 291.6    3.7 / 275.9 / 306.1


Phoronix Test Suite v10.8.5