kubuntu-2404-nvme

ARMv8 Cortex-A76 testing with a Raspberry Pi 5 Model B Rev 1.0 and V3D 7.1.7 8GB on Ubuntu 24.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2410133-NE-KUBUNTU2425&gru.

System under test (kubuntu-2404-nvme):

Processor: ARMv8 Cortex-A76 @ 2.40GHz (4 Cores)
Motherboard: Raspberry Pi 5 Model B Rev 1.0
Chipset: Broadcom BCM2712
Memory: 8GB
Disk: 500GB KINGSTON SNV2S500G
Graphics: V3D 7.1.7 8GB
Monitor: PA247CV
Network: Raspberry Pi RP1 PCIe 2.0 South Bridge
OS: Ubuntu 24.04
Kernel: 6.8.0-1012-raspi (aarch64)
Desktop: KDE Plasma 5.27.11
Display Server: X Server 1.21.1.11
OpenGL: 3.1 Mesa 24.0.9-0ubuntu0.1
File-System: ext4
Screen Resolution: 1920x1080

Notes (OpenBenchmarking.org):
- Transparent Huge Pages: madvise
- Scaling Governor: cpufreq-dt ondemand
- Security: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; reg_file_data_sampling: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of __user pointer sanitization; spectre_v2: Mitigation of CSV2 BHB; srbds: Not affected; tsx_async_abort: Not affected
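The notes above record the scaling governor and transparent huge page settings in effect for this run. Before comparing numbers against another Linux box, it can help to read the same tunables there; a minimal sketch (the sysfs paths are the standard Linux locations, and may be absent in containers or on non-Linux systems):

```python
from pathlib import Path

# Standard Linux sysfs locations for the tunables recorded in the notes above.
# These paths may not exist in containers or on non-Linux systems.
TUNABLES = [
    "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor",
    "/sys/kernel/mm/transparent_hugepage/enabled",
]

def read_tunable(path: str) -> str:
    """Return the tunable's current value, or 'unavailable' if unreadable."""
    try:
        return Path(path).read_text().strip()
    except OSError:
        return "unavailable"

for path in TUNABLES:
    print(f"{path}: {read_tunable(path)}")
```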

[Consolidated side-by-side results table from the HTML export, covering ONNX Runtime, NumPy, Llama.cpp, Llamafile, TensorFlow Lite, Mobile Neural Network, NCNN (CPU and Vulkan GPU), OpenCV, oneDNN, the timed Apache and FFmpeg compilations, R Benchmark, Whisper.cpp, Whisperfile, and XNNPACK. The test identifiers and values ran together during extraction and cannot be reliably separated; the per-test sections below report the individually recoverable figures.]

CPU Temperature Monitor

Phoronix Test Suite System Monitoring

ARMv8 Cortex-A76 (Celsius): Min: 44.65 / Avg: 55.93 / Max: 74.35

System Temperature Monitor

ARMv8 Cortex-A76 (Celsius): Min: 44.7 / Avg: 55.89 / Max: 73.8

Memory Usage Monitor

ARMv8 Cortex-A76 (Megabytes): Min: 247 / Avg: 1954.03 / Max: 3664

CPU Peak Freq (Highest CPU Core Frequency) Monitor

ARMv8 Cortex-A76 (Megahertz): Min: 1500 / Avg: 2086.36 / Max: 2400

CPU Usage (Summary) Monitor

ARMv8 Cortex-A76 (Percent): Min: 0 / Avg: 87.88 / Max: 100

CPU Fan Speed Monitor

ARMv8 Cortex-A76 (RPM): Min: 11 / Avg: 4822.05 / Max: 8294

System Fan Speed Monitor

ARMv8 Cortex-A76 (RPM): Min: 11 / Avg: 4822.03 / Max: 8294
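Each monitor summary above reduces a sampled time series to a Min / Avg / Max triple. A minimal sketch for pulling those triples out of the exported text (the regex and the sample line reflect this export's formatting, which is an assumption, not part of the Phoronix Test Suite itself):

```python
import re

# Pattern for the "Min: X / Avg: Y / Max: Z" summaries in this export.
SUMMARY = re.compile(
    r"Min:\s*(?P<min>[\d.]+)\s*/\s*Avg:\s*(?P<avg>[\d.]+)\s*/\s*Max:\s*(?P<max>[\d.]+)"
)

def parse_summary(line: str) -> dict:
    """Return the min/avg/max triple from a monitor summary line as floats."""
    m = SUMMARY.search(line)
    if m is None:
        raise ValueError(f"no Min/Avg/Max triple in: {line!r}")
    return {k: float(v) for k, v in m.groupdict().items()}

cpu_temp = parse_summary("Min: 44.65 / Avg: 55.93 / Max: 74.35")
print(cpu_temp)  # {'min': 44.65, 'avg': 55.93, 'max': 74.35}
```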

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Parallel

ARMv8 Cortex-A76: 0.533576 Inferences Per Second (ONNX Runtime 1.19, More Is Better)
(CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel

ARMv8 Cortex-A76: 0.100419 Inferences Per Second (ONNX Runtime 1.19, More Is Better)
(CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Parallel

ARMv8 Cortex-A76: 8.69434 Inferences Per Second (ONNX Runtime 1.19, More Is Better)
(CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel

ARMv8 Cortex-A76: 12.50 Inferences Per Second (ONNX Runtime 1.19, More Is Better)
(CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel

ARMv8 Cortex-A76: 45.17 Inferences Per Second (ONNX Runtime 1.19, More Is Better)
(CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: T5 Encoder - Device: CPU - Executor: Parallel

ARMv8 Cortex-A76: 16.91 Inferences Per Second (ONNX Runtime 1.19, More Is Better)
(CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ZFNet-512 - Device: CPU - Executor: Parallel

ARMv8 Cortex-A76: 9.59958 Inferences Per Second (ONNX Runtime 1.19, More Is Better)
(CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel

ARMv8 Cortex-A76: 0.0432072 Inferences Per Second (ONNX Runtime 1.19, More Is Better)
(CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
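The ONNX Runtime results above are reported as throughput in inferences per second; the equivalent per-inference latency is just the reciprocal. A quick conversion sketch, using the yolov4 figure from above:

```python
def throughput_to_latency_ms(inferences_per_second: float) -> float:
    """Convert throughput (inferences/sec) to per-inference latency in ms."""
    return 1000.0 / inferences_per_second

# yolov4 - CPU - Parallel: 0.533576 inferences/sec is roughly 1874 ms per inference.
print(round(throughput_to_latency_ms(0.533576), 1))
```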

CPU Peak Freq (Highest CPU Core Frequency) Monitor

Per-test peak CPU core frequency in Megahertz, ARMv8 Cortex-A76. Repeated test names are separate runs within this result file:

Test                                   Min    Avg      Max
Timed Apache Compilation 2.4.41        1500   2285     2400
Timed FFmpeg Compilation 7.0           1500   2386     2400
Mobile Neural Network 2.9.b11b7037d    1500   2392     2400
NCNN 20230517                          1500   2369     2400
NCNN 20230517                          1500   2370     2400
OpenCV 4.7                             1500   2392     2400
OpenCV 4.7                             1500   2381     2400
OpenCV 4.7                             1500   2337     2400
OpenCV 4.7                             1500   2384     2400
OpenCV 4.7                             1500   2392     2400
OpenCV 4.7                             1500   2390     2400
OpenCV 4.7                             1500   2372     2400
R Benchmark                            1500   2358     2400
Numpy Benchmark                        1500   2387     2400
TensorFlow Lite 2022-05-18             1500   2331     2400
TensorFlow Lite 2022-05-18             1500   2343     2400
TensorFlow Lite 2022-05-18             1500   2330     2400
TensorFlow Lite 2022-05-18             1500   2344     2400
TensorFlow Lite 2022-05-18             1500   2331     2400
TensorFlow Lite 2022-05-18             1500   2333     2400
oneDNN 3.4                             1500   2082     2400
oneDNN 3.4                             1500   2230     2400
oneDNN 3.4                             1500   2043     2400
oneDNN 3.4                             1500   2225     2400
oneDNN 3.4                             1500   2100     2400
oneDNN 3.4                             1500   2386     2400
oneDNN 3.4                             1500   2384     2400
ONNX Runtime 1.19                      1500   2332     2400
ONNX Runtime 1.19                      1500   2337     2400
ONNX Runtime 1.19                      1500   2317     2400
ONNX Runtime 1.19                      1500   2305     2400
ONNX Runtime 1.19                      1500   2331     2400
ONNX Runtime 1.19                      1500   2316     2400
ONNX Runtime 1.19                      1500   2318     2400
ONNX Runtime 1.19                      1500   2341     2400
Whisper.cpp 1.6.2                      1500   2384     2400
Whisper.cpp 1.6.2                      1500   2395.06  2400
Whisper.cpp 1.6.2                      1500   2398.08  2400
Whisperfile 20Aug24                    1500   2366     2400
Whisperfile 20Aug24                    1500   2393.29  2400
Whisperfile 20Aug24                    1500   2397.76  2400
Llama.cpp b3067                        1500   2033.32  2400
Llamafile 0.8.6                        1500   2377     2400
Llamafile 0.8.6                        1500   2392     2400
Llamafile 0.8.6                        1500   1750     2400
Llamafile 0.8.6                        1600   2055     2400
Llamafile 0.8.6                        1500   2384     2400
Llamafile 0.8.6                        1500   2380     2400
XNNPACK 2cd86b                         1500   2383     2400

Numpy Benchmark

ARMv8 Cortex-A76: 136.85 (Score, More Is Better)

Llama.cpp

Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf

ARMv8 Cortex-A76: 0.12 Tokens Per Second (Llama.cpp b3067, More Is Better)
(CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -mcpu=native -lopenblas

Llamafile

Test: mistral-7b-instruct-v0.2.Q5_K_M - Acceleration: CPU

ARMv8 Cortex-A76: 1.89 Tokens Per Second (Llamafile 0.8.6, More Is Better)

Llamafile

Test: mistral-7b-instruct-v0.2.Q5_K_M - Acceleration: GPU AUTO

ARMv8 Cortex-A76: 1.88 Tokens Per Second (Llamafile 0.8.6, More Is Better)

Llamafile

Test: TinyLlama-1.1B-Chat-v1.0.BF16 - Acceleration: CPU

ARMv8 Cortex-A76: 15 Tokens Per Second (Llamafile 0.8.6, More Is Better)

Llamafile

Test: TinyLlama-1.1B-Chat-v1.0.BF16 - Acceleration: GPU AUTO

ARMv8 Cortex-A76: 4.38 Tokens Per Second (Llamafile 0.8.6, More Is Better)

Llamafile

Test: llava-v1.6-mistral-7b.Q8_0 - Acceleration: CPU

ARMv8 Cortex-A76: 1.96 Tokens Per Second (Llamafile 0.8.6, More Is Better)

Llamafile

Test: llava-v1.6-mistral-7b.Q8_0 - Acceleration: GPU AUTO

ARMv8 Cortex-A76: 1.95 Tokens Per Second (Llamafile 0.8.6, More Is Better)

Timed Apache Compilation

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7642.551.457.3OpenBenchmarking.orgCelsius, Fewer Is BetterTimed Apache Compilation 2.4.41CPU Temperature Monitor1632486480

Timed Apache Compilation

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7642.551.456.8OpenBenchmarking.orgCelsius, Fewer Is BetterTimed Apache Compilation 2.4.41System Temperature Monitor1632486480

Timed FFmpeg Compilation

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7645.858.862.3OpenBenchmarking.orgCelsius, Fewer Is BetterTimed FFmpeg Compilation 7.0CPU Temperature Monitor20406080100

Timed FFmpeg Compilation

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7645.258.962.3OpenBenchmarking.orgCelsius, Fewer Is BetterTimed FFmpeg Compilation 7.0System Temperature Monitor20406080100

Mobile Neural Network

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7647.461.364.5OpenBenchmarking.orgCelsius, Fewer Is BetterMobile Neural Network 2.9.b11b7037dCPU Temperature Monitor20406080100

Mobile Neural Network

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7645.861.463.9OpenBenchmarking.orgCelsius, Fewer Is BetterMobile Neural Network 2.9.b11b7037dSystem Temperature Monitor20406080100

NCNN

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7650.262.268.3OpenBenchmarking.orgCelsius, Fewer Is BetterNCNN 20230517CPU Temperature Monitor20406080100

NCNN

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7650.262.167.8OpenBenchmarking.orgCelsius, Fewer Is BetterNCNN 20230517System Temperature Monitor20406080100

NCNN

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7651.362.367.8OpenBenchmarking.orgCelsius, Fewer Is BetterNCNN 20230517CPU Temperature Monitor20406080100

NCNN

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7651.362.467.8OpenBenchmarking.orgCelsius, Fewer Is BetterNCNN 20230517System Temperature Monitor20406080100

OpenCV

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7650.764.471.6OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7CPU Temperature Monitor20406080100

OpenCV

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7650.764.371.6OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7System Temperature Monitor20406080100

OpenCV

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7649.157.763.9OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7CPU Temperature Monitor20406080100

OpenCV

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7650.257.765.0OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7System Temperature Monitor20406080100

OpenCV

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7649.655.561.2OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7CPU Temperature Monitor20406080100

OpenCV

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7650.755.561.2OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7System Temperature Monitor20406080100

OpenCV

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7649.154.262.3OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7CPU Temperature Monitor20406080100

OpenCV

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7649.154.361.2OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7System Temperature Monitor20406080100

OpenCV

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7648.056.863.9OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7CPU Temperature Monitor20406080100

OpenCV

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7650.256.865.6OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7System Temperature Monitor20406080100

OpenCV

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7651.356.260.6OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7CPU Temperature Monitor20406080100

OpenCV

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7649.656.260.1OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7System Temperature Monitor1632486480

OpenCV

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7649.658.361.2OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7CPU Temperature Monitor20406080100

OpenCV

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7649.658.461.7OpenBenchmarking.orgCelsius, Fewer Is BetterOpenCV 4.7System Temperature Monitor20406080100

R Benchmark

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7649.654.461.7OpenBenchmarking.orgCelsius, Fewer Is BetterR BenchmarkCPU Temperature Monitor20406080100

R Benchmark

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7649.654.461.2OpenBenchmarking.orgCelsius, Fewer Is BetterR BenchmarkSystem Temperature Monitor20406080100

Numpy Benchmark

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7648.053.356.8OpenBenchmarking.orgCelsius, Fewer Is BetterNumpy BenchmarkCPU Temperature Monitor1632486480

Numpy Benchmark

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7648.553.356.8OpenBenchmarking.orgCelsius, Fewer Is BetterNumpy BenchmarkSystem Temperature Monitor1632486480

TensorFlow Lite

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7646.361.565.6OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow Lite 2022-05-18CPU Temperature Monitor20406080100

TensorFlow Lite

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7645.861.765.6OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow Lite 2022-05-18System Temperature Monitor20406080100

TensorFlow Lite

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7649.661.764.5OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow Lite 2022-05-18CPU Temperature Monitor20406080100

TensorFlow Lite

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7649.661.464.5OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow Lite 2022-05-18System Temperature Monitor20406080100

TensorFlow Lite

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7651.360.563.4OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow Lite 2022-05-18CPU Temperature Monitor20406080100

TensorFlow Lite

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7650.760.763.9OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow Lite 2022-05-18System Temperature Monitor20406080100

TensorFlow Lite

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7649.664.067.8OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow Lite 2022-05-18CPU Temperature Monitor20406080100

TensorFlow Lite

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7650.264.067.8OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow Lite 2022-05-18System Temperature Monitor20406080100

TensorFlow Lite

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7651.369.574.4OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow Lite 2022-05-18CPU Temperature Monitor20406080100

TensorFlow Lite

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7651.369.273.8OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow Lite 2022-05-18System Temperature Monitor20406080100

TensorFlow Lite

CPU Temperature Monitor

MinAvgMaxARMv8 Cortex-A7652.469.674.4OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow Lite 2022-05-18CPU Temperature Monitor20406080100

TensorFlow Lite

System Temperature Monitor

MinAvgMaxARMv8 Cortex-A7652.969.673.8OpenBenchmarking.orgCelsius, Fewer Is BetterTensorFlow Lite 2022-05-18System Temperature Monitor20406080100

CPU / System Temperature Monitors (Celsius, fewer is better; one CPU/System pair per monitored test run)

Test                    Sensor   Min     Avg     Max
oneDNN 3.4              CPU      53.5    56.9    59.5
oneDNN 3.4              System   53.5    57.2    60.1
oneDNN 3.4              CPU      50.7    60.6    66.1
oneDNN 3.4              System   50.2    60.6    65.6
oneDNN 3.4              CPU      50.7    55.3    62.3
oneDNN 3.4              System   51.3    55.2    61.2
oneDNN 3.4              CPU      49.1    59.0    65.6
oneDNN 3.4              System   49.6    58.6    64.5
oneDNN 3.4              CPU      49.6    56.7    62.8
oneDNN 3.4              System   49.6    56.4    62.3
oneDNN 3.4              CPU      49.1    64.5    67.2
oneDNN 3.4              System   49.1    64.4    67.2
oneDNN 3.4              CPU      50.2    64.5    66.7
oneDNN 3.4              System   50.7    64.3    66.7
ONNX Runtime 1.19       CPU      52.9    63.0    66.1
ONNX Runtime 1.19       System   51.8    62.8    67.2
ONNX Runtime 1.19       CPU      51.3    63.8    67.8
ONNX Runtime 1.19       System   51.3    63.7    67.2
ONNX Runtime 1.19       CPU      51.3    64.3    68.3
ONNX Runtime 1.19       System   52.4    64.4    68.3
ONNX Runtime 1.19       CPU      48.5    59.4    63.4
ONNX Runtime 1.19       System   48.5    59.3    62.3
ONNX Runtime 1.19       CPU      49.1    60.0    63.4
ONNX Runtime 1.19       System   49.6    60.0    62.8
ONNX Runtime 1.19       CPU      49.1    58.2    61.2
ONNX Runtime 1.19       System   49.1    58.3    61.2
ONNX Runtime 1.19       CPU      50.2    60.6    65.0
ONNX Runtime 1.19       System   49.1    60.4    65.0
ONNX Runtime 1.19       CPU      49.1    61.8    65.6
ONNX Runtime 1.19       System   50.2    61.7    65.0
Whisper.cpp 1.6.2       CPU      48.5    65.2    68.9
Whisper.cpp 1.6.2       System   48.5    65.2    68.9
Whisper.cpp 1.6.2       CPU      50.7    68.83   72.7
Whisper.cpp 1.6.2       System   51.8    68.8    72.2
Whisper.cpp 1.6.2       CPU      53.45   70.54   74.35
Whisper.cpp 1.6.2       System   53.5    70.49   73.8
Whisperfile 20Aug24     CPU      54.0    63.6    67.2
Whisperfile 20Aug24     System   54.6    63.5    67.8
Whisperfile 20Aug24     CPU      50.7    64.69   69.4
Whisperfile 20Aug24     System   51.8    64.68   68.9
Whisperfile 20Aug24     CPU      51.8    65.1    70.5
Whisperfile 20Aug24     System   51.3    65.04   70.5
Llama.cpp b3067         CPU      45.2    52.56   56.75
Llama.cpp b3067         System   45.2    52.55   56.8
Llamafile 0.8.6         CPU      48.5    68.5    72.2
Llamafile 0.8.6         System   48.5    68.4    71.6
Llamafile 0.8.6         CPU      53.5    70.3    72.7
Llamafile 0.8.6         System   53.5    70.2    72.7
Llamafile 0.8.6         CPU      47.4    49.2    55.1
Llamafile 0.8.6         System   47.4    49.7    56.2
Llamafile 0.8.6         CPU      46.9    52.2    57.3
Llamafile 0.8.6         System   48.0    52.1    57.3
Llamafile 0.8.6         CPU      48.0    67.2    71.6
Llamafile 0.8.6         System   47.4    67.2    71.6
Llamafile 0.8.6         CPU      54.0    70.3    73.8
Llamafile 0.8.6         System   53.5    70.2    73.3
XNNPACK 2cd86b          CPU      47.4    62.9    67.8
XNNPACK 2cd86b          System   47.4    63.0    68.3
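The temperature figures above are min/avg/max summaries of periodic sensor polls taken while each test ran. On a Raspberry Pi running Linux, the same kind of summary can be reproduced by sampling the kernel's thermal sysfs interface; a minimal sketch (the sysfs path and the 10-sample/1-second interval are assumptions for illustration, not taken from this result file):

```python
import statistics
import time

# Assumed path: thermal_zone0 is typically the SoC sensor on a Raspberry Pi.
THERMAL_ZONE = "/sys/class/thermal/thermal_zone0/temp"

def read_temp_c(path=THERMAL_ZONE):
    """Read one temperature sample in degrees Celsius (sysfs reports millidegrees)."""
    with open(path) as f:
        return int(f.read().strip()) / 1000.0

def summarize(samples):
    """Return the (min, avg, max) triple reported in the monitor charts."""
    return min(samples), statistics.fmean(samples), max(samples)

if __name__ == "__main__":
    samples = []
    for _ in range(10):          # ten samples, one second apart
        samples.append(read_temp_c())
        time.sleep(1)
    lo, avg, hi = summarize(samples)
    print(f"Min: {lo:.1f} / Avg: {avg:.1f} / Max: {hi:.1f}")
```

The Phoronix Test Suite performs this polling automatically when sensor monitoring is enabled; the sketch only shows where the underlying numbers come from.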

ONNX Runtime 1.19 — Device: CPU, Executor: Parallel (Inference Time Cost in ms, fewer is better)

Model                   Result (ms)
yolov4                  1874.14
fcn-resnet101-11        9958.27
super-resolution-10     115.01
ResNet50 v1-12-int8     80.01
CaffeNet 12-int8        22.13
T5 Encoder              59.14
ZFNet-512               104.17
ResNet101_DUC_HDC-12    23144.3

1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt
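The ONNX Runtime results above are reported as inference time cost in milliseconds, so a lower value maps directly to higher throughput. A small helper for that conversion (a generic arithmetic sketch; the example values are taken from the results above):

```python
def time_cost_to_throughput(ms_per_inference):
    """Convert inference time cost (ms per inference) to inferences per second."""
    if ms_per_inference <= 0:
        raise ValueError("time cost must be positive")
    return 1000.0 / ms_per_inference

# Using results from this run:
# yolov4 at 1874.14 ms       -> ~0.53 inferences/sec
# CaffeNet 12-int8 at 22.13 ms -> ~45.2 inferences/sec
print(round(time_cost_to_throughput(22.13), 1))  # prints 45.2
```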

Memory Usage Monitors (Megabytes, fewer is better; one row per monitored test run)

Test                                   Min     Avg       Max
Timed Apache Compilation 2.4.41        1607    1700      1793
Timed FFmpeg Compilation 7.0           1537    1750      2201
Mobile Neural Network 2.9.b11b7037d    621     769       912
NCNN 20230517                          626     870       1186
NCNN 20230517                          625     870       1172
OpenCV 4.7                             626     800       1237
OpenCV 4.7                             644     957       1265
OpenCV 4.7                             646     736       765
OpenCV 4.7                             637     704       776
OpenCV 4.7                             653     804       1212
OpenCV 4.7                             648     797       1468
OpenCV 4.7                             653     827       1051
R Benchmark                            654     947       1269
Numpy Benchmark                        651     672       775
TensorFlow Lite 2022-05-18             655     680       682
TensorFlow Lite 2022-05-18             652     654       655
TensorFlow Lite 2022-05-18             650     701       709
TensorFlow Lite 2022-05-18             654     672       676
TensorFlow Lite 2022-05-18             655     794       812
TensorFlow Lite 2022-05-18             656     822       839
oneDNN 3.4                             661     777       976
oneDNN 3.4                             664     673       681
oneDNN 3.4                             658     680       706
oneDNN 3.4                             663     702       782
oneDNN 3.4                             654     775       1155
oneDNN 3.4                             658     952       1256
oneDNN 3.4                             654     830       1004
ONNX Runtime 1.19                      657     1181      1245
ONNX Runtime 1.19                      648     1172      1238
ONNX Runtime 1.19                      645     684       689
ONNX Runtime 1.19                      647     713       722
ONNX Runtime 1.19                      653     780       816
ONNX Runtime 1.19                      646     1339      1434
ONNX Runtime 1.19                      643     1259      1347
ONNX Runtime 1.19                      645     1506      1596
Whisper.cpp 1.6.2                      648     1470      2037
Whisper.cpp 1.6.2                      642     2017.42   2445
Whisper.cpp 1.6.2                      637     3571.12   3616
Whisperfile 20Aug24                    666     1307      1975
Whisperfile 20Aug24                    663     2025.8    2456
Whisperfile 20Aug24                    644     3565.7    3629
Llama.cpp b3067                        248     334.85    738
Llamafile 0.8.6                        253.0   568.2     585.0
Llamafile 0.8.6                        300.0   578.6     590.0
Llamafile 0.8.6                        328.0   349.8     381.0
Llamafile 0.8.6                        331.0   364.1     386.0
Llamafile 0.8.6                        346     1647      1707
Llamafile 0.8.6                        350     1652      1707
XNNPACK 2cd86b                         346     709       886

TensorFlow Lite 2022-05-18 (Microseconds, fewer is better)

Model                  Result (µs)
Mobilenet Float        18684.6
Mobilenet Quant        5774.24
NASNet Mobile          52846.6
SqueezeNet             26872.6
Inception ResNet V2    215253
Inception V4           246550

Mobile Neural Network 2.9.b11b7037d (ms, fewer is better)

Model               Result    Min       Max
nasnet              28.91     26.83     75.52
mobilenetV3         3.975     3.66      12.57
squeezenetv1.1      14.31     13.36     34.42
resnet-v2-50        122.56    112.59    278.69
SqueezeNetV1.0      30.07     28.25     64.53
MobileNetV2_224     14.96     13.95     32.71
mobilenet-v1-1.0    21.55     20.1      44.01
inception-v3        150.38    141.2     217.08

1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

NCNN 20230517 (ms, fewer is better)

Model                 CPU       (min–max)          Vulkan GPU  (min–max)
mobilenet             48.66     45.43–106.55       48.83       45.66–115.75
mobilenet-v2          13.94     12.76–75.48        14.12       12.74–76.44
mobilenet-v3          10.08     9.01–81.44         9.76        8.95–77.1
shufflenet-v2         3.83      3.41–68.92         3.97        3.46–65.74
mnasnet               9.16      8.33–41.87         9.13        8.28–66.07
efficientnet-b0       15.99     14.54–78.1         16.06       14.54–81.05
blazeface             1.57      1.5–8.76           1.9         1.52–40.05
googlenet             34.79     32.19–113.23       34.49       32.05–115.75
vgg16                 163.14    155.59–207.56      163.32      155.28–202.59
resnet18              24.95     23.29–74.28        25.01       23.31–70.28
alexnet               25.69     24.31–55.13        25.89       24.36–55.56
resnet50              57.94     53.77–115          58.31       53.92–113.75
mobilenetv2-yolov3    48.66     45.43–106.55       48.83       45.66–115.75
yolov4-tiny           57.21     54.26–106.97       57.05       54.01–101.26
squeezenet_ssd        43.11     39.81–109.11       42.51       39.68–110.68
regnety_400m          12.82     11.59–75.68        12.86       11.6–77.32
vision_transformer    627.86    594.98–660.35      629.58      585.91–719.32
FastestDet            6.32      5.61–67.67         6.13        5.6–44.41

Note: mobilenet-v2, mobilenet-v3, and mobilenetv2-yolov3 use NCNN's CPU-v2-v2 / CPU-v3-v3 / CPUv2-yolov3v2-yolov3 targets and their Vulkan GPU equivalents.

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenCV

OpenCV 4.7 test timings (ms, fewer is better)

  Test                         Result
  DNN - Deep Neural Network    472235
  Features 2D                  240888
  Object Detection             61443
  Core                         281470
  Image Processing             628264
  Stitching                    439495
  Video                        161859

1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

oneDNN

oneDNN 3.4 harness timings — Engine: CPU (ms, fewer is better)

  Harness                               Result     Min
  Convolution Batch Shapes Auto         264.11     254.72
  Deconvolution Batch shapes_1d         560.13     541.24
  Deconvolution Batch shapes_3d         103.12     95.69
  IP Shapes 1D                          58.65      54.79
  IP Shapes 3D                          62.98      60.87
  Recurrent Neural Network Training     65557.7    65037.4
  Recurrent Neural Network Inference    35072.3    34622.4

1. (CXX) g++ options: -O3 -march=native -fopenmp -mcpu=generic -fPIC -pie -ldl
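One relationship worth pulling out of the oneDNN numbers above: a recurrent-network training pass took nearly twice as long as an inference pass on this system. A quick Python sketch, using only the values from this result file:

```python
# oneDNN 3.4 RNN timings in ms, copied from the results above.
rnn_training_ms = 65557.7
rnn_inference_ms = 35072.3

ratio = rnn_training_ms / rnn_inference_ms
print(f"RNN training took {ratio:.2f}x as long as inference")  # about 1.87x
```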

CPU Usage (Summary) Monitor

Per-run CPU usage recorded while each benchmark executed (Percent):

  Benchmark                              Min    Avg      Max
  Timed Apache Compilation 2.4.41        4.5    61.7     100.0
  Timed FFmpeg Compilation 7.0           3.0    95.9     100.0
  Mobile Neural Network 2.9.b11b7037d    2.0    98.6     100.0
  NCNN 20230517                          3.0    93.5     100.0
  NCNN 20230517                          0.5    93.5     100.0
  OpenCV 4.7                             0.0    91.6     100.0
  OpenCV 4.7                             1.0    63.2     100.0
  OpenCV 4.7                             3.0    39.6     95.0
  OpenCV 4.7                             0.0    30.6     100.0
  OpenCV 4.7                             0.0    55.5     100.0
  OpenCV 4.7                             5.0    48.6     100.0
  OpenCV 4.7                             2.5    86.4     100.0
  R Benchmark                            0.0    34.4     100.0
  Numpy Benchmark                        3.0    27.2     52.8
  TensorFlow Lite 2022-05-18             3.5    94.9     100.0
  TensorFlow Lite 2022-05-18             1.0    91.0     100.0
  TensorFlow Lite 2022-05-18             2.0    86.6     94.5
  TensorFlow Lite 2022-05-18             2.5    91.5     100.0
  TensorFlow Lite 2022-05-18             0.5    90.9     100.0
  TensorFlow Lite 2022-05-18             1.0    91.4     100.0
  oneDNN 3.4                             0.5    52.1     98.5
  oneDNN 3.4                             0.5    77.8     100.0
  oneDNN 3.4                             0.0    73.6     100.0
  oneDNN 3.4                             0.0    64.0     99.5
  oneDNN 3.4                             0.0    96.2     100.0
  oneDNN 3.4                             1.5    95.4     100.0
  ONNX Runtime 1.19                      4.0    89.7     100.0
  ONNX Runtime 1.19                      1.0    89.5     100.0
  ONNX Runtime 1.19                      0.5    89.4     100.0
  ONNX Runtime 1.19                      0.0    89.1     100.0
  ONNX Runtime 1.19                      0.0    91.1     100.0
  ONNX Runtime 1.19                      0.5    90.4     100.0
  ONNX Runtime 1.19                      1.0    86.9     100.0
  ONNX Runtime 1.19                      0.5    88.7     100.0
  Whisper.cpp 1.6.2                      0.5    95.5     100.0
  Whisper.cpp 1.6.2                      1.0    98.49    100.0
  Whisper.cpp 1.6.2                      0.5    99.39    100.0
  Whisperfile 20Aug24                    3.0    91.5     100.0
  Whisperfile 20Aug24                    1.0    98.3     100.0
  Whisperfile 20Aug24                    1.0    99.3     100.0
  Llama.cpp b3067                        0.0    94.48    100.0
  Llamafile 0.8.6                        0.0    98.4     100.0
  Llamafile 0.8.6                        3.0    98.7     100.0
  Llamafile 0.8.6                        1.5    55.4     98.5
  Llamafile 0.8.6                        2.5    45.0     99.5
  Llamafile 0.8.6                        4.0    97.4     100.0
  Llamafile 0.8.6                        1.0    97.0     100.0
  XNNPACK 2cd86b                         0.0    94.0     100.0

Fan Speed Monitors

Per-run fan speed (RPM). The CPU fan and system fan monitors reported the same values for each run, except where a second system-fan figure is noted:

  Benchmark                              Min     Avg                        Max
  Timed Apache Compilation 2.4.41        1       3359                       3457
  Timed FFmpeg Compilation 7.0           2716    5390                       5965
  Mobile Neural Network 2.9.b11b7037d    2993    5878                       5982
  NCNN 20230517                          3628    6248                       8027
  NCNN 20230517                          3700    6423                       8040
  OpenCV 4.7                             3733    6899                       8073
  OpenCV 4.7                             3590    5016                       5946
  OpenCV 4.7                             3571    3980                       5869
  OpenCV 4.7                             3530    3607                       5810
  OpenCV 4.7                             3512    4423                       5948
  OpenCV 4.7                             3524    3693                       5882
  OpenCV 4.7                             3557    4895                       5927
  R Benchmark                            3560    3842                       5854
  Numpy Benchmark                        3521    3540                       3557
  TensorFlow Lite 2022-05-18             11      5325                       5838
  TensorFlow Lite 2022-05-18             3556    5503                       5896
  TensorFlow Lite 2022-05-18             3583    5538                       5909
  TensorFlow Lite 2022-05-18             3601    5720                       7784
  TensorFlow Lite 2022-05-18             3621    7403                       7982
  TensorFlow Lite 2022-05-18             3738    7470                       8027
  oneDNN 3.4                             3759    4018                       5897
  oneDNN 3.4                             3704    5246                       6007
  oneDNN 3.4                             3665    3985                       5742
  oneDNN 3.4                             3633    4995                       5918
  oneDNN 3.4                             3608    4083                       5888
  oneDNN 3.4                             3595    6811                       8071
  oneDNN 3.4                             3764    5946                       6069
  ONNX Runtime 1.19                      3680    5712                       6003
  ONNX Runtime 1.19                      3668    5728                       5994
  ONNX Runtime 1.19                      3665    6561                       7955
  ONNX Runtime 1.19                      3582    5206                       5920
  ONNX Runtime 1.19                      3597    5467                       5930 (system: 5927)
  ONNX Runtime 1.19                      3573    5032                       5907
  ONNX Runtime 1.19                      3590    5480                       5931
  ONNX Runtime 1.19                      3612    5708                       5955
  Whisper.cpp 1.6.2                      3576    7715                       8294
  Whisper.cpp 1.6.2                      3751    8037.93 (system: 8037.94)  8121
  Whisper.cpp 1.6.2                      3796    8088.24 (system: 8088.23)  8136
  Whisperfile 20Aug24                    3789    5936                       6107
  Whisperfile 20Aug24                    3669    7104.03 (system: 7103.91)  8060
  Whisperfile 20Aug24                    3697    7073.25 (system: 7073.21)  8253
  Llama.cpp b3067                        2773    3530.37 (system: 3530.38)  3554
  Llamafile 0.8.6                        3537    7633                       8090
  Llamafile 0.8.6                        3806    8012                       8103
  Llamafile 0.8.6                        3538    3541                       3544
  Llamafile 0.8.6                        3538    3541                       3544
  Llamafile 0.8.6                        3537    7486                       8112
  Llamafile 0.8.6                        3823    7894                       8130
  XNNPACK 2cd86b                         3545    6422                       8060

Timed Apache Compilation

Time To Compile — Timed Apache Compilation 2.4.41 (Seconds, fewer is better): 104.93

Timed FFmpeg Compilation

Time To Compile — Timed FFmpeg Compilation 7.0 (Seconds, fewer is better): 462.73

R Benchmark

R Benchmark (Seconds, fewer is better): 0.6038

Whisper.cpp

Whisper.cpp 1.6.2 — Input: 2016 State of the Union (Seconds, fewer is better)

  Model             Result
  ggml-base.en      646.23
  ggml-small.en     1928.49
  ggml-medium.en    5899.94

1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -mcpu=native

Whisperfile

Whisperfile 20Aug24 (Seconds, fewer is better)

  Model Size    Result
  Tiny          332.12
  Small         1779.12
  Medium        5162.45
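The Whisper.cpp and Whisperfile runs above transcribed the same input, so their small and medium model timings can be compared directly (the base.en and Tiny runs use different models and are left out). A small Python sketch, using only the values from this result file:

```python
# Transcription times in seconds, copied from the results above.
whisper_cpp = {"small": 1928.49, "medium": 5899.94}
whisperfile = {"small": 1779.12, "medium": 5162.45}

for size in whisper_cpp:
    ratio = whisper_cpp[size] / whisperfile[size]
    print(f"{size}: Whisper.cpp took {ratio:.2f}x as long as Whisperfile")
```

On this run Whisperfile comes out roughly 8% faster at the small size and 14% faster at medium.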

XNNPACK

XNNPACK 2cd86b model timings (us, fewer is better)

  Model                   Result
  FP32MobileNetV2         20522
  FP32MobileNetV3Large    20354
  FP32MobileNetV3Small    7492
  FP16MobileNetV2         10166
  FP16MobileNetV3Large    10478
  FP16MobileNetV3Small    4046
  QU8MobileNetV2          11904
  QU8MobileNetV3Large     10338
  QU8MobileNetV3Small     3337

1. (CXX) g++ options: -O3 -lrt -lm
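The FP16 XNNPACK results above come in at roughly half the matching FP32 latencies on this Cortex-A76. A small Python sketch computing those speedups, using only the values from this result file:

```python
# XNNPACK latencies in microseconds, copied from the results above.
fp32 = {"MobileNetV2": 20522, "MobileNetV3Large": 20354, "MobileNetV3Small": 7492}
fp16 = {"MobileNetV2": 10166, "MobileNetV3Large": 10478, "MobileNetV3Small": 4046}

for model in fp32:
    speedup = fp32[model] / fp16[model]
    print(f"{model}: FP16 is {speedup:.2f}x faster than FP32")
```

The gap narrows for the smaller networks, which spend proportionally less time in the compute-bound layers that benefit most from half-precision arithmetic.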


Phoronix Test Suite v10.8.5