m7g.8xlarge
amazon testing on Ubuntu 22.04 via the Phoronix Test Suite.

m7g.8xlarge:

    Processor: ARMv8 Neoverse-V1 (32 Cores), Motherboard: Amazon EC2 m7g.8xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 128GB, Disk: 322GB Amazon Elastic Block Store, Network: Amazon Elastic

    OS: Ubuntu 22.04, Kernel: 6.5.0-1017-aws (aarch64), Vulkan: 1.3.255, Compiler: GCC 11.4.0, File-System: ext4, System Layer: amazon

Whisper.cpp 1.6.2
Model: ggml-small.en - Input: 2016 State of the Union
Seconds < Lower Is Better
m7g.8xlarge . 175.67 |=========================================================

Whisper.cpp 1.6.2
Model: ggml-medium.en - Input: 2016 State of the Union
Seconds < Lower Is Better
m7g.8xlarge . 447.35 |=========================================================

Whisper.cpp 1.6.2
Model: ggml-base.en - Input: 2016 State of the Union
Seconds < Lower Is Better
m7g.8xlarge . 79.85 |==========================================================

OpenVINO 2024.0
Model: Handwritten English Recognition FP16 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 188.05 |=========================================================

OpenVINO 2024.0
Model: Handwritten English Recognition FP16 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 42.50 |==========================================================

OpenCV 4.7
Test: Stitching
ms < Lower Is Better
m7g.8xlarge . 279901 |=========================================================

OpenCV 4.7
Test: Graph API
ms < Lower Is Better

OpenCV 4.7
Test: DNN - Deep Neural Network
ms < Lower Is Better
m7g.8xlarge . 23510 |==========================================================

OpenCV 4.7
Test: Image Processing
ms < Lower Is Better
m7g.8xlarge . 104680 |=========================================================

OpenCV 4.7
Test: Core
ms < Lower Is Better
m7g.8xlarge . 95915 |==========================================================

OpenVINO 2024.0
Model: Face Detection FP16-INT8 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 3042.37 |========================================================

OpenVINO 2024.0
Model: Face Detection FP16-INT8 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 2.58 |===========================================================

OpenVINO 2024.0
Model: Face Detection FP16 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 2152.72 |========================================================

OpenVINO 2024.0
Model: Face Detection FP16 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 3.68 |===========================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 157.78 |=========================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
m7g.8xlarge . 6.33812 |========================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 172.45 |=========================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
m7g.8xlarge . 5.79884 |========================================================
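For reference, "Executor: Standard" and "Executor: Parallel" in the ONNX Runtime results correspond to the runtime's sequential and parallel execution modes. Below is a minimal sketch of toggling between them with the ONNX Runtime Python API; the model file, input shape, and iteration count are placeholders rather than what the PTS test profile actually uses.

    # Sketch: timing ONNX Runtime inference under the two executor modes.
    # "model-int8.onnx" and the input shape are hypothetical placeholders.
    import time
    import numpy as np
    import onnxruntime as ort

    opts = ort.SessionOptions()
    # "Standard" = ORT_SEQUENTIAL; "Parallel" = ORT_PARALLEL, which lets
    # independent branches of the graph execute concurrently.
    opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL

    sess = ort.InferenceSession("model-int8.onnx", opts,
                                providers=["CPUExecutionProvider"])
    name = sess.get_inputs()[0].name
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder

    start = time.perf_counter()
    for _ in range(50):
        sess.run(None, {name: x})
    elapsed = time.perf_counter() - start
    print(f"{elapsed / 50 * 1000:.2f} ms/inference, {50 / elapsed:.2f} inf/s")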
OpenVINO 2024.0
Model: Person Detection FP16 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 377.67 |=========================================================

OpenVINO 2024.0
Model: Person Detection FP16 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 21.16 |==========================================================

OpenVINO 2024.0
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 493.62 |=========================================================

OpenVINO 2024.0
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 16.20 |==========================================================

OpenVINO 2024.0
Model: Person Detection FP32 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 378.24 |=========================================================

OpenVINO 2024.0
Model: Person Detection FP32 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 21.12 |==========================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 871.81 |=========================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
m7g.8xlarge . 1.14704 |========================================================

OpenVINO 2024.0
Model: Machine Translation EN To DE FP16 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 170.88 |=========================================================

OpenVINO 2024.0
Model: Machine Translation EN To DE FP16 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 46.76 |==========================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 4.49827 |========================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
m7g.8xlarge . 221.87 |=========================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 6.87024 |========================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
m7g.8xlarge . 145.42 |=========================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 698.99 |=========================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
m7g.8xlarge . 1.43063 |========================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 111.47 |=========================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
m7g.8xlarge . 8.97170 |========================================================
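A note on the OpenVINO figures above: the latency (ms) and throughput (FPS) values are not simple reciprocals. Person Detection FP16 reports 377.67 ms per request yet 21.16 FPS, which only works if several requests are in flight at once. A quick check against three of the results; the conclusion of roughly eight concurrent streams is an inference from these numbers, not something the report states:

    # FPS * latency(s) ~= average number of requests in flight.
    pairs = {
        "Person Detection FP16":             (377.67, 21.16),
        "Road Segmentation ADAS FP16-INT8":  (493.62, 16.20),
        "Machine Translation EN To DE FP16": (170.88, 46.76),
    }
    for name, (ms, fps) in pairs.items():
        print(f"{name}: ~{ms * fps / 1000:.1f} requests in flight")
    # Each works out to roughly 8, suggesting the harness keeps about
    # eight parallel inference streams busy rather than one at a time.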
ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 218.19 |=========================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
m7g.8xlarge . 4.58322 |========================================================

OpenVINO 2024.0
Model: Noise Suppression Poconet-Like FP16 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 113.19 |=========================================================

OpenVINO 2024.0
Model: Noise Suppression Poconet-Like FP16 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 70.66 |==========================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 53.80 |==========================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
m7g.8xlarge . 18.59 |==========================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 111.86 |=========================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
m7g.8xlarge . 8.93926 |========================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 2.75571 |========================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
m7g.8xlarge . 362.48 |=========================================================

OpenVINO 2024.0
Model: Handwritten English Recognition FP16-INT8 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 199.04 |=========================================================

OpenVINO 2024.0
Model: Handwritten English Recognition FP16-INT8 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 40.14 |==========================================================

OpenVINO 2024.0
Model: Vehicle Detection FP16-INT8 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 155.01 |=========================================================

OpenVINO 2024.0
Model: Vehicle Detection FP16-INT8 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 51.58 |==========================================================

OpenVINO 2024.0
Model: Road Segmentation ADAS FP16 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 68.16 |==========================================================

OpenVINO 2024.0
Model: Road Segmentation ADAS FP16 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 117.29 |=========================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 4.38209 |========================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
m7g.8xlarge . 228.15 |=========================================================

OpenVINO 2024.0
Model: Person Vehicle Bike Detection FP16 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 25.35 |==========================================================

OpenVINO 2024.0
Model: Person Vehicle Bike Detection FP16 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 315.36 |=========================================================
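The per-request latencies in the OpenVINO results could be approximated outside the harness with a plain synchronous loop. A minimal sketch against the OpenVINO 2024 Python API; the IR file name is a hypothetical placeholder and the warm-up and iteration counts are arbitrary:

    # Sketch: synchronous single-request latency with the OpenVINO API.
    # "vehicle-detection.xml" stands in for an Open Model Zoo IR file.
    import time
    import numpy as np
    import openvino as ov

    core = ov.Core()
    compiled = core.compile_model("vehicle-detection.xml", "CPU")
    infer = compiled.create_infer_request()

    inp = compiled.input(0)
    x = np.random.rand(*list(inp.shape)).astype(np.float32)  # static shapes assumed

    infer.infer({inp: x})  # warm-up run, excluded from timing
    times = []
    for _ in range(50):
        start = time.perf_counter()
        infer.infer({inp: x})
        times.append(time.perf_counter() - start)
    print(f"mean latency: {sum(times) / len(times) * 1000:.2f} ms")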
ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 87.39 |==========================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
m7g.8xlarge . 11.44 |==========================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 54.65 |==========================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
m7g.8xlarge . 18.30 |==========================================================

OpenVINO 2024.0
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 24.18 |==========================================================

OpenVINO 2024.0
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 330.67 |=========================================================

OpenVINO 2024.0
Model: Weld Porosity Detection FP16 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 14.76 |==========================================================

OpenVINO 2024.0
Model: Weld Porosity Detection FP16 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 541.33 |=========================================================

OpenVINO 2024.0
Model: Person Re-Identification Retail FP16 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 23.02 |==========================================================

OpenVINO 2024.0
Model: Person Re-Identification Retail FP16 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 347.35 |=========================================================

OpenVINO 2024.0
Model: Face Detection Retail FP16-INT8 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 48.08 |==========================================================

OpenVINO 2024.0
Model: Face Detection Retail FP16-INT8 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 166.28 |=========================================================

OpenVINO 2024.0
Model: Vehicle Detection FP16 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 26.56 |==========================================================

OpenVINO 2024.0
Model: Vehicle Detection FP16 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 300.83 |=========================================================

OpenVINO 2024.0
Model: Face Detection Retail FP16 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 8.75 |===========================================================

OpenVINO 2024.0
Model: Face Detection Retail FP16 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 912.22 |=========================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 1.00857 |========================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
m7g.8xlarge . 990.08 |=========================================================
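The FPS side of each OpenVINO pair comes from keeping many requests in flight, which the Python API exposes through its asynchronous inference queue. A hedged sketch follows; the model file, the performance hint, and the iteration count are illustrative assumptions, not the PTS configuration:

    # Sketch: throughput (FPS) measurement with OpenVINO's async API.
    # "weld-porosity.xml" is a placeholder model file.
    import time
    import numpy as np
    import openvino as ov

    core = ov.Core()
    compiled = core.compile_model("weld-porosity.xml", "CPU",
                                  {"PERFORMANCE_HINT": "THROUGHPUT"})
    x = np.random.rand(*list(compiled.input(0).shape)).astype(np.float32)

    queue = ov.AsyncInferQueue(compiled)  # request count sized from the hint
    n = 500
    start = time.perf_counter()
    for _ in range(n):
        queue.start_async({0: x})  # blocks only when all requests are busy
    queue.wait_all()
    print(f"{n / (time.perf_counter() - start):.2f} FPS")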
OpenVINO 2024.0
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 2.05 |===========================================================

OpenVINO 2024.0
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 3893.43 |========================================================

OpenVINO 2024.0
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
ms < Lower Is Better
m7g.8xlarge . 2.09 |===========================================================

OpenVINO 2024.0
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
FPS > Higher Is Better
m7g.8xlarge . 3811.90 |========================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 2.55225 |========================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
m7g.8xlarge . 391.59 |=========================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 3.74916 |========================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
m7g.8xlarge . 266.64 |=========================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 5.34746 |========================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
m7g.8xlarge . 186.96 |=========================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 12.78 |==========================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
m7g.8xlarge . 78.22 |==========================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
m7g.8xlarge . 12.63 |==========================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
m7g.8xlarge . 79.14 |==========================================================

OpenCV 4.7
Test: Features 2D
ms < Lower Is Better
m7g.8xlarge . 54156 |==========================================================

OpenCV 4.7
Test: Object Detection
ms < Lower Is Better
m7g.8xlarge . 27236 |==========================================================

Llama.cpp b3067
Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf
Tokens Per Second > Higher Is Better
m7g.8xlarge . 22.43 |==========================================================

OpenCV 4.7
Test: Video
ms < Lower Is Better
m7g.8xlarge . 22649 |==========================================================
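The Llama.cpp figure above is text-generation throughput for the 8B Q8_0 model. Something comparable can be measured from Python with the llama-cpp-python bindings, though note the PTS profile drives the llama.cpp binary itself, so this is an approximation of the same workload rather than the same code path; the prompt and token budget are arbitrary:

    # Sketch: rough tokens-per-second with the llama-cpp-python bindings
    # (not the llama.cpp CLI path the benchmark uses).
    import time
    from llama_cpp import Llama

    llm = Llama(model_path="Meta-Llama-3-8B-Instruct-Q8_0.gguf",
                n_threads=32)  # 32 cores on m7g.8xlarge

    start = time.perf_counter()
    out = llm("Summarize the 2016 State of the Union.", max_tokens=128)
    elapsed = time.perf_counter() - start

    n_tokens = out["usage"]["completion_tokens"]
    print(f"{n_tokens / elapsed:.2f} tokens/s")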