ONNX Runtime
ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo.
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark onnx.
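For unattended runs the same test can be scripted. The sketch below only echoes the commands it would execute (a dry run); the `PRESET_OPTIONS` batch mechanism is part of the Phoronix Test Suite, but the exact `onnx.*` option names shown here are assumptions.

```shell
#!/bin/sh
# Dry-run sketch of automating the onnx test profile.
# The option names after "onnx." are assumptions, not verified.
PTS="phoronix-test-suite"
MODEL="GPT-2"
EXECUTOR="Standard"

run() { echo "+ $*"; }  # swap 'echo' for actual execution once verified

run "$PTS install onnx"
run "PRESET_OPTIONS='onnx.model=$MODEL;onnx.executor=$EXECUTOR' $PTS batch-benchmark onnx"
```

Running the script prints the two commands it would issue, so the invocation can be reviewed before being executed for real.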
Test Created 16 January 2021
Last Updated 2 February 2024
Test Type System
Average Install Time 15 Minutes, 1 Second
Average Run Time 5 Minutes, 18 Seconds
Test Dependencies Python + Git + C/C++ Compiler Toolchain + CMake
Accolades
50k+ Downloads
[Chart: ONNX Runtime (pts/onnx) popularity statistics on OpenBenchmarking.org — public result uploads*, reported installs**, reported test completions**, and test profile page views***, monthly from January 2021 through July 2024.]
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021. Data updated weekly as of 7 July 2024.
Model Option Popularity (OpenBenchmarking.org): GPT-2 10.8%, T5 Encoder 10.7%, yolov4 10.6%, bertsquad-12 10.2%, super-resolution-10 10.2%, ArcFace ResNet-100 9.9%, ResNet50 v1-12-int8 9.8%, fcn-resnet101-11 9.6%, CaffeNet 12-int8 9.4%, Faster R-CNN R-50-FPN-int8 8.7%
Executor Option Popularity (OpenBenchmarking.org): Standard 53.5%, Parallel 46.5%
Revision History
pts/onnx-1.17.0 [View Source ] Fri, 02 Feb 2024 20:20:26 GMT Update against ONNX Runtime 1.17, update download links.
pts/onnx-1.6.0 [View Source ] Sat, 11 Feb 2023 07:19:48 GMT Update against ONNX Runtime 1.14 upstream.
pts/onnx-1.5.0 [View Source ] Wed, 30 Mar 2022 14:24:29 GMT Add standard/sequential and parallel executor option.
pts/onnx-1.4.0 [View Source ] Sat, 26 Mar 2022 18:00:24 GMT Update against ONNX 1.11 upstream, switch to parallel executor, add some new models too.
pts/onnx-1.3.0 [View Source ] Fri, 03 Dec 2021 19:38:42 GMT Update against upstream ONNX-Runtime 1.10.
pts/onnx-1.2.1 [View Source ] Sat, 30 Oct 2021 05:02:16 GMT Add cmake as necessary external dependency.
pts/onnx-1.2.0 [View Source ] Fri, 29 Oct 2021 17:00:07 GMT Update against ONNX 1.9.1 upstream. Changes based on https://github.com/phoronix-test-suite/phoronix-test-suite/pull/560 while also needing to adjust the XML options for removal of bertsquad. Closes: https://github.com/phoronix-test-suite/phoronix-test-suite/pull/560
pts/onnx-1.1.0 [View Source ] Mon, 23 Aug 2021 19:03:17 GMT Update against ONNX 1.8.2 upstream due to prior version now having build problems on newer distributions.
pts/onnx-1.0.1 [View Source ] Sun, 17 Jan 2021 08:51:00 GMT Increase run time limit to help lower deviation on laptops.
pts/onnx-1.0.0 [View Source ] Sat, 16 Jan 2021 20:09:33 GMT Initial commit of Microsoft ONNX Runtime.
Performance Metrics
Analyze Test Configuration: results are available for each combination of test profile version, model, device, and executor. The available configurations are:
- pts/onnx-1.17.x: Models GPT-2, T5 Encoder, yolov4, bertsquad-12, ResNet50 v1-12-int8, super-resolution-10, CaffeNet 12-int8, ArcFace ResNet-100, fcn-resnet101-11, and Faster R-CNN R-50-FPN-int8 on CPU, with Standard and Parallel executors, reported as Inferences Per Second and Inference Time Cost (ms).
- pts/onnx-1.6.x: Models GPT-2, yolov4, bertsquad-12, ResNet50 v1-12-int8, super-resolution-10, CaffeNet 12-int8, ArcFace ResNet-100, fcn-resnet101-11, and Faster R-CNN R-50-FPN-int8 on CPU, with Standard and Parallel executors, reported as Inferences Per Second and Inference Time Cost (ms).
- pts/onnx-1.5.x: Models GPT-2, ArcFace ResNet-100, yolov4, bertsquad-12, super-resolution-10, and fcn-resnet101-11 on CPU, with Standard and Parallel executors (Inferences Per Minute).
- pts/onnx-1.4.x: Models yolov4, bertsquad-12, fcn-resnet101-11, GPT-2, ArcFace ResNet-100, and super-resolution-10 on CPU (Inferences Per Minute).
- pts/onnx-1.3.x and pts/onnx-1.2.x: Models super-resolution-10, shufflenet-v2-10, yolov4, and fcn-resnet101-11 on CPU (Inferences Per Minute).
- pts/onnx-1.1.x and pts/onnx-1.0.x: Models fcn-resnet101-11, super-resolution-10, yolov4, bertsquad-10, and shufflenet-v2-10 on OpenMP CPU (Inferences Per Minute).

ONNX Runtime 1.17 - Model: GPT-2 - Device: CPU - Executor: Standard
OpenBenchmarking.org metrics for this test profile configuration based on 238 public results since 3 February 2024, with the latest data as of 5 July 2024.
Below is an overview of the generalized performance for components where there is sufficient, statistically significant data based upon user-uploaded results. Keep in mind, particularly in the Linux/open-source space, that OS configurations can vary vastly; this overview is intended to offer only general guidance on performance expectations.
Component
Percentile Rank
# Compatible Public Results
Inference Time Cost (ms) (Average)
[Chart: OpenBenchmarking.org Distribution Of Public Results - Model: GPT-2 - Device: CPU - Executor: Standard; 238 results ranging from 3 to 16 ms Inference Time Cost.]
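The percentile rank shown above can be reproduced directly: for a fewer-is-better metric such as Inference Time Cost, it is the share of public results slower (higher) than your own. The sample values below are hypothetical, not taken from OpenBenchmarking.org.

```shell
# Hypothetical sample of public Inference Time Cost results (ms):
results="3.2 4.1 4.8 5.0 5.6 6.3 7.9 9.4 12.0 15.8"
mine=5.0   # your measured result

# Count how many public results are slower (higher ms) than yours.
echo $results | tr ' ' '\n' | awk -v m="$mine" \
  '{ n++; if ($1 > m) worse++ } END { printf "percentile rank: %.0f%%\n", 100 * worse / n }'
# prints "percentile rank: 60%"
```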
Based on OpenBenchmarking.org data, the selected test / test configuration (ONNX Runtime 1.17 - Model: GPT-2 - Device: CPU - Executor: Standard) has an average run-time of approximately 7 minutes. By default this test profile is set to run at least 3 times, but that count may increase if the standard deviation exceeds pre-defined limits or other calculations deem additional runs necessary for greater statistical accuracy of the result.
[Chart: Time Required To Complete Benchmark (Minutes) - Model: GPT-2 - Device: CPU - Executor: Standard; Min: 3 / Avg: 6.32 / Max: 16.]
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.9% .
[Chart: Average Deviation Between Runs (Percent, fewer is better) - Model: GPT-2 - Device: CPU - Executor: Standard; Min: 0 / Avg: 0.87 / Max: 9.]
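That dynamic run-count logic can be sketched as: compute the relative standard deviation across the completed runs and schedule another run while it exceeds a threshold. The 3.5% threshold and the sample figures below are illustrative assumptions, not the suite's actual defaults.

```shell
# Hypothetical inferences-per-second figures from 3 completed runs:
runs="142.1 140.8 141.5"

echo $runs | tr ' ' '\n' | awk '
  { n++; sum += $1; sumsq += $1 * $1 }
  END {
    mean = sum / n
    sd   = sqrt(sumsq / n - mean * mean)   # population std deviation
    dev  = 100 * sd / mean                 # relative deviation, percent
    printf "avg %.1f, deviation %.2f%%\n", mean, dev
    exit (dev > 3.5) ? 1 : 0   # non-zero exit: schedule another run
  }'
# prints "avg 141.5, deviation 0.38%" and exits 0 (no extra run needed)
```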
Notable Instruction Set Usage Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.
Instruction Set
Support
Instructions Detected
Streaming SIMD Extensions 2 (SSE2)
Used by default on supported hardware.
MOVDQA PUNPCKLQDQ MOVDQU CVTSI2SD DIVSD MOVAPD COMISD UNPCKLPD MULSD CVTTSD2SI ADDSD SUBSD MOVUPD MOVD PUNPCKHQDQ PSUBQ PSHUFD PSRLDQ UCOMISD CVTSS2SD PADDQ ANDPD CVTSD2SS DIVPD MULPD SUBPD PMULUDQ ADDPD SQRTSD CVTPS2PD MINPD MAXPD CVTTPD2DQ CVTDQ2PD CMPLTPD PSHUFLW MAXSD MINSD XORPD MOVLPD SQRTPD CVTTPS2DQ CVTDQ2PS ANDNPD ORPD CVTPD2PS CMPLESD UNPCKHPD CMPLEPD SHUFPD MOVHPD CVTPS2DQ
Advanced Vector Extensions (AVX)
Used by default on supported hardware. Found on Intel processors since Sandy Bridge (2011). Found on AMD processors since Bulldozer (2011).
VBROADCASTSD VINSERTF128 VZEROUPPER VBROADCASTSS VMASKMOVPS VEXTRACTF128 VZEROALL VPERM2F128 VPERMILPS
Advanced Vector Extensions 2 (AVX2)
Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Excavator (2016).
VPBROADCASTD VPMASKMOVD VEXTRACTI128 VINSERTI128 VPERM2I128 VPERMD VPBROADCASTW VPERMQ VPBROADCASTQ VPMASKMOVQ VPBROADCASTB VPSLLVD VPSRAVD VPSRLVD VPSLLVQ VPSRLVQ VPERMPD VGATHERQPD VPGATHERDQ VPGATHERQQ
AVX Vector Neural Network Instructions (AVX-VNNI)
Used by default on supported hardware.
VPDPBUSDS
Fused Multiply-Add (FMA)
Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Bulldozer (2011).
VFMADD231PD VFMADD213PD VFMADD231PS VFMADD213PS VFMADD132SS VFMADD132PS VFMADD132SD VFMADD213SS VFMADD132PD VFNMADD231PD VFMADD231SS VFNMADD213PS VFNMADD132PS VFNMADD213SS VFNMADD132SS VFNMADD231PS VFNMADD132PD VFNMADD132SD VFMSUB132SS VFMSUB132SD VFMADD231SD VFMADD213SD VFNMADD213PD VFNMADD213SD VFNMADD231SS VFNMADD231SD
The test / benchmark does honor compiler flag changes.
Last automated analysis: 10 May 2021
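On Linux you can check which of the extensions listed above the local CPU advertises before running the benchmark. The flag spellings below follow /proc/cpuinfo conventions; the AVX-VNNI flag name in particular varies by kernel version, so treat it as an assumption.

```shell
# Report which notable extensions the local CPU advertises (Linux).
for ext in sse2 avx avx2 fma avx_vnni; do
    if grep -qw "$ext" /proc/cpuinfo; then
        echo "$ext: supported"
    else
        echo "$ext: not detected"
    fi
done
```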
This test profile binary relies on the shared libraries libonnxruntime.so.1.6.0, libdl.so.2, libpthread.so.0, libc.so.6, libgomp.so.1, libm.so.6.
Tested CPU Architectures
This benchmark has been successfully tested on the below-mentioned architectures. The CPU architectures listed are those where successful OpenBenchmarking.org result uploads occurred, namely for helping to determine whether a given test is compatible with various alternative CPU architectures.
CPU Architecture
Kernel Identifier
Verified On
Intel / AMD x86 64-bit
x86_64
(Many Processors)
ARMv8 64-bit
aarch64
ARMv8 Neoverse-N1 128-Core, ARMv8 Neoverse-V1
Recent Test Results
1 System - 147 Benchmark Results
Intel Xeon Platinum 8375C - Amazon EC2 m6i.8xlarge - Intel 440FX 82441FX PMC
Ubuntu 22.04 - 6.5.0-1017-aws - 1.3.255
1 System - 147 Benchmark Results
ARMv8 Neoverse-V1 - Amazon EC2 m7g.8xlarge - Amazon Device 0200
Ubuntu 22.04 - 6.5.0-1017-aws - 1.3.255
1 System - 103 Benchmark Results
Intel Xeon Platinum 8375C - Amazon EC2 m6i.8xlarge - Intel 440FX 82441FX PMC
Ubuntu 22.04 - 6.5.0-1017-aws - 1.3.255
1 System - 99 Benchmark Results
Intel Xeon Platinum 8375C - Amazon EC2 m6i.8xlarge - Intel 440FX 82441FX PMC
Ubuntu 22.04 - 6.5.0-1017-aws - 1.3.255
1 System - 92 Benchmark Results
Intel Xeon Platinum 8375C - Amazon EC2 m6i.8xlarge - Intel 440FX 82441FX PMC
Ubuntu 22.04 - 6.5.0-1017-aws - 1.3.255
1 System - 103 Benchmark Results
ARMv8 Neoverse-V1 - Amazon EC2 m7g.8xlarge - Amazon Device 0200
Ubuntu 22.04 - 6.5.0-1017-aws - 1.3.255
1 System - 99 Benchmark Results
ARMv8 Neoverse-V1 - Amazon EC2 m7g.8xlarge - Amazon Device 0200
Ubuntu 22.04 - 6.5.0-1017-aws - 1.3.255
1 System - 92 Benchmark Results
ARMv8 Neoverse-V1 - Amazon EC2 m7g.8xlarge - Amazon Device 0200
Ubuntu 22.04 - 6.5.0-1017-aws - 1.3.255
1 System - 84 Benchmark Results
Intel Xeon Platinum 8375C - Amazon EC2 m6i.8xlarge - Intel 440FX 82441FX PMC
Ubuntu 22.04 - 6.5.0-1017-aws - 1.3.255
1 System - 84 Benchmark Results
ARMv8 Neoverse-V1 - Amazon EC2 m7g.8xlarge - Amazon Device 0200
Ubuntu 22.04 - 6.5.0-1017-aws - 1.3.255
1 System - 44 Benchmark Results
Intel Xeon Platinum 8375C - Amazon EC2 m6i.8xlarge - Intel 440FX 82441FX PMC
Ubuntu 22.04 - 6.5.0-1017-aws - 1.3.255
1 System - 44 Benchmark Results
ARMv8 Neoverse-V1 - Amazon EC2 m7g.8xlarge - Amazon Device 0200
Ubuntu 22.04 - 6.5.0-1017-aws - 1.3.255
Most Popular Test Results
2 Systems - 52 Benchmark Results
2 x INTEL XEON PLATINUM 8592+ - Quanta Cloud QuantaGrid D54Q-2U S6Q-MB-MPS - Intel Device 1bce
Ubuntu 23.10 - 6.6.0-060600-generic - GCC 13.2.0
2 Systems - 54 Benchmark Results
AMD EPYC 8534P 64-Core - AMD Cinnabar - AMD Device 14a4
Ubuntu 23.10 - 6.5.0-15-generic - GNOME Shell
3 Systems - 49 Benchmark Results
ARMv8 Neoverse-N1 - GIGABYTE G242-P36-00 MP32-AR2-00 v01000100 - Ampere Computing LLC Altra PCI Root Complex A
Ubuntu 23.10 - 6.5.0-13-generic - GCC 13.2.0
2 Systems - 46 Benchmark Results
AMD Ryzen Threadripper 7980X 64-Cores - System76 Thelio Major - AMD Device 14a4
Pop 22.04 - 6.7.0-060700daily20240120-generic - GNOME Shell 42.5
3 Systems - 50 Benchmark Results
AMD Ryzen 7 PRO 6850U - LENOVO ThinkPad X13 Gen 3 21CM0001US - AMD 17h-19h PCIe Root Complex
Fedora Linux 39 - 6.5.7-300.fc39.x86_64 - GNOME Shell 45.0
Featured Processor Comparison
Intel Core i5-14600K - ASUS PRIME Z790-P WIFI - Intel Device 7a27
Ubuntu 23.10 - 6.7.0-060700-generic - GNOME Shell 45.0
4 Systems - 40 Benchmark Results
AMD Ryzen 7 7840HS - NB05 TUXEDO Pulse 14 Gen3 R14FA1 - AMD Device 14e8
Ubuntu 23.10 - 6.7.0-060700-generic - GNOME Shell 45.2
3 Systems - 40 Benchmark Results
AMD Ryzen Threadripper PRO 5965WX 24-Cores - ASUS Pro WS WRX80E-SAGE SE WIFI - AMD Starship
Ubuntu 23.10 - 6.5.0-13-generic - GNOME Shell 45.0
3 Systems - 46 Benchmark Results
2 x AMD EPYC 9684X 96-Core - AMD Titanite_4G - AMD Device 14a4
Ubuntu 23.10 - 6.6.0-060600-generic - GCC 13.2.0
3 Systems - 79 Benchmark Results
Intel Core i7-1185G7 - Dell XPS 13 9310 0DXP1F - Intel Tiger Lake-LP
Ubuntu 23.10 - 6.7.0-060700rc5-generic - GNOME Shell 45.1
4 Systems - 49 Benchmark Results
AMD Ryzen Threadripper 7980X 64-Cores - ASUS Pro WS TRX50-SAGE WIFI - AMD Device 14a4
Pop 22.04 - 6.6.6-76060606-generic - GNOME Shell 42.5
4 Systems - 40 Benchmark Results
AMD Ryzen Threadripper 3990X 64-Core - Gigabyte TRX40 AORUS PRO WIFI - AMD Starship
Pop 22.04 - 6.6.6-76060606-generic - GNOME Shell 42.5
2 Systems - 46 Benchmark Results
AMD Ryzen Threadripper PRO 7995WX 96-Cores - HP Z6 G5 A Workstation 8B24 - AMD Device 14a4
CachyOS rolling - 6.7.2-1-cachyos - GNOME Shell 45.3
3 Systems - 63 Benchmark Results
Intel Core i7-1165G7 - Dell 0GG9PT - Intel Tiger Lake-LP
Ubuntu 23.10 - 6.5.0-14-generic - GNOME Shell 45.0