new amp

ARMv8 Neoverse-N1 testing with a GIGABYTE G242-P36-00 MP32-AR2-00 v01000100 (F31k SCP: 2.10.20220531 BIOS) and ASPEED on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2402068-NE-NEWAMP18865
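
For reference, a minimal terminal session for reproducing this comparison — a sketch that assumes the Phoronix Test Suite is already installed; the suite should offer to fetch and install any test profiles it is missing before running:

    # Download result file 2402068-NE-NEWAMP18865 from OpenBenchmarking.org,
    # run the same tests on the local system, and merge the new numbers in
    # alongside runs "a", "b", and "c" for a side-by-side comparison.
    phoronix-test-suite benchmark 2402068-NE-NEWAMP18865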
Tests in this result file by category:

HPC - High Performance Computing: 2 Tests
Machine Learning: 2 Tests

Run Management

Result Identifier    Date Run       Test Duration
a                    February 06    52 Minutes
b                    February 06    53 Minutes
c                    February 06    51 Minutes

System Details (identical for runs "a", "b", and "c"):

  Processor: ARMv8 Neoverse-N1 @ 3.00GHz (128 Cores)
  Motherboard: GIGABYTE G242-P36-00 MP32-AR2-00 v01000100 (F31k SCP: 2.10.20220531 BIOS)
  Chipset: Ampere Computing LLC Altra PCI Root Complex A
  Memory: 16 x 32GB DDR4-3200MT/s Samsung M393A4K40DB3-CWE
  Disk: 800GB Micron_7450_MTFDKBA800TFS
  Graphics: ASPEED
  Monitor: VGA HDMI
  Network: 2 x Intel I350
  OS: Ubuntu 23.10
  Kernel: 6.5.0-13-generic (aarch64)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

Results, one record per line as "Test",direction,"a","b","c" (HIB = higher is better, LIB = lower is better):

"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,154.293,154.899,154.703
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,178.736,176.523,177.439
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,6.09066,6.16283,6.20055
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,7.13777,7.11377,7.12556
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,250.556,251.252,251.457
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Standard (Inferences/sec)",HIB,258.637,258.855,253.597
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,10.9277,11.7545,11.1091
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,22.1724,22.0769,21.9998
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,576.593,566.725,576.227
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,701.371,698.343,700.482
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,1.12538,1.14758,1.13122
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,1.20414,1.24444,1.25872
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,9.81261,9.82943,9.80991
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,11.0025,10.7484,10.9854
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,131.488,132,130.705
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,170.121,167.736,170.632
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,75.7142,75.6672,75.6401
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,79.4944,79.5166,79.4851
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inferences/sec)",HIB,24.8599,24.8398,24.8612
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,25.3641,25.0685,25.4554
"LZ4 Compression - Compression Level: 1 - Compression Speed (MB/s)",HIB,519.83,520.41,521.15
"LZ4 Compression - Compression Level: 1 - Decompression Speed (MB/s)",HIB,2815.2,2827.7,2841.8
"LZ4 Compression - Compression Level: 3 - Compression Speed (MB/s)",HIB,80.97,80.95,80.99
"LZ4 Compression - Compression Level: 3 - Decompression Speed (MB/s)",HIB,2492.2,2493.1,2491.6
"LZ4 Compression - Compression Level: 9 - Compression Speed (MB/s)",HIB,27.59,27.68,27.64
"LZ4 Compression - Compression Level: 9 - Decompression Speed (MB/s)",HIB,2511.8,2511,2512
"Llamafile - Test: llava-v1.5-7b-q4 - Acceleration: CPU (Tokens/sec)",HIB,3.31,3.02,3.31
"Llamafile - Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU (Tokens/sec)",HIB,3.15,2.89,2.83
"Llamafile - Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU (Tokens/sec)",HIB,1.78,1.74,1.77
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,6.47235,6.44697,6.45507
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,5.58525,5.65511,5.62585
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,164.181,162.26,161.272
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,140.095,140.568,140.335
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,3.98962,3.97869,3.9752
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,3.86227,3.8592,3.93918
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,91.5067,85.0699,90.0129
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,45.0965,45.2911,45.4499
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,1.73248,1.76282,1.73356
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,1.42343,1.42955,1.42532
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,888.584,871.395,884
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,830.466,803.571,794.456
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,101.907,101.733,101.936
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,90.885,93.0332,91.0258
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,7.60357,7.57392,7.64929
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,5.87533,5.95823,5.85706
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,13.2062,13.2144,13.2191
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,12.576,12.5723,12.5774
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms))",LIB,40.2226,40.2552,40.2202
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,39.4206,39.8855,39.2789