xeon febby

2 x INTEL XEON PLATINUM 8592+ testing with a Quanta Cloud QuantaGrid D54Q-2U S6Q-MB-MPS (3B05.TEL4P1 BIOS) and ASPEED on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2402196-NE-XEONFEBBY11
Tests in this result file span the following categories:

C/C++ Compiler Tests 2 Tests
CPU Massive 3 Tests
Creator Workloads 2 Tests
HPC - High Performance Computing 7 Tests
Large Language Models 2 Tests
Machine Learning 5 Tests
Molecular Dynamics 2 Tests
Multi-Core 4 Tests
Python Tests 3 Tests
Scientific Computing 2 Tests
Server CPU Tests 2 Tests

Run Management

Result Identifier    Date Run       Test Duration
a                    February 19    2 Hours, 29 Minutes
b                    February 20    1 Hour, 59 Minutes
Average                             2 Hours, 14 Minutes


xeon febby 2 x INTEL XEON PLATINUM 8592+ testing with a Quanta Cloud QuantaGrid D54Q-2U S6Q-MB-MPS (3B05.TEL4P1 BIOS) and ASPEED on Ubuntu 23.10 via the Phoronix Test Suite.

,,"a","b"
Processor,,2 x INTEL XEON PLATINUM 8592+ @ 3.90GHz (128 Cores / 256 Threads),2 x INTEL XEON PLATINUM 8592+ @ 3.90GHz (128 Cores / 256 Threads)
Motherboard,,Quanta Cloud QuantaGrid D54Q-2U S6Q-MB-MPS (3B05.TEL4P1 BIOS),Quanta Cloud QuantaGrid D54Q-2U S6Q-MB-MPS (3B05.TEL4P1 BIOS)
Chipset,,Intel Device 1bce,Intel Device 1bce
Memory,,1008GB,1008GB
Disk,,3201GB Micron_7450_MTFDKCB3T2TFS,3201GB Micron_7450_MTFDKCB3T2TFS
Graphics,,ASPEED,ASPEED
Network,,2 x Intel X710 for 10GBASE-T,2 x Intel X710 for 10GBASE-T
OS,,Ubuntu 23.10,Ubuntu 23.10
Kernel,,6.6.0-060600-generic (x86_64),6.6.0-060600-generic (x86_64)
Compiler,,GCC 13.2.0,GCC 13.2.0
File-System,,ext4,ext4
Screen Resolution,,1024x768,1024x768

,,"a","b"
"PyTorch - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec)",HIB,51.66,51.00
"PyTorch - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec)",HIB,19.08,19.21
"PyTorch - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec)",HIB,0.42,
"Quicksilver - Input: CTS2 (Figure Of Merit)",HIB,9556000,9354000
"Quicksilver - Input: CORAL2 P1 (Figure Of Merit)",HIB,8625000,8820000
"Quicksilver - Input: CORAL2 P2 (Figure Of Merit)",HIB,8000000,8418000
"dav1d - Video Input: Chimera 1080p (FPS)",HIB,204.38,202.83
"dav1d - Video Input: Summer Nature 4K (FPS)",HIB,68.43,68.36
"dav1d - Video Input: Summer Nature 1080p (FPS)",HIB,87.78,86.79
"dav1d - Video Input: Chimera 1080p 10-bit (FPS)",HIB,235.42,239.01
"Intel Open Image Denoise - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec)",HIB,5.10,5.16
"Intel Open Image Denoise - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec)",HIB,5.14,5.00
"Intel Open Image Denoise - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec)",HIB,2.46,2.47
"TensorFlow - Device: CPU - Batch Size: 1 - Model: VGG-16 (images/sec)",HIB,12.25,11.89
"TensorFlow - Device: CPU - Batch Size: 1 - Model: AlexNet (images/sec)",HIB,39.98,37.7
"TensorFlow - Device: CPU - Batch Size: 1 - Model: GoogLeNet (images/sec)",HIB,18.23,17.62
"TensorFlow - Device: CPU - Batch Size: 1 - Model: ResNet-50 (images/sec)",HIB,7.28,7.01
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,205.563,208.993
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,15.3826,17.1686
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Standard (Inferences/sec)",HIB,465.647,360.779
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,15.9646,22.6587
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,786.262,789.312
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,9.90267,9.4317
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,35.9174,38.5606
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,170.023,170.235
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,256.996,247.665
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences/sec)",HIB,38.3302,37.8832
"GROMACS - Implementation: MPI CPU - Input: water_GMX50_bare (Ns/Day)",HIB,17.918,18.398
"NAMD - Input: ATPase with 327,506 Atoms (ns/day)",HIB,5.98308,4.02029
"NAMD - Input: STMV with 1,066,628 Atoms (ns/day)",HIB,1.74622,1.81638
"Speedb - Test: Random Read (Op/s)",HIB,613257745,490533153
"Speedb - Test: Update Random (Op/s)",HIB,157060,156458
"Speedb - Test: Read While Writing (Op/s)",HIB,16943739,18187110
"Speedb - Test: Read Random Write Random (Op/s)",HIB,1520436,1514331
"Llama.cpp - Model: llama-2-7b.Q4_0.gguf (Tokens/sec)",HIB,0.69,0.58
"Llama.cpp - Model: llama-2-13b.Q4_0.gguf (Tokens/sec)",HIB,0.55,0.45
"Llama.cpp - Model: llama-2-70b-chat.Q5_0.gguf (Tokens/sec)",HIB,0.43,0.34
"Llamafile - Test: llava-v1.5-7b-q4 - Acceleration: CPU (Tokens/sec)",HIB,0.53,0.53
"Llamafile - Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU (Tokens/sec)",HIB,8.57,8.81
"Llamafile - Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU (Tokens/sec)",HIB,3.74,3.86
"ONNX Runtime - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,4.86151,4.78113
"ONNX Runtime - Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,65.0065,58.2438
"ONNX Runtime - Model: T5 Encoder - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,2.14695,2.77091
"ONNX Runtime - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,62.6361,44.1311
"ONNX Runtime - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,1.27117,1.26631
"ONNX Runtime - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,100.98,106.023
"ONNX Runtime - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,27.8405,25.9319
"ONNX Runtime - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,5.88091,5.87369
"ONNX Runtime - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,3.89053,4.03712
"ONNX Runtime - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms))",LIB,26.0875,26.395
"Y-Cruncher - Pi Digits To Calculate: 1B (sec)",LIB,5.107,5.249
"Y-Cruncher - Pi Digits To Calculate: 500M (sec)",LIB,2.723,2.761