xeon febby

2 x Intel Xeon Platinum 8592+ benchmarked on a Quanta Cloud QuantaGrid D54Q-2U with an S6Q-MB-MPS motherboard (3B05.TEL4P1 BIOS) and ASPEED graphics, running Ubuntu 23.10, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2402196-NE-XEONFEBBY11
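
A minimal sketch of that workflow on Ubuntu, assuming the Phoronix Test Suite is installed from the distribution archive (the upstream .deb or a Git checkout works equally well):

  # Install the Phoronix Test Suite (Ubuntu package assumed)
  sudo apt install phoronix-test-suite
  # Download result file 2402196-NE-XEONFEBBY11, install the same test
  # profiles locally, run them, and compare against the published numbers
  phoronix-test-suite benchmark 2402196-NE-XEONFEBBY11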

Tests in this result file span the following categories:

C/C++ Compiler Tests: 2 tests
CPU Massive: 3 tests
Creator Workloads: 2 tests
HPC - High Performance Computing: 7 tests
Large Language Models: 2 tests
Machine Learning: 5 tests
Molecular Dynamics: 2 tests
Multi-Core: 4 tests
Python Tests: 3 tests
Scientific Computing: 2 tests
Server CPU Tests: 2 tests

Result Identifier    Date Run       Test Duration
a                    February 19    2 Hours, 29 Minutes
b                    February 20    1 Hour, 59 Minutes
Average                             2 Hours, 14 Minutes


xeon febby Suite 1.0.0 (System): test suite extracted from xeon febby. It comprises the following test profiles and arguments:

pts/pytorch-1.0.1 cpu 1 efficientnet_v2_l
  Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
pts/llama-cpp-1.0.0 -m ../llama-2-70b-chat.Q5_0.gguf
  Model: llama-2-70b-chat.Q5_0.gguf
pts/llama-cpp-1.0.0 -m ../llama-2-13b.Q4_0.gguf
  Model: llama-2-13b.Q4_0.gguf
pts/quicksilver-1.0.0 ../Examples/CORAL2_Benchmark/Problem2/Coral2_P2.inp
  Input: CORAL2 P2
pts/quicksilver-1.0.0 ../Examples/CTS2_Benchmark/CTS2.inp
  Input: CTS2
pts/llama-cpp-1.0.0 -m ../llama-2-7b.Q4_0.gguf
  Model: llama-2-7b.Q4_0.gguf
pts/llamafile-1.0.0 run-llava --gpu DISABLE
  Test: llava-v1.5-7b-q4 - Acceleration: CPU
pts/quicksilver-1.0.0 ../Examples/CORAL2_Benchmark/Problem1/Coral2_P1.inp
  Input: CORAL2 P1
pts/llamafile-1.0.0 run-mistral --gpu DISABLE
  Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU
pts/namd-1.3.1 ../stmv/stmv.namd
  Input: STMV with 1,066,628 Atoms
pts/llamafile-1.0.0 run-wizardcoder --gpu DISABLE
  Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU
pts/namd-1.3.1 ../f1atpase/f1atpase.namd
  Input: ATPase with 327,506 Atoms
pts/onnx-1.17.0 FasterRCNN-12-int8/FasterRCNN-12-int8.onnx -e cpu
  Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
pts/pytorch-1.0.1 cpu 1 resnet152
  Device: CPU - Batch Size: 1 - Model: ResNet-152
pts/onnx-1.17.0 GPT2/model.onnx -e cpu
  Model: GPT-2 - Device: CPU - Executor: Standard
pts/speedb-1.0.1 --benchmarks="updaterandom"
  Test: Update Random
pts/onnx-1.17.0 bertsquad-12/bertsquad-12.onnx -e cpu
  Model: bertsquad-12 - Device: CPU - Executor: Standard
pts/onnx-1.17.0 yolov4/yolov4.onnx -e cpu
  Model: yolov4 - Device: CPU - Executor: Standard
pts/onnx-1.17.0 t5-encoder/t5-encoder.onnx -e cpu
  Model: T5 Encoder - Device: CPU - Executor: Standard
pts/onnx-1.17.0 resnet100/resnet100.onnx -e cpu
  Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
pts/onnx-1.17.0 fcn-resnet101-11/model.onnx -e cpu
  Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
pts/speedb-1.0.1 --benchmarks="readwhilewriting"
  Test: Read While Writing
pts/speedb-1.0.1 --benchmarks="readrandomwriterandom"
  Test: Read Random Write Random
pts/speedb-1.0.1 --benchmarks="readrandom"
  Test: Random Read
pts/onnx-1.17.0 caffenet-12-int8/caffenet-12-int8.onnx -e cpu
  Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
pts/onnx-1.17.0 resnet50-v1-12-int8/resnet50-v1-12-int8.onnx -e cpu
  Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
pts/onnx-1.17.0 super_resolution/super_resolution.onnx -e cpu
  Model: super-resolution-10 - Device: CPU - Executor: Standard
pts/dav1d-1.15.1 -i summer_nature_4k.ivf
  Video Input: Summer Nature 4K
pts/dav1d-1.15.1 -i chimera_8b_1080p.ivf
  Video Input: Chimera 1080p
pts/dav1d-1.15.1 -i summer_nature_1080p.ivf
  Video Input: Summer Nature 1080p
pts/dav1d-1.15.1 -i chimera_10b_1080p.ivf
  Video Input: Chimera 1080p 10-bit
pts/gromacs-1.9.0 mpi-build water-cut1.0_GMX50_bare/1536
  Implementation: MPI CPU - Input: water_GMX50_bare
pts/pytorch-1.0.1 cpu 1 resnet50
  Device: CPU - Batch Size: 1 - Model: ResNet-50
pts/tensorflow-2.1.1 --device cpu --batch_size=1 --model=resnet50
  Device: CPU - Batch Size: 1 - Model: ResNet-50
pts/oidn-2.2.0 -r RTLightmap.hdr.4096x4096 -d cpu
  Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only
pts/tensorflow-2.1.1 --device cpu --batch_size=1 --model=vgg16
  Device: CPU - Batch Size: 1 - Model: VGG-16
pts/tensorflow-2.1.1 --device cpu --batch_size=1 --model=googlenet
  Device: CPU - Batch Size: 1 - Model: GoogLeNet
pts/y-cruncher-1.4.0 1b
  Pi Digits To Calculate: 1B
pts/oidn-2.2.0 -r RT.ldr_alb_nrm.3840x2160 -d cpu
  Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only
pts/oidn-2.2.0 -r RT.hdr_alb_nrm.3840x2160 -d cpu
  Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only
pts/y-cruncher-1.4.0 500m
  Pi Digits To Calculate: 500M
pts/tensorflow-2.1.1 --device cpu --batch_size=1 --model=alexnet
  Device: CPU - Batch Size: 1 - Model: AlexNet
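
Any one of these profiles can also be benchmarked on its own rather than as part of the full comparison; a minimal sketch, using the dav1d profile purely as an example:

  # Install and run a single test profile from the suite; the Phoronix Test
  # Suite prompts interactively for the test options (e.g. which video input)
  # when no preset is given.
  phoronix-test-suite benchmark pts/dav1d-1.15.1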