2888

Intel Xeon E-2488 testing with a Supermicro Super Server X13SCL-F v0123456789 (1.1 BIOS) and llvmpipe graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403223-NE-28881544411

Test categories represented in this result file:

C/C++ Compiler Tests: 4 Tests
CPU Massive: 10 Tests
Creator Workloads: 8 Tests
HPC - High Performance Computing: 5 Tests
Machine Learning: 3 Tests
Molecular Dynamics: 2 Tests
Multi-Core: 13 Tests
NVIDIA GPU Compute: 3 Tests
Intel oneAPI: 4 Tests
Python Tests: 2 Tests
Raytracing: 2 Tests
Renderers: 4 Tests
Scientific Computing: 2 Tests
Server: 2 Tests
Server CPU Tests: 7 Tests
Common Workstation Benchmarks: 2 Tests

Run Management

Result Identifier | Date Run | Test Duration
a | March 21 | 4 Hours, 4 Minutes
b | March 21 | 4 Hours, 3 Minutes
c | March 22 | 4 Hours, 2 Minutes
d | March 22 | 4 Hours, 1 Minute

Average Test Duration: 4 Hours, 2 Minutes



2888 Suite 1.0.0 - System Test suite extracted from 2888. The suite comprises the following test profiles:

Test Profile | Arguments | Description
pts/build-linux-kernel-1.16.0 | allmodconfig | Build: allmodconfig
pts/blender-4.0.0 | -b ../barbershop_interior_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU | Blend File: Barbershop - Compute: CPU-Only
pts/brl-cad-1.6.0 | | VGR Performance Metric
pts/blender-4.0.0 | -b ../pavillon_barcelone_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU | Blend File: Pabellon Barcelona - Compute: CPU-Only
pts/quicksilver-1.0.0 | ../Examples/CTS2_Benchmark/CTS2.inp | Input: CTS2
pts/ospray-studio-1.3.0 | --cameras 3 3 --resolution 3840x2160 --spp 32 --renderer pathtracer | Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
pts/blender-4.0.0 | -b ../classroom_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU | Blend File: Classroom - Compute: CPU-Only
pts/ospray-studio-1.3.0 | --cameras 2 2 --resolution 3840x2160 --spp 32 --renderer pathtracer | Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.3.0 | --cameras 1 1 --resolution 3840x2160 --spp 32 --renderer pathtracer | Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
pts/quicksilver-1.0.0 | ../Examples/CORAL2_Benchmark/Problem2/Coral2_P2.inp | Input: CORAL2 P2
pts/primesieve-1.10.0 | 1e13 | Length: 1e13
pts/namd-1.3.1 | ../stmv/stmv.namd | Input: STMV with 1,066,628 Atoms
pts/ospray-studio-1.3.0 | --cameras 3 3 --resolution 3840x2160 --spp 16 --renderer pathtracer | Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-3.1.0 | --benchmark_filter=particle_volume/scivis/real_time | Benchmark: particle_volume/scivis/real_time
pts/blender-4.0.0 | -b ../fishy_cat_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU | Blend File: Fishy Cat - Compute: CPU-Only
pts/ospray-studio-1.3.0 | --cameras 2 2 --resolution 3840x2160 --spp 16 --renderer pathtracer | Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.3.0 | --cameras 1 1 --resolution 3840x2160 --spp 16 --renderer pathtracer | Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-3.1.0 | --benchmark_filter=particle_volume/pathtracer/real_time | Benchmark: particle_volume/pathtracer/real_time
pts/gromacs-1.9.0 | mpi-build water-cut1.0_GMX50_bare/1536 | Implementation: MPI CPU - Input: water_GMX50_bare
pts/blender-4.0.0 | -b ../bmw27_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU | Blend File: BMW27 - Compute: CPU-Only
pts/ospray-3.1.0 | --benchmark_filter=particle_volume/ao/real_time | Benchmark: particle_volume/ao/real_time
pts/stockfish-1.5.0 | | Chess Benchmark
pts/ospray-studio-1.3.0 | --cameras 3 3 --resolution 1920x1080 --spp 32 --renderer pathtracer | Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
pts/build-linux-kernel-1.16.0 | defconfig | Build: defconfig
pts/ospray-studio-1.3.0 | --cameras 2 2 --resolution 1920x1080 --spp 32 --renderer pathtracer | Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.3.0 | --cameras 1 1 --resolution 1920x1080 --spp 32 --renderer pathtracer | Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
pts/quicksilver-1.0.0 | ../Examples/CORAL2_Benchmark/Problem1/Coral2_P1.inp | Input: CORAL2 P1
pts/onednn-3.4.0 | --rnn --batch=inputs/rnn/perf_rnn_training --engine=cpu | Harness: Recurrent Neural Network Training - Engine: CPU
pts/namd-1.3.1 | ../f1atpase/f1atpase.namd | Input: ATPase with 327,506 Atoms
pts/ospray-studio-1.3.0 | --cameras 3 3 --resolution 1920x1080 --spp 1 --renderer pathtracer | Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.3.0 | --cameras 2 2 --resolution 1920x1080 --spp 1 --renderer pathtracer | Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
pts/onednn-3.4.0 | --rnn --batch=inputs/rnn/perf_rnn_inference_lb --engine=cpu | Harness: Recurrent Neural Network Inference - Engine: CPU
pts/ospray-studio-1.3.0 | --cameras 1 1 --resolution 1920x1080 --spp 1 --renderer pathtracer | Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-3.1.0 | --benchmark_filter=gravity_spheres_volume/dim_512/pathtracer/real_time | Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time
pts/v-ray-1.5.0 | -m vray | Mode: CPU
pts/openvino-1.5.0 | -m models/intel/face-detection-0206/FP16/face-detection-0206.xml -d CPU | Model: Face Detection FP16 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/face-detection-0206/FP16-INT8/face-detection-0206.xml -d CPU | Model: Face Detection FP16-INT8 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/machine-translation-nar-en-de-0002/FP16/machine-translation-nar-en-de-0002.xml -d CPU | Model: Machine Translation EN To DE FP16 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/person-detection-0303/FP32/person-detection-0303.xml -d CPU | Model: Person Detection FP32 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/person-detection-0303/FP16/person-detection-0303.xml -d CPU | Model: Person Detection FP16 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/road-segmentation-adas-0001/FP16-INT8/road-segmentation-adas-0001.xml -d CPU | Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/noise-suppression-poconetlike-0001/FP16/noise-suppression-poconetlike-0001.xml -d CPU | Model: Noise Suppression Poconet-Like FP16 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/person-vehicle-bike-detection-2004/FP16/person-vehicle-bike-detection-2004.xml -d CPU | Model: Person Vehicle Bike Detection FP16 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/person-reidentification-retail-0277/FP16/person-reidentification-retail-0277.xml -d CPU | Model: Person Re-Identification Retail FP16 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/handwritten-english-recognition-0001/FP16-INT8/handwritten-english-recognition-0001.xml -d CPU | Model: Handwritten English Recognition FP16-INT8 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/road-segmentation-adas-0001/FP16/road-segmentation-adas-0001.xml -d CPU | Model: Road Segmentation ADAS FP16 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/handwritten-english-recognition-0001/FP16/handwritten-english-recognition-0001.xml -d CPU | Model: Handwritten English Recognition FP16 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/vehicle-detection-0202/FP16-INT8/vehicle-detection-0202.xml -d CPU | Model: Vehicle Detection FP16-INT8 - Device: CPU
pts/speedb-1.0.1 | --benchmarks="updaterandom" | Test: Update Random
pts/openvino-1.5.0 | -m models/intel/face-detection-retail-0005/FP16-INT8/face-detection-retail-0005.xml -d CPU | Model: Face Detection Retail FP16-INT8 - Device: CPU
pts/speedb-1.0.1 | --benchmarks="readwhilewriting" | Test: Read While Writing
pts/openvino-1.5.0 | -m models/intel/weld-porosity-detection-0001/FP16/weld-porosity-detection-0001.xml -d CPU | Model: Weld Porosity Detection FP16 - Device: CPU
pts/speedb-1.0.1 | --benchmarks="fillrandom" | Test: Random Fill
pts/openvino-1.5.0 | -m models/intel/vehicle-detection-0202/FP16/vehicle-detection-0202.xml -d CPU | Model: Vehicle Detection FP16 - Device: CPU
pts/speedb-1.0.1 | --benchmarks="fillsync" | Test: Random Fill Sync
pts/openvino-1.5.0 | -m models/intel/face-detection-retail-0005/FP16/face-detection-retail-0005.xml -d CPU | Model: Face Detection Retail FP16 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/weld-porosity-detection-0001/FP16-INT8/weld-porosity-detection-0001.xml -d CPU | Model: Weld Porosity Detection FP16-INT8 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/age-gender-recognition-retail-0013/FP16-INT8/age-gender-recognition-retail-0013.xml -d CPU | Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
pts/openvino-1.5.0 | -m models/intel/age-gender-recognition-retail-0013/FP16/age-gender-recognition-retail-0013.xml -d CPU | Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
pts/speedb-1.0.1 | --benchmarks="readrandomwriterandom" | Test: Read Random Write Random
pts/rocksdb-1.6.0 | --benchmarks="fillrandom" | Test: Random Fill
pts/speedb-1.0.1 | --benchmarks="readrandom" | Test: Random Read
pts/rocksdb-1.6.0 | --benchmarks="overwrite" | Test: Overwrite
pts/rocksdb-1.6.0 | --benchmarks="updaterandom" | Test: Update Random
pts/rocksdb-1.6.0 | --benchmarks="readrandomwriterandom" | Test: Read Random Write Random
pts/rocksdb-1.6.0 | --benchmarks="fillsync" | Test: Random Fill Sync
pts/rocksdb-1.6.0 | --benchmarks="readwhilewriting" | Test: Read While Writing
pts/rocksdb-1.6.0 | --benchmarks="readrandom" | Test: Random Read
pts/ospray-studio-1.3.0 | --cameras 2 2 --resolution 3840x2160 --spp 1 --renderer pathtracer | Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-3.1.0 | --benchmark_filter=gravity_spheres_volume/dim_512/scivis/real_time | Benchmark: gravity_spheres_volume/dim_512/scivis/real_time
pts/ospray-studio-1.3.0 | --cameras 1 1 --resolution 3840x2160 --spp 1 --renderer pathtracer | Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.3.0 | --cameras 3 3 --resolution 1920x1080 --spp 16 --renderer pathtracer | Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.3.0 | --cameras 3 3 --resolution 3840x2160 --spp 1 --renderer pathtracer | Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
pts/deepsparse-1.7.0 | zoo:llama2-7b-llama2_chat_llama2_pretrain-base_quantized --scenario async | Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream
pts/ospray-studio-1.3.0 | --cameras 2 2 --resolution 1920x1080 --spp 16 --renderer pathtracer | Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.3.0 | --cameras 1 1 --resolution 1920x1080 --spp 16 --renderer pathtracer | Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
pts/deepsparse-1.7.0 | zoo:llama2-7b-llama2_chat_llama2_pretrain-base_quantized --scenario sync | Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream
pts/ospray-3.1.0 | --benchmark_filter=gravity_spheres_volume/dim_512/ao/real_time | Benchmark: gravity_spheres_volume/dim_512/ao/real_time
pts/deepsparse-1.7.0 | zoo:nlp/sentiment_analysis/oberta-base/pytorch/huggingface/sst2/pruned90_quant-none --input_shapes='[1,128]' --scenario async | Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 | zoo:nlp/sentiment_analysis/oberta-base/pytorch/huggingface/sst2/pruned90_quant-none --input_shapes='[1,128]' --scenario sync | Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 | zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/pruned97_quant-none --input_shapes='[1,128]' --scenario async | Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 | zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/pruned97_quant-none --input_shapes='[1,128]' --scenario sync | Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 | zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario async | Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 | zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario async | Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 | zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario async | Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 | zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario sync | Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 | zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario sync | Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 | zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario sync | Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 | zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario async | Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 | zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario sync | Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
pts/svt-av1-2.12.0 | --preset 4 -n 160 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 | Encoder Mode: Preset 4 - Input: Bosphorus 4K
pts/deepsparse-1.7.0 | zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_uniform_quant-none --scenario async | Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 | zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned85-none --scenario async | Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 | zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async | Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 | zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async | Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.7.0 | zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned85-none --scenario sync | Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 | zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_uniform_quant-none --scenario sync | Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 | zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync | Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
pts/deepsparse-1.7.0 | zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync | Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
pts/compress-7zip-1.10.0 | | Test: Decompression Rating
pts/compress-7zip-1.10.0 | | Test: Compression Rating
pts/onednn-3.4.0 | --deconv --batch=inputs/deconv/shapes_1d --engine=cpu | Harness: Deconvolution Batch shapes_1d - Engine: CPU
pts/primesieve-1.10.0 | 1e12 | Length: 1e12
pts/svt-av1-2.12.0 | --preset 8 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 | Encoder Mode: Preset 8 - Input: Bosphorus 4K
pts/speedb-1.0.1 | --benchmarks="fillseq" | Test: Sequential Fill
pts/onednn-3.4.0 | --ip --batch=inputs/ip/shapes_1d --engine=cpu | Harness: IP Shapes 1D - Engine: CPU
pts/svt-av1-2.12.0 | --preset 4 -n 160 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 | Encoder Mode: Preset 4 - Input: Bosphorus 1080p
pts/onednn-3.4.0 | --ip --batch=inputs/ip/shapes_3d --engine=cpu | Harness: IP Shapes 3D - Engine: CPU
pts/rocksdb-1.6.0 | --benchmarks="fillseq" | Test: Sequential Fill
pts/onednn-3.4.0 | --conv --batch=inputs/conv/shapes_auto --engine=cpu | Harness: Convolution Batch Shapes Auto - Engine: CPU
pts/svt-av1-2.12.0 | --preset 13 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 | Encoder Mode: Preset 13 - Input: Bosphorus 4K
pts/svt-av1-2.12.0 | --preset 12 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 | Encoder Mode: Preset 12 - Input: Bosphorus 4K
pts/svt-av1-2.12.0 | --preset 8 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 | Encoder Mode: Preset 8 - Input: Bosphorus 1080p
pts/onednn-3.4.0 | --deconv --batch=inputs/deconv/shapes_3d --engine=cpu | Harness: Deconvolution Batch shapes_3d - Engine: CPU
pts/svt-av1-2.12.0 | --preset 12 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 | Encoder Mode: Preset 12 - Input: Bosphorus 1080p
pts/svt-av1-2.12.0 | --preset 13 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 | Encoder Mode: Preset 13 - Input: Bosphorus 1080p