New Xeon
Intel Xeon Gold 6421N testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2307311-NE-NEWXEON6232&grw&sro.
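As a minimal sketch for reproducing this comparison (assuming the result ID above remains publicly available on OpenBenchmarking.org), the same test selection can typically be re-run locally with the Phoronix Test Suite and merged against the uploaded data:

    phoronix-test-suite benchmark 2307311-NE-NEWXEON6232

The benchmark sub-command should fetch the referenced result file, install the listed test profiles, and offer to save the new run alongside the original results for side-by-side comparison.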
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: double - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: double - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: double - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: float - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: float - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: double - X Y Z: 128
Laghos
Test: Triple Point Problem
libxsmm
M N K: 32
libxsmm
M N K: 64
libxsmm
M N K: 256
libxsmm
M N K: 128
Stress-NG
Test: Hash
Stress-NG
Test: MMAP
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: double - X Y Z: 256
Laghos
Test: Sedov Blast Wave, ube_922_hex.mesh
Palabos
Grid Size: 100
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: float - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: float - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: float - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: float - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: float - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: double - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: float - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: float - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: float - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: float - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: float - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: double - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: double - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: double - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: double - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: double - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: double - X Y Z: 512
Palabos
Grid Size: 400
Palabos
Grid Size: 500
Stress-NG
Test: NUMA
Stress-NG
Test: Pipe
Stress-NG
Test: Poll
Stress-NG
Test: Zlib
Stress-NG
Test: Futex
Stress-NG
Test: MEMFD
Stress-NG
Test: Mutex
Stress-NG
Test: Atomic
Stress-NG
Test: Crypto
Stress-NG
Test: Malloc
Stress-NG
Test: Cloning
Stress-NG
Test: Forking
Stress-NG
Test: Pthread
Stress-NG
Test: AVL Tree
Stress-NG
Test: IO_uring
Stress-NG
Test: SENDFILE
Stress-NG
Test: CPU Cache
Stress-NG
Test: CPU Stress
Stress-NG
Test: Semaphores
Stress-NG
Test: Matrix Math
Stress-NG
Test: Vector Math
Stress-NG
Test: Function Call
Stress-NG
Test: x86_64 RdRand
Stress-NG
Test: Floating Point
Stress-NG
Test: Matrix 3D Math
Stress-NG
Test: Memory Copying
Stress-NG
Test: Vector Shuffle
Stress-NG
Test: Socket Activity
Stress-NG
Test: Wide Vector Math
Stress-NG
Test: Context Switching
Stress-NG
Test: Fused Multiply-Add
Stress-NG
Test: Vector Floating Point
Stress-NG
Test: Glibc C String Functions
Stress-NG
Test: Glibc Qsort Data Sorting
Stress-NG
Test: System V Message Passing
BRL-CAD
VGR Performance Metric
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
High Performance Conjugate Gradient
X Y Z: 104 104 104 - RT: 60
High Performance Conjugate Gradient
X Y Z: 144 144 144 - RT: 60
High Performance Conjugate Gradient
X Y Z: 160 160 160 - RT: 60
OpenFOAM
Input: drivaerFastback, Small Mesh Size - Mesh Time
OpenFOAM
Input: drivaerFastback, Small Mesh Size - Execution Time
OpenFOAM
Input: drivaerFastback, Medium Mesh Size - Mesh Time
OpenFOAM
Input: drivaerFastback, Medium Mesh Size - Execution Time
Timed GDB GNU Debugger Compilation
Time To Compile
Timed LLVM Compilation
Build System: Ninja
Timed LLVM Compilation
Build System: Unix Makefiles
Timed PHP Compilation
Time To Compile
Timed Linux Kernel Compilation
Build: defconfig
Timed Linux Kernel Compilation
Build: allmodconfig
Blender
Blend File: BMW27 - Compute: CPU-Only
Blender
Blend File: Classroom - Compute: CPU-Only
Blender
Blend File: Fishy Cat - Compute: CPU-Only
Blender
Blend File: Barbershop - Compute: CPU-Only
Blender
Blend File: Pabellon Barcelona - Compute: CPU-Only
VVenC
Video Input: Bosphorus 4K - Video Preset: Fast
VVenC
Video Input: Bosphorus 4K - Video Preset: Faster
VVenC
Video Input: Bosphorus 1080p - Video Preset: Fast
VVenC
Video Input: Bosphorus 1080p - Video Preset: Faster
Liquid-DSP
Threads: 16 - Buffer Length: 256 - Filter Length: 32
Liquid-DSP
Threads: 16 - Buffer Length: 256 - Filter Length: 57
Liquid-DSP
Threads: 32 - Buffer Length: 256 - Filter Length: 32
Liquid-DSP
Threads: 32 - Buffer Length: 256 - Filter Length: 57
Liquid-DSP
Threads: 64 - Buffer Length: 256 - Filter Length: 32
Liquid-DSP
Threads: 64 - Buffer Length: 256 - Filter Length: 57
Liquid-DSP
Threads: 16 - Buffer Length: 256 - Filter Length: 512
Liquid-DSP
Threads: 32 - Buffer Length: 256 - Filter Length: 512
Liquid-DSP
Threads: 64 - Buffer Length: 256 - Filter Length: 512
srsRAN Project
Test: Downlink Processor Benchmark
srsRAN Project
Test: PUSCH Processor Benchmark, Throughput Total
srsRAN Project
Test: PUSCH Processor Benchmark, Throughput Thread
Apache IoTDB
Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200
Apache IoTDB
Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500
Apache IoTDB
Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200
Apache IoTDB
Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500
Apache IoTDB
Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200
Apache IoTDB
Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500
Apache IoTDB
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200
Apache IoTDB
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500
Apache IoTDB
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200
Apache IoTDB
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500
Apache IoTDB
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200
Apache IoTDB
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10
Apache Cassandra
Test: Writes
Phoronix Test Suite v10.8.5