xeon okt
Intel Xeon E5-2609 v4 testing with an MSI X99A RAIDER (MS-7885) v5.0 (P.50 BIOS) and llvmpipe on Ubuntu 20.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2210269-NE-XEONOKT7916&grs&sro.
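A result like this can be reproduced locally with the Phoronix Test Suite's `benchmark` command, which downloads a public OpenBenchmarking.org result file and runs the same test selection for side-by-side comparison. A minimal sketch, assuming `phoronix-test-suite` is installed and using the result ID from the URL above (interactive prompts vary by PTS version):

```shell
# Download the referenced result file and run its tests locally,
# merging this machine's numbers into the downloaded comparison.
phoronix-test-suite benchmark 2210269-NE-XEONOKT7916
```

The trailing `&grs&sro` in the exported URL are view parameters (greatest-spread sorting), not part of the result ID itself.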
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
GraphicsMagick
Operation: HWB Color Space
GraphicsMagick
Operation: Rotate
AOM AV1
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K
ClickHouse
100M Rows Web Analytics Dataset, First Run / Cold Cache
NCNN
Target: CPU - Model: resnet50
JPEG XL Decoding libjxl
CPU Threads: All
libavif avifenc
Encoder Speed: 0
oneDNN
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
oneDNN
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
GraphicsMagick
Operation: Noise-Gaussian
NCNN
Target: CPU - Model: yolov4-tiny
BRL-CAD
VGR Performance Metric
AOM AV1
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p
JPEG XL Decoding libjxl
CPU Threads: 1
Kvazaar
Video Input: Bosphorus 4K - Video Preset: Very Fast
ClickHouse
100M Rows Web Analytics Dataset, Third Run
GraphicsMagick
Operation: Resizing
AOM AV1
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p
Kvazaar
Video Input: Bosphorus 1080p - Video Preset: Ultra Fast
Chia Blockchain VDF
Test: Square Assembly Optimized
JPEG XL libjxl
Input: PNG - Quality: 90
libavif avifenc
Encoder Speed: 2
AOM AV1
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p
Node.js V8 Web Tooling Benchmark
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
AOM AV1
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p
NCNN
Target: CPU - Model: resnet18
7-Zip Compression
Test: Decompression Rating
libavif avifenc
Encoder Speed: 6
JPEG XL libjxl
Input: JPEG - Quality: 90
AOM AV1
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K
oneDNN
Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
AOM AV1
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K
NCNN
Target: CPU - Model: shufflenet-v2
oneDNN
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
AOM AV1
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K
NCNN
Target: CPU - Model: regnety_400m
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
LAMMPS Molecular Dynamics Simulator
Model: Rhodopsin Protein
libavif avifenc
Encoder Speed: 6, Lossless
Mobile Neural Network
Model: resnet-v2-50
oneDNN
Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
NCNN
Target: CPU - Model: mobilenet
libavif avifenc
Encoder Speed: 10, Lossless
AOM AV1
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K
FLAC Audio Encoding
WAV To FLAC
Mobile Neural Network
Model: mobilenetV3
AOM AV1
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
NCNN
Target: CPU - Model: squeezenet_ssd
ClickHouse
100M Rows Web Analytics Dataset, Second Run
NCNN
Target: CPU - Model: FastestDet
Neural Magic DeepSparse
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
NCNN
Target: CPU - Model: vgg16
NCNN
Target: CPU-v2-v2 - Model: mobilenet-v2
oneDNN
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
Neural Magic DeepSparse
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
Mobile Neural Network
Model: MobileNetV2_224
7-Zip Compression
Test: Compression Rating
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Parallel BZIP2 Compression
FreeBSD-13.0-RELEASE-amd64-memstick.img Compression
Timed CPython Compilation
Build Configuration: Default
AOM AV1
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p
Kvazaar
Video Input: Bosphorus 4K - Video Preset: Ultra Fast
Mobile Neural Network
Model: inception-v3
NCNN
Target: CPU - Model: alexnet
Glibc Benchmarks
Benchmark: exp
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
Chia Blockchain VDF
Test: Square Plain C++
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
Kvazaar
Video Input: Bosphorus 1080p - Video Preset: Very Fast
Timed MPlayer Compilation
Time To Compile
Mobile Neural Network
Model: squeezenetv1.1
Neural Magic DeepSparse
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream
Glibc Benchmarks
Benchmark: asinh
NCNN
Target: CPU-v3-v3 - Model: mobilenet-v3
Mobile Neural Network
Model: nasnet
NCNN
Target: CPU - Model: mnasnet
Timed CPython Compilation
Build Configuration: Released Build, PGO + LTO Optimized
oneDNN
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
oneDNN
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
NCNN
Target: CPU - Model: vision_transformer
Glibc Benchmarks
Benchmark: ffsll
ASTC Encoder
Preset: Thorough
oneDNN
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
Timed PHP Compilation
Time To Compile
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
Timed Linux Kernel Compilation
Build: allmodconfig
Timed Erlang/OTP Compilation
Time To Compile
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
Blender
Blend File: BMW27 - Compute: CPU-Only
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
oneDNN
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
ASTC Encoder
Preset: Medium
Timed Linux Kernel Compilation
Build: defconfig
Mobile Neural Network
Model: SqueezeNetV1.0
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
oneDNN
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
NCNN
Target: CPU - Model: efficientnet-b0
oneDNN
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
ASTC Encoder
Preset: Exhaustive
Glibc Benchmarks
Benchmark: atanh
ASTC Encoder
Preset: Fast
Timed Node.js Compilation
Time To Compile
Mobile Neural Network
Model: mobilenet-v1-1.0
Glibc Benchmarks
Benchmark: log2
Glibc Benchmarks
Benchmark: ffs
Glibc Benchmarks
Benchmark: modf
Glibc Benchmarks
Benchmark: cos
Timed Gem5 Compilation
Time To Compile
oneDNN
Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
Glibc Benchmarks
Benchmark: tanh
Glibc Benchmarks
Benchmark: pthread_once
Glibc Benchmarks
Benchmark: sincos
Glibc Benchmarks
Benchmark: sin
Glibc Benchmarks
Benchmark: sqrt
Glibc Benchmarks
Benchmark: sinh
Natron
Input: Spaceship
NCNN
Target: CPU - Model: googlenet
NCNN
Target: CPU - Model: blazeface
GROMACS
Implementation: MPI CPU - Input: water_GMX50_bare
AOM AV1
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K
GraphicsMagick
Operation: Enhanced
GraphicsMagick
Operation: Sharpen
GraphicsMagick
Operation: Swirl
JPEG XL libjxl
Input: JPEG - Quality: 100
JPEG XL libjxl
Input: PNG - Quality: 100
JPEG XL libjxl
Input: JPEG - Quality: 80
JPEG XL libjxl
Input: PNG - Quality: 80
Phoronix Test Suite v10.8.4