Ubuntu 22.04 amd64 vs. amd64v3 Benchmarks on Xeon Ice Lake
Overview of Ubuntu 22.04 performance with the default and powersave CPU frequency governors, and with amd64 versus amd64v3 (x86-64-v3) builds, across server-oriented workloads.
HTML result view exported from: https://openbenchmarking.org/result/2209158-NE-2209131NE60&rdt&grs.
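The two variables in this comparison can be inspected on a running system. A minimal sketch, assuming a Linux kernel with cpufreq sysfs support and glibc 2.33 or newer (Ubuntu 22.04 ships glibc 2.35):

```shell
# Show the active CPU frequency governor for the first core
# (all cores normally share the same policy)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Ask the glibc dynamic loader which x86-64 microarchitecture levels
# it considers supported; a line like "x86-64-v3 (supported, searched)"
# means amd64v3 (AVX2/FMA/BMI2-era) library builds would be used
/lib64/ld-linux-x86-64.so.2 --help | grep -A4 'glibc-hwcaps'
```

Both checks are read-only; switching the governor (e.g. via `cpupower frequency-set -g powersave`) requires root.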
Zstd Compression
Compression Level: 3, Long Mode - Compression Speed
Zstd Compression
Compression Level: 8, Long Mode - Compression Speed
DaCapo Benchmark
Java Test: H2
Zstd Compression
Compression Level: 3, Long Mode - Compression Speed
Apache HTTP Server
Concurrent Requests: 1000
Renaissance
Test: Savina Reactors.IO
Zstd Compression
Compression Level: 8, Long Mode - Compression Speed
Renaissance
Test: In-Memory Database Shootout
Pennant
Test: sedovbig
Renaissance
Test: Genetic Algorithm Using Jenetics + Futures
PyPerformance
Benchmark: python_startup
Renaissance
Test: ALS Movie Lens
Renaissance
Test: Random Forest
Renaissance
Test: Apache Spark ALS
SVT-VP9
Tuning: Visual Quality Optimized - Input: Bosphorus 1080p
SVT-AV1
Encoder Mode: Preset 12 - Input: Bosphorus 4K
SVT-HEVC
Tuning: 7 - Input: Bosphorus 1080p
Apache HTTP Server
Concurrent Requests: 500
libavif avifenc
Encoder Speed: 10, Lossless
SVT-AV1
Encoder Mode: Preset 10 - Input: Bosphorus 4K
SVT-VP9
Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p
SVT-VP9
Tuning: VMAF Optimized - Input: Bosphorus 1080p
SVT-HEVC
Tuning: 10 - Input: Bosphorus 1080p
Renaissance
Test: Scala Dotty
Renaissance
Test: Finagle HTTP Requests
libavif avifenc
Encoder Speed: 6, Lossless
Renaissance
Test: Apache Spark Bayes
Renaissance
Test: Apache Spark PageRank
Zstd Compression
Compression Level: 3, Long Mode - Decompression Speed
Zstd Compression
Compression Level: 19, Long Mode - Compression Speed
Zstd Compression
Compression Level: 19, Long Mode - Compression Speed
Zstd Compression
Compression Level: 19 - Compression Speed
Tachyon
Total Time
libavif avifenc
Encoder Speed: 6
Zstd Compression
Compression Level: 8, Long Mode - Decompression Speed
Zstd Compression
Compression Level: 3 - Compression Speed
LuxCoreRender
Scene: Orange Juice - Acceleration: CPU
Timed Gem5 Compilation
Time To Compile
SVT-HEVC
Tuning: 1 - Input: Bosphorus 1080p
PyPerformance
Benchmark: nbody
Zstd Compression
Compression Level: 19, Long Mode - Decompression Speed
Zstd Compression
Compression Level: 3 - Compression Speed
PyPerformance
Benchmark: django_template
PyPerformance
Benchmark: pickle_pure_python
libavif avifenc
Encoder Speed: 2
ASTC Encoder
Preset: Medium
Numpy Benchmark
nginx
Concurrent Requests: 500
ASTC Encoder
Preset: Exhaustive
Zstd Compression
Compression Level: 8, Long Mode - Decompression Speed
PyPerformance
Benchmark: json_loads
QuantLib
Zstd Compression
Compression Level: 19 - Decompression Speed
Zstd Compression
Compression Level: 3, Long Mode - Decompression Speed
Zstd Compression
Compression Level: 3 - Decompression Speed
nginx
Concurrent Requests: 200
Zstd Compression
Compression Level: 19 - Decompression Speed
PyPerformance
Benchmark: pathlib
nginx
Concurrent Requests: 100
nginx
Concurrent Requests: 1000
libavif avifenc
Encoder Speed: 0
Liquid-DSP
Threads: 128 - Buffer Length: 256 - Filter Length: 57
PyBench
Total For Average Test Times
OSPray
Benchmark: particle_volume/pathtracer/real_time
Liquid-DSP
Threads: 160 - Buffer Length: 256 - Filter Length: 57
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Zstd Compression
Compression Level: 19, Long Mode - Decompression Speed
Blender
Blend File: BMW27 - Compute: CPU-Only
Zstd Compression
Compression Level: 19 - Compression Speed
Blender
Blend File: Barbershop - Compute: CPU-Only
PyPerformance
Benchmark: regex_compile
LuxCoreRender
Scene: DLSC - Acceleration: CPU
PHPBench
PHP Benchmark Suite
Blender
Blend File: Pabellon Barcelona - Compute: CPU-Only
PyPerformance
Benchmark: go
Blender
Blend File: Fishy Cat - Compute: CPU-Only
Blender
Blend File: Classroom - Compute: CPU-Only
Algebraic Multi-Grid Benchmark
Embree
Binary: Pathtracer ISPC - Model: Asian Dragon
Embree
Binary: Pathtracer - Model: Asian Dragon
Embree
Binary: Pathtracer ISPC - Model: Crown
Apache HTTP Server
Concurrent Requests: 200
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Standard
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Standard
PyPerformance
Benchmark: crypto_pyaes
PyPerformance
Benchmark: float
PyPerformance
Benchmark: chaos
PyPerformance
Benchmark: 2to3
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
LuxCoreRender
Scene: LuxCore Benchmark - Acceleration: CPU
LuxCoreRender
Scene: Danish Mood - Acceleration: CPU
Renaissance
Test: Akka Unbalanced Cobwebbed Tree
DaCapo Benchmark
Java Test: Jython
LAMMPS Molecular Dynamics Simulator
Model: Rhodopsin Protein
Phoronix Test Suite v10.8.5
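To reproduce this comparison locally, the result identifier from the URL above can be passed straight to the Phoronix Test Suite, which downloads the result file and runs the same tests for a side-by-side comparison (assumes `phoronix-test-suite` is installed):

```shell
# Fetch result 2209158-NE-2209131NE60 from OpenBenchmarking.org and
# run the contained tests, appending the local run to the comparison
phoronix-test-suite benchmark 2209158-NE-2209131NE60
```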