test
lxc testing on Debian GNU/Linux 12 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2411144-NE-TEST4170061&grw.
CLOMP
Static OMP Speedup
ctx_clock
Context Switch Time
Hackbench
Count: 32 - Type: Process
DaCapo Benchmark
Java Test: Jython
DaCapo Benchmark
Java Test: Tradebeans
Renaissance
Test: Savina Reactors.IO
Cython Benchmark
Test: N-Queens
CacheBench
Test: Read
CacheBench
Test: Write
CacheBench
Test: Read / Modify / Write
Glibc Benchmarks
Benchmark: cos
Glibc Benchmarks
Benchmark: exp
Glibc Benchmarks
Benchmark: ffs
Glibc Benchmarks
Benchmark: pow
Glibc Benchmarks
Benchmark: sin
Glibc Benchmarks
Benchmark: log2
Glibc Benchmarks
Benchmark: modf
Glibc Benchmarks
Benchmark: sinh
Glibc Benchmarks
Benchmark: sqrt
Glibc Benchmarks
Benchmark: tanh
Glibc Benchmarks
Benchmark: asinh
Glibc Benchmarks
Benchmark: atanh
Glibc Benchmarks
Benchmark: ffsll
Glibc Benchmarks
Benchmark: sincos
Glibc Benchmarks
Benchmark: pthread_once
Xmrig
Variant: KawPow - Hash Count: 1M
Xmrig
Variant: Monero - Hash Count: 1M
Xmrig
Variant: Wownero - Hash Count: 1M
Xmrig
Variant: GhostRider - Hash Count: 1M
Xmrig
Variant: CryptoNight-Heavy - Hash Count: 1M
Xmrig
Variant: CryptoNight-Femto UPX2 - Hash Count: 1M
QuantLib
Size: S
QuantLib
Size: XXS
miniBUDE
Implementation: OpenMP - Input Deck: BM1
miniBUDE
Implementation: OpenMP - Input Deck: BM1
miniBUDE
Implementation: OpenMP - Input Deck: BM2
miniBUDE
Implementation: OpenMP - Input Deck: BM2
OpenRadioss
Model: Bumper Beam
OpenRadioss
Model: Cell Phone Drop Test
OpenRadioss
Model: Bird Strike on Windshield
Himeno Benchmark
Poisson Pressure Solver
TensorFlow
Device: CPU - Batch Size: 1 - Model: VGG-16
TensorFlow
Device: GPU - Batch Size: 1 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 1 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 16 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 32 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 64 - Model: VGG-16
TensorFlow
Device: GPU - Batch Size: 1 - Model: AlexNet
TensorFlow
Device: GPU - Batch Size: 16 - Model: VGG-16
TensorFlow
Device: GPU - Batch Size: 32 - Model: VGG-16
TensorFlow
Device: GPU - Batch Size: 64 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 16 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 256 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 32 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 512 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 64 - Model: AlexNet
TensorFlow
Device: GPU - Batch Size: 16 - Model: AlexNet
LeelaChessZero
Backend: BLAS
LeelaChessZero
Backend: Eigen
Numenta Anomaly Benchmark
Detector: KNN CAD
Numenta Anomaly Benchmark
Detector: Relative Entropy
Numenta Anomaly Benchmark
Detector: Windowed Gaussian
Numenta Anomaly Benchmark
Detector: Earthgecko Skyline
Numenta Anomaly Benchmark
Detector: Bayesian Changepoint
Numenta Anomaly Benchmark
Detector: Contextual Anomaly Detector OSE
R Benchmark
Numpy Benchmark
DeepSpeech
Acceleration: CPU
RNNoise
Input: 26 Minute Long Talking Sample
Mobile Neural Network
Model: nasnet
Mobile Neural Network
Model: mobilenetV3
Mobile Neural Network
Model: squeezenetv1.1
Mobile Neural Network
Model: resnet-v2-50
Mobile Neural Network
Model: SqueezeNetV1.0
Mobile Neural Network
Model: MobileNetV2_224
Mobile Neural Network
Model: mobilenet-v1-1.0
Mobile Neural Network
Model: inception-v3
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
OpenCV
Test: DNN - Deep Neural Network
PyTorch
Device: CPU - Batch Size: 1 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 1 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 16 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 32 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 64 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 16 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 256 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 32 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 512 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 64 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 256 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 512 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l
TensorFlow Lite
Model: SqueezeNet
TensorFlow Lite
Model: Inception V4
TensorFlow Lite
Model: NASNet Mobile
TensorFlow Lite
Model: Mobilenet Float
TensorFlow Lite
Model: Mobilenet Quant
TensorFlow Lite
Model: Inception ResNet V2
TNN
Target: CPU - Model: DenseNet
TNN
Target: CPU - Model: MobileNet v2
TNN
Target: CPU - Model: SqueezeNet v2
TNN
Target: CPU - Model: SqueezeNet v1.1
Whisper.cpp
Model: ggml-base.en - Input: 2016 State of the Union
Whisper.cpp
Model: ggml-small.en - Input: 2016 State of the Union
Whisper.cpp
Model: ggml-medium.en - Input: 2016 State of the Union
XNNPACK
Model: FP32MobileNetV1
XNNPACK
Model: FP32MobileNetV2
XNNPACK
Model: FP32MobileNetV3Large
XNNPACK
Model: FP32MobileNetV3Small
XNNPACK
Model: FP16MobileNetV1
XNNPACK
Model: FP16MobileNetV2
XNNPACK
Model: FP16MobileNetV3Large
XNNPACK
Model: FP16MobileNetV3Small
XNNPACK
Model: QS8MobileNetV2
Llama.cpp
Model: llama-2-7b.Q4_0.gguf
Llama.cpp
Model: llama-2-13b.Q4_0.gguf
Llama.cpp
Model: llama-2-70b-chat.Q5_0.gguf
Llama.cpp
Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf
Llamafile
Test: mistral-7b-instruct-v0.2.Q5_K_M - Acceleration: CPU
NCNN
Target: CPU - Model: mobilenet
NCNN
Target: CPU-v2-v2 - Model: mobilenet-v2
NCNN
Target: CPU-v3-v3 - Model: mobilenet-v3
NCNN
Target: CPU - Model: shufflenet-v2
NCNN
Target: CPU - Model: mnasnet
NCNN
Target: CPU - Model: efficientnet-b0
NCNN
Target: CPU - Model: blazeface
NCNN
Target: CPU - Model: googlenet
NCNN
Target: CPU - Model: vgg16
NCNN
Target: CPU - Model: resnet18
NCNN
Target: CPU - Model: alexnet
NCNN
Target: CPU - Model: resnet50
NCNN
Target: CPU-v2-yolov3v2-yolov3 - Model: mobilenetv2-yolov3
NCNN
Target: CPU - Model: yolov4-tiny
NCNN
Target: CPU - Model: squeezenet_ssd
NCNN
Target: CPU - Model: regnety_400m
NCNN
Target: CPU - Model: vision_transformer
NCNN
Target: CPU - Model: FastestDet
NCNN
Target: Vulkan GPU - Model: mobilenet
NCNN
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
NCNN
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
NCNN
Target: Vulkan GPU - Model: shufflenet-v2
NCNN
Target: Vulkan GPU - Model: mnasnet
NCNN
Target: Vulkan GPU - Model: efficientnet-b0
NCNN
Target: Vulkan GPU - Model: blazeface
NCNN
Target: Vulkan GPU - Model: googlenet
NCNN
Target: Vulkan GPU - Model: vgg16
NCNN
Target: Vulkan GPU - Model: resnet18
NCNN
Target: Vulkan GPU - Model: alexnet
NCNN
Target: Vulkan GPU - Model: resnet50
NCNN
Target: Vulkan GPU-v2-yolov3v2-yolov3 - Model: mobilenetv2-yolov3
NCNN
Target: Vulkan GPU - Model: yolov4-tiny
NCNN
Target: Vulkan GPU - Model: squeezenet_ssd
NCNN
Target: Vulkan GPU - Model: regnety_400m
NCNN
Target: Vulkan GPU - Model: vision_transformer
NCNN
Target: Vulkan GPU - Model: FastestDet
NAS Parallel Benchmarks
Test / Class: EP.C
NAS Parallel Benchmarks
Test / Class: LU.C
Rodinia
Test: OpenMP LavaMD
Rodinia
Test: OpenMP HotSpot3D
Rodinia
Test: OpenMP Leukocyte
Rodinia
Test: OpenMP CFD Solver
Rodinia
Test: OpenMP Streamcluster
NAMD
Input: ATPase with 327,506 Atoms
NAMD
Input: STMV with 1,066,628 Atoms
oneDNN
Harness: IP Shapes 1D - Engine: CPU
oneDNN
Harness: IP Shapes 3D - Engine: CPU
oneDNN
Harness: Convolution Batch Shapes Auto - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_1d - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_3d - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Inference - Engine: CPU
OpenVINO
Model: Face Detection FP16 - Device: CPU
OpenVINO
Model: Face Detection FP16 - Device: CPU
OpenVINO
Model: Person Detection FP16 - Device: CPU
OpenVINO
Model: Person Detection FP16 - Device: CPU
OpenVINO
Model: Person Detection FP32 - Device: CPU
OpenVINO
Model: Person Detection FP32 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16 - Device: CPU
OpenVINO
Model: Face Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Face Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Face Detection Retail FP16 - Device: CPU
OpenVINO
Model: Face Detection Retail FP16 - Device: CPU
OpenVINO
Model: Road Segmentation ADAS FP16 - Device: CPU
OpenVINO
Model: Road Segmentation ADAS FP16 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16 - Device: CPU
OpenVINO
Model: Face Detection Retail FP16-INT8 - Device: CPU
OpenVINO
Model: Face Detection Retail FP16-INT8 - Device: CPU
OpenVINO
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
OpenVINO
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
OpenVINO
Model: Machine Translation EN To DE FP16 - Device: CPU
OpenVINO
Model: Machine Translation EN To DE FP16 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Person Vehicle Bike Detection FP16 - Device: CPU
OpenVINO
Model: Person Vehicle Bike Detection FP16 - Device: CPU
OpenVINO
Model: Noise Suppression Poconet-Like FP16 - Device: CPU
OpenVINO
Model: Noise Suppression Poconet-Like FP16 - Device: CPU
OpenVINO
Model: Handwritten English Recognition FP16 - Device: CPU
OpenVINO
Model: Handwritten English Recognition FP16 - Device: CPU
OpenVINO
Model: Person Re-Identification Retail FP16 - Device: CPU
OpenVINO
Model: Person Re-Identification Retail FP16 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
OpenVINO
Model: Handwritten English Recognition FP16-INT8 - Device: CPU
OpenVINO
Model: Handwritten English Recognition FP16-INT8 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
Pennant
Test: sedovbig
Pennant
Test: leblancbig
Algebraic Multi-Grid Benchmark
Xcompact3d Incompact3d
Input: X3D-benchmarking input.i3d
Xcompact3d Incompact3d
Input: input.i3d 129 Cells Per Direction
Xcompact3d Incompact3d
Input: input.i3d 193 Cells Per Direction
Aircrack-ng
Stockfish
Total Time
7-Zip Compression
Test: Compression Rating
7-Zip Compression
Test: Decompression Rating
John The Ripper
Test: Blowfish
Timed LLVM Compilation
Build System: Ninja
Timed LLVM Compilation
Build System: Unix Makefiles
Timed PHP Compilation
Time To Compile
Zstd Compression
Compression Level: 3 - Compression Speed
Zstd Compression
Compression Level: 3 - Decompression Speed
Zstd Compression
Compression Level: 8 - Compression Speed
Zstd Compression
Compression Level: 8 - Decompression Speed
Zstd Compression
Compression Level: 12 - Compression Speed
Zstd Compression
Compression Level: 12 - Decompression Speed
Zstd Compression
Compression Level: 19 - Compression Speed
Zstd Compression
Compression Level: 19 - Decompression Speed
Zstd Compression
Compression Level: 3, Long Mode - Compression Speed
Zstd Compression
Compression Level: 3, Long Mode - Decompression Speed
Zstd Compression
Compression Level: 8, Long Mode - Compression Speed
Zstd Compression
Compression Level: 8, Long Mode - Decompression Speed
Zstd Compression
Compression Level: 19, Long Mode - Compression Speed
Zstd Compression
Compression Level: 19, Long Mode - Decompression Speed
Rust Mandelbrot
Time To Complete Serial/Parallel Mandelbrot
asmFish
1024 Hash Memory, 26 Depth
m-queens
Time To Solve
Cpuminer-Opt
Algorithm: Magi
Cpuminer-Opt
Algorithm: x20r
Cpuminer-Opt
Algorithm: scrypt
Cpuminer-Opt
Algorithm: Deepcoin
Cpuminer-Opt
Algorithm: Ringcoin
Cpuminer-Opt
Algorithm: Blake-2 S
Cpuminer-Opt
Algorithm: Garlicoin
Cpuminer-Opt
Algorithm: Skeincoin
Cpuminer-Opt
Algorithm: Myriad-Groestl
Cpuminer-Opt
Algorithm: LBC, LBRY Credits
Cpuminer-Opt
Algorithm: Quad SHA-256, Pyrite
Cpuminer-Opt
Algorithm: Triple SHA-256, Onecoin
Timed GCC Compilation
Time To Compile
AOM AV1
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 11 Realtime - Input: Bosphorus 1080p
GraphicsMagick
Operation: Swirl
GraphicsMagick
Operation: Rotate
GraphicsMagick
Operation: Sharpen
GraphicsMagick
Operation: Enhanced
GraphicsMagick
Operation: Resizing
GraphicsMagick
Operation: Noise-Gaussian
GraphicsMagick
Operation: HWB Color Space
C-Ray
Total Time - 4K, 16 Rays Per Pixel
Blender
Blend File: BMW27 - Compute: CPU-Only
Blender
Blend File: Junkshop - Compute: CPU-Only
Blender
Blend File: Classroom - Compute: CPU-Only
Blender
Blend File: Fishy Cat - Compute: CPU-Only
Blender
Blend File: Barbershop - Compute: CPU-Only
Blender
Blend File: Pabellon Barcelona - Compute: CPU-Only
POV-Ray
Trace Time
libavif avifenc
Encoder Speed: 0
libavif avifenc
Encoder Speed: 2
libavif avifenc
Encoder Speed: 6
libavif avifenc
Encoder Speed: 6, Lossless
libavif avifenc
Encoder Speed: 10, Lossless
Timed Node.js Compilation
Time To Compile
Timed Wasmer Compilation
Time To Compile
Apache Siege
Concurrent Users: 10
Apache Siege
Concurrent Users: 50
Apache Siege
Concurrent Users: 100
Apache Siege
Concurrent Users: 200
Apache Siege
Concurrent Users: 500
Apache Siege
Concurrent Users: 1000
InfluxDB
Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
InfluxDB
Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
InfluxDB
Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
Memcached
Set To Get Ratio: 1:1
Memcached
Set To Get Ratio: 1:5
Memcached
Set To Get Ratio: 5:1
Memcached
Set To Get Ratio: 1:10
Memcached
Set To Get Ratio: 1:100
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 50 - Set To Get Ratio: 10:1
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 100 - Set To Get Ratio: 10:1
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10
Redis
Test: GET - Parallel Connections: 50
Redis
Test: SET - Parallel Connections: 50
Redis
Test: GET - Parallel Connections: 500
Redis
Test: LPOP - Parallel Connections: 50
Redis
Test: SADD - Parallel Connections: 50
Redis
Test: SET - Parallel Connections: 500
Redis
Test: GET - Parallel Connections: 1000
Redis
Test: LPOP - Parallel Connections: 500
Redis
Test: LPUSH - Parallel Connections: 50
Redis
Test: SADD - Parallel Connections: 500
Redis
Test: SET - Parallel Connections: 1000
Redis
Test: LPOP - Parallel Connections: 1000
Redis
Test: LPUSH - Parallel Connections: 500
Redis
Test: SADD - Parallel Connections: 1000
Redis
Test: LPUSH - Parallel Connections: 1000
PyBench
Total For Average Test Times
Phoronix Test Suite v10.8.5