test
lxc testing on Debian GNU/Linux 12 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2411135-NE-TEST4245051&rdt&grs.
Whisper.cpp
Model: ggml-medium.en - Input: 2016 State of the Union
Whisper.cpp
Model: ggml-small.en - Input: 2016 State of the Union
Whisper.cpp
Model: ggml-base.en - Input: 2016 State of the Union
Numenta Anomaly Benchmark
Detector: Contextual Anomaly Detector OSE
Numenta Anomaly Benchmark
Detector: Bayesian Changepoint
Numenta Anomaly Benchmark
Detector: Earthgecko Skyline
Numenta Anomaly Benchmark
Detector: Windowed Gaussian
Numenta Anomaly Benchmark
Detector: Relative Entropy
Numenta Anomaly Benchmark
Detector: KNN CAD
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
OpenVINO
Model: Handwritten English Recognition FP16-INT8 - Device: CPU
OpenVINO
Model: Handwritten English Recognition FP16-INT8 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
OpenVINO
Model: Person Re-Identification Retail FP16 - Device: CPU
OpenVINO
Model: Person Re-Identification Retail FP16 - Device: CPU
OpenVINO
Model: Handwritten English Recognition FP16 - Device: CPU
OpenVINO
Model: Handwritten English Recognition FP16 - Device: CPU
OpenVINO
Model: Noise Suppression Poconet-Like FP16 - Device: CPU
OpenVINO
Model: Noise Suppression Poconet-Like FP16 - Device: CPU
OpenVINO
Model: Person Vehicle Bike Detection FP16 - Device: CPU
OpenVINO
Model: Person Vehicle Bike Detection FP16 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Machine Translation EN To DE FP16 - Device: CPU
OpenVINO
Model: Machine Translation EN To DE FP16 - Device: CPU
OpenVINO
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
OpenVINO
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
OpenVINO
Model: Face Detection Retail FP16-INT8 - Device: CPU
OpenVINO
Model: Face Detection Retail FP16-INT8 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Road Segmentation ADAS FP16 - Device: CPU
OpenVINO
Model: Road Segmentation ADAS FP16 - Device: CPU
OpenVINO
Model: Face Detection Retail FP16 - Device: CPU
OpenVINO
Model: Face Detection Retail FP16 - Device: CPU
OpenVINO
Model: Face Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Face Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16 - Device: CPU
OpenVINO
Model: Person Detection FP32 - Device: CPU
OpenVINO
Model: Person Detection FP32 - Device: CPU
OpenVINO
Model: Person Detection FP16 - Device: CPU
OpenVINO
Model: Person Detection FP16 - Device: CPU
OpenVINO
Model: Face Detection FP16 - Device: CPU
OpenVINO
Model: Face Detection FP16 - Device: CPU
XNNPACK
Model: QS8MobileNetV2
XNNPACK
Model: FP16MobileNetV3Small
XNNPACK
Model: FP16MobileNetV3Large
XNNPACK
Model: FP16MobileNetV1
XNNPACK
Model: FP32MobileNetV3Small
XNNPACK
Model: FP32MobileNetV3Large
XNNPACK
Model: FP32MobileNetV1
TNN
Target: CPU - Model: SqueezeNet v1.1
TNN
Target: CPU - Model: SqueezeNet v2
TNN
Target: CPU - Model: MobileNet v2
TNN
Target: CPU - Model: DenseNet
NCNN
Target: Vulkan GPU - Model: FastestDet
NCNN
Target: Vulkan GPU - Model: vision_transformer
NCNN
Target: Vulkan GPU - Model: regnety_400m
NCNN
Target: Vulkan GPU - Model: squeezenet_ssd
NCNN
Target: Vulkan GPU - Model: yolov4-tiny
NCNN
Target: Vulkan GPU - Model: mobilenetv2-yolov3
NCNN
Target: Vulkan GPU - Model: resnet50
NCNN
Target: Vulkan GPU - Model: alexnet
NCNN
Target: Vulkan GPU - Model: resnet18
NCNN
Target: Vulkan GPU - Model: vgg16
NCNN
Target: Vulkan GPU - Model: googlenet
NCNN
Target: Vulkan GPU - Model: efficientnet-b0
NCNN
Target: Vulkan GPU - Model: mnasnet
NCNN
Target: Vulkan GPU - Model: shufflenet-v2
NCNN
Target: Vulkan GPU - Model: mobilenet-v2
NCNN
Target: Vulkan GPU - Model: mobilenet
NCNN
Target: CPU - Model: FastestDet
NCNN
Target: CPU - Model: vision_transformer
NCNN
Target: CPU - Model: regnety_400m
NCNN
Target: CPU - Model: squeezenet_ssd
NCNN
Target: CPU - Model: yolov4-tiny
NCNN
Target: CPU - Model: mobilenetv2-yolov3
NCNN
Target: CPU - Model: resnet50
NCNN
Target: CPU - Model: alexnet
NCNN
Target: CPU - Model: resnet18
NCNN
Target: CPU - Model: vgg16
NCNN
Target: CPU - Model: googlenet
NCNN
Target: CPU - Model: blazeface
NCNN
Target: CPU - Model: efficientnet-b0
NCNN
Target: CPU - Model: mnasnet
NCNN
Target: CPU - Model: shufflenet-v2
NCNN
Target: CPU - Model: mobilenet-v3
NCNN
Target: CPU - Model: mobilenet
Mobile Neural Network
Model: inception-v3
Mobile Neural Network
Model: mobilenet-v1-1.0
Mobile Neural Network
Model: MobileNetV2_224
Mobile Neural Network
Model: SqueezeNetV1.0
Mobile Neural Network
Model: resnet-v2-50
Mobile Neural Network
Model: mobilenetV3
Mobile Neural Network
Model: nasnet
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
TensorFlow
Device: GPU - Batch Size: 16 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 64 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 512 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 32 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 256 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 16 - Model: AlexNet
TensorFlow
Device: GPU - Batch Size: 64 - Model: VGG-16
TensorFlow
Device: GPU - Batch Size: 32 - Model: VGG-16
TensorFlow
Device: GPU - Batch Size: 16 - Model: VGG-16
TensorFlow
Device: GPU - Batch Size: 1 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 64 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 32 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 16 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 1 - Model: AlexNet
TensorFlow
Device: GPU - Batch Size: 1 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 1 - Model: VGG-16
PyTorch
Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
PyTorch
Device: CPU - Batch Size: 512 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 256 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 64 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 512 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 32 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 256 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 16 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 64 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 32 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 16 - Model: ResNet-50
PyTorch
Device: CPU - Batch Size: 1 - Model: ResNet-152
PyTorch
Device: CPU - Batch Size: 1 - Model: ResNet-50
TensorFlow Lite
Model: Mobilenet Quant
TensorFlow Lite
Model: Mobilenet Float
TensorFlow Lite
Model: Inception V4
TensorFlow Lite
Model: SqueezeNet
RNNoise
Input: 26 Minute Long Talking Sample
R Benchmark
DeepSpeech
Acceleration: CPU
Numpy Benchmark
oneDNN
Harness: Recurrent Neural Network Inference - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_3d - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_1d - Engine: CPU
oneDNN
Harness: Convolution Batch Shapes Auto - Engine: CPU
oneDNN
Harness: IP Shapes 3D - Engine: CPU
oneDNN
Harness: IP Shapes 1D - Engine: CPU
Cpuminer-Opt
Algorithm: Triple SHA-256, Onecoin
Cpuminer-Opt
Algorithm: Quad SHA-256, Pyrite
Cpuminer-Opt
Algorithm: LBC, LBRY Credits
Cpuminer-Opt
Algorithm: Myriad-Groestl
Cpuminer-Opt
Algorithm: Skeincoin
Cpuminer-Opt
Algorithm: Garlicoin
Cpuminer-Opt
Algorithm: Blake-2 S
Cpuminer-Opt
Algorithm: Ringcoin
Cpuminer-Opt
Algorithm: Deepcoin
Cpuminer-Opt
Algorithm: scrypt
Cpuminer-Opt
Algorithm: x20r
Cpuminer-Opt
Algorithm: Magi
Timed LLVM Compilation
Build System: Unix Makefiles
Timed LLVM Compilation
Build System: Ninja
Xmrig
Variant: CryptoNight-Femto UPX2 - Hash Count: 1M
Xmrig
Variant: CryptoNight-Heavy - Hash Count: 1M
Xmrig
Variant: GhostRider - Hash Count: 1M
Xmrig
Variant: Wownero - Hash Count: 1M
Xmrig
Variant: Monero - Hash Count: 1M
Xmrig
Variant: KawPow - Hash Count: 1M
InfluxDB
Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
InfluxDB
Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
InfluxDB
Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
Apache Siege
Concurrent Users: 1000
Apache Siege
Concurrent Users: 500
Apache Siege
Concurrent Users: 200
Apache Siege
Concurrent Users: 100
Apache Siege
Concurrent Users: 50
Apache Siege
Concurrent Users: 10
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 100 - Set To Get Ratio: 10:1
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 50 - Set To Get Ratio: 10:1
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5
Redis 7.0.12 + memtier_benchmark
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1
Redis
Test: LPUSH - Parallel Connections: 1000
Redis
Test: SADD - Parallel Connections: 1000
Redis
Test: LPUSH - Parallel Connections: 500
Redis
Test: SET - Parallel Connections: 1000
Redis
Test: SADD - Parallel Connections: 500
Redis
Test: LPUSH - Parallel Connections: 50
Redis
Test: GET - Parallel Connections: 1000
Redis
Test: SET - Parallel Connections: 500
Redis
Test: SADD - Parallel Connections: 50
Redis
Test: LPOP - Parallel Connections: 50
Redis
Test: GET - Parallel Connections: 500
Redis
Test: SET - Parallel Connections: 50
Redis
Test: GET - Parallel Connections: 50
Memcached
Set To Get Ratio: 1:100
Memcached
Set To Get Ratio: 1:10
Memcached
Set To Get Ratio: 5:1
Memcached
Set To Get Ratio: 1:5
Memcached
Set To Get Ratio: 1:1
libavif avifenc
Encoder Speed: 10, Lossless
libavif avifenc
Encoder Speed: 6, Lossless
libavif avifenc
Encoder Speed: 6
libavif avifenc
Encoder Speed: 2
libavif avifenc
Encoder Speed: 0
Zstd Compression
Compression Level: 19, Long Mode - Decompression Speed
Zstd Compression
Compression Level: 19, Long Mode - Compression Speed
Zstd Compression
Compression Level: 8, Long Mode - Decompression Speed
Zstd Compression
Compression Level: 8, Long Mode - Compression Speed
Zstd Compression
Compression Level: 3, Long Mode - Decompression Speed
Zstd Compression
Compression Level: 3, Long Mode - Compression Speed
Zstd Compression
Compression Level: 19 - Decompression Speed
Zstd Compression
Compression Level: 19 - Compression Speed
Zstd Compression
Compression Level: 12 - Decompression Speed
Zstd Compression
Compression Level: 12 - Compression Speed
Zstd Compression
Compression Level: 8 - Decompression Speed
Zstd Compression
Compression Level: 8 - Compression Speed
Zstd Compression
Compression Level: 3 - Decompression Speed
Zstd Compression
Compression Level: 3 - Compression Speed
Timed Node.js Compilation
Time To Compile
Timed Wasmer Compilation
Time To Compile
Blender
Blend File: Pabellon Barcelona - Compute: CPU-Only
Blender
Blend File: Barbershop - Compute: CPU-Only
Blender
Blend File: Fishy Cat - Compute: CPU-Only
Blender
Blend File: Classroom - Compute: CPU-Only
Blender
Blend File: Junkshop - Compute: CPU-Only
Blender
Blend File: BMW27 - Compute: CPU-Only
AOM AV1
Encoder Mode: Speed 11 Realtime - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p
AOM AV1
Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K
AOM AV1
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K
Aircrack-ng
OpenRadioss
Model: Bird Strike on Windshield
OpenRadioss
Model: Cell Phone Drop Test
OpenRadioss
Model: Bumper Beam
Xcompact3d Incompact3d
Input: input.i3d 193 Cells Per Direction
Xcompact3d Incompact3d
Input: input.i3d 129 Cells Per Direction
Xcompact3d Incompact3d
Input: X3D-benchmarking input.i3d
Pennant
Test: leblancbig
Pennant
Test: sedovbig
Algebraic Multi-Grid Benchmark
QuantLib
Size: XXS
QuantLib
Size: S
NAMD
Input: STMV with 1,066,628 Atoms
NAMD
Input: ATPase with 327,506 Atoms
CLOMP
Static OMP Speedup
Rodinia
Test: OpenMP Streamcluster
Rodinia
Test: OpenMP CFD Solver
Rodinia
Test: OpenMP Leukocyte
Rodinia
Test: OpenMP HotSpot3D
miniBUDE
Implementation: OpenMP - Input Deck: BM2
miniBUDE
Implementation: OpenMP - Input Deck: BM2
miniBUDE
Implementation: OpenMP - Input Deck: BM1
miniBUDE
Implementation: OpenMP - Input Deck: BM1
LeelaChessZero
Backend: BLAS
PyBench
Total For Average Test Times
ctx_clock
Context Switch Time
m-queens
Time To Solve
Hackbench
Count: 32 - Type: Process
Cython Benchmark
Test: N-Queens
POV-Ray
Trace Time
C-Ray
Total Time - 4K, 16 Rays Per Pixel
Timed PHP Compilation
Time To Compile
Timed GCC Compilation
Time To Compile
asmFish
1024 Hash Memory, 26 Depth
7-Zip Compression
Test: Decompression Rating
7-Zip Compression
Test: Compression Rating
Himeno Benchmark
Poisson Pressure Solver
John The Ripper
Test: Blowfish
Renaissance
Test: Savina Reactors.IO
DaCapo Benchmark
Java Test: Tradebeans
DaCapo Benchmark
Java Test: Jython
Rodinia
Test: OpenMP LavaMD
NAS Parallel Benchmarks
Test / Class: LU.C
NAS Parallel Benchmarks
Test / Class: EP.C
CacheBench
Test: Read / Modify / Write
CacheBench
Test: Write
CacheBench
Test: Read
Llama.cpp
Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf
OpenCV
Test: DNN - Deep Neural Network
XNNPACK
Model: FP16MobileNetV2
XNNPACK
Model: FP32MobileNetV2
NCNN
Target: Vulkan GPU - Model: blazeface
NCNN
Target: Vulkan GPU - Model: mobilenet-v3
NCNN
Target: CPU - Model: mobilenet-v2
Mobile Neural Network
Model: squeezenetv1.1
TensorFlow Lite
Model: Inception ResNet V2
TensorFlow Lite
Model: NASNet Mobile
Redis
Test: LPOP - Parallel Connections: 1000
Redis
Test: LPOP - Parallel Connections: 500
AOM AV1
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p
LeelaChessZero
Backend: Eigen
Stockfish
Total Time
Phoronix Test Suite v10.8.5