AMD 3D V-Cache Comparison
Tests run for a future article. HTML result view exported from: https://openbenchmarking.org/result/2204297-NE-CC929132156&grt. The benchmark suites and test configurations included in the comparison are listed below.
ASKAP:
  Test: tConvolve MPI - Degridding
  Test: tConvolve MPI - Gridding
  Test: tConvolve OpenMP - Gridding
  Test: tConvolve OpenMP - Degridding
  Test: tConvolve MT - Gridding
  Test: tConvolve MT - Degridding
  Test: Hogbom Clean OpenMP
Caffe:
  Model: AlexNet - Acceleration: CPU - Iterations: 100
  Model: AlexNet - Acceleration: CPU - Iterations: 200
  Model: GoogleNet - Acceleration: CPU - Iterations: 100
  Model: GoogleNet - Acceleration: CPU - Iterations: 200
ECP-CANDLE:
  Benchmark: P1B2
  Benchmark: P3B1
  Benchmark: P3B2
LeelaChessZero:
  Backend: BLAS
  Backend: Eigen
Mlpack Benchmark:
  Benchmark: scikit_svm
  Benchmark: scikit_linearridgeregression
  Benchmark: scikit_qda
  Benchmark: scikit_ica
Mobile Neural Network:
  Model: mobilenetV3
  Model: squeezenetv1.1
  Model: resnet-v2-50
  Model: SqueezeNetV1.0
  Model: MobileNetV2_224
  Model: mobilenet-v1-1.0
  Model: inception-v3
NCNN:
  Target: CPU - Model: mobilenet
  Target: CPU-v2-v2 - Model: mobilenet-v2
  Target: CPU-v3-v3 - Model: mobilenet-v3
  Target: CPU - Model: shufflenet-v2
  Target: CPU - Model: mnasnet
  Target: CPU - Model: efficientnet-b0
  Target: CPU - Model: blazeface
  Target: CPU - Model: googlenet
  Target: CPU - Model: vgg16
  Target: CPU - Model: resnet18
  Target: CPU - Model: alexnet
  Target: CPU - Model: resnet50
  Target: CPU - Model: yolov4-tiny
  Target: CPU - Model: squeezenet_ssd
  Target: CPU - Model: regnety_400m
Numpy Benchmark
oneDNN:
  Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
  Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
  Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
  Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
  Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
  Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
  Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
  Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
  Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
  Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
  Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
  Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
  Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
  Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
  Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
  Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
ONNX Runtime:
  Model: yolov4 - Device: CPU - Executor: Standard
  Model: yolov4 - Device: CPU - Executor: Parallel
  Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
  Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
  Model: super-resolution-10 - Device: CPU - Executor: Standard
  Model: super-resolution-10 - Device: CPU - Executor: Parallel
  Model: bertsquad-12 - Device: CPU - Executor: Standard
  Model: bertsquad-12 - Device: CPU - Executor: Parallel
  Model: GPT-2 - Device: CPU - Executor: Standard
  Model: GPT-2 - Device: CPU - Executor: Parallel
  Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
  Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Open Porous Media Git:
  OPM Benchmark: Flow MPI Norne - Threads: 1
  OPM Benchmark: Flow MPI Norne - Threads: 2
  OPM Benchmark: Flow MPI Norne - Threads: 4
  OPM Benchmark: Flow MPI Norne - Threads: 8
  OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 1
  OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 2
  OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 4
  OPM Benchmark: Flow MPI Norne-4C MSW - Threads: 8
  OPM Benchmark: Flow MPI Extra - Threads: 1
  OPM Benchmark: Flow MPI Extra - Threads: 2
  OPM Benchmark: Flow MPI Extra - Threads: 4
  OPM Benchmark: Flow MPI Extra - Threads: 8
OpenFOAM:
  Input: Motorbike 30M
  Input: Motorbike 60M
TNN:
  Target: CPU - Model: DenseNet
  Target: CPU - Model: MobileNet v2
  Target: CPU - Model: SqueezeNet v1.1
  Target: CPU - Model: SqueezeNet v2
WebP2 Image Encode:
  Encode Settings: Default
  Encode Settings: Quality 75, Compression Effort 7
  Encode Settings: Quality 95, Compression Effort 7
  Encode Settings: Quality 100, Compression Effort 5
  Encode Settings: Quality 100, Lossless Compression
Xcompact3d Incompact3d:
  Input: input.i3d 129 Cells Per Direction
  Input: input.i3d 193 Cells Per Direction
Phoronix Test Suite v10.8.5
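The comparison was produced with the Phoronix Test Suite version shown above. As a minimal reproduction sketch, assuming the phoronix-test-suite CLI is installed and on PATH, and relying on the standard PTS behavior that passing an OpenBenchmarking.org result ID to its benchmark command re-runs the same tests and merges the local numbers into that result file for side-by-side comparison:

    import subprocess

    # OpenBenchmarking.org result ID taken from the export URL above.
    RESULT_ID = "2204297-NE-CC929132156"

    # Re-run the tests from the public result file and compare against it
    # (assumes the phoronix-test-suite CLI is installed and on PATH).
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)

The plain invocation wrapped here is the comparison command that OpenBenchmarking.org result pages typically display; the Python wrapper is only for illustration and an interactive run will still prompt for test installation and result-saving details as usual.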