svc05_hpc_run_1_23-09-22
v1.59
HTML result view exported from: https://openbenchmarking.org/result/2309266-NE-SVC05HPCR92&grs.
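The result ID embedded in the URL above can be fed back into the Phoronix Test Suite to rerun the same test selection locally for a side-by-side comparison. A minimal sketch follows, assuming the phoronix-test-suite CLI is installed and on PATH; it simply wraps the documented "benchmark <OpenBenchmarking.org ID>" invocation.

import subprocess

# Result ID taken from the openbenchmarking.org URL above.
RESULT_ID = "2309266-NE-SVC05HPCR92"

# Fetches the referenced result file and runs the same tests locally so the
# new numbers are merged alongside the original ones. Note: the PTS CLI may
# prompt interactively for test options and a result name.
subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)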
OpenCV
Test: DNN - Deep Neural Network
Kripke
Scikit-Learn
Benchmark: Sparse Random Projections / 100 Iterations
Scikit-Learn
Benchmark: 20 Newsgroups / Logistic Regression
Scikit-Learn
Benchmark: Covertype Dataset Benchmark
Scikit-Learn
Benchmark: Text Vectorizers
Scikit-Learn
Benchmark: Sparsify
PyHPC Benchmarks
Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing
PyHPC Benchmarks
Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State
PyHPC Benchmarks
Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing
PyHPC Benchmarks
Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State
PyHPC Benchmarks
Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixing
PyHPC Benchmarks
Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of State
PyHPC Benchmarks
Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixing
PyHPC Benchmarks
Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of State
PyHPC Benchmarks
Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing
PyHPC Benchmarks
Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State
Mlpack Benchmark
Benchmark: scikit_linearridgeregression
Mlpack Benchmark
Benchmark: scikit_svm
Mlpack Benchmark
Benchmark: scikit_qda
Mlpack Benchmark
Benchmark: scikit_ica
Faiss
Test: bench_polysemous_sift1m - PQ baseline
AI Benchmark Alpha
Device AI Score
AI Benchmark Alpha
Device Training Score
AI Benchmark Alpha
Device Inference Score
Numenta Anomaly Benchmark
Detector: Contextual Anomaly Detector OSE
Numenta Anomaly Benchmark
Detector: Bayesian Changepoint
Numenta Anomaly Benchmark
Detector: Earthgecko Skyline
Numenta Anomaly Benchmark
Detector: Windowed Gaussian
Numenta Anomaly Benchmark
Detector: Relative Entropy
Numenta Anomaly Benchmark
Detector: KNN CAD
PETSc
Test: Streams
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
OpenVINO
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
OpenVINO
Model: Person Vehicle Bike Detection FP16 - Device: CPU
OpenVINO
Model: Person Vehicle Bike Detection FP16 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Machine Translation EN To DE FP16 - Device: CPU
OpenVINO
Model: Machine Translation EN To DE FP16 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16 - Device: CPU
OpenVINO
Model: Weld Porosity Detection FP16 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Face Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Face Detection FP16-INT8 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16 - Device: CPU
OpenVINO
Model: Vehicle Detection FP16 - Device: CPU
OpenVINO
Model: Person Detection FP32 - Device: CPU
OpenVINO
Model: Person Detection FP32 - Device: CPU
OpenVINO
Model: Person Detection FP16 - Device: CPU
OpenVINO
Model: Person Detection FP16 - Device: CPU
OpenVINO
Model: Face Detection FP16 - Device: CPU
OpenVINO
Model: Face Detection FP16 - Device: CPU
TNN
Target: CPU - Model: SqueezeNet v1.1
TNN
Target: CPU - Model: SqueezeNet v2
TNN
Target: CPU - Model: MobileNet v2
TNN
Target: CPU - Model: DenseNet
NCNN
Target: Vulkan GPU - Model: FastestDet
NCNN
Target: Vulkan GPU - Model: vision_transformer
NCNN
Target: Vulkan GPU - Model: regnety_400m
NCNN
Target: Vulkan GPU - Model: squeezenet_ssd
NCNN
Target: Vulkan GPU - Model: yolov4-tiny
NCNN
Target: Vulkan GPU - Model: resnet50
NCNN
Target: Vulkan GPU - Model: alexnet
NCNN
Target: Vulkan GPU - Model: resnet18
NCNN
Target: Vulkan GPU - Model: vgg16
NCNN
Target: Vulkan GPU - Model: googlenet
NCNN
Target: Vulkan GPU - Model: blazeface
NCNN
Target: Vulkan GPU - Model: efficientnet-b0
NCNN
Target: Vulkan GPU - Model: mnasnet
NCNN
Target: Vulkan GPU - Model: shufflenet-v2
NCNN
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
NCNN
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
NCNN
Target: Vulkan GPU - Model: mobilenet
NCNN
Target: CPU - Model: vision_transformer
NCNN
Target: CPU - Model: squeezenet_ssd
NCNN
Target: CPU - Model: yolov4-tiny
NCNN
Target: CPU - Model: resnet50
NCNN
Target: CPU - Model: alexnet
NCNN
Target: CPU - Model: resnet18
NCNN
Target: CPU - Model: vgg16
NCNN
Target: CPU - Model: googlenet
NCNN
Target: CPU - Model: efficientnet-b0
NCNN
Target: CPU - Model: mnasnet
NCNN
Target: CPU-v3-v3 - Model: mobilenet-v3
NCNN
Target: CPU-v2-v2 - Model: mobilenet-v2
NCNN
Target: CPU - Model: mobilenet
Mobile Neural Network
Model: inception-v3
Mobile Neural Network
Model: mobilenet-v1-1.0
Mobile Neural Network
Model: MobileNetV2_224
Mobile Neural Network
Model: SqueezeNetV1.0
Mobile Neural Network
Model: resnet-v2-50
Mobile Neural Network
Model: squeezenetv1.1
Mobile Neural Network
Model: mobilenetV3
Mobile Neural Network
Model: nasnet
GPAW
Input: Carbon Nanotube
WRF
Input: conus 2.5km
Caffe
Model: GoogleNet - Acceleration: CPU - Iterations: 1000
Caffe
Model: GoogleNet - Acceleration: CPU - Iterations: 200
Caffe
Model: GoogleNet - Acceleration: CPU - Iterations: 100
Caffe
Model: AlexNet - Acceleration: CPU - Iterations: 1000
Caffe
Model: AlexNet - Acceleration: CPU - Iterations: 200
Caffe
Model: AlexNet - Acceleration: CPU - Iterations: 100
spaCy
Model: en_core_web_trf
spaCy
Model: en_core_web_lg
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Question Answering, BERT base uncased SQuAD 12layer Pruned90 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Question Answering, BERT base uncased SQuAD 12layer Pruned90 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Question Answering, BERT base uncased SQuAD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Question Answering, BERT base uncased SQuAD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
GNU Octave Benchmark
TensorFlow
Device: CPU - Batch Size: 512 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 512 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 256 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 256 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 64 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 64 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 32 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 32 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 16 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 16 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 512 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 256 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 64 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 512 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 32 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 256 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 16 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 64 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 32 - Model: VGG-16
TensorFlow
Device: CPU - Batch Size: 16 - Model: VGG-16
TensorFlow Lite
Model: Inception ResNet V2
TensorFlow Lite
Model: Mobilenet Quant
TensorFlow Lite
Model: Mobilenet Float
TensorFlow Lite
Model: NASNet Mobile
TensorFlow Lite
Model: Inception V4
TensorFlow Lite
Model: SqueezeNet
Darmstadt Automotive Parallel Heterogeneous Suite
Backend: OpenMP - Kernel: Euclidean Cluster
Darmstadt Automotive Parallel Heterogeneous Suite
Backend: OpenMP - Kernel: Points2Image
Darmstadt Automotive Parallel Heterogeneous Suite
Backend: OpenMP - Kernel: NDT Mapping
GROMACS
Implementation: MPI CPU - Input: water_GMX50_bare
Intel MPI Benchmarks
Test: IMB-MPI1 PingPong
Intel MPI Benchmarks
Test: IMB-P2P PingPong
Graph500
Scale: 26
Graph500
Scale: 26
Graph500
Scale: 26
Graph500
Scale: 26
ASKAP
Test: Hogbom Clean OpenMP
ASKAP
Test: tConvolve OpenMP - Degridding
ASKAP
Test: tConvolve OpenMP - Gridding
ASKAP
Test: tConvolve MPI - Gridding
ASKAP
Test: tConvolve MPI - Degridding
ASKAP
Test: tConvolve MT - Degridding
ASKAP
Test: tConvolve MT - Gridding
RNNoise
R Benchmark
DeepSpeech
Acceleration: CPU
Numpy Benchmark
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
oneDNN
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
oneDNN
Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
oneDNN
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
oneDNN
Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
Himeno Benchmark
Poisson Pressure Solver
ACES DGEMM
Sustained Floating-Point Rate
ArrayFire
Test: BLAS CPU
LULESH
LAMMPS Molecular Dynamics Simulator
Model: Rhodopsin Protein
LAMMPS Molecular Dynamics Simulator
Model: 20k Atoms
nekRS
Input: TurboPipe Periodic
nekRS
Input: Kershaw
SPECFEM3D
Model: Water-layered Halfspace
SPECFEM3D
Model: Homogeneous Halfspace
SPECFEM3D
Model: Tomographic Model
SPECFEM3D
Model: Layered Halfspace
SPECFEM3D
Model: Mount St. Helens
Remhos
Test: Sample Remap Example
Quantum ESPRESSO
Input: AUSURF112
OpenRadioss
Model: Rubber O-Ring Seal Installation
OpenRadioss
Model: Bird Strike on Windshield
OpenRadioss
Model: Cell Phone Drop Test
OpenRadioss
Model: Chrysler Neon 1M
OpenRadioss
Model: Bumper Beam
OpenFOAM
Input: drivaerFastback, Medium Mesh Size - Execution Time
OpenFOAM
Input: drivaerFastback, Medium Mesh Size - Mesh Time
OpenFOAM
Input: drivaerFastback, Small Mesh Size - Execution Time
OpenFOAM
Input: drivaerFastback, Small Mesh Size - Mesh Time
OpenFOAM
Input: drivaerFastback, Large Mesh Size - Execution Time
OpenFOAM
Input: drivaerFastback, Large Mesh Size - Mesh Time
OpenFOAM
Input: motorBike - Execution Time
OpenFOAM
Input: motorBike - Mesh Time
Monte Carlo Simulations of Ionised Nebulae
Input: Dust 2D tau100.0
Timed MAFFT Alignment
Multiple Sequence Alignment - LSU RNA
Xcompact3d Incompact3d
Input: input.i3d 193 Cells Per Direction
Xcompact3d Incompact3d
Input: input.i3d 129 Cells Per Direction
Xcompact3d Incompact3d
Input: X3D-benchmarking input.i3d
Timed HMMer Search
Pfam Database Search
QMCPACK
Input: FeCO6_b3lyp_gms
QMCPACK
Input: FeCO6_b3lyp_gms
QMCPACK
Input: simple-H2O
QMCPACK
Input: Li2_STO_ae
NWChem
Input: C240 Buckyball
Timed MrBayes Analysis
Primate Phylogeny Analysis
Palabos
Grid Size: 1000
Palabos
Grid Size: 500
Palabos
Grid Size: 400
Palabos
Grid Size: 100
Pennant
Test: leblancbig
Pennant
Test: sedovbig
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: double-long - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: double-long - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: double-long - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: double-long - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: double-long - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: double-long - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: float-long - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: float-long - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: float-long - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: double-long - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: double-long - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: double-long - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: float-long - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: float-long - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: float-long - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: double-long - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: double-long - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: double-long - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: float-long - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: float-long - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: float-long - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: float-long - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: float-long - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: float-long - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: double - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: double - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: double - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: double - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: double - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: double - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: float - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: float - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: Stock - Precision: float - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: double - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: double - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: double - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: float - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: float - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: Stock - Precision: float - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: double - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: double - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: double - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: float - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: float - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: r2c - Backend: FFTW - Precision: float - X Y Z: 128
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: float - X Y Z: 512
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: float - X Y Z: 256
HeFFTe - Highly Efficient FFT for Exascale
Test: c2c - Backend: FFTW - Precision: float - X Y Z: 128
FFTW
Build: Float + SSE - Size: 2D FFT Size 4096
FFTW
Build: Float + SSE - Size: 1D FFT Size 4096
FFTW
Build: Float + SSE - Size: 2D FFT Size 32
FFTW
Build: Float + SSE - Size: 1D FFT Size 32
FFTW
Build: Stock - Size: 2D FFT Size 4096
FFTW
Build: Stock - Size: 1D FFT Size 4096
FFTW
Build: Stock - Size: 2D FFT Size 32
FFTW
Build: Stock - Size: 1D FFT Size 32
Laghos
Test: Sedov Blast Wave, cube_922_hex.mesh
Laghos
Test: Triple Point Problem
FFTE
Test: N=256, 1D Complex FFT Routine
libxsmm
M N K: 64
libxsmm
M N K: 32
libxsmm
M N K: 256
libxsmm
M N K: 128
Algebraic Multi-Grid Benchmark
Nebular Empirical Analysis Tool
Dolfyn
Computational Fluid Dynamics
NAMD
ATPase Simulation - 327,506 Atoms
CP2K Molecular Dynamics
Input: Fayalite-FIST
CP2K Molecular Dynamics
Input: H2O-64
Rodinia
Test: OpenMP Streamcluster
Rodinia
Test: OpenMP CFD Solver
Rodinia
Test: OpenMP Leukocyte
Rodinia
Test: OpenMP LavaMD
CloverLeaf
Lagrangian-Eulerian Hydrodynamics
miniBUDE
Implementation: OpenMP - Input Deck: BM2
miniBUDE
Implementation: OpenMP - Input Deck: BM2
miniBUDE
Implementation: OpenMP - Input Deck: BM1
miniBUDE
Implementation: OpenMP - Input Deck: BM1
miniFE
Problem Size: Small
Parboil
Test: OpenMP MRI Gridding
Parboil
Test: OpenMP Stencil
Parboil
Test: OpenMP CUTCP
Parboil
Test: OpenMP LBM
LeelaChessZero
Backend: BLAS
HPC Challenge
Test / Class: Max Ping Pong Bandwidth
HPC Challenge
Test / Class: Random Ring Bandwidth
HPC Challenge
Test / Class: Random Ring Latency
HPC Challenge
Test / Class: EP-STREAM Triad
HPC Challenge
Test / Class: G-Ptrans
HPC Challenge
Test / Class: G-Ffte
HPC Challenge
Test / Class: G-HPL
NAS Parallel Benchmarks
Test / Class: SP.C
NAS Parallel Benchmarks
Test / Class: SP.B
NAS Parallel Benchmarks
Test / Class: MG.C
NAS Parallel Benchmarks
Test / Class: LU.C
NAS Parallel Benchmarks
Test / Class: IS.D
NAS Parallel Benchmarks
Test / Class: FT.C
NAS Parallel Benchmarks
Test / Class: EP.D
NAS Parallel Benchmarks
Test / Class: CG.C
NAS Parallel Benchmarks
Test / Class: BT.C
HPL Linpack
High Performance Conjugate Gradient
X Y Z: 160 160 160 - RT: 1800
High Performance Conjugate Gradient
X Y Z: 144 144 144 - RT: 1800
High Performance Conjugate Gradient
X Y Z: 104 104 104 - RT: 1800
High Performance Conjugate Gradient
X Y Z: 160 160 160 - RT: 60
High Performance Conjugate Gradient
X Y Z: 144 144 144 - RT: 60
High Performance Conjugate Gradient
X Y Z: 104 104 104 - RT: 60
IOR
Block Size: 1024MB - Disk Target: Default Test Directory
IOR
Block Size: 512MB - Disk Target: Default Test Directory
IOR
Block Size: 256MB - Disk Target: Default Test Directory
IOR
Block Size: 64MB - Disk Target: Default Test Directory
IOR
Block Size: 32MB - Disk Target: Default Test Directory
IOR
Block Size: 16MB - Disk Target: Default Test Directory
IOR
Block Size: 8MB - Disk Target: Default Test Directory
IOR
Block Size: 2MB - Disk Target: Default Test Directory
Whisper.cpp
Model: ggml-medium.en - Input: 2016 State of the Union
Whisper.cpp
Model: ggml-small.en - Input: 2016 State of the Union
Whisper.cpp
Model: ggml-base.en - Input: 2016 State of the Union
Faiss
Test: bench_polysemous_sift1m - Polysemous 30
Faiss
Test: bench_polysemous_sift1m - Polysemous 34
Faiss
Test: bench_polysemous_sift1m - Polysemous 38
Faiss
Test: bench_polysemous_sift1m - Polysemous 42
Faiss
Test: bench_polysemous_sift1m - Polysemous 46
Faiss
Test: bench_polysemous_sift1m - Polysemous 50
Faiss
Test: bench_polysemous_sift1m - Polysemous 54
Faiss
Test: bench_polysemous_sift1m - Polysemous 58
Faiss
Test: bench_polysemous_sift1m - Polysemous 62
Faiss
Test: bench_polysemous_sift1m - Polysemous 64
NCNN
Target: CPU - Model: FastestDet
NCNN
Target: CPU - Model: regnety_400m
NCNN
Target: CPU - Model: blazeface
NCNN
Target: CPU - Model: shufflenet-v2
Intel MPI Benchmarks
Test: IMB-MPI1 Sendrecv
Intel MPI Benchmarks
Test: IMB-MPI1 Sendrecv
Intel MPI Benchmarks
Test: IMB-MPI1 Exchange
Intel MPI Benchmarks
Test: IMB-MPI1 Exchange
oneDNN
Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
RELION
Test: Basic - Device: CPU
HPC Challenge
Test / Class: G-Random Access
HPC Challenge
Test / Class: EP-DGEMM
NAS Parallel Benchmarks
Test / Class: EP.C
IOR
Block Size: 4MB - Disk Target: Default Test Directory
Phoronix Test Suite v10.8.5