xps13-hpc-baseline-x11-20210121-1

Intel Core i7-1165G7 testing with a Dell 08607K (1.0.3 BIOS) and Intel Xe 3GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101218-HA-XPS13HPCB96
Result Identifier: xps13-hpc-baseline-x11-20210121-1 | Run Date: January 21 2021 | Test Duration: 9 Hours, 30 Minutes


xps13-hpc-baseline-x11-20210121-1 Benchmarks (OpenBenchmarking.org, Phoronix Test Suite)

System Details
Processor: Intel Core i7-1165G7 @ 4.70GHz (4 Cores / 8 Threads)
Motherboard: Dell 08607K (1.0.3 BIOS)
Chipset: Intel Device a0ef
Memory: 16GB
Disk: Micron 2300 NVMe 512GB
Graphics: Intel Xe 3GB (1300MHz)
Audio: Realtek ALC289
Network: Intel Device a0f0
OS: Ubuntu 20.04
Kernel: 5.6.0-1042-oem (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.6 Mesa 20.2.6
Vulkan: 1.2.131
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1200

System Logs
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave; CPU Microcode: 0x60; Thermald 1.9.1
Python Notes: Python 3.8.5
Security Notes: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Results Overview
[Flattened results-overview table omitted; the individual benchmark results are listed in the sections below. The overview also carried octave-benchmark, MAFFT (LSU RNA multiple sequence alignment), Parboil OpenMP CUTCP, AMG, LULESH, and FFTE results whose detailed sections are not part of this excerpt.]

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 20.1, Input: Carbon Nanotube (Seconds, Fewer Is Better): 1029.15 (SE +/- 2.30, N = 3). Compiler notes: (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML, FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU (FPS, More Is Better): 3.41 (SE +/- 0.01, N = 3)

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device AI Score (Score, More Is Better): 1316

AI Benchmark Alpha 0.1.2, Device Training Score (Score, More Is Better): 689

AI Benchmark Alpha 0.1.2, Device Inference Score (Score, More Is Better): 627

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. Learn more via the OpenBenchmarking.org test page.

CP2K Molecular Dynamics 8.1, Fayalite-FIST Data (Seconds, Fewer Is Better): 1339.77

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML, FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (FPS, More Is Better): 7.70 (SE +/- 0.01, N = 3)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine powered by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: BLAS (Nodes Per Second, More Is Better): 152 (SE +/- 0.58, N = 3). Compiler notes: (CXX) g++ options: -flto -pthread

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code that employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24, Input: Dust 2D tau100.0 (Seconds, Fewer Is Better): 348 (SE +/- 0.58, N = 3). Compiler notes: (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

GROMACS

This test profile runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3, Water Benchmark (Ns Per Day, More Is Better): 0.524 (SE +/- 0.005, N = 3). Compiler notes: (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Numpy Benchmark

This test measures general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, More Is Better): 300.38 (SE +/- 0.35, N = 3)
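The kind of BLAS-backed, vectorized kernels this aggregate score is sensitive to can be timed directly. A minimal sketch (not the actual benchmark script) using numpy and timeit:

```python
import timeit

import numpy as np

# Hypothetical micro-benchmark in the spirit of the Numpy test profile:
# time repeated dense matrix multiplies, a kernel whose throughput the
# aggregate score reflects.
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256))
b = rng.standard_normal((256, 256))

elapsed = timeit.timeit(lambda: a @ b, number=100)
print(f"100 matmuls of 256x256: {elapsed:.3f} s")
```

The real test profile runs a battery of such kernels and folds the timings into a single score.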

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better): 4.63829 (SE +/- 0.04503, N = 3)
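NAMD reports days/ns, where lower is better; taking the reciprocal gives the more commonly quoted ns/day figure:

```python
# Convert NAMD's days/ns metric to ns/day (a simple reciprocal).
days_per_ns = 4.63829           # result reported above
ns_per_day = 1.0 / days_per_ns
print(f"{ns_per_day:.4f} ns/day")  # prints 0.2156 ns/day
```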

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1, Pfam Database Search (Seconds, Fewer Is Better): 200.91 (SE +/- 0.30, N = 3). Compiler notes: (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark, Benchmark: scikit_qda (Seconds, Fewer Is Better): 120.87 (SE +/- 0.07, N = 3)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, Fewer Is Better): 8165053 (SE +/- 23972.73, N = 3)
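TensorFlow Lite reports the average inference time in microseconds; converting to seconds and inferences per minute puts the figure in perspective:

```python
# Convert TensorFlow Lite's microsecond result to friendlier units.
inference_us = 8165053                  # Inception V4 result above
inference_s = inference_us / 1e6        # seconds per inference
per_minute = 60.0 / inference_s         # inferences per minute
print(f"{inference_s:.2f} s/inference, {per_minute:.1f} inferences/min")
```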

TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, Fewer Is Better): 7382690 (SE +/- 23654.20, N = 3)

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: inception-v3 (ms, Fewer Is Better): 52.78 (SE +/- 0.77, N = 3); MIN: 50.78 / MAX: 98.71. Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.1, Model: mobilenet-v1-1.0 (ms, Fewer Is Better): 4.833 (SE +/- 0.007, N = 3); MIN: 4.79 / MAX: 7.61. Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.1, Model: MobileNetV2_224 (ms, Fewer Is Better): 3.806 (SE +/- 0.128, N = 3); MIN: 3.5 / MAX: 22.12. Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.1, Model: resnet-v2-50 (ms, Fewer Is Better): 39.14 (SE +/- 0.25, N = 3); MIN: 38.36 / MAX: 60.48. Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.1, Model: SqueezeNetV1.0 (ms, Fewer Is Better): 7.138 (SE +/- 0.025, N = 3); MIN: 7.03 / MAX: 28.24. Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark, Benchmark: scikit_ica (Seconds, Fewer Is Better): 107.06 (SE +/- 0.41, N = 3)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads, in contrast to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better): 5.65843 (SE +/- 0.02726, N = 3). Compiler notes: (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi
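The conjugate gradient iteration at the core of HPCG can be sketched in a few lines. This is an illustrative dense-matrix version in plain Python; the real benchmark applies the method (with preconditioning) to a large sparse 27-point stencil across MPI ranks:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive-definite A (list-of-lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]                      # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        # New direction is conjugate to all previous ones.
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# 2x2 SPD example whose exact solution is x = [1, 1].
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [5.0, 4.0])
print(x)
```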

Parboil

The Parboil Benchmarks from the IMPACT Research Group at the University of Illinois are a set of throughput computing applications for examining computing architectures and compilers. Parboil test cases support OpenMP, OpenCL, and CUDA multi-processing environments; at this time, however, the test profile only makes use of the OpenMP and OpenCL workloads. Learn more via the OpenBenchmarking.org test page.

Parboil 2.5, Test: OpenMP LBM (Seconds, Fewer Is Better): 98.19 (SE +/- 0.25, N = 3). Compiler notes: (CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7, Primate Phylogeny Analysis (Seconds, Fewer Is Better): 96.84 (SE +/- 0.81, N = 3). Compiler notes: (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm -lreadline

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: regnety_400m (ms, Fewer Is Better): 20.00 (SE +/- 0.32, N = 3); MIN: 18.96 / MAX: 39.46. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better): 34.80 (SE +/- 0.05, N = 3); MIN: 34.2 / MAX: 47.89. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better): 39.16 (SE +/- 0.06, N = 3); MIN: 38.15 / MAX: 57.03. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: resnet50 (ms, Fewer Is Better): 44.68 (SE +/- 0.02, N = 3); MIN: 44.31 / MAX: 47.26. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: alexnet (ms, Fewer Is Better): 17.46 (SE +/- 0.02, N = 3); MIN: 17.26 / MAX: 34.37. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: resnet18 (ms, Fewer Is Better): 19.70 (SE +/- 0.02, N = 3); MIN: 18.91 / MAX: 23.3. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: vgg16 (ms, Fewer Is Better): 63.01 (SE +/- 0.35, N = 3); MIN: 61.63 / MAX: 67.6. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: googlenet (ms, Fewer Is Better): 21.93 (SE +/- 0.04, N = 3); MIN: 20.82 / MAX: 40.64. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: blazeface (ms, Fewer Is Better): 2.74 (SE +/- 0.03, N = 3); MIN: 2.58 / MAX: 5.32. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better): 10.79 (SE +/- 0.12, N = 3); MIN: 10.38 / MAX: 13.89. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: mnasnet (ms, Fewer Is Better): 7.40 (SE +/- 0.10, N = 3); MIN: 7.02 / MAX: 22.07. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better): 9.30 (SE +/- 0.20, N = 3); MIN: 8.8 / MAX: 11.92. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better): 6.86 (SE +/- 0.10, N = 3); MIN: 6.51 / MAX: 9.55. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): 7.97 (SE +/- 0.14, N = 3); MIN: 7.48 / MAX: 11.26. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: mobilenet (ms, Fewer Is Better): 30.46 (SE +/- 0.03, N = 3); MIN: 29.98 / MAX: 51.86. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: regnety_400m (ms, Fewer Is Better): 19.95 (SE +/- 0.33, N = 3); MIN: 18.8 / MAX: 48.35. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: squeezenet_ssd (ms, Fewer Is Better): 34.81 (SE +/- 0.04, N = 3); MIN: 34.14 / MAX: 53.47. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better): 39.16 (SE +/- 0.02, N = 3); MIN: 38.06 / MAX: 56.83. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: resnet50 (ms, Fewer Is Better): 44.77 (SE +/- 0.01, N = 3); MIN: 44.4 / MAX: 63.82. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: alexnet (ms, Fewer Is Better): 17.26 (SE +/- 0.20, N = 3); MIN: 16.35 / MAX: 21.6. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: resnet18 (ms, Fewer Is Better): 19.37 (SE +/- 0.35, N = 3); MIN: 17.65 / MAX: 27.32. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: vgg16 (ms, Fewer Is Better): 62.69 (SE +/- 0.09, N = 3); MIN: 61.54 / MAX: 87.95. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better): 21.70 (SE +/- 0.35, N = 3); MIN: 20.8 / MAX: 42.52. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: blazeface (ms, Fewer Is Better): 2.75 (SE +/- 0.05, N = 3); MIN: 2.59 / MAX: 5.87. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: efficientnet-b0 (ms, Fewer Is Better): 10.75 (SE +/- 0.13, N = 3); MIN: 10.35 / MAX: 13.38. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: mnasnet (ms, Fewer Is Better): 7.35 (SE +/- 0.13, N = 3); MIN: 6.99 / MAX: 11.4. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: shufflenet-v2 (ms, Fewer Is Better): 9.31 (SE +/- 0.18, N = 3); MIN: 8.81 / MAX: 12.96. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better): 6.92 (SE +/- 0.05, N = 3); MIN: 6.5 / MAX: 21.39. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): 7.98 (SE +/- 0.14, N = 3); MIN: 7.53 / MAX: 10.49. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: mobilenet (ms, Fewer Is Better): 30.44 (SE +/- 0.06, N = 3); MIN: 29.88 / MAX: 49.1. Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark, Benchmark: scikit_linearridgeregression (Seconds, Fewer Is Better): 13.31 (SE +/- 0.05, N = 3)

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6, Acceleration: CPU (Seconds, Fewer Is Better): 81.82 (SE +/- 0.75, N = 3)

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms, and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, More Is Better): 55674970 (SE +/- 256311.47, N = 3). Compiler notes: (CXX) g++ options: -O3 -fopenmp

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: NASNet Mobile (Microseconds, Fewer Is Better): 403303 (SE +/- 2696.71, N = 3)

TensorFlow Lite 2020-08-23, Model: Mobilenet Float (Microseconds, Fewer Is Better): 376705 (SE +/- 2594.63, N = 3)

TensorFlow Lite 2020-08-23, Model: Mobilenet Quant (Microseconds, Fewer Is Better): 381376 (SE +/- 2796.41, N = 3)

TensorFlow Lite 2020-08-23, Model: SqueezeNet (Microseconds, Fewer Is Better): 568577 (SE +/- 3101.00, N = 3)

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.

Himeno Benchmark 3.0, Poisson Pressure Solver (MFLOPS, More Is Better): 3635.76 (SE +/- 1.92, N = 3). Compiler notes: (CC) gcc options: -O3 -mavx2
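The point-Jacobi update Himeno stresses can be illustrated in 1D: each sweep replaces every interior point with the average of its neighbours plus a scaled source term, repeated until the residual shrinks. A minimal sketch, not Himeno's 3D 19-point kernel:

```python
def jacobi_poisson_1d(f, h, sweeps):
    """Point-Jacobi sweeps for u'' = f on [0, 1] with u(0) = u(1) = 0."""
    n = len(f)
    u = [0.0] * n
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            # From the finite-difference stencil (u[i-1] - 2u[i] + u[i+1])/h^2 = f[i].
            new[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])
        u = new
    return u

# u'' = -2 with u(0) = u(1) = 0 has exact solution u(x) = x(1 - x).
n = 9
h = 1.0 / (n - 1)
u = jacobi_poisson_1d([-2.0] * n, h, sweeps=2000)
mid = u[n // 2]                # converges to 0.25 at x = 0.5
print(f"u(0.5) ~= {mid:.4f}")
```

Himeno measures the floating-point throughput of exactly this kind of stencil sweep, only on a 3D grid with a much wider stencil.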

Intel MPI Benchmarks

Intel MPI Benchmarks for stressing MPI implementations. At this point the test profile aggregates results for some common MPI functionality. Learn more via the OpenBenchmarking.org test page.

Intel MPI Benchmarks 2019.3, Test: IMB-MPI1 PingPong (Average Mbytes/sec, More Is Better): 2893.25 (SE +/- 23.03, N = 15); MIN: 7.22 / MAX: 11197.15. Compiler notes: (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

miniFE

MiniFE is a Finite Element mini-application for unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.

miniFE 2.2, Problem Size: Small (CG Mflops, More Is Better): 7771.40 (SE +/- 40.53, N = 3). Compiler notes: (CXX) g++ options: -O3 -fopenmp -pthread -lmpi_cxx -lmpi

Parboil

The Parboil Benchmarks from the IMPACT Research Group at the University of Illinois are a set of throughput computing applications for examining computing architectures and compilers. Parboil test cases support OpenMP, OpenCL, and CUDA multi-processing environments; at this time, however, the test profile only makes use of the OpenMP and OpenCL workloads. Learn more via the OpenBenchmarking.org test page.

Parboil 2.5, Test: OpenMP MRI Gridding (Seconds, Fewer Is Better): 48.43 (SE +/- 0.52, N = 3). Compiler notes: (CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark, Benchmark: scikit_svm (Seconds, Fewer Is Better): 34.00 (SE +/- 0.01, N = 3)

Nebular Empirical Analysis Tool

NEAT is the Nebular Empirical Analysis Tool for empirical analysis of ionised nebulae, with uncertainty propagation. Learn more via the OpenBenchmarking.org test page.

Nebular Empirical Analysis Tool 2020-02-29 (Seconds, Fewer Is Better): 35.24 (SE +/- 0.31, N = 3). Compiler notes: (F9X) gfortran options: -cpp -ffree-line-length-0 -Jsource/ -fopenmp -O3 -fno-backtrace

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.4, Test: DNN - Deep Neural Network (ms, Fewer Is Better): 6418 (SE +/- 384.66, N = 15). Compiler notes: (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute-long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better): 32.09 (SE +/- 0.12, N = 3). Compiler notes: (CC) gcc options: -O2 -pedantic -fvisibility=hidden
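The test input is a 16-bit RAW (headerless PCM) file; such data can be inspected with the standard library alone. A hedged sketch, unrelated to RNNoise's internals, that computes the RMS level of signed 16-bit samples:

```python
import array
import math

def rms_of_raw16(pcm_bytes):
    """RMS level of headerless signed 16-bit PCM, the format of the test's input file.

    Assumes the machine's native byte order matches the file (little-endian
    on the platforms this result was produced on).
    """
    samples = array.array("h")        # 'h' = signed 16-bit
    samples.frombytes(pcm_bytes)
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Synthetic example: a square wave alternating between +/-16384.
data = array.array("h", [16384, -16384] * 100).tobytes()
level = rms_of_raw16(data)
print(round(level))                   # prints 16384
```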

R Benchmark

This test is a quick-running survey of general R performance. Learn more via the OpenBenchmarking.org test page.

R Benchmark (Seconds, Fewer Is Better): 0.2977 (SE +/- 0.0002, N = 3). Notes: R scripting front-end version 3.6.3 (2020-02-29)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better): 410.46 (SE +/- 0.53, N = 3); MIN: 406.64 / MAX: 492.58. Compiler notes: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

TNN 0.2.3, Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better): 404.11 (SE +/- 0.67, N = 3); MIN: 401.94 / MAX: 413.42. Compiler notes: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Parboil

The Parboil Benchmarks from the IMPACT Research Group at the University of Illinois are a set of throughput computing applications for examining computing architectures and compilers. Parboil test cases support OpenMP, OpenCL, and CUDA multi-processing environments; at this time, however, the test profile only makes use of the OpenMP and OpenCL workloads. Learn more via the OpenBenchmarking.org test page.

Parboil 2.5, Test: OpenMP Stencil (Seconds, Fewer Is Better): 17.05 (SE +/- 0.21, N = 4). Compiler notes: (CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code of modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527, Computational Fluid Dynamics (Seconds, Fewer Is Better): 24.09 (SE +/- 0.15, N = 3)

Intel MPI Benchmarks

Intel MPI Benchmarks for stressing MPI implementations. At this point the test profile aggregates results for some common MPI functionality. Learn more via the OpenBenchmarking.org test page.

Intel MPI Benchmarks 2019.3, Test: IMB-P2P PingPong (Average Msg/sec, More Is Better): 3031420 (SE +/- 17594.43, N = 3); MIN: 1788 / MAX: 7585292. Compiler notes: (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3, Test: IMB-MPI1 Exchange (Average usec, Fewer Is Better): 149.95 (SE +/- 0.82, N = 3); MIN: 0.5 / MAX: 2043.91. Compiler notes: (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3, Test: IMB-MPI1 Exchange (Average Mbytes/sec, More Is Better): 4327.73 (SE +/- 27.24, N = 3); MAX: 16485.01. Compiler notes: (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

ASKAP

This is a benchmark of ATNF's ASKAP benchmark suite, here using the tConvolve sub-tests. Learn more via the OpenBenchmarking.org test page.

ASKAP 2018-11-10, Test: tConvolve MT - Degridding (Million Grid Points Per Second, More Is Better): 1409.39 (SE +/- 1.65, N = 3). Compiler notes: (CXX) g++ options: -lpthread

ASKAP 2018-11-10, Test: tConvolve MT - Gridding (Million Grid Points Per Second, More Is Better): 1434.08 (SE +/- 4.65, N = 3). Compiler notes: (CXX) g++ options: -lpthread

Scikit-Learn

Scikit-learn is a Python module for machine learning. Learn more via the OpenBenchmarking.org test page.

Scikit-Learn 0.22.1: 17.46 Seconds (fewer is better); SE +/- 0.01, N = 3

ArrayFire

ArrayFire is a GPU and CPU numeric processing library; this test uses the built-in CPU and OpenCL ArrayFire benchmarks. Learn more via the OpenBenchmarking.org test page.
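The BLAS CPU result below is reported in GFLOPS, which for a dense matrix multiply is conventionally computed as roughly 2*n^3 floating-point operations (one multiply plus one add per inner-loop step) divided by wall time. A hedged pure-Python sketch of that arithmetic, not of ArrayFire's actual vectorized implementation:

```python
import time

def naive_matmul_gflops(n=120):
    """Time a naive n x n matrix multiply and derive GFLOPS the way
    BLAS benchmarks conventionally do: 2*n**3 flops / elapsed seconds.
    Illustrative only -- interpreted Python is orders of magnitude
    slower than the 352.29 GFLOPS the ArrayFire BLAS CPU test reports.
    """
    a = [[1.0] * n for _ in range(n)]
    b = [[1.0] * n for _ in range(n)]
    c = [[0.0] * n for _ in range(n)]
    t0 = time.perf_counter()
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    elapsed = time.perf_counter() - t0
    return (2 * n ** 3) / elapsed / 1e9

print(naive_matmul_gflops())  # a tiny fraction of 1 GFLOPS in pure Python
```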

ArrayFire 3.7, Test: BLAS CPU: 352.29 GFLOPS (more is better); SE +/- 0.37, N = 3. (CXX) g++ options: -rdynamic

ASKAP

This test profile runs sub-tests of ATNF's ASKAP Benchmark; the results here cover the tConvolve OpenMP and MPI gridding and degridding workloads. Learn more via the OpenBenchmarking.org test page.

ASKAP 2018-11-10, Test: tConvolve OpenMP - Degridding: 1504.27 Million Grid Points Per Second (more is better); SE +/- 0.00, N = 3. (CXX) g++ options: -lpthread

ASKAP 2018-11-10, Test: tConvolve OpenMP - Gridding: 1408.76 Million Grid Points Per Second (more is better); SE +/- 0.00, N = 3. (CXX) g++ options: -lpthread

ASKAP 2018-11-10, Test: tConvolve MPI - Degridding: 1424.46 Million Grid Points Per Second (more is better); SE +/- 0.63, N = 3. (CXX) g++ options: -lpthread

ASKAP 2018-11-10, Test: tConvolve MPI - Gridding: 1462.96 Million Grid Points Per Second (more is better); SE +/- 2.32, N = 3. (CXX) g++ options: -lpthread

Intel MPI Benchmarks

The Intel MPI Benchmarks stress MPI implementations. At this point the test profile aggregates results for some common MPI functionality. Learn more via the OpenBenchmarking.org test page.

Intel MPI Benchmarks 2019.3, Test: IMB-MPI1 Sendrecv: 93.05 Average usec (fewer is better); SE +/- 0.44, N = 3; MIN: 0.33 / MAX: 1613.77. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3, Test: IMB-MPI1 Sendrecv: 3748.22 Average Mbytes/sec (more is better); SE +/- 45.18, N = 3; MAX: 17800.84. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0: 8.676 Seconds (fewer is better); SE +/- 0.037, N = 5

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.
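Multiple sequence alignment tools such as MAFFT build their alignments progressively from pairwise comparisons. As a hedged, minimal stand-in for that pairwise step (MAFFT itself uses much faster FFT-accelerated and progressive methods over many long sequences), a global alignment score can be computed with Needleman-Wunsch dynamic programming; the scoring parameters below are illustrative, not MAFFT's:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global pairwise alignment score via Needleman-Wunsch dynamic
    programming. dp[i][j] holds the best score aligning a[:i] with b[:j];
    each cell extends the alignment by a match/mismatch or a gap."""
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):          # aligning a prefix against nothing
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, cols):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag,                 # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,   # gap in b
                           dp[i][j - 1] + gap)   # gap in a
    return dp[-1][-1]

print(nw_score("GATTACA", "GCATGCU"))
```

The benchmark scales this idea up to 100 pyruvate decarboxylase sequences, which is why alignment time is the figure of merit.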

Timed MAFFT Alignment 7.471, Multiple Sequence Alignment - LSU RNA: 10.81 Seconds (fewer is better); SE +/- 0.03, N = 3. (CC) gcc options: -std=c99 -O3 -lm -lpthread

Parboil

The Parboil Benchmarks from the IMPACT Research Group at the University of Illinois are a set of throughput computing applications for evaluating computing architectures and compilers. Parboil test-cases support OpenMP, OpenCL, and CUDA multi-processing environments; at this time, however, the test profile makes use of only the OpenMP and OpenCL workloads. Learn more via the OpenBenchmarking.org test page.

Parboil 2.5, Test: OpenMP CUTCP: 9.943114 Seconds (fewer is better); SE +/- 0.015742, N = 3. (CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2: 352631200 Figure Of Merit (more is better); SE +/- 671511.45, N = 3. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics benchmark. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3: 1196.10 z/s (more is better); SE +/- 0.76, N = 3. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
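The length constraint stated above means FFTE's mixed-radix routines only accept 5-smooth transform sizes, i.e. integers whose prime factors are limited to 2, 3, and 5 (the N=256 test below is the pure radix-2 case, 2^8). A small check of that condition:

```python
def ffte_supported_length(n):
    """Return True if n factors as (2**p) * (3**q) * (5**r), the only
    transform lengths FFTE's mixed-radix FFT routines accept."""
    if n < 1:
        return False
    for base in (2, 3, 5):
        while n % base == 0:
            n //= base
    return n == 1  # anything left over is a prime factor > 5

print([m for m in range(1, 31) if ffte_supported_length(m)])
# [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30]
```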

FFTE 7.0, Test: N=256, 1D Complex FFT Routine: 23348.11 MFLOPS (more is better); SE +/- 148.39, N = 3. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

94 Results Shown

GPAW
PlaidML
AI Benchmark Alpha:
  Device AI Score
  Device Training Score
  Device Inference Score
CP2K Molecular Dynamics
PlaidML
LeelaChessZero
Monte Carlo Simulations of Ionised Nebulae
GROMACS
Numpy Benchmark
NAMD
Timed HMMer Search
Mlpack Benchmark
TensorFlow Lite:
  Inception V4
  Inception ResNet V2
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  resnet-v2-50
  SqueezeNetV1.0
Mlpack Benchmark
High Performance Conjugate Gradient
Parboil
Timed MrBayes Analysis
NCNN:
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
  Vulkan GPU - regnety_400m
  Vulkan GPU - squeezenet_ssd
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - resnet50
  Vulkan GPU - alexnet
  Vulkan GPU - resnet18
  Vulkan GPU - vgg16
  Vulkan GPU - googlenet
  Vulkan GPU - blazeface
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - mnasnet
  Vulkan GPU - shufflenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - mobilenet
Mlpack Benchmark
DeepSpeech
Kripke
TensorFlow Lite:
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
  SqueezeNet
Himeno Benchmark
Intel MPI Benchmarks
miniFE
Parboil
Mlpack Benchmark
Nebular Empirical Analysis Tool
OpenCV
RNNoise
R Benchmark
TNN:
  CPU - MobileNet v2
  CPU - SqueezeNet v1.1
Parboil
Dolfyn
Intel MPI Benchmarks:
  IMB-P2P PingPong
  IMB-MPI1 Exchange:
    Average usec
    Average Mbytes/sec
ASKAP:
  tConvolve MT - Degridding
  tConvolve MT - Gridding
Scikit-Learn
ArrayFire
ASKAP:
  tConvolve OpenMP - Degridding
  tConvolve OpenMP - Gridding
  tConvolve MPI - Degridding
  tConvolve MPI - Gridding
Intel MPI Benchmarks:
  IMB-MPI1 Sendrecv:
    Average usec
    Average Mbytes/sec
GNU Octave Benchmark
Timed MAFFT Alignment
Parboil
Algebraic Multi-Grid Benchmark
LULESH
FFTE