Cascadelake

2 x Intel Xeon Platinum 8280 testing on a GIGABYTE MD61-SC2-00 v01000100 (T15 BIOS) motherboard with llvmpipe graphics, running Ubuntu 19.10, benchmarked via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1911242-HU-CASCADELA55

Run Management

Result Identifier: 2 x Intel Xeon Platinum 8280
Date: November 24 2019
Run Test Duration: 5 Hours, 43 Minutes


Processor: 2 x Intel Xeon Platinum 8280 @ 4.00GHz (56 Cores / 112 Threads)
Motherboard: GIGABYTE MD61-SC2-00 v01000100 (T15 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 386048MB
Disk: 280GB INTEL SSDPED1D280GA
Graphics: llvmpipe 377GB
Monitor: VE228
Network: 2 x Intel X722 for 1GbE + 2 x QLogic FastLinQ QL41000 10/25/40/50GbE
OS: Ubuntu 19.10
Kernel: 5.4.0-rc7-12nov-vulns (x86_64) 20191112
Desktop: GNOME Shell 3.34.1
Display Server: X Server 1.20.5
Display Driver: modesetting 1.20.5
OpenGL: 3.3 Mesa 19.2.1 (LLVM 9.0 256 bits)
Compiler: GCC 9.2.1 20191008
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- CPU Microcode: 0x500002c
- Security: itlb_multihit: KVM: Vulnerable + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable IBPB: disabled STIBP: disabled + tsx_async_abort: Mitigation of TSX disabled

Cascadelake Benchmarks: condensed overview of all 57 results for the 2 x Intel Xeon Platinum 8280 system. Each result is presented individually below and indexed at the end of this file.

miniFE

miniFE is a finite element mini-application that mimics the key compute kernels of unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.

miniFE 2.2 - Problem Size: Small (CG Mflops, More Is Better)
2 x Intel Xeon Platinum 8280: 14958.6 (SE +/- 521.46, N = 12)
1. (CXX) g++ options: -O3 -fopenmp -pthread -lmpi_cxx -lmpi
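
For reference, the SE figure attached to each result below appears to be the standard error of the mean across the N recorded runs. A minimal sketch of that calculation in Python, using made-up per-run samples since this file does not publish the raw runs:

import statistics

# Hypothetical per-run CG Mflops samples (illustrative only; the raw runs are not in this file).
runs = [14100.0, 15300.0, 14800.0, 15600.0, 14950.0, 15050.0]

mean = statistics.mean(runs)
# Standard error of the mean: sample standard deviation divided by sqrt(N).
se = statistics.stdev(runs) / len(runs) ** 0.5
print(f"{mean:.1f} (SE +/- {se:.2f}, N = {len(runs)})")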

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.13b1 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 0.36166 (SE +/- 0.00019, N = 12)
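
NAMD's days/ns metric is the number of wall-clock days needed to simulate one nanosecond, which is why lower is better. A quick conversion of the figure above to the more familiar ns/day throughput:

# Reported result: 0.36166 days of wall time per simulated nanosecond.
days_per_ns = 0.36166
ns_per_day = 1.0 / days_per_ns  # invert to get simulated nanoseconds per wall-clock day
print(f"{ns_per_day:.2f} ns/day")  # roughly 2.77 ns/day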

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: sedovbig (Hydro Cycle Time - Seconds, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 72.86 (SE +/- 0.12, N = 3)
1. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi

Pennant 1.0.1 - Test: leblancbig (Hydro Cycle Time - Seconds, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 65.81 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
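
As a rough illustration of the Iterations Per Minute metric, the sketch below times repeated runs of one GraphicsMagick operation over a one-minute window. It assumes the gm binary is installed and a sample.jpg is on hand; the actual test profile's image, options, and timing methodology may differ.

import subprocess
import time

deadline = time.time() + 60  # one-minute measurement window
iterations = 0
while time.time() < deadline:
    # Apply a 90-degree swirl to the sample image, mirroring the Swirl operation below.
    subprocess.run(["gm", "convert", "sample.jpg", "-swirl", "90", "out.jpg"], check=True)
    iterations += 1
print(f"{iterations} Iterations Per Minute (Swirl)")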

GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute, More Is Better)
2 x Intel Xeon Platinum 8280: 1456 (SE +/- 2.40, N = 3)
1. (CC) gcc options: -fopenmp -O2 -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, More Is Better)
2 x Intel Xeon Platinum 8280: 611 (SE +/- 8.41, N = 3)
1. (CC) gcc options: -fopenmp -O2 -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, More Is Better)
2 x Intel Xeon Platinum 8280: 540
1. (CC) gcc options: -fopenmp -O2 -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute, More Is Better)
2 x Intel Xeon Platinum 8280: 792 (SE +/- 0.88, N = 3)
1. (CC) gcc options: -fopenmp -O2 -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute, More Is Better)
2 x Intel Xeon Platinum 8280: 1106 (SE +/- 8.08, N = 3)
1. (CC) gcc options: -fopenmp -O2 -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better)
2 x Intel Xeon Platinum 8280: 487
1. (CC) gcc options: -fopenmp -O2 -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute, More Is Better)
2 x Intel Xeon Platinum 8280: 782 (SE +/- 10.07, N = 15)
1. (CC) gcc options: -fopenmp -O2 -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lbz2 -lxml2 -lz -lm -lpthread

MKL-DNN DNNL

This is a test of Intel MKL-DNN (DNNL, the Deep Neural Network Library), an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN DNNL 1.1 - Harness: IP Batch 1D - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1.56409 (SE +/- 0.01400, N = 3; MIN: 1.39)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: IP Batch All - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 9.82607 (SE +/- 0.01313, N = 3; MIN: 9.5)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: IP Batch 1D - Data Type: u8s8f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 2.08513 (SE +/- 0.01215, N = 3; MIN: 1.89)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: IP Batch All - Data Type: u8s8f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 2.94349 (SE +/- 0.04252, N = 3; MIN: 2.47)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: IP Batch 1D - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 3.72764 (SE +/- 0.00212, N = 3; MIN: 3.46)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: IP Batch All - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 53.58 (SE +/- 0.24, N = 3; MIN: 32.54)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Convolution Batch conv_3d - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 4.65801 (SE +/- 0.00345, N = 3; MIN: 4.45)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Convolution Batch conv_all - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 495.53 (SE +/- 0.59, N = 3; MIN: 488.79)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Convolution Batch conv_3d - Data Type: u8s8f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 2999.49 (SE +/- 8.66, N = 3; MIN: 2925.72)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1.24100 (SE +/- 0.00311, N = 3; MIN: 1.17)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1.15484 (SE +/- 0.00073, N = 3; MIN: 1.12)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Convolution Batch conv_alexnet - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 49.69 (SE +/- 0.06, N = 3; MIN: 48.68)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Convolution Batch conv_all - Data Type: u8s8f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1523.26 (SE +/- 1.75, N = 3; MIN: 1514.95)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Deconvolution Batch deconv_all - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 876.11 (SE +/- 0.32, N = 3; MIN: 869.34)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 0.433103 (SE +/- 0.006036, N = 3; MIN: 0.36)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1888.21 (SE +/- 0.34, N = 3; MIN: 1873.74)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Recurrent Neural Network Training - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 221.57 (SE +/- 3.40, N = 3; MIN: 209.83)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Convolution Batch conv_3d - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 7.92042 (SE +/- 0.01023, N = 3; MIN: 7.72)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 19.82 (SE +/- 0.07, N = 3; MIN: 18.34)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Convolution Batch conv_all - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1763.72 (SE +/- 0.19, N = 3; MIN: 1756.3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 22.85 (SE +/- 0.05, N = 3; MIN: 21.85)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 3.89608 (SE +/- 0.00135, N = 3; MIN: 3.8)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 4.35352 (SE +/- 0.00398, N = 3; MIN: 4.32)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Convolution Batch conv_alexnet - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 320.66 (SE +/- 0.19, N = 3; MIN: 319.18)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 7.69509 (SE +/- 0.01213, N = 3; MIN: 6.75)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Deconvolution Batch deconv_all - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1779.64 (SE +/- 2.61, N = 3; MIN: 1769.32)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 86.79 (SE +/- 0.02, N = 3; MIN: 85.64)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
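
A rough sketch of how a decode FPS figure can be derived, assuming the dav1d command-line tool is installed, a chimera.ivf sample clip is available, and its frame count is known; the test profile's exact inputs and methodology may differ.

import subprocess
import time

frame_count = 4200  # illustrative; depends on the clip being decoded
start = time.time()
# Decode the AV1 bitstream and discard the output.
subprocess.run(["dav1d", "-i", "chimera.ivf", "-o", "/dev/null"], check=True)
elapsed = time.time() - start
print(f"{frame_count / elapsed:.2f} FPS")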

dav1d 0.5.0 - Video Input: Chimera 1080p (FPS, More Is Better)
2 x Intel Xeon Platinum 8280: 348.00 (SE +/- 0.62, N = 3; MIN: 219.64 / MAX: 439.1)
1. (CC) gcc options: -pthread

dav1d 0.5.0 - Video Input: Summer Nature 4K (FPS, More Is Better)
2 x Intel Xeon Platinum 8280: 201.16 (SE +/- 3.40, N = 15; MIN: 72.88 / MAX: 233.97)
1. (CC) gcc options: -pthread

dav1d 0.5.0 - Video Input: Summer Nature 1080p (FPS, More Is Better)
2 x Intel Xeon Platinum 8280: 397.54 (SE +/- 4.98, N = 3; MIN: 150.12 / MAX: 456.59)
1. (CC) gcc options: -pthread

dav1d 0.5.0 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
2 x Intel Xeon Platinum 8280: 66.37 (SE +/- 0.06, N = 3; MIN: 49.14 / MAX: 105.33)
1. (CC) gcc options: -pthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.1 - 1080p To AV1 Video Encode (Frames Per Second, More Is Better)
2 x Intel Xeon Platinum 8280: 0.880 (SE +/- 0.002, N = 3)

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, More Is Better)
2 x Intel Xeon Platinum 8280: 20.29 (SE +/- 0.11, N = 3)
1. (CC) gcc options: -O3 -march=native -fopenmp
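
For context on the GFLOP/s figure: a double-precision GEMM of an m x k matrix with a k x n matrix performs roughly 2*m*n*k floating-point operations, so the sustained rate is that count divided by the wall time. A minimal sketch, assuming NumPy is available and using illustrative matrix sizes rather than the ones this test actually runs:

import time
import numpy as np

m = n = k = 2048  # illustrative sizes, not the ACES DGEMM defaults
a = np.random.rand(m, k)
b = np.random.rand(k, n)

start = time.time()
c = a @ b  # double-precision matrix multiply, dispatched to the BLAS DGEMM routine
elapsed = time.time() - start

flops = 2.0 * m * n * k  # multiply and add count for GEMM
print(f"{flops / elapsed / 1e9:.2f} GFLOP/s")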

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.12 - Time To Compile (Seconds, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 58.85 (SE +/- 0.17, N = 3)

ASKAP

This is a benchmark of ATNF's ASKAP software, here exercising the tConvolve MT, MPI, and OpenMP sub-tests. Learn more via the OpenBenchmarking.org test page.

ASKAP 2018-11-10 - Test: tConvolve MT - Gridding (Million Grid Points Per Second, More Is Better)
2 x Intel Xeon Platinum 8280: 4196.78 (SE +/- 135.78, N = 13)
1. (CXX) g++ options: -lpthread

ASKAP 2018-11-10 - Test: tConvolve MT - Degridding (Million Grid Points Per Second, More Is Better)
2 x Intel Xeon Platinum 8280: 6316.94 (SE +/- 175.74, N = 13)
1. (CXX) g++ options: -lpthread

ASKAP 2018-11-10 - Test: tConvolve MPI - Gridding (Million Grid Points Per Second, More Is Better)
2 x Intel Xeon Platinum 8280: 6775.39 (SE +/- 7.18, N = 3)
1. (CXX) g++ options: -lpthread

ASKAP 2018-11-10 - Test: tConvolve MPI - Degridding (Million Grid Points Per Second, More Is Better)
2 x Intel Xeon Platinum 8280: 9634.08 (SE +/- 5.49, N = 3)
1. (CXX) g++ options: -lpthread

ASKAP 2018-11-10 - Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, More Is Better)
2 x Intel Xeon Platinum 8280: 8834.84 (SE +/- 108.12, N = 15)
1. (CXX) g++ options: -lpthread

ASKAP 2018-11-10 - Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, More Is Better)
2 x Intel Xeon Platinum 8280: 11851.2 (SE +/- 175.98, N = 15)
1. (CXX) g++ options: -lpthread

GROMACS

This is a test of the GROMACS molecular dynamics package running on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.

GROMACS 2019.4 - Water Benchmark (Ns Per Day, More Is Better)
2 x Intel Xeon Platinum 8280: 5.746 (SE +/- 0.025, N = 3)
1. (CXX) g++ options: -mavx512f -mfma -std=c++11 -O3 -funroll-all-loops -pthread -lrt -lpthread -lm
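
GROMACS reports ns/day, the nanoseconds of simulated time completed per day of wall clock. A small sketch of how that figure follows from step count, time step, and wall time, with illustrative numbers rather than values taken from this run:

# Illustrative values; the water_GMX50 benchmark's actual step count and time step may differ.
nsteps = 10000        # MD steps completed
dt_fs = 2.0           # integration time step in femtoseconds
wall_seconds = 300.0  # measured wall-clock time

simulated_ns = nsteps * dt_fs * 1e-6                # femtoseconds -> nanoseconds
ns_per_day = simulated_ns * 86400.0 / wall_seconds  # scale to a full day of wall clock
print(f"{ns_per_day:.3f} ns/day")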

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.81 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 38.70 (SE +/- 0.10, N = 3)

Blender 2.81 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 100.26 (SE +/- 0.01, N = 3)

Blender 2.81 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 64.63 (SE +/- 0.06, N = 3)

Blender 2.81 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 154.44 (SE +/- 0.17, N = 3)

Blender 2.81 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 127.85 (SE +/- 0.03, N = 3)

57 Results Shown

miniFE
NAMD
Pennant:
  sedovbig
  leblancbig
GraphicsMagick:
  Swirl
  Rotate
  Sharpen
  Enhanced
  Resizing
  Noise-Gaussian
  HWB Color Space
MKL-DNN DNNL:
  IP Batch 1D - f32
  IP Batch All - f32
  IP Batch 1D - u8s8f32
  IP Batch All - u8s8f32
  IP Batch 1D - bf16bf16bf16
  IP Batch All - bf16bf16bf16
  Convolution Batch conv_3d - f32
  Convolution Batch conv_all - f32
  Convolution Batch conv_3d - u8s8f32
  Deconvolution Batch deconv_1d - f32
  Deconvolution Batch deconv_3d - f32
  Convolution Batch conv_alexnet - f32
  Convolution Batch conv_all - u8s8f32
  Deconvolution Batch deconv_all - f32
  Deconvolution Batch deconv_1d - u8s8f32
  Deconvolution Batch deconv_3d - u8s8f32
  Recurrent Neural Network Training - f32
  Convolution Batch conv_3d - bf16bf16bf16
  Convolution Batch conv_alexnet - u8s8f32
  Convolution Batch conv_all - bf16bf16bf16
  Convolution Batch conv_googlenet_v3 - f32
  Deconvolution Batch deconv_1d - bf16bf16bf16
  Deconvolution Batch deconv_3d - bf16bf16bf16
  Convolution Batch conv_alexnet - bf16bf16bf16
  Convolution Batch conv_googlenet_v3 - u8s8f32
  Deconvolution Batch deconv_all - bf16bf16bf16
  Convolution Batch conv_googlenet_v3 - bf16bf16bf16
dav1d:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
rav1e
ACES DGEMM
Build2
ASKAP:
  tConvolve MT - Gridding
  tConvolve MT - Degridding
  tConvolve MPI - Gridding
  tConvolve MPI - Degridding
  tConvolve OpenMP - Gridding
  tConvolve OpenMP - Degridding
GROMACS
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
  Fishy Cat - CPU-Only
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only