Xeon Gold

Intel Xeon Gold 5218 testing with a Supermicro X11SPL-F v1.02 (3.1 BIOS) and llvmpipe 188GB on Ubuntu 19.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2002201-VE-XEONGOLD541

Result Identifier: Xeon Gold 5218
Date: February 19 2020
Run Test Duration: 12 Hours, 55 Minutes


Processor: Intel Xeon Gold 5218 @ 3.90GHz (16 Cores / 32 Threads)
Motherboard: Supermicro X11SPL-F v1.02 (3.1 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 188GB
Disk: 3841GB Micron_9300_MTFDHAL3T8TDP
Graphics: llvmpipe 188GB
Monitor: VE228
Network: 2 x Intel I210
OS: Ubuntu 19.10
Kernel: 5.5.0-050500-generic (x86_64)
Desktop: GNOME Shell 3.34.1
Display Server: X Server 1.20.5
Display Driver: modesetting 1.20.5
OpenGL: 3.3 Mesa 19.2.8 (LLVM 9.0 256 bits)
Compiler: GCC 9.2.1 20191008
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Compiler build configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave - CPU Microcode: 0x500002c
- Python 2.7.17 + Python 3.7.5
- Security mitigations: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + tsx_async_abort: Mitigation of TSX disabled

Xeon Gold - overview of the 114 benchmark results for the Xeon Gold 5218 (each result is shown individually below).

MKL-DNN DNNL

This is a test of the Intel MKL-DNN (DNNL / Deep Neural Network Library) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.
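
For orientation, the sketch below shows roughly what driving DNNL 1.x from its C++ API looks like for a single primitive (a ReLU over a small f32 tensor). The tensor shape and the choice of primitive are made up for illustration; the numbers in this file come from DNNL's bundled benchdnn driver, not from code like this.

    // Minimal DNNL 1.x C++ sketch: create a CPU engine/stream, one memory
    // object, and run an eltwise ReLU primitive in place.
    // Build (assumed): g++ -std=c++11 relu_sketch.cpp -ldnnl
    #include <dnnl.hpp>

    int main() {
        using namespace dnnl;

        engine eng(engine::kind::cpu, 0);   // CPU engine, as in this result file
        stream strm(eng);

        // A small NCHW f32 tensor; benchdnn's conv/deconv batches are far larger.
        memory::dims dims = {1, 16, 56, 56};
        auto md  = memory::desc(dims, memory::data_type::f32, memory::format_tag::nchw);
        auto mem = memory(md, eng);         // DNNL allocates the buffer

        const size_t count = 1 * 16 * 56 * 56;
        float *ptr = static_cast<float *>(mem.get_data_handle());
        for (size_t i = 0; i < count; ++i)
            ptr[i] = (i % 2) ? 1.0f : -1.0f;

        // Descriptor -> primitive descriptor -> primitive, then execute.
        auto relu_d  = eltwise_forward::desc(prop_kind::forward_inference,
                                             algorithm::eltwise_relu, md, 0.0f);
        auto relu_pd = eltwise_forward::primitive_desc(relu_d, eng);
        auto relu    = eltwise_forward(relu_pd);

        relu.execute(strm, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
        strm.wait();                        // block until the primitive finishes
        return 0;
    }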

MKL-DNN DNNL 1.1 (ms, fewer is better), Xeon Gold 5218
Harness: Deconvolution Batch deconv_all - Data Type: bf16bf16bf16: 6792.39 (SE +/- 3.47, N = 3; MIN: 6719.39)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.
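
As a point of reference, applications use FFTW by creating a plan for a given transform size and then executing it; the benchmark itself drives FFTW's own bench tool rather than application code. Below is a minimal double-precision 1D sketch with an arbitrary input signal; the "Float + SSE" results correspond to FFTW's single-precision build (the fftwf_ interface).

    // Minimal FFTW sketch: plan and execute a 1D complex DFT.
    // Build (assumed): g++ fft_sketch.cpp -lfftw3 -lm
    #include <fftw3.h>

    int main() {
        const int N = 1024;   // matches one of the 1D transform sizes tested
        fftw_complex *in  = fftw_alloc_complex(N);
        fftw_complex *out = fftw_alloc_complex(N);

        // Planning is where FFTW searches for a fast algorithm for this size;
        // FFTW_MEASURE times candidate plans, FFTW_ESTIMATE picks heuristically.
        fftw_plan plan = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_MEASURE);

        // Fill the input after planning, since FFTW_MEASURE may overwrite the arrays.
        for (int i = 0; i < N; ++i) { in[i][0] = i % 7; in[i][1] = 0.0; }

        fftw_execute(plan);   // run the planned transform

        fftw_destroy_plan(plan);
        fftw_free(in);
        fftw_free(out);
        return 0;
    }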

FFTW 3.3.6 (Mflops, more is better), Xeon Gold 5218
Build: Stock - Size: 1D FFT Size 32: 7700.7 (SE +/- 91.10, N = 3)
Build: Stock - Size: 1D FFT Size 64: 7472.2 (SE +/- 11.65, N = 3)
1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm

MKL-DNN DNNL

This is a test of the Intel MKL-DNN (DNNL / Deep Neural Network Library) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN DNNL 1.1 (ms, fewer is better), Xeon Gold 5218
Harness: Convolution Batch conv_googlenet_v3 - Data Type: bf16bf16bf16: 488.92 (SE +/- 0.32, N = 3; MIN: 483.01)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 (Mflops, more is better), Xeon Gold 5218
Build: Stock - Size: 2D FFT Size 32: 8037.0 (SE +/- 390.57, N = 15)
Build: Stock - Size: 2D FFT Size 64: 5527.5 (SE +/- 14.41, N = 3)
Build: Stock - Size: 1D FFT Size 128: 5355.8 (SE +/- 72.39, N = 3)
1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm

MKL-DNN DNNL

This is a test of the Intel MKL-DNN (DNNL / Deep Neural Network Library) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN DNNL 1.1 (ms, fewer is better), Xeon Gold 5218
Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16: 16.84 (SE +/- 0.03, N = 3; MIN: 16.47)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 (Mflops, more is better), Xeon Gold 5218
Build: Stock - Size: 1D FFT Size 256: 5241.5 (SE +/- 30.00, N = 3)
Build: Stock - Size: 1D FFT Size 512: 7493.6 (SE +/- 7.05, N = 3)
Build: Stock - Size: 2D FFT Size 128: 7023.2 (SE +/- 164.04, N = 15)
Build: Stock - Size: 2D FFT Size 256: 5960.8 (SE +/- 160.35, N = 15)
1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm

MKL-DNN DNNL

This is a test of the Intel MKL-DNN (DNNL / Deep Neural Network Library) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN DNNL 1.1 (ms, fewer is better), Xeon Gold 5218
Harness: Convolution Batch conv_alexnet - Data Type: bf16bf16bf16: 1888.90 (SE +/- 2.46, N = 3; MIN: 1880.02)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 (Mflops, more is better), Xeon Gold 5218
Build: Stock - Size: 2D FFT Size 512: 6205.5 (SE +/- 12.16, N = 3)
Build: Stock - Size: 1D FFT Size 1024: 7572.9 (SE +/- 35.32, N = 3)
Build: Stock - Size: 1D FFT Size 2048: 7180.1 (SE +/- 24.29, N = 3)
Build: Stock - Size: 1D FFT Size 4096: 7056.4 (SE +/- 36.04, N = 3)
Build: Stock - Size: 2D FFT Size 1024: 6198.8 (SE +/- 18.19, N = 3)
Build: Stock - Size: 2D FFT Size 2048: 4837.3 (SE +/- 28.29, N = 3)
Build: Stock - Size: 2D FFT Size 4096: 4224.6 (SE +/- 6.76, N = 3)
Build: Float + SSE - Size: 1D FFT Size 32: 13727 (SE +/- 161.53, N = 3)
Build: Float + SSE - Size: 1D FFT Size 64: 15952 (SE +/- 203.68, N = 3)
Build: Float + SSE - Size: 2D FFT Size 32: 34928 (SE +/- 764.03, N = 15)
1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm

MKL-DNN DNNL

This is a test of the Intel MKL-DNN (DNNL / Deep Neural Network Library) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN DNNL 1.1 (ms, fewer is better), Xeon Gold 5218
Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32: 133.10 (SE +/- 0.09, N = 3; MIN: 130.3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 (Mflops, more is better), Xeon Gold 5218
Build: Float + SSE - Size: 2D FFT Size 64: 30014 (SE +/- 364.33, N = 3)
Build: Float + SSE - Size: 1D FFT Size 128: 19118 (SE +/- 148.93, N = 14)
Build: Float + SSE - Size: 1D FFT Size 256: 27769 (SE +/- 362.39, N = 15)
Build: Float + SSE - Size: 1D FFT Size 512: 39235 (SE +/- 359.64, N = 15)
Build: Float + SSE - Size: 2D FFT Size 128: 23433 (SE +/- 252.28, N = 7)
Build: Float + SSE - Size: 2D FFT Size 256: 20850 (SE +/- 261.39, N = 5)
Build: Float + SSE - Size: 2D FFT Size 512: 19298 (SE +/- 138.66, N = 3)
1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm

MKL-DNN DNNL

This is a test of the Intel MKL-DNN (DNNL / Deep Neural Network Library) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN DNNL 1.1 (ms, fewer is better), Xeon Gold 5218
Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32: 0.993071 (SE +/- 0.001821, N = 3; MIN: 0.96)
Harness: IP Batch 1D - Data Type: f32: 5.28502 (SE +/- 0.04179, N = 3; MIN: 4.82)
Harness: Deconvolution Batch deconv_all - Data Type: f32: 1721.20 (SE +/- 0.75, N = 3; MIN: 1693.33)
Harness: Deconvolution Batch deconv_3d - Data Type: f32: 5.85599 (SE +/- 0.01377, N = 3; MIN: 5.76)
Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32: 35.29 (SE +/- 0.03, N = 3; MIN: 34.29)
Harness: Convolution Batch conv_3d - Data Type: bf16bf16bf16: 40.58 (SE +/- 0.11, N = 3; MIN: 39.81)
Harness: Convolution Batch conv_all - Data Type: bf16bf16bf16: 10330.3 (SE +/- 2.47, N = 3; MIN: 10267)
Harness: Convolution Batch conv_3d - Data Type: f32: 13.02 (SE +/- 0.01, N = 3; MIN: 12.61)
Harness: Convolution Batch conv_all - Data Type: f32: 2395.11 (SE +/- 2.58, N = 3; MIN: 2361.65)
Harness: IP Batch 1D - Data Type: bf16bf16bf16: 8.99765 (SE +/- 0.02626, N = 3; MIN: 8.64)
Harness: IP Batch All - Data Type: bf16bf16bf16: 12.94 (SE +/- 0.02, N = 3; MIN: 9.64)
Harness: IP Batch All - Data Type: u8s8f32: 4.88367 (SE +/- 0.01863, N = 3; MIN: 4.67)
Harness: IP Batch 1D - Data Type: u8s8f32: 0.925650 (SE +/- 0.006102, N = 3; MIN: 0.87)
Harness: IP Batch All - Data Type: f32: 9.51973 (SE +/- 0.02708, N = 3; MIN: 9.06)
Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32: 7304.37 (SE +/- 5.83, N = 3; MIN: 7288.58)
Harness: Convolution Batch conv_3d - Data Type: u8s8f32: 11821.6 (SE +/- 6.41, N = 3; MIN: 11788.6)
Harness: Recurrent Neural Network Training - Data Type: f32: 241.11 (SE +/- 0.79, N = 3; MIN: 233.62)
Harness: Deconvolution Batch deconv_1d - Data Type: f32: 4.07546 (SE +/- 0.01401, N = 3; MIN: 3.94)
Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32: 82.26 (SE +/- 0.08, N = 3; MIN: 80.66)
Harness: Convolution Batch conv_alexnet - Data Type: f32: 301.61 (SE +/- 0.52, N = 3; MIN: 297.34)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 (Mflops, more is better), Xeon Gold 5218
Build: Float + SSE - Size: 1D FFT Size 1024: 49196 (SE +/- 332.27, N = 3)
1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm

MKL-DNN DNNL

This is a test of the Intel MKL-DNN (DNNL / Deep Neural Network Library) as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN DNNL 1.1 (ms, fewer is better), Xeon Gold 5218
Harness: Convolution Batch conv_all - Data Type: u8s8f32: 5845.32 (SE +/- 4.39, N = 3; MIN: 5816.9)
Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16: 21.37 (SE +/- 0.03, N = 3; MIN: 21.24)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 (Mflops, more is better), Xeon Gold 5218
Build: Float + SSE - Size: 1D FFT Size 2048: 45883 (SE +/- 604.43, N = 13)
Build: Float + SSE - Size: 1D FFT Size 4096: 44840 (SE +/- 323.08, N = 3)
Build: Float + SSE - Size: 2D FFT Size 1024: 19316 (SE +/- 168.47, N = 3)
Build: Float + SSE - Size: 2D FFT Size 2048: 15084 (SE +/- 124.29, N = 3)
Build: Float + SSE - Size: 2D FFT Size 4096: 12057 (SE +/- 32.95, N = 3)
1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm

Timed FFmpeg Compilation

This test times how long it takes to build FFmpeg. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, fewer is better), Xeon Gold 5218: 47.57 (SE +/- 0.02, N = 3)

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, fewer is better), Xeon Gold 5218: 136.69 (SE +/- 0.25, N = 3)

Timed ImageMagick Compilation

This test times how long it takes to build ImageMagick. Learn more via the OpenBenchmarking.org test page.

Timed ImageMagick Compilation 6.9.0 - Time To Compile (Seconds, fewer is better), Xeon Gold 5218: 27.44 (SE +/- 0.08, N = 3)

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.4 - Time To Compile (Seconds, fewer is better), Xeon Gold 5218: 30.02 (SE +/- 0.05, N = 3)

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, fewer is better), Xeon Gold 5218: 26.18 (SE +/- 0.12, N = 3)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 6.0.1 - Time To Compile (Seconds, fewer is better), Xeon Gold 5218: 293.74

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 7.4.2 - Time To Compile (Seconds, fewer is better), Xeon Gold 5218: 56.67 (SE +/- 0.03, N = 3)

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC). Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 8.2 - Time To Compile (Seconds, fewer is better), Xeon Gold 5218: 959.66 (SE +/- 0.94, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, fewer is better), Xeon Gold 5218: 59.28 (SE +/- 1.01, N = 3)

AOBench

AOBench is a lightweight ambient occlusion renderer, written in C. The test profile uses a render size of 2048 x 2048. Learn more via the OpenBenchmarking.org test page.
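
The core computation is an ambient occlusion estimate: at each shading point, sample directions over the hemisphere above the surface and count how many escape without hitting nearby geometry. The following is a self-contained sketch of that estimator; the single-sphere occluder, sample count, and fixed +Y normal are made up for illustration and are not aobench's actual scene code.

    // Sketch of a Monte Carlo ambient occlusion estimate (not aobench itself).
    #include <cmath>
    #include <cstdio>
    #include <cstdlib>

    struct Vec { double x, y, z; };

    static Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Toy occluder: a unit sphere hovering above the shading point.
    static bool occluded(Vec org, Vec dir) {
        Vec oc = sub(org, Vec{0.0, 2.0, 0.0});
        double b = dot(oc, dir);
        double c = dot(oc, oc) - 1.0;
        double disc = b * b - c;
        return disc > 0.0 && (-b - std::sqrt(disc)) > 1e-6;
    }

    // Fraction of cosine-weighted hemisphere samples (around a +Y normal)
    // that escape without hitting the occluder: 1.0 = fully open, 0.0 = blocked.
    static double ambient_occlusion(Vec p, int nsamples) {
        const double kPi = 3.14159265358979323846;
        int unblocked = 0;
        for (int s = 0; s < nsamples; ++s) {
            double r = drand48(), phi = 2.0 * kPi * drand48();
            Vec dir = { std::cos(phi) * std::sqrt(1.0 - r),
                        std::sqrt(r),
                        std::sin(phi) * std::sqrt(1.0 - r) };
            if (!occluded(p, dir)) ++unblocked;
        }
        return static_cast<double>(unblocked) / nsamples;
    }

    int main() {
        std::printf("AO estimate: %.3f\n", ambient_occlusion(Vec{0.0, 0.0, 0.0}, 64));
        return 0;
    }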

AOBench - Size: 2048 x 2048 - Total Time (Seconds, fewer is better), Xeon Gold 5218: 36.26 (SE +/- 0.01, N = 3)
1. (CC) gcc options: -lm -O3

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 (Seconds, fewer is better), Xeon Gold 5218
Scene: Hair: 22.37 (SE +/- 0.02, N = 3)
Scene: Water Caustic: 26.17 (SE +/- 0.14, N = 3)
Scene: Non-Exponential: 8.27766 (SE +/- 0.10378, N = 15)
Scene: Volumetric Caustic: 9.50049 (SE +/- 0.05813, N = 3)
1. (CXX) g++ options: -std=c++0x -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -mfma -mbmi2 -mavx512f -mavx512vl -mavx512cd -mavx512dq -mavx512bw -mno-sse4a -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512pf -mno-avx512er -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -ljpeg -lpthread -ldl

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core); in this configuration it shoots 16 rays per pixel for anti-aliasing and renders a 4K image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, fewer is better), Xeon Gold 5218: 57.51 (SE +/- 0.03, N = 3)
1. (CC) gcc options: -lm -lpthread -O3

TTSIOD 3D Renderer

A portable GPL 3D software renderer that supports OpenMP and Intel Threading Building Blocks with many different rendering modes. This version does not use OpenGL but is entirely CPU/software based. Learn more via the OpenBenchmarking.org test page.
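
The "Phong Rendering" workload refers to the classic Phong reflection model: an ambient term, a diffuse term proportional to N.L, and a specular term proportional to (R.V)^shininess for each light. A minimal sketch of that per-point shading calculation follows; the material constants and vectors are made up for illustration and this is not TTSIOD's actual code.

    // Sketch of the Phong reflection model for a single light.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // n = surface normal, l = direction to light, v = direction to viewer (all unit length).
    static double phong(Vec3 n, Vec3 l, Vec3 v,
                        double ka, double kd, double ks, double shininess) {
        double nl = dot(n, l);                       // cosine between normal and light
        double diffuse = std::max(0.0, nl);
        // Reflection of the light direction about the normal: r = 2(N.L)N - L
        Vec3 r = { 2.0 * nl * n.x - l.x, 2.0 * nl * n.y - l.y, 2.0 * nl * n.z - l.z };
        double specular = std::pow(std::max(0.0, dot(r, v)), shininess);
        return ka + kd * diffuse + ks * specular;
    }

    int main() {
        Vec3 n = {0.0, 1.0, 0.0}, l = {0.0, 1.0, 0.0}, v = {0.0, 1.0, 0.0};
        std::printf("intensity: %.3f\n", phong(n, l, v, 0.1, 0.7, 0.2, 32.0));
        return 0;
    }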

TTSIOD 3D Renderer 2.3b - Phong Rendering With Soft-Shadow Mapping (FPS, more is better), Xeon Gold 5218: 469.84 (SE +/- 0.95, N = 3)
1. (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -msse -mrecip -mfpmath=sse -msse2 -mssse3 -lSDL -fopenmp -fwhole-program -lstdc++

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 4.10.07 - Mode: CPU (Ksamples, more is better), Xeon Gold 5218: 17049 (SE +/- 143.66, N = 3)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.82 (Seconds, fewer is better), Xeon Gold 5218
Blend File: BMW27 - Compute: OpenCL: 462.52 (SE +/- 1.89, N = 3)
Blend File: BMW27 - Compute: CPU-Only: 149.15 (SE +/- 0.12, N = 3)
Blend File: Classroom - Compute: OpenCL: 474.07 (SE +/- 0.99, N = 3)
Blend File: Fishy Cat - Compute: OpenCL: 1236.44 (SE +/- 0.96, N = 3)
Blend File: Barbershop - Compute: OpenCL: 724.54 (SE +/- 2.80, N = 3)
Blend File: Classroom - Compute: CPU-Only: 431.59 (SE +/- 0.95, N = 3)
Blend File: Fishy Cat - Compute: CPU-Only: 229.16 (SE +/- 0.07, N = 3)
Blend File: Barbershop - Compute: CPU-Only: 588.12 (SE +/- 0.22, N = 3)
Blend File: Pabellon Barcelona - Compute: OpenCL: 1412.20 (SE +/- 2.80, N = 3)
Blend File: Pabellon Barcelona - Compute: CPU-Only: 530.65 (SE +/- 0.33, N = 3)

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time (Seconds, fewer is better), Xeon Gold 5218: 52.30 (SE +/- 1.09, N = 15)
1. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lXpm -lSM -lICE -lX11 -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

Radiance Benchmark

This is a benchmark of NREL Radiance, an open-source synthetic imaging system developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.

Radiance Benchmark 5.0 (Seconds, fewer is better), Xeon Gold 5218
Test: Serial: 757.57
Test: SMP Parallel: 239.61

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.
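
For context, applications exercise these kernels through Embree's C API by building a scene of geometries and tracing rays against it; the results below come from Embree's bundled pathtracer scenes rather than code like this. A minimal single-triangle sketch against the Embree 3 API:

    // Minimal Embree 3 sketch: one triangle, one ray, nearest-hit query.
    // Build (assumed): g++ embree_sketch.cpp -lembree3
    #include <embree3/rtcore.h>
    #include <cstdio>

    int main() {
        RTCDevice device = rtcNewDevice(nullptr);
        RTCScene  scene  = rtcNewScene(device);

        // One triangle: three vertices plus one index triplet.
        RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
        float *verts = static_cast<float *>(rtcSetNewGeometryBuffer(
            geom, RTC_BUFFER_TYPE_VERTEX, 0, RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3));
        unsigned *idx = static_cast<unsigned *>(rtcSetNewGeometryBuffer(
            geom, RTC_BUFFER_TYPE_INDEX, 0, RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1));
        verts[0] = 0.f; verts[1] = 0.f; verts[2] = 0.f;
        verts[3] = 1.f; verts[4] = 0.f; verts[5] = 0.f;
        verts[6] = 0.f; verts[7] = 1.f; verts[8] = 0.f;
        idx[0] = 0; idx[1] = 1; idx[2] = 2;
        rtcCommitGeometry(geom);
        rtcAttachGeometry(scene, geom);
        rtcReleaseGeometry(geom);
        rtcCommitScene(scene);

        // Shoot one ray through the triangle and ask for the nearest hit.
        RTCIntersectContext ctx;
        rtcInitIntersectContext(&ctx);
        RTCRayHit rh = {};
        rh.ray.org_x = 0.25f; rh.ray.org_y = 0.25f; rh.ray.org_z = -1.f;
        rh.ray.dir_x = 0.f;   rh.ray.dir_y = 0.f;   rh.ray.dir_z = 1.f;
        rh.ray.tnear = 0.f;   rh.ray.tfar  = 1e30f; rh.ray.mask  = 0xFFFFFFFFu;
        rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;
        rtcIntersect1(scene, &ctx, &rh);
        std::printf("hit: %s (t = %f)\n",
                    rh.hit.geomID != RTC_INVALID_GEOMETRY_ID ? "yes" : "no", rh.ray.tfar);

        rtcReleaseScene(scene);
        rtcReleaseDevice(device);
        return 0;
    }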

Embree 3.6.1 (Frames Per Second, more is better), Xeon Gold 5218
Binary: Pathtracer - Model: Crown: 12.35 (SE +/- 0.01, N = 3; MIN: 12.26 / MAX: 12.5)
Binary: Pathtracer ISPC - Model: Crown: 14.36 (SE +/- 0.01, N = 3; MIN: 14.21 / MAX: 14.55)
Binary: Pathtracer - Model: Asian Dragon: 15.19 (SE +/- 0.02, N = 3; MIN: 15.11 / MAX: 15.32)
Binary: Pathtracer - Model: Asian Dragon Obj: 13.79 (SE +/- 0.02, N = 3; MIN: 13.7 / MAX: 13.9)
Binary: Pathtracer ISPC - Model: Asian Dragon: 19.26 (SE +/- 0.01, N = 3; MIN: 19.15 / MAX: 19.41)
Binary: Pathtracer ISPC - Model: Asian Dragon Obj: 16.64 (SE +/- 0.01, N = 3; MIN: 16.53 / MAX: 16.8)

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
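
Applications drive the denoiser through OIDN's C API by binding input and output images to a filter and executing it; the result below comes from denoising the bundled Memorial scene, not from code like this. A rough sketch with a made-up 64 x 64 buffer:

    // Minimal Open Image Denoise sketch using the "RT" ray-tracing filter.
    // Build (assumed): g++ oidn_sketch.cpp -lOpenImageDenoise
    #include <OpenImageDenoise/oidn.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const int width = 64, height = 64;                    // toy image size
        std::vector<float> color(width * height * 3, 0.5f);   // noisy RGB input
        std::vector<float> output(width * height * 3);

        OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_DEFAULT);
        oidnCommitDevice(device);

        OIDNFilter filter = oidnNewFilter(device, "RT");      // filter for renderer output
        oidnSetSharedFilterImage(filter, "color",  color.data(),
                                 OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
        oidnSetSharedFilterImage(filter, "output", output.data(),
                                 OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
        oidnCommitFilter(filter);
        oidnExecuteFilter(filter);

        const char *error;
        if (oidnGetDeviceError(device, &error) != OIDN_ERROR_NONE)
            std::printf("OIDN error: %s\n", error);

        oidnReleaseFilter(filter);
        oidnReleaseDevice(device);
        return 0;
    }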

Intel Open Image Denoise 1.0.0 - Scene: Memorial (Images / Sec, more is better), Xeon Gold 5218: 14.83 (SE +/- 0.09, N = 3)

Smallpt

Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
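
Structurally, that means one OpenMP-parallel loop over image rows, with each pixel accumulating independent Monte Carlo samples. A hedged structural sketch follows, in which a plain random number stands in for smallpt's recursive path-traced radiance estimate:

    // Structural sketch of an OpenMP-parallel Monte Carlo render loop.
    // Build (assumed): g++ -O3 -fopenmp mc_sketch.cpp
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    int main() {
        const int w = 256, h = 256, samples = 128;   // smallpt-style sample count
        std::vector<double> image(w * h, 0.0);

        // Rows are independent work items; schedule(dynamic) helps because
        // per-row cost varies with scene content.
        #pragma omp parallel for schedule(dynamic)
        for (int y = 0; y < h; ++y) {
            unsigned short seed[3] = {0, 0, static_cast<unsigned short>(y * y * y)};
            for (int x = 0; x < w; ++x) {
                double radiance = 0.0;
                for (int s = 0; s < samples; ++s)
                    radiance += erand48(seed);       // stand-in for the radiance estimate
                image[y * w + x] = radiance / samples;
            }
        }
        std::printf("first pixel: %.3f\n", image[0]);
        return 0;
    }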

Smallpt 1.0 - Global Illumination Renderer; 128 Samples (Seconds, fewer is better), Xeon Gold 5218: 8.702 (SE +/- 0.018, N = 3)
1. (CXX) g++ options: -fopenmp -O3

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.0.64 (M samples/s, more is better), Xeon Gold 5218
Scene: Bedroom: 1.818 (SE +/- 0.002, N = 3)
Scene: Supercar: 4.309 (SE +/- 0.007, N = 3)

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.2 (M samples/sec, more is better), Xeon Gold 5218
Scene: DLSC: 68754.67 (SE +/- 0.01, N = 3; MIN: 1.78 / MAX: 68754.69)
Scene: Rainbow Colors and Prism: 3919.36 (SE +/- 0.00, N = 3; MIN: 1.78)

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 (FPS, more is better), Xeon Gold 5218
Demo: San Miguel - Renderer: SciVis: 18.87 (SE +/- 0.00, N = 12; MIN: 15.38 / MAX: 19.23)
Demo: XFrog Forest - Renderer: SciVis: 3.01 (SE +/- 0.00, N = 3; MIN: 2.81 / MAX: 3.02)
Demo: San Miguel - Renderer: Path Tracer: 1.75 (SE +/- 0.00, N = 3; MIN: 1.7)
Demo: NASA Streamlines - Renderer: SciVis: 23.26 (SE +/- 0.00, N = 12; MIN: 20 / MAX: 23.81)
Demo: XFrog Forest - Renderer: Path Tracer: 1.67 (SE +/- 0.00, N = 6; MIN: 1.61 / MAX: 1.68)
Demo: Magnetic Reconnection - Renderer: SciVis: 21.74 (SE +/- 0.00, N = 12; MIN: 18.52 / MAX: 22.22)
Demo: NASA Streamlines - Renderer: Path Tracer: 4.37 (SE +/- 0.00, N = 12; MIN: 4.08 / MAX: 4.44)
Demo: Magnetic Reconnection - Renderer: Path Tracer: 333.33 (SE +/- 0.00, N = 12; MIN: 200)

rays1bench

This is a test of rays1bench, a simple path tracer / ray tracer that supports SSE and AVX instructions, multi-threading, and other features. This test profile measures the performance of the "large scene" in rays1bench. Learn more via the OpenBenchmarking.org test page.

rays1bench 2020-01-09 - Large Scene (mrays/s, more is better), Xeon Gold 5218: 56.59 (SE +/- 0.13, N = 3)

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta (Seconds, fewer is better), Xeon Gold 5218
Scene: Emily: 323.24
Scene: Disney Material: 181.76
Scene: Material Tester: 179.07

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.12 - Time To Compile (Seconds, fewer is better), Xeon Gold 5218: 92.41 (SE +/- 0.50, N = 3)

114 Results Shown

MKL-DNN DNNL
FFTW:
  Stock - 1D FFT Size 32
  Stock - 1D FFT Size 64
MKL-DNN DNNL
FFTW:
  Stock - 2D FFT Size 32
  Stock - 2D FFT Size 64
  Stock - 1D FFT Size 128
MKL-DNN DNNL
FFTW:
  Stock - 1D FFT Size 256
  Stock - 1D FFT Size 512
  Stock - 2D FFT Size 128
  Stock - 2D FFT Size 256
MKL-DNN DNNL
FFTW:
  Stock - 2D FFT Size 512
  Stock - 1D FFT Size 1024
  Stock - 1D FFT Size 2048
  Stock - 1D FFT Size 4096
  Stock - 2D FFT Size 1024
  Stock - 2D FFT Size 2048
  Stock - 2D FFT Size 4096
  Float + SSE - 1D FFT Size 32
  Float + SSE - 1D FFT Size 64
  Float + SSE - 2D FFT Size 32
MKL-DNN DNNL
FFTW:
  Float + SSE - 2D FFT Size 64
  Float + SSE - 1D FFT Size 128
  Float + SSE - 1D FFT Size 256
  Float + SSE - 1D FFT Size 512
  Float + SSE - 2D FFT Size 128
  Float + SSE - 2D FFT Size 256
  Float + SSE - 2D FFT Size 512
MKL-DNN DNNL:
  Deconvolution Batch deconv_1d - u8s8f32
  IP Batch 1D - f32
  Deconvolution Batch deconv_all - f32
  Deconvolution Batch deconv_3d - f32
  Convolution Batch conv_googlenet_v3 - u8s8f32
  Convolution Batch conv_3d - bf16bf16bf16
  Convolution Batch conv_all - bf16bf16bf16
  Convolution Batch conv_3d - f32
  Convolution Batch conv_all - f32
  IP Batch 1D - bf16bf16bf16
  IP Batch All - bf16bf16bf16
  IP Batch All - u8s8f32
  IP Batch 1D - u8s8f32
  IP Batch All - f32
  Deconvolution Batch deconv_3d - u8s8f32
  Convolution Batch conv_3d - u8s8f32
  Recurrent Neural Network Training - f32
  Deconvolution Batch deconv_1d - f32
  Convolution Batch conv_alexnet - u8s8f32
  Convolution Batch conv_alexnet - f32
FFTW
MKL-DNN DNNL:
  Convolution Batch conv_all - u8s8f32
  Deconvolution Batch deconv_3d - bf16bf16bf16
FFTW:
  Float + SSE - 1D FFT Size 2048
  Float + SSE - 1D FFT Size 4096
  Float + SSE - 2D FFT Size 1024
  Float + SSE - 2D FFT Size 2048
  Float + SSE - 2D FFT Size 4096
Timed FFmpeg Compilation
Timed GDB GNU Debugger Compilation
Timed ImageMagick Compilation
Timed MPlayer Compilation
Timed Apache Compilation
Timed LLVM Compilation
Timed PHP Compilation
Timed GCC Compilation
Timed Linux Kernel Compilation
AOBench
Tungsten Renderer:
  Hair
  Water Caustic
  Non-Exponential
  Volumetric Caustic
C-Ray
TTSIOD 3D Renderer
Chaos Group V-RAY
Blender:
  BMW27 - OpenCL
  BMW27 - CPU-Only
  Classroom - OpenCL
  Fishy Cat - OpenCL
  Barbershop - OpenCL
  Classroom - CPU-Only
  Fishy Cat - CPU-Only
  Barbershop - CPU-Only
  Pabellon Barcelona - OpenCL
  Pabellon Barcelona - CPU-Only
POV-Ray
Radiance Benchmark:
  Serial
  SMP Parallel
Embree:
  Pathtracer - Crown
  Pathtracer ISPC - Crown
  Pathtracer - Asian Dragon
  Pathtracer - Asian Dragon Obj
  Pathtracer ISPC - Asian Dragon
  Pathtracer ISPC - Asian Dragon Obj
Intel Open Image Denoise
Smallpt
IndigoBench:
  Bedroom
  Supercar
LuxCoreRender:
  DLSC
  Rainbow Colors and Prism
OSPray:
  San Miguel - SciVis
  XFrog Forest - SciVis
  San Miguel - Path Tracer
  NASA Streamlines - SciVis
  XFrog Forest - Path Tracer
  Magnetic Reconnection - SciVis
  NASA Streamlines - Path Tracer
  Magnetic Reconnection - Path Tracer
rays1bench
Appleseed:
  Emily
  Disney Material
  Material Tester
Build2