Ryzen 9 3900X + TITAN RTX

AMD Ryzen 9 3900X 12-Core testing with an ASUS ROG CROSSHAIR VIII HERO (WI-FI) (1001 BIOS) and NVIDIA TITAN RTX 24GB on Ubuntu 19.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1910024-PTS-RYZEN93938

Result Identifier: TITAN RTX + 3900X
Date: October 01 2019
Test Duration: 9 Hours, 9 Minutes


Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (1001 BIOS)
Chipset: AMD Device 1480
Memory: 16384MB
Disk: Samsung SSD 970 EVO 250GB
Graphics: NVIDIA TITAN RTX 24GB (1350/7000MHz)
Audio: NVIDIA TU102 HD Audio
Monitor: ASUS VP28U
Network: Realtek Device 8125 + Intel I211 + Intel Device 2723
OS: Ubuntu 19.04
Kernel: 5.0.0-29-generic (x86_64)
Desktop: GNOME Shell 3.32.2
Display Server: X Server 1.20.4
Display Driver: NVIDIA 435.21
OpenGL: 4.6.0
Compiler: GCC 8.3.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand
- GPU Compute Cores: 4608
- Security: l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: always-on RSB filling


Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.80, Blend File: Pabellon Barcelona - Compute: OpenCL. Seconds, fewer is better. TITAN RTX + 3900X: 864.11 (SE +/- 7.65, N = 3)
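Each result line reports the mean of N runs together with its standard error (SE). As a sketch of how that figure is derived, the run times below are hypothetical (the per-run samples behind this result are not published):

```python
import statistics

# Three hypothetical run times in seconds, for illustration only.
runs = [856.2, 864.3, 871.8]

n = len(runs)
mean = statistics.mean(runs)
# Standard error of the mean: sample standard deviation / sqrt(N).
se = statistics.stdev(runs) / n ** 0.5

print(f"{mean:.2f} (SE +/- {se:.2f}, N = {n})")
```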

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16, Harness: Convolution Batch conv_all - Data Type: u8s8u8s32. ms, fewer is better. TITAN RTX + 3900X: 39829.47 (SE +/- 76.66, N = 3; MIN: 38802.2). Compiled with g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl (this build applies to all MKL-DNN results below).

MKL-DNN 2019-04-16, Harness: Convolution Batch conv_all - Data Type: u8s8f32s32. ms, fewer is better. TITAN RTX + 3900X: 39883.63 (SE +/- 20.25, N = 3; MIN: 38845.9)

Blender

Blender 2.80, Blend File: Fishy Cat - Compute: OpenCL. Seconds, fewer is better. TITAN RTX + 3900X: 755.55 (SE +/- 2.67, N = 3)

SHOC Scalable HeterOgeneous Computing

This is the CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing (SHOC) benchmark suite. Learn more via the OpenBenchmarking.org test page.

SHOC Scalable HeterOgeneous Computing 2015-11-10, Target: OpenCL - Benchmark: Max SP Flops. GFLOPS, more is better. TITAN RTX + 3900X: 17423.93 (SE +/- 183.98, N = 3). Compiled with g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi (this build applies to all SHOC results below).
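The Max SP Flops number can be sanity-checked against the GPU's theoretical single-precision peak: CUDA cores x 2 FLOPs per cycle (one fused multiply-add) x clock. As a sketch using the 4608 compute cores and 1350 MHz core clock from the system table; the measured ~17.4 TFLOPS exceeds this base-clock figure because the card runs at higher boost clocks:

```python
# Theoretical single-precision peak for the TITAN RTX at its
# listed base clock; boost clocks push the real ceiling higher.
cuda_cores = 4608          # from the system table
base_clock_ghz = 1.35      # 1350 MHz, from the system table
flops_per_core_cycle = 2   # one FMA = 2 FLOPs

peak_gflops = cuda_cores * flops_per_core_cycle * base_clock_ghz
print(f"{peak_gflops:.1f} GFLOPS at base clock")
```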

MKL-DNN

MKL-DNN 2019-04-16, Harness: Convolution Batch conv_all - Data Type: f32. ms, fewer is better. TITAN RTX + 3900X: 2088.81 (SE +/- 4.85, N = 3; MIN: 2068.35)

Blender

Blender 2.80, Blend File: Barbershop - Compute: OpenCL. Seconds, fewer is better. TITAN RTX + 3900X: 503.59 (SE +/- 1.14, N = 3)

MKL-DNN

MKL-DNN 2019-04-16, Harness: Deconvolution Batch deconv_all - Data Type: u8s8u8s32. ms, fewer is better. TITAN RTX + 3900X: 31708.60 (SE +/- 33.55, N = 3; MIN: 30559.9)

Blender

Blender 2.80, Blend File: Barbershop - Compute: CPU-Only. Seconds, fewer is better. TITAN RTX + 3900X: 461.91 (SE +/- 0.24, N = 3)

Blender 2.80, Blend File: Classroom - Compute: OpenCL. Seconds, fewer is better. TITAN RTX + 3900X: 407.46 (SE +/- 2.34, N = 3)

Blender 2.80, Blend File: Pabellon Barcelona - Compute: CPU-Only. Seconds, fewer is better. TITAN RTX + 3900X: 389.82 (SE +/- 0.23, N = 3)

MKL-DNN

MKL-DNN 2019-04-16, Harness: Deconvolution Batch deconv_all - Data Type: f32. ms, fewer is better. TITAN RTX + 3900X: 6144.28 (SE +/- 15.46, N = 3; MIN: 5735.21)

Blender

Blender 2.80, Blend File: Classroom - Compute: CPU-Only. Seconds, fewer is better. TITAN RTX + 3900X: 316.13 (SE +/- 0.53, N = 3)

Blender 2.80, Blend File: BMW27 - Compute: OpenCL. Seconds, fewer is better. TITAN RTX + 3900X: 295.88 (SE +/- 0.33, N = 3)

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualization. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: San Miguel - Renderer: Path Tracer. FPS, more is better. TITAN RTX + 3900X: 1.46 (SE +/- 0.00, N = 3; MIN: 1.42 / MAX: 1.47)

Blender

Blender 2.80, Blend File: Fishy Cat - Compute: CPU-Only. Seconds, fewer is better. TITAN RTX + 3900X: 165.70 (SE +/- 0.12, N = 3)

OSPray

OSPray 1.8.5, Demo: NASA Streamlines - Renderer: Path Tracer. FPS, more is better. TITAN RTX + 3900X: 5.55 (SE +/- 0.00, N = 12; MIN: 5.46 / MAX: 5.62)

MKL-DNN

MKL-DNN 2019-04-16, Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8u8s32. ms, fewer is better. TITAN RTX + 3900X: 1619.40 (SE +/- 6.80, N = 3; MIN: 1517.87)

OSPray

OSPray 1.8.5, Demo: San Miguel - Renderer: SciVis. FPS, more is better. TITAN RTX + 3900X: 19.23 (SE +/- 0.00, N = 12; MIN: 18.52 / MAX: 20.41)

MKL-DNN

MKL-DNN 2019-04-16, Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32s32. ms, fewer is better. TITAN RTX + 3900X: 1613.96 (SE +/- 5.05, N = 3; MIN: 1510.52)

MKL-DNN 2019-04-16, Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32. ms, fewer is better. TITAN RTX + 3900X: 111.50 (SE +/- 0.32, N = 3; MIN: 109.56)

LuxMark

LuxMark is a multi-platform OpenCL benchmark using LuxRender. LuxMark supports targeting different OpenCL devices and has multiple scenes available for rendering. LuxMark is a fully open-source OpenCL program with real-world rendering examples. Learn more via the OpenBenchmarking.org test page.

LuxMark 3.1, OpenCL Device: GPU - Scene: Hotel. Score, more is better. TITAN RTX + 3900X: 9878 (SE +/- 76.62, N = 3)

LuxMark 3.1, OpenCL Device: GPU - Scene: Microphone. Score, more is better. TITAN RTX + 3900X: 30616 (SE +/- 14.33, N = 3)

LuxMark 3.1, OpenCL Device: GPU - Scene: Luxball HDR. Score, more is better. TITAN RTX + 3900X: 45932 (SE +/- 7.31, N = 3)

OSPray

OSPray 1.8.5, Demo: XFrog Forest - Renderer: Path Tracer. FPS, more is better. TITAN RTX + 3900X: 1.88 (SE +/- 0.00, N = 3; MIN: 1.87 / MAX: 1.9)

Blender

Blender 2.80, Blend File: BMW27 - Compute: CPU-Only. Seconds, fewer is better. TITAN RTX + 3900X: 113.13 (SE +/- 0.27, N = 3)

MKL-DNN

MKL-DNN 2019-04-16, Harness: Convolution Batch conv_3d - Data Type: u8s8f32s32. ms, fewer is better. TITAN RTX + 3900X: 9881.34 (SE +/- 4.48, N = 3; MIN: 9857.42)

MKL-DNN 2019-04-16, Harness: Convolution Batch conv_3d - Data Type: u8s8u8s32. ms, fewer is better. TITAN RTX + 3900X: 9854.17 (SE +/- 11.69, N = 3; MIN: 9819.92)

LuxCoreRender OpenCL

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on OpenCL accelerators/GPUs; the alternative luxcorerender test profile targets CPU execution and uses a different set of tests. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender OpenCL 2.2, Scene: Food. M samples/sec, more is better. TITAN RTX + 3900X: 2.99 (SE +/- 0.04, N = 3; MIN: 0.33 / MAX: 3.65)

LuxCoreRender OpenCL 2.2, Scene: LuxCore Benchmark. M samples/sec, more is better. TITAN RTX + 3900X: 7.51 (SE +/- 0.00, N = 3; MIN: 0.38 / MAX: 8.49)

OSPray

OSPray 1.8.5, Demo: Magnetic Reconnection - Renderer: SciVis. FPS, more is better. TITAN RTX + 3900X: 12.99 (SE +/- 0.00, N = 12; MIN: 12.35 / MAX: 13.16)

LuxCoreRender OpenCL

LuxCoreRender OpenCL 2.2, Scene: DLSC. M samples/sec, more is better. TITAN RTX + 3900X: 9.64 (SE +/- 0.02, N = 3; MIN: 8.7 / MAX: 9.74)

OSPray

OSPray 1.8.5, Demo: XFrog Forest - Renderer: SciVis. FPS, more is better. TITAN RTX + 3900X: 3.58 (SE +/- 0.00, N = 3; MIN: 3.51 / MAX: 3.62)

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.2, Scene: DLSC. M samples/sec, more is better. TITAN RTX + 3900X: 2.28 (SE +/- 0.01, N = 3; MIN: 2.19 / MAX: 2.35)

LuxCoreRender 2.2, Scene: Rainbow Colors and Prism. M samples/sec, more is better. TITAN RTX + 3900X: 2.27 (SE +/- 0.03, N = 3; MIN: 2.18 / MAX: 2.35)

NAMD CUDA

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. This version of the NAMD test profile uses CUDA GPU acceleration. Learn more via the OpenBenchmarking.org test page.

NAMD CUDA 2.13, ATPase Simulation - 327,506 Atoms. days/ns, fewer is better. TITAN RTX + 3900X: 0.17939 (SE +/- 0.00039, N = 12)
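NAMD reports days/ns: how many days of wall-clock time one nanosecond of simulated time would take, so lower is better. Inverting the 0.17939 result gives the more familiar ns/day figure:

```python
# Convert NAMD's days/ns metric into simulated nanoseconds per day.
days_per_ns = 0.17939          # result reported above
ns_per_day = 1 / days_per_ns   # roughly 5.57 ns of simulation per day
print(f"{ns_per_day:.2f} ns/day")
```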

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.6.1, Binary: Pathtracer ISPC - Model: Asian Dragon Obj. Frames Per Second, more is better. TITAN RTX + 3900X: 14.19 (SE +/- 0.01, N = 3; MIN: 14.09 / MAX: 14.47)

Embree 3.6.1, Binary: Pathtracer - Model: Asian Dragon Obj. Frames Per Second, more is better. TITAN RTX + 3900X: 14.80 (SE +/- 0.00, N = 3; MIN: 14.71 / MAX: 15.07)

MKL-DNN

MKL-DNN 2019-04-16, Harness: IP Batch All - Data Type: u8s8f32s32. ms, fewer is better. TITAN RTX + 3900X: 743.05 (SE +/- 1.14, N = 3; MIN: 660.93)

MKL-DNN 2019-04-16, Harness: IP Batch All - Data Type: u8s8u8s32. ms, fewer is better. TITAN RTX + 3900X: 741.05 (SE +/- 2.75, N = 3; MIN: 646.26)

MKL-DNN 2019-04-16, Harness: IP Batch All - Data Type: f32. ms, fewer is better. TITAN RTX + 3900X: 209.96 (SE +/- 0.82, N = 3; MIN: 167.07)

Embree

Embree 3.6.1, Binary: Pathtracer ISPC - Model: Crown. Frames Per Second, more is better. TITAN RTX + 3900X: 14.71 (SE +/- 0.01, N = 3; MIN: 14.59 / MAX: 14.95)

Embree 3.6.1, Binary: Pathtracer - Model: Crown. Frames Per Second, more is better. TITAN RTX + 3900X: 15.36 (SE +/- 0.01, N = 3; MIN: 15.24 / MAX: 15.76)

MKL-DNN

MKL-DNN 2019-04-16, Harness: Convolution Batch conv_3d - Data Type: f32. ms, fewer is better. TITAN RTX + 3900X: 18.78 (SE +/- 0.11, N = 3; MIN: 18.22)

Embree

Embree 3.6.1, Binary: Pathtracer ISPC - Model: Asian Dragon. Frames Per Second, more is better. TITAN RTX + 3900X: 16.48 (SE +/- 0.01, N = 3; MIN: 16.37 / MAX: 16.72)

Embree 3.6.1, Binary: Pathtracer - Model: Asian Dragon. Frames Per Second, more is better. TITAN RTX + 3900X: 16.53 (SE +/- 0.01, N = 3; MIN: 16.41 / MAX: 16.87)

MKL-DNN

MKL-DNN 2019-04-16, Harness: Deconvolution Batch deconv_3d - Data Type: u8s8u8s32. ms, fewer is better. TITAN RTX + 3900X: 6065.02 (SE +/- 10.64, N = 3; MIN: 6030.44)

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2, Scene: Non-Exponential. Seconds, fewer is better. TITAN RTX + 3900X: 6.85 (SE +/- 0.12, N = 15). Compiled with g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -ljpeg -lGL -lGLU -lpthread -ldl (this build applies to all Tungsten results below).

MKL-DNN

MKL-DNN 2019-04-16, Harness: Deconvolution Batch deconv_1d - Data Type: u8s8u8s32. ms, fewer is better. TITAN RTX + 3900X: 3550.45 (SE +/- 5.47, N = 3; MIN: 3538.57)

MKL-DNN 2019-04-16, Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32s32. ms, fewer is better. TITAN RTX + 3900X: 5522.18 (SE +/- 9.29, N = 3; MIN: 5491.73)

MKL-DNN 2019-04-16, Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32s32. ms, fewer is better. TITAN RTX + 3900X: 3120.77 (SE +/- 5.88, N = 3; MIN: 3104.26)

MKL-DNN 2019-04-16, Harness: Convolution Batch conv_alexnet - Data Type: u8s8u8s32. ms, fewer is better. TITAN RTX + 3900X: 3664.46 (SE +/- 18.68, N = 3; MIN: 3507.75)

MKL-DNN 2019-04-16, Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32s32. ms, fewer is better. TITAN RTX + 3900X: 3644.65 (SE +/- 28.89, N = 3; MIN: 3461.36)

SHOC Scalable HeterOgeneous Computing

SHOC Scalable HeterOgeneous Computing 2015-11-10, Target: OpenCL - Benchmark: Texture Read Bandwidth. GB/s, more is better. TITAN RTX + 3900X: 1165.82 (SE +/- 3.03, N = 3)

Tungsten Renderer

Tungsten Renderer 0.2.2, Scene: Water Caustic. Seconds, fewer is better. TITAN RTX + 3900X: 23.77 (SE +/- 0.05, N = 3)

MKL-DNN

MKL-DNN 2019-04-16, Harness: Deconvolution Batch deconv_1d - Data Type: f32. ms, fewer is better. TITAN RTX + 3900X: 24.46 (SE +/- 0.12, N = 3; MIN: 23.39)

Tungsten Renderer

Tungsten Renderer 0.2.2, Scene: Hair. Seconds, fewer is better. TITAN RTX + 3900X: 17.46 (SE +/- 0.01, N = 3)

MKL-DNN

MKL-DNN 2019-04-16, Harness: Convolution Batch conv_alexnet - Data Type: f32. ms, fewer is better. TITAN RTX + 3900X: 254.56 (SE +/- 2.91, N = 3; MIN: 249.87)

MKL-DNN 2019-04-16, Harness: IP Batch 1D - Data Type: u8s8f32s32. ms, fewer is better. TITAN RTX + 3900X: 67.47 (SE +/- 0.48, N = 3; MIN: 50.27)

MKL-DNN 2019-04-16, Harness: IP Batch 1D - Data Type: u8s8u8s32. ms, fewer is better. TITAN RTX + 3900X: 66.97 (SE +/- 0.54, N = 3; MIN: 53.15)

MKL-DNN 2019-04-16, Harness: IP Batch 1D - Data Type: f32. ms, fewer is better. TITAN RTX + 3900X: 17.36 (SE +/- 0.04, N = 3; MIN: 11.17)

LuxCoreRender OpenCL

LuxCoreRender OpenCL 2.2, Scene: Rainbow Colors and Prism. M samples/sec, more is better. TITAN RTX + 3900X: 16.12 (SE +/- 0.02, N = 3; MIN: 14.52 / MAX: 16.69)

OSPray

OSPray 1.8.5, Demo: NASA Streamlines - Renderer: SciVis. FPS, more is better. TITAN RTX + 3900X: 27.53 (SE +/- 0.25, N = 3; MIN: 26.32 / MAX: 27.78)

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.0.0, Scene: Memorial. Images / Sec, more is better. TITAN RTX + 3900X: 10.43 (SE +/- 0.02, N = 3)

SHOC Scalable HeterOgeneous Computing

SHOC Scalable HeterOgeneous Computing 2015-11-10, Target: OpenCL - Benchmark: Bus Speed Readback. GB/s, more is better. TITAN RTX + 3900X: 13.54 (SE +/- 0.00, N = 15)
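The ~13.5 GB/s readback figure sits close to the practical ceiling of the PCIe 3.0 x16 link the TITAN RTX uses. As a sketch, the theoretical bandwidth of such a link works out to:

```python
# PCIe 3.0 theoretical bandwidth: 8 GT/s per lane, 16 lanes,
# 128b/130b line encoding, 8 bits per byte.
gt_per_s = 8e9
lanes = 16
encoding = 128 / 130   # payload bits per transferred bit

bandwidth_gb_s = gt_per_s * lanes * encoding / 8 / 1e9
print(f"{bandwidth_gb_s:.2f} GB/s theoretical")
```

Real transfers never reach the theoretical figure because of protocol and driver overhead, which is why sustained download/readback rates land in the 12 to 13.5 GB/s range here.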

Tungsten Renderer

Tungsten Renderer 0.2.2, Scene: Volumetric Caustic. Seconds, fewer is better. TITAN RTX + 3900X: 7.42 (SE +/- 0.01, N = 3)

MKL-DNN

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: f32
ms, Fewer Is Better
TITAN RTX + 3900X: 5.02 (SE +/- 0.00, N = 4, MIN: 4.93)
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

SHOC Scalable HeterOgeneous Computing

This is the CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing benchmark suite. Learn more via the OpenBenchmarking.org test page.

SHOC Scalable HeterOgeneous Computing 2015-11-10 - Target: OpenCL - Benchmark: Triad
GB/s, More Is Better
TITAN RTX + 3900X: 12.89 (SE +/- 0.00, N = 4)
1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualization. OSPray builds on Intel's Embree and the Intel SPMD Program Compiler (ISPC) as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: Magnetic Reconnection - Renderer: Path Tracer
FPS, More Is Better
TITAN RTX + 3900X: 200 (MIN: 166.67 / MAX: 250)

SHOC Scalable HeterOgeneous Computing

This is the CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing benchmark suite. Learn more via the OpenBenchmarking.org test page.

SHOC Scalable HeterOgeneous Computing 2015-11-10 - Target: OpenCL - Benchmark: FFT SP
GFLOPS, More Is Better
TITAN RTX + 3900X: 1570.13 (SE +/- 1.35, N = 3)
1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

SHOC Scalable HeterOgeneous Computing 2015-11-10 - Target: OpenCL - Benchmark: Bus Speed Download
GB/s, More Is Better
TITAN RTX + 3900X: 13.14 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

SHOC Scalable HeterOgeneous Computing 2015-11-10 - Target: OpenCL - Benchmark: MD5 Hash
GHash/s, More Is Better
TITAN RTX + 3900X: 37.51 (SE +/- 0.08, N = 3)
1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

72 Results Shown

Blender
MKL-DNN:
  Convolution Batch conv_all - u8s8u8s32
  Convolution Batch conv_all - u8s8f32s32
Blender
SHOC Scalable HeterOgeneous Computing
MKL-DNN
Blender
MKL-DNN
Blender:
  Barbershop - CPU-Only
  Classroom - OpenCL
  Pabellon Barcelona - CPU-Only
MKL-DNN
Blender:
  Classroom - CPU-Only
  BMW27 - OpenCL
OSPray
Blender
OSPray
MKL-DNN
OSPray
MKL-DNN:
  Convolution Batch conv_googlenet_v3 - u8s8f32s32
  Convolution Batch conv_googlenet_v3 - f32
LuxMark:
  GPU - Hotel
  GPU - Microphone
  GPU - Luxball HDR
OSPray
Blender
MKL-DNN:
  Convolution Batch conv_3d - u8s8f32s32
  Convolution Batch conv_3d - u8s8u8s32
LuxCoreRender OpenCL:
  Food
  LuxCore Benchmark
OSPray
LuxCoreRender OpenCL
OSPray
LuxCoreRender:
  DLSC
  Rainbow Colors and Prism
NAMD CUDA
Embree:
  Pathtracer ISPC - Asian Dragon Obj
  Pathtracer - Asian Dragon Obj
MKL-DNN:
  IP Batch All - u8s8f32s32
  IP Batch All - u8s8u8s32
  IP Batch All - f32
Embree:
  Pathtracer ISPC - Crown
  Pathtracer - Crown
MKL-DNN
Embree:
  Pathtracer ISPC - Asian Dragon
  Pathtracer - Asian Dragon
MKL-DNN
Tungsten Renderer
MKL-DNN:
  Deconvolution Batch deconv_1d - u8s8u8s32
  Deconvolution Batch deconv_3d - u8s8f32s32
  Deconvolution Batch deconv_1d - u8s8f32s32
  Convolution Batch conv_alexnet - u8s8u8s32
  Convolution Batch conv_alexnet - u8s8f32s32
SHOC Scalable HeterOgeneous Computing
Tungsten Renderer
MKL-DNN
Tungsten Renderer
MKL-DNN:
  Convolution Batch conv_alexnet - f32
  IP Batch 1D - u8s8f32s32
  IP Batch 1D - u8s8u8s32
  IP Batch 1D - f32
LuxCoreRender OpenCL
OSPray
Intel Open Image Denoise
SHOC Scalable HeterOgeneous Computing
Tungsten Renderer
MKL-DNN
SHOC Scalable HeterOgeneous Computing
OSPray
SHOC Scalable HeterOgeneous Computing:
  OpenCL - FFT SP
  OpenCL - Bus Speed Download
  OpenCL - MD5 Hash