Core i9 10980XE Vet

Intel Core i9-10980XE testing with a Gigabyte X299X DESIGNARE 10G (F1 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 20.04 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2001096-HU-COREI910905

Intel Core i9-10980XE
Processor: Intel Core i9-10980XE @ 4.60GHz (18 Cores / 36 Threads), Motherboard: Gigabyte X299X DESIGNARE 10G (F1 BIOS), Chipset: Intel Sky Lake-E DMI3 Registers, Memory: 32768MB, Disk: Samsung SSD 970 PRO 512GB, Graphics: AMD Radeon RX 56/64 8GB (1590/800MHz), Audio: Realtek ALC1220, Monitor: DELL P2415Q, Network: 2 x Intel 10G X550T + Intel Wi-Fi 6 AX200
OS: Ubuntu 20.04, Kernel: 5.4.0-9-generic (x86_64), Desktop: GNOME Shell 3.34.1, Display Server: X Server 1.20.5, Display Driver: amdgpu 19.1.0, OpenGL: 4.5 Mesa 19.2.4 (LLVM 9.0.0), Compiler: GCC 9.2.1 20191130, File-System: ext4, Screen Resolution: 3840x2160
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x500002c
Security Notes: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + tsx_async_abort: Mitigation of TSX disabled
Core i9 10980XE Vet — result summary for the Intel Core i9-10980XE:

MKL-DNN DNNL (ms, fewer is better):
  IP Batch 1D - f32: 4.72350
  IP Batch All - f32: 12.3436
  IP Batch 1D - u8s8f32: 0.632318
  IP Batch All - u8s8f32: 4.28049
  IP Batch 1D - bf16bf16bf16: 5.57573
  IP Batch All - bf16bf16bf16: 16.8051
  Convolution Batch conv_3d - f32: 12.5526
  Convolution Batch conv_all - f32: 1126.00
  Convolution Batch conv_3d - u8s8f32: 7699.02
  Deconvolution Batch deconv_1d - f32: 1.80400
  Deconvolution Batch deconv_3d - f32: 2.58125
  Convolution Batch conv_alexnet - f32: 125.743
  Convolution Batch conv_all - u8s8f32: 3772.52
  Deconvolution Batch deconv_all - f32: 1362.63
  Deconvolution Batch deconv_1d - u8s8f32: 0.456435
  Deconvolution Batch deconv_3d - u8s8f32: 4785.35
  Recurrent Neural Network Training - f32: 154.880
  Convolution Batch conv_3d - bf16bf16bf16: 19.7111
  Convolution Batch conv_alexnet - u8s8f32: 40.1957
  Convolution Batch conv_all - bf16bf16bf16: 4764.91
  Convolution Batch conv_googlenet_v3 - f32: 63.9261
  Deconvolution Batch deconv_1d - bf16bf16bf16: 8.52410
  Deconvolution Batch deconv_3d - bf16bf16bf16: 10.7049
  Convolution Batch conv_alexnet - bf16bf16bf16: 869.378
  Convolution Batch conv_googlenet_v3 - u8s8f32: 19.6814
  Deconvolution Batch deconv_all - bf16bf16bf16: 3749.33
  Convolution Batch conv_googlenet_v3 - bf16bf16bf16: 224.395

OSPray (FPS, more is better):
  San Miguel - SciVis: 27.89
  XFrog Forest - SciVis: 4.58
  San Miguel - Path Tracer: 2.49
  NASA Streamlines - SciVis: 36.71
  XFrog Forest - Path Tracer: 2.51
  Magnetic Reconnection - SciVis: 29.41
  NASA Streamlines - Path Tracer: 6.85
  Magnetic Reconnection - Path Tracer: 500

Embree (Frames Per Second, more is better):
  Pathtracer - Crown: 18.9597
  Pathtracer ISPC - Crown: 20.8633
  Pathtracer - Asian Dragon: 22.5000
  Pathtracer - Asian Dragon Obj: 20.3140
  Pathtracer ISPC - Asian Dragon: 27.0224
  Pathtracer ISPC - Asian Dragon Obj: 23.2313

Intel Open Image Denoise:
  Memorial: 22.50

LuxCoreRender (M samples/sec, more is better):
  DLSC: 2.90
  Rainbow Colors and Prism: 2.77

Tungsten Renderer (Seconds, fewer is better):
  Hair: 14.9442
  Water Caustic: 21.5535
  Non-Exponential: 6.76718
  Volumetric Caustic: 7.38478

Blender, CPU-Only (Seconds, fewer is better):
  BMW27: 92.97
  Classroom: 268.25
  Fishy Cat: 142.90
  Barbershop: 373.50
  Pabellon Barcelona: 331.38

Appleseed:
  Emily: 230.486907
  Disney Material: 121.343099
  Material Tester: 130.146979
MKL-DNN DNNL: This is a test of the Intel MKL-DNN (DNNL / Deep Neural Network Library), an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.
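Each result line below reports a mean over N runs together with a standard error (SE). As a minimal illustrative sketch (not the Phoronix Test Suite's actual code; the run times are hypothetical), the SE-of-the-mean figures can be reproduced from raw per-run timings like this:

```python
import math
from statistics import mean, stdev

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    return stdev(samples) / math.sqrt(len(samples))

# Hypothetical per-run times (ms) for a 3-run benchmark; values are illustrative.
runs = [4.70, 4.72, 4.75]
print(f"{mean(runs):.5f}")                                 # reported result (mean)
print(f"SE +/- {standard_error(runs):.5f}, N = {len(runs)}")
```

A smaller SE relative to the mean indicates the runs were consistent; PTS also reports the MIN (and sometimes MAX) observed across runs alongside these figures.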
OpenBenchmarking.org — MKL-DNN DNNL 1.1 (ms, fewer is better). All harnesses built with: (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

Harness: IP Batch 1D - Data Type: f32 — 4.72350 (SE +/- 0.02289, N = 3, MIN: 4.15)
Harness: IP Batch All - Data Type: f32 — 12.34 (SE +/- 0.06, N = 3, MIN: 11.79)
Harness: IP Batch 1D - Data Type: u8s8f32 — 0.632318 (SE +/- 0.000881, N = 3, MIN: 0.61)
Harness: IP Batch All - Data Type: u8s8f32 — 4.28049 (SE +/- 0.02154, N = 3, MIN: 4.1)
Harness: IP Batch 1D - Data Type: bf16bf16bf16 — 5.57573 (SE +/- 0.00538, N = 3, MIN: 5.49)
Harness: IP Batch All - Data Type: bf16bf16bf16 — 16.81 (SE +/- 0.06, N = 3, MIN: 15.73)
Harness: Convolution Batch conv_3d - Data Type: f32 — 12.55 (SE +/- 0.02, N = 3, MIN: 12.39)
Harness: Convolution Batch conv_all - Data Type: f32 — 1126.00 (SE +/- 0.22, N = 3, MIN: 1119.3)
Harness: Convolution Batch conv_3d - Data Type: u8s8f32 — 7699.02 (SE +/- 11.81, N = 3, MIN: 7681)
Harness: Deconvolution Batch deconv_1d - Data Type: f32 — 1.80400 (SE +/- 0.00217, N = 3, MIN: 1.77)
Harness: Deconvolution Batch deconv_3d - Data Type: f32 — 2.58125 (SE +/- 0.00333, N = 3, MIN: 2.55)
Harness: Convolution Batch conv_alexnet - Data Type: f32 — 125.74 (SE +/- 0.22, N = 3, MIN: 125)
Harness: Convolution Batch conv_all - Data Type: u8s8f32 — 3772.52 (SE +/- 16.84, N = 3, MIN: 3737.91)
Harness: Deconvolution Batch deconv_all - Data Type: f32 — 1362.63 (SE +/- 0.26, N = 3, MIN: 1357.96)
Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 — 0.456435 (SE +/- 0.000383, N = 3, MIN: 0.44)
Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 — 4785.35 (SE +/- 2.03, N = 3, MIN: 4780.07)
Harness: Recurrent Neural Network Training - Data Type: f32 — 154.88 (SE +/- 0.23, N = 3, MIN: 153.21)
Harness: Convolution Batch conv_3d - Data Type: bf16bf16bf16 — 19.71 (SE +/- 0.01, N = 3, MIN: 19.54)
Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32 — 40.20 (SE +/- 0.14, N = 3, MIN: 39.58)
Harness: Convolution Batch conv_all - Data Type: bf16bf16bf16 — 4764.91 (SE +/- 0.31, N = 3, MIN: 4756.84)
Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32 — 63.93 (SE +/- 0.03, N = 3, MIN: 63.16)
Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 — 8.52410 (SE +/- 0.00152, N = 3, MIN: 8.48)
Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16 — 10.70 (SE +/- 0.00, N = 3, MIN: 10.6)
Harness: Convolution Batch conv_alexnet - Data Type: bf16bf16bf16 — 869.38 (SE +/- 0.17, N = 3, MIN: 868.48)
Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32 — 19.68 (SE +/- 0.02, N = 3, MIN: 19.37)
Harness: Deconvolution Batch deconv_all - Data Type: bf16bf16bf16 — 3749.33 (SE +/- 0.81, N = 3, MIN: 3744.76)
Harness: Convolution Batch conv_googlenet_v3 - Data Type: bf16bf16bf16 — 224.40 (SE +/- 0.01, N = 3, MIN: 223.58)
OSPray: Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualization. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org — OSPray 1.8.5 (FPS, more is better)

Demo: San Miguel - Renderer: SciVis — 27.89 (SE +/- 0.11, N = 7, MIN: 25.64 / MAX: 28.57)
Demo: XFrog Forest - Renderer: SciVis — 4.58 (SE +/- 0.01, N = 3, MIN: 4.31 / MAX: 4.63)
Demo: San Miguel - Renderer: Path Tracer — 2.49 (SE +/- 0.00, N = 3, MIN: 2.42 / MAX: 2.51)
Demo: NASA Streamlines - Renderer: SciVis — 36.71 (SE +/- 0.33, N = 4, MIN: 31.25 / MAX: 37.04)
Demo: XFrog Forest - Renderer: Path Tracer — 2.51 (SE +/- 0.00, N = 8, MIN: 2.44 / MAX: 2.53)
Demo: Magnetic Reconnection - Renderer: SciVis — 29.41 (SE +/- 0.00, N = 12, MIN: 28.57 / MAX: 30.3)
Demo: NASA Streamlines - Renderer: Path Tracer — 6.85 (SE +/- 0.00, N = 12, MIN: 6.25 / MAX: 7.04)
OpenBenchmarking.org — Embree 3.6.1 (Frames Per Second, more is better)

Binary: Pathtracer ISPC - Model: Crown — 20.86 (SE +/- 0.02, N = 3, MIN: 20.68 / MAX: 21.14)
Binary: Pathtracer - Model: Asian Dragon — 22.50 (SE +/- 0.03, N = 3, MIN: 22.38 / MAX: 22.7)
Binary: Pathtracer - Model: Asian Dragon Obj — 20.31 (SE +/- 0.01, N = 3, MIN: 20.21 / MAX: 20.49)
Binary: Pathtracer ISPC - Model: Asian Dragon — 27.02 (SE +/- 0.02, N = 3, MIN: 26.89 / MAX: 27.31)
Binary: Pathtracer ISPC - Model: Asian Dragon Obj — 23.23 (SE +/- 0.03, N = 3, MIN: 23.07 / MAX: 23.5)
OpenBenchmarking.org — LuxCoreRender 2.2 (M samples/sec, more is better)

Scene: Rainbow Colors and Prism — 2.77 (SE +/- 0.04, N = 5, MIN: 2.61 / MAX: 2.89)
Tungsten Renderer Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org — Tungsten Renderer 0.2.2 (Seconds, fewer is better). Built with: (CXX) g++ options: -std=c++0x -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -mfma -mbmi2 -mavx512f -mavx512vl -mavx512cd -mavx512dq -mavx512bw -mno-sse4a -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512pf -mno-avx512er -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lpthread -ldl

Scene: Hair — 14.94 (SE +/- 0.05, N = 3)
Scene: Water Caustic — 21.55 (SE +/- 0.04, N = 3)
Scene: Non-Exponential — 6.76718 (SE +/- 0.12739, N = 15)
Scene: Volumetric Caustic — 7.38478 (SE +/- 0.07327, N = 3)
OpenBenchmarking.org — Blender 2.81 (Seconds, fewer is better)

Blend File: Pabellon Barcelona - Compute: CPU-Only — 331.38 (SE +/- 0.41, N = 3)
Testing initiated at 8 January 2020 21:46 by user phoronix.