Core i9 10980XE Cascade Lake X

Intel Core i9-10980XE testing with an ASRock X299 Steel Legend (P1.30 BIOS) and an NVIDIA NV132 11GB graphics card on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2004109-NI-COREI910936
Run Management

Result Identifier: Intel Core i9-10980XE
Date: April 10 2020
Test Run Duration: 1 Hour, 44 Minutes


Core i9 10980XE Cascade Lake X Benchmarks - System Information (OpenBenchmarking.org / Phoronix Test Suite)

Processor: Intel Core i9-10980XE @ 4.80GHz (18 Cores / 36 Threads)
Motherboard: ASRock X299 Steel Legend (P1.30 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 32GB
Disk: Samsung SSD 970 PRO 512GB
Graphics: NVIDIA NV132 11GB
Audio: Realtek ALC1220
Monitor: ASUS MG28U
Network: Intel I219-V + Intel I211
OS: Ubuntu 20.04
Kernel: 5.4.0-21-generic (x86_64)
Desktop: GNOME Shell 3.36.0
Display Server: X Server 1.20.7
Display Driver: modesetting 1.20.7
OpenGL: 4.3 Mesa 20.0.2
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs Notes:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Processor: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x500012c
- Python: Python 3.8.2
- Security: itlb_multihit: KVM: Mitigation of Split huge pages; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling; tsx_async_abort: Mitigation of TSX disabled

Core i9 10980XE Cascade Lake X - Results Summary (Intel Core i9-10980XE)

oneDNN MKL-DNN (ms, fewer is better):
  IP Batch 1D - f32: 2.24367
  IP Batch All - f32: 32.79
  IP Batch 1D - u8s8f32: 0.520161
  IP Batch All - u8s8f32: 7.35444
  IP Batch 1D - bf16bf16bf16: 5.52121
  IP Batch All - bf16bf16bf16: 68.97
  Deconvolution Batch deconv_1d - f32: 1.73505
  Deconvolution Batch deconv_3d - f32: 2.62951
  Deconvolution Batch deconv_1d - u8s8f32: 0.466501
  Deconvolution Batch deconv_3d - u8s8f32: 0.672785
  Recurrent Neural Network Training - f32: 132.74
  Recurrent Neural Network Inference - f32: 24.46
  Deconvolution Batch deconv_1d - bf16bf16bf16: 9.06529
  Deconvolution Batch deconv_3d - bf16bf16bf16: 10.86

Embree (frames per second, more is better):
  Pathtracer - Crown: 19.73
  Pathtracer ISPC - Crown: 21.52
  Pathtracer - Asian Dragon: 23.02
  Pathtracer - Asian Dragon Obj: 21.19
  Pathtracer ISPC - Asian Dragon: 27.51
  Pathtracer ISPC - Asian Dragon Obj: 24.35

Intel Open Image Denoise (images per second, more is better):
  Memorial: 25.30

OpenVKL (items per second, more is better):
  vklBenchmark: 265.58
  vklBenchmarkVdbVolume: 23393309.51
  vklBenchmarkStructuredVolume: 72263352.42
  vklBenchmarkUnstructuredVolume: 2067455.45

YafaRay (seconds, fewer is better):
  Total Time For Sample Scene: 98.63

oneDNN MKL-DNN

This is a test of Intel oneDNN (formerly DNNL, the Deep Neural Network Library, and before that MKL-DNN), an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.
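
As a point of reference for what the IP (inner product) harnesses exercise, the following is a minimal sketch of driving the same primitive family through the oneDNN C++ API directly. It is illustrative only: it assumes the oneDNN 1.x API (dnnl.hpp), and the tensor sizes N, IC, and OC are arbitrary placeholders rather than the problem sizes benchdnn runs in this test.

    // Minimal oneDNN 1.x inner-product (f32) sketch -- sizes are illustrative,
    // not the problem sizes that benchdnn runs in this benchmark.
    #include <dnnl.hpp>
    #include <unordered_map>

    int main() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);
        stream s(eng);

        const memory::dim N = 32, IC = 1024, OC = 256;  // batch, input/output channels
        memory::desc src_md({N, IC}, memory::data_type::f32, memory::format_tag::nc);
        memory::desc wei_md({OC, IC}, memory::data_type::f32, memory::format_tag::oi);
        memory::desc dst_md({N, OC}, memory::data_type::f32, memory::format_tag::nc);

        // Describe and create the inner-product primitive (no bias).
        inner_product_forward::desc ip_d(prop_kind::forward_inference, src_md, wei_md, dst_md);
        inner_product_forward::primitive_desc ip_pd(ip_d, eng);
        inner_product_forward ip(ip_pd);

        // Memory objects backed by the engine; real code would fill src/weights.
        memory src_mem(src_md, eng), wei_mem(wei_md, eng), dst_mem(dst_md, eng);

        ip.execute(s, {{DNNL_ARG_SRC, src_mem},
                       {DNNL_ARG_WEIGHTS, wei_mem},
                       {DNNL_ARG_DST, dst_mem}});
        s.wait();
        return 0;
    }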

oneDNN MKL-DNN 1.3 - Harness: IP Batch 1D - Data Type: f32 (ms, Fewer Is Better)
Intel Core i9-10980XE: 2.24367 (SE +/- 0.00775, N = 3, MIN: 2.15)

oneDNN MKL-DNN 1.3 - Harness: IP Batch All - Data Type: f32 (ms, Fewer Is Better)
Intel Core i9-10980XE: 32.79 (SE +/- 0.07, N = 3, MIN: 31.16)

oneDNN MKL-DNN 1.3 - Harness: IP Batch 1D - Data Type: u8s8f32 (ms, Fewer Is Better)
Intel Core i9-10980XE: 0.520161 (SE +/- 0.002655, N = 3, MIN: 0.5)

oneDNN MKL-DNN 1.3 - Harness: IP Batch All - Data Type: u8s8f32 (ms, Fewer Is Better)
Intel Core i9-10980XE: 7.35444 (SE +/- 0.00971, N = 3, MIN: 7.1)

oneDNN MKL-DNN 1.3 - Harness: IP Batch 1D - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
Intel Core i9-10980XE: 5.52121 (SE +/- 0.02427, N = 3, MIN: 5.38)

oneDNN MKL-DNN 1.3 - Harness: IP Batch All - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
Intel Core i9-10980XE: 68.97 (SE +/- 0.01, N = 3, MIN: 65.94)

oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 (ms, Fewer Is Better)
Intel Core i9-10980XE: 1.73505 (SE +/- 0.00428, N = 3, MIN: 1.71)

oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 (ms, Fewer Is Better)
Intel Core i9-10980XE: 2.62951 (SE +/- 0.01007, N = 3, MIN: 2.59)

oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 (ms, Fewer Is Better)
Intel Core i9-10980XE: 0.466501 (SE +/- 0.002005, N = 3, MIN: 0.45)

oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 (ms, Fewer Is Better)
Intel Core i9-10980XE: 0.672785 (SE +/- 0.007900, N = 6, MIN: 0.64)

oneDNN MKL-DNN 1.3 - Harness: Recurrent Neural Network Training - Data Type: f32 (ms, Fewer Is Better)
Intel Core i9-10980XE: 132.74 (SE +/- 0.34, N = 3, MIN: 131.07)

oneDNN MKL-DNN 1.3 - Harness: Recurrent Neural Network Inference - Data Type: f32 (ms, Fewer Is Better)
Intel Core i9-10980XE: 24.46 (SE +/- 0.06, N = 3, MIN: 23.81)

oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
Intel Core i9-10980XE: 9.06529 (SE +/- 0.00684, N = 3, MIN: 8.98)

oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
Intel Core i9-10980XE: 10.86 (SE +/- 0.03, N = 3, MIN: 10.69)

All oneDNN MKL-DNN results above were built with (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.
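
To give a sense of the API underneath the Pathtracer binaries, below is a minimal sketch against the Embree 3 C API: it builds a one-triangle scene and traces a single ray. The geometry and ray values are invented for illustration and are unrelated to the Crown and Asian Dragon scenes used by the benchmark.

    // Minimal Embree 3 sketch: build a one-triangle scene and trace one ray.
    #include <embree3/rtcore.h>
    #include <cstdio>
    #include <limits>

    int main() {
        RTCDevice device = rtcNewDevice(nullptr);
        RTCScene scene = rtcNewScene(device);

        // One triangle in the z = 0 plane (illustrative geometry only).
        RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
        float* v = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX, 0,
                                                   RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
        unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_INDEX, 0,
                                                           RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
        v[0] = 0; v[1] = 0; v[2] = 0;
        v[3] = 1; v[4] = 0; v[5] = 0;
        v[6] = 0; v[7] = 1; v[8] = 0;
        idx[0] = 0; idx[1] = 1; idx[2] = 2;
        rtcCommitGeometry(geom);
        rtcAttachGeometry(scene, geom);
        rtcReleaseGeometry(geom);
        rtcCommitScene(scene);

        // Trace a single ray toward the triangle.
        RTCRayHit rh{};
        rh.ray.org_x = 0.2f; rh.ray.org_y = 0.2f; rh.ray.org_z = -1.f;
        rh.ray.dir_x = 0.f;  rh.ray.dir_y = 0.f;  rh.ray.dir_z = 1.f;
        rh.ray.tnear = 0.f;
        rh.ray.tfar = std::numeric_limits<float>::infinity();
        rh.ray.mask = 0xFFFFFFFF;
        rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;

        RTCIntersectContext ctx;
        rtcInitIntersectContext(&ctx);
        rtcIntersect1(scene, &ctx, &rh);
        std::printf("hit: %s (t = %f)\n",
                    rh.hit.geomID != RTC_INVALID_GEOMETRY_ID ? "yes" : "no", rh.ray.tfar);

        rtcReleaseScene(scene);
        rtcReleaseDevice(device);
        return 0;
    }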

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
Intel Core i9-10980XE: 19.73 (SE +/- 0.05, N = 3, MIN: 19.55 / MAX: 20.06)

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
Intel Core i9-10980XE: 21.52 (SE +/- 0.03, N = 3, MIN: 21.3 / MAX: 21.8)

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
Intel Core i9-10980XE: 23.02 (SE +/- 0.20, N = 3, MIN: 22.6 / MAX: 23.54)

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Intel Core i9-10980XE: 21.19 (SE +/- 0.02, N = 3, MIN: 21.08 / MAX: 21.39)

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
Intel Core i9-10980XE: 27.51 (SE +/- 0.11, N = 3, MIN: 27.19 / MAX: 27.96)

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Intel Core i9-10980XE: 24.35 (SE +/- 0.04, N = 3, MIN: 24.19 / MAX: 24.62)

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray tracing and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.
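
For context on what the Memorial scene run is exercising, here is a minimal sketch of the Open Image Denoise 1.x C++ API using the generic "RT" filter. The image dimensions and buffer contents are placeholders, not the Memorial data set.

    // Minimal Open Image Denoise 1.x sketch: denoise an HDR color buffer.
    // Buffer size and contents are illustrative only.
    #include <OpenImageDenoise/oidn.hpp>
    #include <vector>

    int main() {
        const int width = 1280, height = 720;
        std::vector<float> color(width * height * 3, 0.5f);  // noisy beauty pass (RGB)
        std::vector<float> output(width * height * 3);

        oidn::DeviceRef device = oidn::newDevice();
        device.commit();

        oidn::FilterRef filter = device.newFilter("RT");      // generic ray-tracing denoiser
        filter.setImage("color", color.data(), oidn::Format::Float3, width, height);
        filter.setImage("output", output.data(), oidn::Format::Float3, width, height);
        filter.set("hdr", true);
        filter.commit();
        filter.execute();

        const char* errorMessage;
        if (device.getError(errorMessage) != oidn::Error::None)
            return 1;  // denoising failed
        return 0;
    }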

Intel Open Image Denoise 1.2.0 - Scene: Memorial (Images / Sec, More Is Better)
Intel Core i9-10980XE: 25.30 (SE +/- 0.03, N = 3)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmark (Items / Sec, More Is Better)
Intel Core i9-10980XE: 265.58 (SE +/- 0.46, N = 3, MIN: 1 / MAX: 1083)

OpenVKL 0.9 - Benchmark: vklBenchmarkVdbVolume (Items / Sec, More Is Better)
Intel Core i9-10980XE: 23393309.51 (SE +/- 61427.44, N = 3, MIN: 1007748 / MAX: 127115496)

OpenVKL 0.9 - Benchmark: vklBenchmarkStructuredVolume (Items / Sec, More Is Better)
Intel Core i9-10980XE: 72263352.42 (SE +/- 328294.49, N = 3, MIN: 1128497 / MAX: 596064240)

OpenVKL 0.9 - Benchmark: vklBenchmarkUnstructuredVolume (Items / Sec, More Is Better)
Intel Core i9-10980XE: 2067455.45 (SE +/- 4506.37, N = 3, MIN: 22919 / MAX: 6845002)

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1 - Total Time For Sample Scene (Seconds, Fewer Is Better)
Intel Core i9-10980XE: 98.63 (SE +/- 1.72, N = 15)
Built with (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread