Core i9 10980XE

Intel Core i9-10980XE testing with a Gigabyte X299X DESIGNARE 10G (F1 BIOS) and AMD Navi 10 8GB on Ubuntu 19.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1911271-HU-COREI910993
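
On Ubuntu the suite can be installed from the archive and pointed at this public result ID; a minimal terminal sketch (the package name is the usual one but is stated here as an assumption rather than taken from this page):

    # Install the Phoronix Test Suite (package name assumed), then run the comparison:
    $ sudo apt-get install phoronix-test-suite
    $ phoronix-test-suite benchmark 1911271-HU-COREI910993
    # The suite downloads this result file and offers to run the same tests locally,
    # so your numbers appear side by side with the Core i9 10980XE results.
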
Result Identifier: Core i9 10980XE
Date Run: November 27, 2019
Test Duration: 4 Hours, 23 Minutes


System Details - Core i9 10980XE

Processor: Intel Core i9-10980XE @ 4.60GHz (18 Cores / 36 Threads)
Motherboard: Gigabyte X299X DESIGNARE 10G (F1 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 32768MB
Disk: 240GB Force MP510
Graphics: AMD Navi 10 8GB (2100/875MHz)
Audio: Realtek ALC1220
Monitor: Acer B286HK
Network: 2 x Intel 10G X550T + Intel Device 2723
OS: Ubuntu 19.10
Kernel: 5.3.0-23-generic (x86_64)
Desktop: GNOME Shell 3.34.1
Display Server: X Server 1.20.5
Display Driver: modesetting 1.20.5
OpenGL: 4.5 Mesa 19.2.1 (LLVM 9.0.0)
Compiler: GCC 9.2.1 20191008
File-System: ext4
Screen Resolution: 3840x2160

System Logs
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- CPU Microcode: 0x500002c
- Python 2.7.17rc1 + Python 3.7.5rc1
- Security mitigations: itlb_multihit: KVM: Mitigation of Split huge pages; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS, IBPB: conditional, RSB filling; tsx_async_abort: Mitigation of TSX disabled

Result Overview - Core i9 10980XE

minife: Small = 6900.44 CG Mflops
namd: ATPase Simulation - 327,506 Atoms = 0.99020 days/ns
mkl-dnn: IP Batch 1D - u8s8f32 = 0.653733 ms
mkl-dnn: IP Batch All - u8s8f32 = 7.42361 ms
mkl-dnn: IP Batch 1D - bf16bf16bf16 = 5.74064 ms
mkl-dnn: IP Batch All - bf16bf16bf16 = 20.75 ms
mkl-dnn: Convolution Batch conv_3d - u8s8f32 = 7715.21 ms
mkl-dnn: Convolution Batch conv_all - u8s8f32 = 3846.89 ms
mkl-dnn: Deconvolution Batch deconv_1d - u8s8f32 = 0.464852 ms
mkl-dnn: Deconvolution Batch deconv_3d - u8s8f32 = 4761.12 ms
mkl-dnn: Convolution Batch conv_3d - bf16bf16bf16 = 20.13 ms
mkl-dnn: Convolution Batch conv_alexnet - u8s8f32 = 42.45 ms
mkl-dnn: Convolution Batch conv_all - bf16bf16bf16 = 4766.51 ms
mkl-dnn: Deconvolution Batch deconv_1d - bf16bf16bf16 = 8.54513 ms
mkl-dnn: Deconvolution Batch deconv_3d - bf16bf16bf16 = 10.72 ms
mkl-dnn: Convolution Batch conv_alexnet - bf16bf16bf16 = 871.39 ms
mkl-dnn: Convolution Batch conv_googlenet_v3 - u8s8f32 = 20.93 ms
mkl-dnn: Deconvolution Batch deconv_all - bf16bf16bf16 = 3750.00 ms
mkl-dnn: Convolution Batch conv_googlenet_v3 - bf16bf16bf16 = 224.50 ms
dav1d: Chimera 1080p = 275.30 FPS
dav1d: Summer Nature 4K = 170.27 FPS
dav1d: Summer Nature 1080p = 299.94 FPS
dav1d: Chimera 1080p 10-bit = 48.21 FPS
himeno: Poisson Pressure Solver = 4135.89 MFLOPS
build-linux-kernel: Time To Compile = 43.47 seconds
build-llvm: Time To Compile = 225.99 seconds
build2: Time To Compile = 72.36 seconds
askap: tConvolve MT - Gridding = 1608.44 million grid points/second
askap: tConvolve MT - Degridding = 2413.60 million grid points/second
askap: tConvolve MPI - Gridding = 1573.75 million grid points/second
askap: tConvolve MPI - Degridding = 2388.74 million grid points/second
askap: tConvolve OpenMP - Gridding = 3288.12 million grid points/second
askap: tConvolve OpenMP - Degridding = 4841.02 million grid points/second
gromacs: Water Benchmark = 1.510 ns/day
blender: BMW27 - CPU-Only = 92.94 seconds
blender: Classroom - CPU-Only = 267.91 seconds
blender: Fishy Cat - CPU-Only = 143.09 seconds
blender: Barbershop - CPU-Only = 373.69 seconds
blender: Pabellon Barcelona - CPU-Only = 331.95 seconds

miniFE

MiniFE is a finite element mini-application that acts as a proxy for unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.
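
Outside the test suite, miniFE is normally launched directly with MPI and a structured-grid problem size; the sketch below is hypothetical (binary name, rank count, and the nx/ny/nz parameter syntax are assumptions, not taken from this result file):

    # Hypothetical stand-alone run; problem dimensions and rank count are illustrative only.
    $ mpirun -np 36 ./miniFE.x nx=128 ny=128 nz=128
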

miniFE 2.2, Problem Size: Small - CG Mflops (more is better): 6900.44 (SE +/- 15.44, N = 3). (CXX) g++ options: -O3 -fopenmp -pthread -lmpi_cxx -lmpi

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
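
Outside the test suite, a comparable CPU run uses NAMD's own multicore binary and thread-count flag; a minimal sketch, assuming the stock f1atpase benchmark configuration as a stand-in for the ATPase input used here:

    # +p sets the number of worker threads; the configuration file name is an assumption.
    $ namd2 +p36 f1atpase.namd
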

NAMD 2.13b1, ATPase Simulation - 327,506 Atoms - days/ns (fewer is better): 0.99020 (SE +/- 0.00647, N = 3)

MKL-DNN DNNL

This is a test of Intel MKL-DNN (DNNL, the Deep Neural Network Library), an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking harness. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.
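
Each harness/data-type combination corresponds to one benchdnn invocation; a hedged sketch of a single run (the driver, mode, and cfg flags follow DNNL 1.x conventions, while the batch-file path is an assumption):

    # --ip selects the inner-product driver, --mode=P requests performance timing,
    # --cfg=u8s8f32 picks the int8 data-type configuration; the batch file path is assumed.
    $ ./benchdnn --ip --mode=P --cfg=u8s8f32 --batch=inputs/ip/ip_all
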

MKL-DNN DNNL 1.1 results, in ms (fewer is better), compiled with (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

Harness: IP Batch 1D - Data Type: u8s8f32: 0.653733 (SE +/- 0.001733, N = 3, MIN: 0.61)
Harness: IP Batch All - Data Type: u8s8f32: 7.42361 (SE +/- 1.89064, N = 12, MIN: 4.2)
Harness: IP Batch 1D - Data Type: bf16bf16bf16: 5.74064 (SE +/- 0.00504, N = 3, MIN: 5.46)
Harness: IP Batch All - Data Type: bf16bf16bf16: 20.75 (SE +/- 0.27, N = 3, MIN: 18.56)
Harness: Convolution Batch conv_3d - Data Type: u8s8f32: 7715.21 (SE +/- 12.23, N = 3, MIN: 7662.74)
Harness: Convolution Batch conv_all - Data Type: u8s8f32: 3846.89 (SE +/- 10.85, N = 3, MIN: 3805.98)
Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32: 0.464852 (SE +/- 0.004036, N = 15, MIN: 0.45)
Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32: 4761.12 (SE +/- 1.50, N = 3, MIN: 4758.06)
Harness: Convolution Batch conv_3d - Data Type: bf16bf16bf16: 20.13 (SE +/- 0.01, N = 3, MIN: 19.92)
Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32: 42.45 (SE +/- 0.07, N = 3, MIN: 41.79)
Harness: Convolution Batch conv_all - Data Type: bf16bf16bf16: 4766.51 (SE +/- 0.39, N = 3, MIN: 4757.15)
Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16: 8.54513 (SE +/- 0.01735, N = 3, MIN: 8.48)
Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16: 10.72 (SE +/- 0.01, N = 3, MIN: 10.6)
Harness: Convolution Batch conv_alexnet - Data Type: bf16bf16bf16: 871.39 (SE +/- 2.06, N = 3, MIN: 868.49)
Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32: 20.93 (SE +/- 0.01, N = 3, MIN: 20.53)
Harness: Deconvolution Batch deconv_all - Data Type: bf16bf16bf16: 3750.00 (SE +/- 0.56, N = 3, MIN: 3744.26)
Harness: Convolution Batch conv_googlenet_v3 - Data Type: bf16bf16bf16: 224.50 (SE +/- 0.03, N = 3, MIN: 223.6)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
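
dav1d ships a small command-line decoder that can be timed directly; a minimal sketch, where the input file name is a placeholder rather than the exact Chimera sample used by the test profile:

    # Decode an AV1 stream and discard the frames via the null muxer; dav1d reports its own FPS.
    $ time dav1d -i chimera_1080p.ivf --muxer null -o /dev/null
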

dav1d 0.5.0 results, in FPS (more is better), compiled with (CC) gcc options: -pthread

Video Input: Chimera 1080p: 275.30 (SE +/- 0.46, N = 3, MIN: 216.13 / MAX: 336.87)
Video Input: Summer Nature 4K: 170.27 (SE +/- 0.87, N = 3, MIN: 100.45 / MAX: 182.19)
Video Input: Summer Nature 1080p: 299.94 (SE +/- 0.88, N = 3, MIN: 188.75 / MAX: 326.81)
Video Input: Chimera 1080p 10-bit: 48.21 (SE +/- 0.06, N = 3, MIN: 32.67 / MAX: 103.46)

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.
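
For reference, the Jacobi sweep at the heart of the benchmark can be written, in a simplified 7-point form (Himeno's actual kernel uses a 19-point stencil with variable coefficients), as:

    p_{i,j,k}^{(n+1)} = \frac{1}{6}\left( p_{i+1,j,k}^{(n)} + p_{i-1,j,k}^{(n)} + p_{i,j+1,k}^{(n)} + p_{i,j-1,k}^{(n)} + p_{i,j,k+1}^{(n)} + p_{i,j,k-1}^{(n)} - h^{2} b_{i,j,k} \right)

The reported MFLOPS figure is the kernel's floating-point operation count per sweep divided by the measured wall time.
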

Himeno Benchmark 3.0, Poisson Pressure Solver - MFLOPS (more is better): 4135.89 (SE +/- 11.12, N = 3). (CC) gcc options: -O3 -mavx2

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
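
The measurement amounts to a timed parallel make; a rough manual equivalent (the test profile manages its own default configuration, so defconfig here is an approximation):

    # Configure a default kernel and time a parallel build across all hardware threads.
    $ make defconfig
    $ time make -j$(nproc)
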

Timed Linux Kernel Compilation 5.4, Time To Compile - Seconds (fewer is better): 43.47 (SE +/- 0.47, N = 7)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.
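
A comparable manual measurement configures a release build and times the build step; a sketch assuming the LLVM 6.0.1 source tarball and the Ninja generator (both assumptions, since the profile's exact configuration is not listed here):

    # Out-of-tree release build of LLVM; the source directory name and generator are assumptions.
    $ mkdir build && cd build
    $ cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ../llvm-6.0.1.src
    $ time ninja
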

Timed LLVM Compilation 6.0.1, Time To Compile - Seconds (fewer is better): 225.99

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like package and project management features. Learn more via the OpenBenchmarking.org test page.

Build2 0.12, Time To Compile - Seconds (fewer is better): 72.36 (SE +/- 0.20, N = 3)

ASKAP

This is a test of ATNF's ASKAP benchmark; the results below cover the CPU-based tConvolve MT, MPI, and OpenMP sub-tests. Learn more via the OpenBenchmarking.org test page.

ASKAP 2018-11-10 results, in million grid points per second (more is better), compiled with (CXX) g++ options: -lpthread

Test: tConvolve MT - Gridding: 1608.44 (SE +/- 0.36, N = 3)
Test: tConvolve MT - Degridding: 2413.60 (SE +/- 0.40, N = 3)
Test: tConvolve MPI - Gridding: 1573.75 (SE +/- 0.46, N = 3)
Test: tConvolve MPI - Degridding: 2388.74 (SE +/- 0.79, N = 3)
Test: tConvolve OpenMP - Gridding: 3288.12 (SE +/- 41.10, N = 3)
Test: tConvolve OpenMP - Degridding: 4841.02 (SE +/- 0.00, N = 3)

GROMACS

This is a test of the GROMACS molecular dynamics package running on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.
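
A comparable stand-alone run pins the simulation to the CPU and fixes the step count; a minimal sketch in which the .tpr input name is a placeholder (the test profile prepares it from the water_GMX50 data set):

    # -nb cpu keeps non-bonded work on the CPU, -ntomp sets OpenMP threads,
    # -nsteps caps the run length; water.tpr is a placeholder input name.
    $ gmx mdrun -s water.tpr -nb cpu -ntomp 36 -nsteps 1000
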

GROMACS 2019.4, Water Benchmark - Ns Per Day (more is better): 1.510 (SE +/- 0.000, N = 12). (CXX) g++ options: -mavx512f -mfma -std=c++11 -O3 -funroll-all-loops -pthread -lrt -lpthread -lm

Blender

Blender is an open-source 3D creation suite. This test runs Blender's Cycles render benchmark with various sample scene files; GPU compute via OpenCL or CUDA is also supported, though these results are CPU-only. Learn more via the OpenBenchmarking.org test page.
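
Each scene is rendered headlessly on the CPU; a minimal sketch of such a run, with the .blend file name standing in for the benchmark scene files:

    # -b renders without the UI, -E selects the Cycles engine, -f 1 renders a single frame.
    $ time blender -b bmw27.blend -E CYCLES -f 1
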

Blender 2.81 results, in seconds (fewer is better):

Blend File: BMW27 - Compute: CPU-Only: 92.94 (SE +/- 0.03, N = 3)
Blend File: Classroom - Compute: CPU-Only: 267.91 (SE +/- 0.30, N = 3)
Blend File: Fishy Cat - Compute: CPU-Only: 143.09 (SE +/- 0.08, N = 3)
Blend File: Barbershop - Compute: CPU-Only: 373.69 (SE +/- 0.30, N = 3)
Blend File: Pabellon Barcelona - Compute: CPU-Only: 331.95 (SE +/- 0.12, N = 3)