Core i9 10980XE

Intel Core i9-10980XE testing with an ASRock X299 Steel Legend (P1.30 BIOS) and NVIDIA NV132 11GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2008179-NE-COREI910909
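For scripted runs, the same comparison can also be launched from Python via the standard subprocess module. This is a minimal sketch, assuming phoronix-test-suite is installed and on the PATH; the result ID is taken from the command above:

    import subprocess

    # Launch the comparison run against this published result file.
    subprocess.run(["phoronix-test-suite", "benchmark", "2008179-NE-COREI910909"], check=True)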
Result Identifier: Core i9 10980XE
Test Date: August 16 2020
Test Run Duration: 12 Hours, 46 Minutes


Core i9 10980XE Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: Intel Core i9-10980XE @ 4.80GHz (18 Cores / 36 Threads)
Motherboard: ASRock X299 Steel Legend (P1.30 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 32GB
Disk: Samsung SSD 970 PRO 512GB
Graphics: NVIDIA NV132 11GB
Audio: Realtek ALC1220
Monitor: ASUS MG28U
Network: Intel I219-V + Intel I211
OS: Ubuntu 20.10
Kernel: 5.4.0-42-generic (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.3 Mesa 20.1.2
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs / Notes:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-TMmQyZ/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-TMmQyZ/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Processor: Scaling Governor: intel_pstate performance - CPU Microcode: 0x5002f01
- Java: OpenJDK Runtime Environment (build 11.0.8+10-post-Ubuntu-0ubuntu1)
- Python: Python 3.8.5
- Security: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

Results Overview: 99 benchmark results were recorded for this configuration. Every value is presented in the per-test sections below and indexed under "99 Results Shown" at the end of this file.

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2 - Core i9 10980XE (MiB/second, more is better):
  Test: All Algorithms: 1803.99 (SE +/- 0.79, N = 3)
  Test: Keyed Algorithms: 710.85 (SE +/- 0.39, N = 3)
  Test: Unkeyed Algorithms: 361.20 (SE +/- 0.06, N = 3)
  Test: Integer + Elliptic Curve Public Key Algorithms: 5598.66 (SE +/- 1.37, N = 3)
Compiler notes: (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe
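Throughout this file, each result is the average of N runs together with the standard error (SE) of that average. A worked sketch of the arithmetic, using hypothetical per-run values rather than data from this result file:

    import math
    import statistics

    # Three hypothetical per-run throughput samples in MiB/second (not taken from this file).
    runs = [1803.2, 1804.1, 1804.7]

    mean = statistics.fmean(runs)                       # the reported result
    se = statistics.stdev(runs) / math.sqrt(len(runs))  # standard error of the mean
    print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(runs)})")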

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. This test profile currently uses the OpenMP implementation, runs the Fayalite-FIST molecular dynamics workload, and measures the total time to complete. Learn more via the OpenBenchmarking.org test page.

CP2K Molecular Dynamics 6.1 - Fayalite-FIST Data - Core i9 10980XE (Seconds, fewer is better): 1069.07

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are a suite for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Core i9 10980XE (Seconds, fewer is better):
  ac: 4.65
  air: 1.93
  mdbx: 4.24
  doduc: 6.58
  linpk: 2.96
  tfft2: 24.36
  aermod: 5.51
  rnflow: 12.76
  induct2: 16.16
  protein: 13.13
  capacita: 10.01
  channel2: 66.4
  fatigue2: 42.3
  gas_dyn2: 50.56
  test_fpu2: 30.35
  mp_prop_design: 54.43

Zstd Compression

This test measures the performance of compressing a sample file (an Ubuntu ISO) using Zstd compression, with results reported as compression throughput. Learn more via the OpenBenchmarking.org test page.
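The two compression levels below trade speed for ratio. A minimal sketch of exercising both levels from Python, assuming the third-party zstandard bindings are installed; the test profile itself drives the reference zstd code rather than these bindings, and the payload here is a stand-in for the Ubuntu ISO used by the test:

    import zstandard as zstd

    data = b"phoronix test suite " * 500_000   # stand-in payload, roughly 10 MB

    for level in (3, 19):
        compressed = zstd.ZstdCompressor(level=level).compress(data)
        print(f"level {level}: {len(data)} -> {len(compressed)} bytes")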

Zstd Compression 1.4.5 - Core i9 10980XE (MB/s, more is better):
  Compression Level 3: 4792.0 (SE +/- 4.68, N = 3)
  Compression Level 19: 60.3 (SE +/- 0.23, N = 3)
Compiler notes: (CC) gcc options: -O3 -pthread -lz

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 - Core i9 10980XE (ms, fewer is better; entries are Harness - Data Type - Engine):
  IP Batch 1D - f32 - CPU: 2.36130 (SE +/- 0.01522, N = 3; MIN: 2.23)
  IP Batch All - f32 - CPU: 33.22 (SE +/- 0.11, N = 3; MIN: 31.43)
  IP Batch 1D - u8s8f32 - CPU: 0.569597 (SE +/- 0.004685, N = 15; MIN: 0.52)
  IP Batch All - u8s8f32 - CPU: 7.97082 (SE +/- 0.06970, N = 3; MIN: 7.62)
  IP Batch 1D - bf16bf16bf16 - CPU: 5.62540 (SE +/- 0.00853, N = 3; MIN: 5.51)
  IP Batch All - bf16bf16bf16 - CPU: 64.19 (SE +/- 0.02, N = 3; MIN: 63.58)
  Convolution Batch Shapes Auto - f32 - CPU: 10.21 (SE +/- 0.10, N = 3; MIN: 9.98)
  Deconvolution Batch deconv_1d - f32 - CPU: 1.74355 (SE +/- 0.01905, N = 3; MIN: 1.7)
  Deconvolution Batch deconv_3d - f32 - CPU: 2.70751 (SE +/- 0.00180, N = 3; MIN: 2.66)
  Convolution Batch Shapes Auto - u8s8f32 - CPU: 9.77713 (SE +/- 0.08211, N = 3; MIN: 9.58)
  Deconvolution Batch deconv_1d - u8s8f32 - CPU: 0.462133 (SE +/- 0.002297, N = 3; MIN: 0.45)
  Deconvolution Batch deconv_3d - u8s8f32 - CPU: 0.762493 (SE +/- 0.006315, N = 3; MIN: 0.72)
  Recurrent Neural Network Training - f32 - CPU: 187.36 (SE +/- 2.10, N = 3; MIN: 181.96)
  Recurrent Neural Network Inference - f32 - CPU: 77.09 (SE +/- 3.11, N = 15; MIN: 65.01)
  Convolution Batch Shapes Auto - bf16bf16bf16 - CPU: 7.91022 (SE +/- 0.01443, N = 3; MIN: 7.57)
  Deconvolution Batch deconv_1d - bf16bf16bf16 - CPU: 9.15994 (SE +/- 0.01137, N = 3; MIN: 8.84)
  Deconvolution Batch deconv_3d - bf16bf16bf16 - CPU: 10.81 (SE +/- 0.01, N = 3; MIN: 10.72)
  Matrix Multiply Batch Shapes Transformer - f32 - CPU: 1.66966 (SE +/- 0.00683, N = 3; MIN: 1.63)
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU: 0.767342 (SE +/- 0.008493, N = 15; MIN: 0.67)
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU: 1.72764 (SE +/- 0.01105, N = 3; MIN: 1.64)
Compiler notes: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
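A minimal sketch of the kind of timed, parallel build this test performs, assuming a configured kernel source tree already sits in ./linux; the job count and paths are illustrative rather than the test profile's exact invocation:

    import os
    import subprocess
    import time

    jobs = os.cpu_count() or 1          # 36 logical CPUs on this system
    start = time.monotonic()
    subprocess.run(["make", f"-j{jobs}"], cwd="linux", check=True)
    print(f"Time To Compile: {time.monotonic() - start:.2f} seconds")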

Timed Linux Kernel Compilation 5.4 - Time To Compile - Core i9 10980XE (Seconds, fewer is better): 47.97 (SE +/- 0.42, N = 3)

Gzip Compression

This test measures the time needed to archive/compress two copies of the Linux 4.13 kernel source tree using Gzip compression. Learn more via the OpenBenchmarking.org test page.
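A minimal sketch of the archiving step using only Python's standard library, assuming a kernel source directory named linux-4.13 exists in the working directory (the benchmark itself archives two copies of the tree and times the whole operation):

    import tarfile
    import time

    start = time.monotonic()
    with tarfile.open("linux-source.tar.gz", "w:gz") as archive:
        archive.add("linux-4.13")       # the benchmark adds two copies of this tree
    print(f"Archived in {time.monotonic() - start:.2f} seconds")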

Gzip Compression - Linux Source Tree Archiving To .tar.gz - Core i9 10980XE (Seconds, fewer is better): 34.09 (SE +/- 0.02, N = 3)

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of vehicle autonomous-driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Core i9 10980XE (Test Cases Per Minute, more is better; entries are Backend - Kernel):
  OpenMP - NDT Mapping: 902.41 (SE +/- 5.84, N = 3)
  OpenMP - Points2Image: 24517.86 (SE +/- 308.31, N = 5)
  OpenMP - Euclidean Cluster: 1337.35 (SE +/- 0.28, N = 3)
Compiler notes: (CXX) g++ options: -O3 -std=c++11 -fopenmp

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time - Core i9 10980XE (Seconds, fewer is better): 44.42 (SE +/- 0.69, N = 3)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
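Stress-NG is driven from the command line; a minimal sketch of launching one stressor from Python, assuming stress-ng is installed (the stressor, duration, and flags here are illustrative and not the test profile's exact arguments):

    import subprocess

    # Run the CPU stressor on all logical CPUs for 60 seconds and print bogo-ops metrics.
    subprocess.run(
        ["stress-ng", "--cpu", "0", "--timeout", "60s", "--metrics-brief"],
        check=True,
    )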

Stress-NG 0.11.07 - Core i9 10980XE (Bogo Ops/s, more is better):
  MMAP: 406.56 (SE +/- 4.84, N = 6)
  NUMA: 525.77 (SE +/- 1.62, N = 3)
  MEMFD: 1277.70 (SE +/- 0.13, N = 3)
  Atomic: 161757.02 (SE +/- 1526.17, N = 3)
  Crypto: 3727.17 (SE +/- 3.54, N = 3)
  Malloc: 187707954.71 (SE +/- 15240.97, N = 3)
  RdRand: 184119.01 (SE +/- 5.60, N = 3)
  Forking: 101939.43 (SE +/- 661.27, N = 3)
  SENDFILE: 299353.44 (SE +/- 89.24, N = 3)
  CPU Cache: 87.62 (SE +/- 0.81, N = 10)
  CPU Stress: 9167.29 (SE +/- 11.02, N = 3)
  Semaphores: 3591130.98 (SE +/- 1331.82, N = 3)
  Matrix Math: 95683.13 (SE +/- 99.78, N = 3)
  Vector Math: 96962.99 (SE +/- 0.81, N = 3)
  Memory Copying: 2595.74 (SE +/- 2.69, N = 3)
  Socket Activity: 12525.71 (SE +/- 96.29, N = 3)
  Context Switching: 17384021.19 (SE +/- 205586.05, N = 6)
  Glibc C String Functions: 2171946.51 (SE +/- 35786.75, N = 15)
  Glibc Qsort Data Sorting: 280.81 (SE +/- 0.42, N = 3)
  System V Message Passing: 8402205.92 (SE +/- 6012.05, N = 3)
Compiler notes: (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

ECP-CANDLE 0.3 - Core i9 10980XE (Seconds, fewer is better):
  Benchmark: P1B2: 32.22
  Benchmark: P3B1: 661.24
  Benchmark: P3B2: 575.28

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
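PyPerformance is distributed as a Python package with its own runner; a minimal sketch of invoking a couple of the benchmarks below outside the Phoronix Test Suite, assuming pyperformance is installed (the option spelling follows its documented CLI and may vary between versions):

    import subprocess

    # Run two micro-benchmarks and write the results to a JSON file.
    subprocess.run(
        ["pyperformance", "run", "--benchmarks", "chaos,nbody", "-o", "results.json"],
        check=True,
    )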

PyPerformance 1.0.0 - Core i9 10980XE (Milliseconds, fewer is better):
  go: 204
  2to3: 269
  chaos: 86.8 (SE +/- 0.06, N = 3)
  float: 92.8 (SE +/- 0.07, N = 3)
  nbody: 106
  pathlib: 15.2 (SE +/- 0.00, N = 3)
  raytrace: 381
  json_loads: 20.6 (SE +/- 0.03, N = 3)
  crypto_pyaes: 87.2 (SE +/- 0.03, N = 3)
  regex_compile: 137
  python_startup: 7.20 (SE +/- 0.01, N = 3)
  django_template: 38.9 (SE +/- 0.03, N = 3)
  pickle_pure_python: 350

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
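The scores below come from the ai-benchmark Python package running on top of TensorFlow; a minimal sketch of generating them directly, assuming the ai_benchmark package and a TensorFlow build are installed:

    from ai_benchmark import AIBenchmark

    # Runs the device inference and training workloads and reports the three scores.
    benchmark = AIBenchmark()
    results = benchmark.run()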

AI Benchmark Alpha 0.1.2 - Core i9 10980XE (Score, more is better):
  Device Inference Score: 1959
  Device Training Score: 1605
  Device AI Score: 3564

Geekbench

This is a benchmark of Geekbench 5 Pro. The test profile automates the execution of Geekbench 5 under the Phoronix Test Suite and requires a valid Geekbench 5 Pro license key; the test will not run without one. Learn more via the OpenBenchmarking.org test page.

Geekbench 5 - Core i9 10980XE (more is better):
  CPU Multi Core (Score): 16021 (SE +/- 5.00, N = 3)
  CPU Multi Core - Gaussian Blur (Mpixels/sec): 869.1 (SE +/- 0.98, N = 3)
  CPU Multi Core - Face Detection (images/sec): 162.8 (SE +/- 1.46, N = 3)
  CPU Multi Core - Horizon Detection (Gpixels/sec): 392.5 (SE +/- 1.84, N = 3)
  CPU Single Core (Score): 1333 (SE +/- 1.86, N = 3)
  CPU Single Core - Gaussian Blur (Mpixels/sec): 46.0 (SE +/- 0.24, N = 3)
  CPU Single Core - Face Detection (images/sec): 9.82 (SE +/- 0.01, N = 3)
  CPU Single Core - Horizon Detection (Gpixels/sec): 28.2 (SE +/- 0.07, N = 3)

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.
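This result times the system tesseract binary across seven images; a minimal sketch of an equivalent call from Python, assuming the pytesseract bindings and Pillow are installed and that page.png is a hypothetical input image:

    import pytesseract
    from PIL import Image

    # OCR a single image to text, as the benchmark does for each of its 7 images.
    text = pytesseract.image_to_string(Image.open("page.png"))
    print(text)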

Tesseract OCR 4.1.1 - Time To OCR 7 Images - Core i9 10980XE (Seconds, fewer is better): 23.15 (SE +/- 0.01, N = 3)

SPECjbb 2015

This is a benchmark of SPECjbb 2015. For this test profile to work, you must have a valid license/copy of the SPECjbb 2015 ISO (SPECjbb2015-1.02.iso) in your Phoronix Test Suite download cache. Learn more via the OpenBenchmarking.org test page.

SPECjbb 2015 - Core i9 10980XE (jOPS, more is better):
  SPECjbb2015-Composite max-jOPS: 38241
  SPECjbb2015-Composite critical-jOPS: 18074

99 Results Shown

Crypto++:
  All Algorithms
  Keyed Algorithms
  Unkeyed Algorithms
  Integer + Elliptic Curve Public Key Algorithms
CP2K Molecular Dynamics
Polyhedron Fortran Benchmarks:
  ac
  air
  mdbx
  doduc
  linpk
  tfft2
  aermod
  rnflow
  induct2
  protein
  capacita
  channel2
  fatigue2
  gas_dyn2
  test_fpu2
  mp_prop_design
Zstd Compression:
  3
  19
oneDNN:
  IP Batch 1D - f32 - CPU
  IP Batch All - f32 - CPU
  IP Batch 1D - u8s8f32 - CPU
  IP Batch All - u8s8f32 - CPU
  IP Batch 1D - bf16bf16bf16 - CPU
  IP Batch All - bf16bf16bf16 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch deconv_1d - f32 - CPU
  Deconvolution Batch deconv_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch deconv_1d - u8s8f32 - CPU
  Deconvolution Batch deconv_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Convolution Batch Shapes Auto - bf16bf16bf16 - CPU
  Deconvolution Batch deconv_1d - bf16bf16bf16 - CPU
  Deconvolution Batch deconv_3d - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU
Timed Linux Kernel Compilation
Gzip Compression
Darmstadt Automotive Parallel Heterogeneous Suite:
  OpenMP - NDT Mapping
  OpenMP - Points2Image
  OpenMP - Euclidean Cluster
Hugin
Stress-NG:
  MMAP
  NUMA
  MEMFD
  Atomic
  Crypto
  Malloc
  RdRand
  Forking
  SENDFILE
  CPU Cache
  CPU Stress
  Semaphores
  Matrix Math
  Vector Math
  Memory Copying
  Socket Activity
  Context Switching
  Glibc C String Functions
  Glibc Qsort Data Sorting
  System V Message Passing
ECP-CANDLE:
  P1B2
  P3B1
  P3B2
PyPerformance:
  go
  2to3
  chaos
  float
  nbody
  pathlib
  raytrace
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
  django_template
  pickle_pure_python
AI Benchmark Alpha:
  Device Inference Score
  Device Training Score
  Device AI Score
Geekbench:
  CPU Multi Core
  CPU Multi Core - Gaussian Blur
  CPU Multi Core - Face Detection
  CPU Multi Core - Horizon Detection
  CPU Single Core
  CPU Single Core - Gaussian Blur
  CPU Single Core - Face Detection
  CPU Single Core - Horizon Detection
Tesseract OCR
SPECjbb 2015:
  SPECjbb2015-Composite max-jOPS
  SPECjbb2015-Composite critical-jOPS