HEDT CPUs July 2020

Intel Core i9-10980XE testing with a ASRock X299 Steel Legend (P1.30 BIOS) and NVIDIA NV132 11GB on Pop 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2007231-NE-HEDTCPUSJ63
Result Identifier: Core i9 10980XE - Test Date: July 23 2020 - Run Duration: 5 Hours, 26 Minutes


HEDT CPUs July 2020 - OpenBenchmarking.org / Phoronix Test Suite

System Under Test:
  Processor: Intel Core i9-10980XE @ 4.80GHz (18 Cores / 36 Threads)
  Motherboard: ASRock X299 Steel Legend (P1.30 BIOS)
  Chipset: Intel Sky Lake-E DMI3 Registers
  Memory: 32GB
  Disk: Samsung SSD 970 PRO 512GB
  Graphics: NVIDIA NV132 11GB
  Audio: Realtek ALC1220
  Monitor: ASUS MG28U
  Network: Intel I219-V + Intel I211
  OS: Pop 20.04
  Kernel: 5.4.0-7634-generic (x86_64)
  Desktop: GNOME Shell 3.36.3
  Display Server: X Server 1.20.8
  Display Driver: modesetting 1.20.8
  OpenGL: 4.3 Mesa 20.0.8
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 3840x2160

System Notes:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch=skylake --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- CPU Microcode: 0x5002f01
- Python 2.7.18rc1 + Python 3.8.2
- Security mitigations: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + tsx_async_abort: Mitigation of TSX disabled

HEDT CPUs July 2020 - result summary table (86 results; the individual values are repeated in the per-test sections below and indexed at the end of this file).

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and the Linux networking stack under stress. The test runs on the local host but requires root permissions. It works by creating three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have a WireGuard device. The two WireGuard devices send traffic through the loopback device of ns0, so the test ends up exercising encryption and decryption at the same time -- a CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better)
Core i9 10980XE: 243.29 (SE +/- 0.47, N = 3)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.25 - Backend: BLAS (Nodes Per Second, More Is Better)
Core i9 10980XE: 1068 (SE +/- 4.81, N = 3)
1. (CXX) g++ options: -pthread

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better)
Core i9 10980XE: 114.47 (SE +/- 0.59, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, Fewer Is Better)
Core i9 10980XE: 97.93 (SE +/- 0.09, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, Fewer Is Better)
Core i9 10980XE: 64.34 (SE +/- 0.54, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, Fewer Is Better)
Core i9 10980XE: 11.15 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, Fewer Is Better)
Core i9 10980XE: 14.55 (SE +/- 0.13, N = 15)
1. (CXX) g++ options: -O2 -lOpenCL

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.
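An in-memory compressor benchmark of this kind boils down to timing compress/decompress round-trips over a fixed buffer and reporting throughput in MB/s. As a minimal sketch (not lzbench itself -- it uses zlib from the Python standard library on a synthetic stand-in corpus, whereas lzbench proper benchmarks many codecs against a Linux kernel source tarball):

```python
import time
import zlib

def bench(codec_name, compress, decompress, data, runs=3):
    """Time in-memory compression/decompression and report MB/s for each
    direction, keeping the best of several runs (higher is better)."""
    mb = len(data) / 1e6
    best_c = best_d = 0.0
    for _ in range(runs):
        t0 = time.perf_counter()
        packed = compress(data)
        t1 = time.perf_counter()
        restored = decompress(packed)
        t2 = time.perf_counter()
        assert restored == data  # verify the round-trip
        best_c = max(best_c, mb / (t1 - t0))
        best_d = max(best_d, mb / (t2 - t1))
    return codec_name, best_c, best_d

# Stand-in corpus; lzbench proper uses a Linux kernel source tree tarball.
data = b"static int flow(struct state *s) { return s->level; }\n" * 20000

name, c_mbps, d_mbps = bench("zlib-1", lambda d: zlib.compress(d, 1),
                             zlib.decompress, data)
print(f"{name}: compress {c_mbps:.0f} MB/s, decompress {d_mbps:.0f} MB/s")
```

As in the results below, decompression throughput typically far exceeds compression throughput for the same codec and level.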

lzbench 1.8 - Test: XZ 0 - Process: Compression (MB/s, More Is Better)
Core i9 10980XE: 45
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: XZ 0 - Process: Decompression (MB/s, More Is Better)
Core i9 10980XE: 129
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Zstd 1 - Process: Compression (MB/s, More Is Better)
Core i9 10980XE: 545
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Zstd 1 - Process: Decompression (MB/s, More Is Better)
Core i9 10980XE: 1489
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Zstd 8 - Process: Compression (MB/s, More Is Better)
Core i9 10980XE: 95
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Zstd 8 - Process: Decompression (MB/s, More Is Better)
Core i9 10980XE: 1465
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Crush 0 - Process: Compression (MB/s, More Is Better)
Core i9 10980XE: 116
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Crush 0 - Process: Decompression (MB/s, More Is Better)
Core i9 10980XE: 532
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Brotli 0 - Process: Compression (MB/s, More Is Better)
Core i9 10980XE: 539 (SE +/- 0.33, N = 3)
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Brotli 0 - Process: Decompression (MB/s, More Is Better)
Core i9 10980XE: 707 (SE +/- 0.58, N = 3)
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Brotli 2 - Process: Compression (MB/s, More Is Better)
Core i9 10980XE: 217
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Brotli 2 - Process: Decompression (MB/s, More Is Better)
Core i9 10980XE: 817
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Libdeflate 1 - Process: Compression (MB/s, More Is Better)
Core i9 10980XE: 231
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8 - Test: Libdeflate 1 - Process: Decompression (MB/s, More Is Better)
Core i9 10980XE: 1295 (SE +/- 0.58, N = 3)
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, More Is Better)
Core i9 10980XE: 9226476 (SE +/- 22362.22, N = 3)
1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance (Nodes Per Second, More Is Better)
Core i9 10980XE: 1410806 (SE +/- 884.76, N = 5)
1. (CC) gcc options: -O3 -march=native

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
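The benchdnn-style figures below report an average latency in milliseconds alongside a MIN floor. As a hypothetical stand-in (pure Python, not oneDNN's actual harness), the measurement pattern looks like this -- warm up the kernel, then time repeated runs and keep both the mean and the minimum:

```python
import time

def time_kernel(fn, *args, warmup=2, runs=10):
    """Benchdnn-style timing sketch (hypothetical stand-in): run a kernel
    repeatedly and report average and minimum latency in milliseconds."""
    for _ in range(warmup):          # discard cold-start iterations
        fn(*args)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1000.0)
    return sum(samples) / len(samples), min(samples)

# A toy inner product standing in for an "IP Batch" primitive.
def inner_product(a, b):
    return sum(x * y for x, y in zip(a, b))

a = [0.5] * 4096
b = [2.0] * 4096
avg_ms, min_ms = time_kernel(inner_product, a, b)
print(f"avg: {avg_ms:.4f} ms  MIN: {min_ms:.4f} ms")
```

The MIN value shown under each result below is the same idea: the fastest observed iteration, which bounds how much of the average is measurement noise.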

oneDNN 1.5 - Harness: IP Batch 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 2.19855 (SE +/- 0.01914, N = 3) MIN: 2.08
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: IP Batch All - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 32.09 (SE +/- 0.08, N = 3) MIN: 30.59
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 0.514084 (SE +/- 0.002493, N = 3) MIN: 0.49
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 7.19860 (SE +/- 0.05636, N = 3) MIN: 6.86
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: IP Batch 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 5.53233 (SE +/- 0.00112, N = 3) MIN: 5.46
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: IP Batch All - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 63.48 (SE +/- 0.03, N = 3) MIN: 62.83
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 9.85692 (SE +/- 0.04742, N = 3) MIN: 9.71
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 1.71524 (SE +/- 0.00156, N = 3) MIN: 1.68
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 2.60917 (SE +/- 0.00436, N = 3) MIN: 2.58
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 9.39221 (SE +/- 0.05740, N = 3) MIN: 9.23
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 0.458728 (SE +/- 0.000428, N = 3) MIN: 0.45
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 0.680155 (SE +/- 0.003916, N = 3) MIN: 0.66
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 171.45 (SE +/- 1.11, N = 3) MIN: 167.84
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 56.84 (SE +/- 1.15, N = 15) MIN: 50.94
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 7.85544 (SE +/- 0.00830, N = 3) MIN: 7.65
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 9.20655 (SE +/- 0.00358, N = 3) MIN: 9.02
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 10.81 (SE +/- 0.00, N = 3) MIN: 10.68
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 1.42876 (SE +/- 0.00891, N = 3) MIN: 1.38
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 0.367699 (SE +/- 0.004630, N = 3) MIN: 0.34
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
Core i9 10980XE: 1.70524 (SE +/- 0.00114, N = 3) MIN: 1.61
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 0 Two-Pass (Frames Per Second, More Is Better)
Core i9 10980XE: 0.3 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 4 Two-Pass (Frames Per Second, More Is Better)
Core i9 10980XE: 2.28 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, More Is Better)
Core i9 10980XE: 18.31 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better)
Core i9 10980XE: 3.61 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 8 Realtime (Frames Per Second, More Is Better)
Core i9 10980XE: 33.96 (SE +/- 0.15, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02 - Compress Speed Test (MIPS, More Is Better)
Core i9 10980XE: 98104 (SE +/- 468.74, N = 3)
1. (CXX) g++ options: -pipe -lpthread

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 9 - Total Time (Nodes Per Second, More Is Better)
Core i9 10980XE: 49652366 (SE +/- 113161.61, N = 3)
1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++11 -pedantic -O3 -msse -msse3 -mpopcnt -flto

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better)
Core i9 10980XE: 54209155 (SE +/- 693981.08, N = 3)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 0 (Seconds, Fewer Is Better)
Core i9 10980XE: 67.20 (SE +/- 0.09, N = 3)
1. (CXX) g++ options: -O3 -fPIC

libavif avifenc 0.7.3 - Encoder Speed: 2 (Seconds, Fewer Is Better)
Core i9 10980XE: 40.22 (SE +/- 0.20, N = 3)
1. (CXX) g++ options: -O3 -fPIC

libavif avifenc 0.7.3 - Encoder Speed: 8 (Seconds, Fewer Is Better)
Core i9 10980XE: 4.899 (SE +/- 0.024, N = 3)
1. (CXX) g++ options: -O3 -fPIC

libavif avifenc 0.7.3 - Encoder Speed: 10 (Seconds, Fewer Is Better)
Core i9 10980XE: 4.745 (SE +/- 0.007, N = 3)
1. (CXX) g++ options: -O3 -fPIC

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better)
Core i9 10980XE: 23.27 (SE +/- 0.01, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
Core i9 10980XE: 48.63 (SE +/- 0.70, N = 4)

Parallel BZIP2 Compression

This test measures the time needed to compress a file (a .tar package of the Linux kernel source code) using BZIP2 compression. Learn more via the OpenBenchmarking.org test page.
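pbzip2 gets its parallelism from the fact that a .bz2 file may contain multiple independent streams: the input is split into fixed-size blocks, each block is compressed on its own worker, and the outputs are simply concatenated. A minimal sketch of that scheme, using Python's bz2 and a thread pool rather than pbzip2 itself:

```python
import bz2
from concurrent.futures import ThreadPoolExecutor

def parallel_bzip2(data, block_size=900_000, workers=4):
    """Sketch of pbzip2's approach: split the input into fixed-size blocks,
    compress each block independently in parallel, and concatenate the
    resulting bz2 streams into one valid multi-stream .bz2 payload."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        compressed = list(pool.map(bz2.compress, blocks))
    return b"".join(compressed)

payload = b"Linux kernel source tree stand-in\n" * 100_000
packed = parallel_bzip2(payload)
# bz2 transparently decompresses concatenated streams back to the input.
assert bz2.decompress(packed) == payload
print(f"{len(payload)} -> {len(packed)} bytes")
```

Because each block is compressed independently, the ratio is marginally worse than single-stream bzip2, but the work scales across cores -- which is why this test finishes in a couple of seconds on an 18-core part.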

Parallel BZIP2 Compression 1.1.12 - 256MB File Compression (Seconds, Fewer Is Better)
Core i9 10980XE: 2.234 (SE +/- 0.006, N = 3)
1. (CXX) g++ options: -O2 -pthread -lbz2 -lpthread

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools concerning simulation of flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 1 (Seconds, Fewer Is Better)
Core i9 10980XE: 406.91 (SE +/- 0.19, N = 3)
1. flow 2020.04

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 2 (Seconds, Fewer Is Better)
Core i9 10980XE: 236.58 (SE +/- 0.07, N = 3)
1. flow 2020.04

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 4 (Seconds, Fewer Is Better)
Core i9 10980XE: 171.10 (SE +/- 0.12, N = 3)
1. flow 2020.04

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 8 (Seconds, Fewer Is Better)
Core i9 10980XE: 204.71 (SE +/- 0.08, N = 3)
1. flow 2020.04

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 16 (Seconds, Fewer Is Better)
Core i9 10980XE: 322.18 (SE +/- 0.11, N = 3)
1. flow 2020.04

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 18 (Seconds, Fewer Is Better)
Core i9 10980XE: 359.10 (SE +/- 0.11, N = 3)
1. flow 2020.04

Gzip Compression

This test measures the time needed to archive/compress two copies of the Linux 4.13 kernel source tree using Gzip compression. Learn more via the OpenBenchmarking.org test page.

Gzip Compression - Linux Source Tree Archiving To .tar.gz (Seconds, Fewer Is Better)
Core i9 10980XE: 32.18 (SE +/- 0.03, N = 3)

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, Fewer Is Better)
Core i9 10980XE: 19.29 (SE +/- 0.01, N = 3)
1. (CC) gcc options: -pthread -fvisibility=hidden -O2

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, Fewer Is Better)
Core i9 10980XE: 9.029 (SE +/- 0.007, N = 5)
1. (CXX) g++ options: -O2 -fvisibility=hidden -lm

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, Fewer Is Better)
Core i9 10980XE: 8.805 (SE +/- 0.028, N = 3)
1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lm

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
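The underlying search is classic backtracking over board rows; the OpenMP solver parallelizes it by farming out the first-row placements to separate threads, each of which then runs the same serial kernel. The serial kernel can be sketched with bitmask backtracking (a stand-in sketch, not the m-queens source):

```python
def count_queens(n):
    """Count N-queens solutions with bitmask backtracking. An OpenMP-style
    solver would split the top-level loop (first-row placements) across
    threads and sum the per-thread counts."""
    full = (1 << n) - 1          # all n columns occupied

    def solve(cols, diag_l, diag_r):
        if cols == full:
            return 1
        total = 0
        free = full & ~(cols | diag_l | diag_r)
        while free:
            bit = free & -free   # lowest available square in this row
            free -= bit
            total += solve(cols | bit,
                           ((diag_l | bit) << 1) & full,
                           (diag_r | bit) >> 1)
        return total

    return solve(0, 0, 0)

print(count_queens(8))   # the classic 8x8 board has 92 solutions
```

The benchmark's board sizes are large enough (the related N-Queens test below uses n = 18) that the exponential search keeps all 36 threads busy for a meaningful interval.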

m-queens 1.2 - Time To Solve (Seconds, Fewer Is Better)
Core i9 10980XE: 47.73 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -fopenmp -O2 -march=native

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0 - Mosaic of M17, K band, 1.5 deg x 1.5 deg (Seconds, Fewer Is Better)
Core i9 10980XE: 71.57 (SE +/- 0.03, N = 3)
1. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

N-Queens

This is an OpenMP-threaded test that solves the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.

N-Queens 1.0 - Elapsed Time (Seconds, Fewer Is Better)
Core i9 10980XE: 8.641 (SE +/- 0.002, N = 3)
1. (CC) gcc options: -static -fopenmp -O3 -march=native

System XZ Decompression

This test measures the time to decompress a Linux kernel tarball using XZ. Learn more via the OpenBenchmarking.org test page.

System XZ Decompression (Seconds, Fewer Is Better)
Core i9 10980XE: 3.350 (SE +/- 0.004, N = 3)

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of autonomous driving. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, More Is Better)
Core i9 10980XE: 893.54 (SE +/- 2.69, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute, More Is Better)
Core i9 10980XE: 21187.62 (SE +/- 169.25, N = 14)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute, More Is Better)
Core i9 10980XE: 1342.83 (SE +/- 2.51, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 2D Function Plotting, 1000 Times (Seconds, Fewer Is Better)
Core i9 10980XE: 145.64 (SE +/- 1.16, N = 3)
1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

G'MIC - Test: Plotting Isosurface Of A 3D Volume, 1000 Times (Seconds, Fewer Is Better)
Core i9 10980XE: 18.07 (SE +/- 0.01, N = 3)
1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

G'MIC - Test: 3D Elevated Function In Random Colors, 100 Times (Seconds, Fewer Is Better)
Core i9 10980XE: 60.18 (SE +/- 0.00, N = 3)
1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better)
Core i9 10980XE: 46.16 (SE +/- 0.54, N = 3)

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 9.6.0+dfsg - Processing 60 Page PDF Document (Seconds, Fewer Is Better)
Core i9 10980XE: 19.28 (SE +/- 0.07, N = 3)

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video software on the CPU and optional GPU (OpenCL / CUDA) support. Learn more via the OpenBenchmarking.org test page.

NeatBench 5 - Acceleration: CPU (FPS, More Is Better)
Core i9 10980XE: 25.8 (SE +/- 0.38, N = 3)

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, More Is Better)
Core i9 10980XE: 1936

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, More Is Better)
Core i9 10980XE: 1547

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, More Is Better)
Core i9 10980XE: 3483

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

Tesseract OCR 4.1.1 - Time To OCR 7 Images (Seconds, Fewer Is Better)
Core i9 10980XE: 23.47 (SE +/- 0.01, N = 3)

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric (VGR Performance Metric, More Is Better)
Core i9 10980XE: 212219
1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lGL -lGLdispatch -lX11 -lpthread -ldl -luuid -lm

86 Results Shown

WireGuard + Linux Networking Stack Stress Test
LeelaChessZero
Rodinia:
  OpenMP LavaMD
  OpenMP HotSpot3D
  OpenMP Leukocyte
  OpenMP CFD Solver
  OpenMP Streamcluster
lzbench:
  XZ 0 - Compression
  XZ 0 - Decompression
  Zstd 1 - Compression
  Zstd 1 - Decompression
  Zstd 8 - Compression
  Zstd 8 - Decompression
  Crush 0 - Compression
  Crush 0 - Decompression
  Brotli 0 - Compression
  Brotli 0 - Decompression
  Brotli 2 - Compression
  Brotli 2 - Decompression
  Libdeflate 1 - Compression
  Libdeflate 1 - Decompression
Crafty
TSCP
oneDNN:
  IP Batch 1D - f32 - CPU
  IP Batch All - f32 - CPU
  IP Batch 1D - u8s8f32 - CPU
  IP Batch All - u8s8f32 - CPU
  IP Batch 1D - bf16bf16bf16 - CPU
  IP Batch All - bf16bf16bf16 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch deconv_1d - f32 - CPU
  Deconvolution Batch deconv_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch deconv_1d - u8s8f32 - CPU
  Deconvolution Batch deconv_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Convolution Batch Shapes Auto - bf16bf16bf16 - CPU
  Deconvolution Batch deconv_1d - bf16bf16bf16 - CPU
  Deconvolution Batch deconv_3d - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU
AOM AV1:
  Speed 0 Two-Pass
  Speed 4 Two-Pass
  Speed 6 Realtime
  Speed 6 Two-Pass
  Speed 8 Realtime
7-Zip Compression
Stockfish
asmFish
libavif avifenc:
  0
  2
  8
  10
Timed Apache Compilation
Timed Linux Kernel Compilation
Parallel BZIP2 Compression
Open Porous Media:
  Flow MPI Norne - 1
  Flow MPI Norne - 2
  Flow MPI Norne - 4
  Flow MPI Norne - 8
  Flow MPI Norne - 16
  Flow MPI Norne - 18
Gzip Compression
XZ Compression
FLAC Audio Encoding
LAME MP3 Encoding
m-queens
Montage Astronomical Image Mosaic Engine
N-Queens
System XZ Decompression
Darmstadt Automotive Parallel Heterogeneous Suite:
  OpenMP - NDT Mapping
  OpenMP - Points2Image
  OpenMP - Euclidean Cluster
G'MIC:
  2D Function Plotting, 1000 Times
  Plotting Isosurface Of A 3D Volume, 1000 Times
  3D Elevated Function In Rand Colors, 100 Times
Hugin
OCRMyPDF
NeatBench
AI Benchmark Alpha:
  Device Inference Score
  Device Training Score
  Device AI Score
Tesseract OCR
BRL-CAD