HEDT CPUs July 2020

AMD Ryzen Threadripper 3960X 24-Core testing with an MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS) and Sapphire AMD Radeon RX 5500/5500M / Pro 5500M 4GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2007244-PTS-2007231N08
Test Categories

Audio Encoding 2 Tests
AV1 2 Tests
Chess Test Suite 7 Tests
Timed Code Compilation 2 Tests
C/C++ Compiler Tests 8 Tests
Compression Tests 6 Tests
CPU Massive 17 Tests
Creator Workloads 12 Tests
Encoding 4 Tests
HPC - High Performance Computing 5 Tests
Imaging 4 Tests
Machine Learning 3 Tests
Multi-Core 13 Tests
NVIDIA GPU Compute 3 Tests
OCR 2 Tests
OpenMPI Tests 2 Tests
Programmer / Developer System Benchmarks 2 Tests
Python Tests 2 Tests
Server CPU Tests 8 Tests
Single-Threaded 5 Tests
Video Encoding 2 Tests
Common Workstation Benchmarks 2 Tests

Test Runs

Core i9 10980XE: run July 23 2020, test duration 5 hours, 26 minutes
Threadripper 3960X: run July 24 2020, test duration 5 hours, 1 minute
Average test duration: 5 hours, 14 minutes


HEDT CPUs July 2020 - System Details

Core i9 10980XE:
  Processor: Intel Core i9-10980XE @ 4.80GHz (18 Cores / 36 Threads)
  Motherboard: ASRock X299 Steel Legend (P1.30 BIOS)
  Chipset: Intel Sky Lake-E DMI3 Registers
  Memory: 32GB
  Disk: Samsung SSD 970 PRO 512GB
  Graphics: NVIDIA NV132 11GB
  Audio: Realtek ALC1220
  Monitor: ASUS MG28U
  Network: Intel I219-V + Intel I211
  OS: Pop 20.04
  Kernel: 5.4.0-7634-generic (x86_64)
  Desktop: GNOME Shell 3.36.3
  Display Server: X Server 1.20.8
  Display Driver: modesetting 1.20.8
  OpenGL: 4.3 Mesa 20.0.8
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 3840x2160

Threadripper 3960X:
  Processor: AMD Ryzen Threadripper 3960X 24-Core @ 3.80GHz (24 Cores / 48 Threads)
  Motherboard: MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 32GB
  Disk: 1000GB Sabrent Rocket 4.0 1TB
  Graphics: Sapphire AMD Radeon RX 5500/5500M / Pro 5500M 4GB (1900/875MHz)
  Audio: AMD Navi 10 HDMI Audio
  Monitor: ASUS MG28U
  Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
  OS: Ubuntu 20.04
  Kernel: 5.4.0-39-generic (x86_64)
  Desktop: GNOME Shell 3.36.1
  Display Server: X Server 1.20.8
  Display Driver: modesetting 1.20.8
  OpenGL: 4.6 Mesa 20.0.4 (LLVM 9.0.1)
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 3840x2160

Compiler Details:
  Core i9 10980XE: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch=skylake --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Threadripper 3960X: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details:
  Core i9 10980XE: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x5002f01
  Threadripper 3960X: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x8301025

Python Details:
  Core i9 10980XE: Python 2.7.18rc1 + Python 3.8.2
  Threadripper 3960X: Python 3.8.2

Security Details:
  Core i9 10980XE: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + tsx_async_abort: Mitigation of TSX disabled
  Threadripper 3960X: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Core i9 10980XE vs. Threadripper 3960X Comparison (Phoronix Test Suite): overview graphic of per-test percentage differences relative to baseline; the detailed per-test results follow below.
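
The deltas in the overview above are simple ratio-based percentage differences between the two systems' results. A minimal sketch of that arithmetic, using the m-queens times from the results below as sample inputs (the function name is ours for illustration, not from the Phoronix Test Suite code):

```python
def percent_faster(slower: float, faster: float) -> float:
    """Percentage advantage of the faster result over the slower one
    for a lower-is-better metric: (slower / faster - 1) * 100."""
    return (slower / faster - 1.0) * 100.0

# m-queens Time To Solve (seconds, fewer is better), from the results below
i9_10980xe = 47.73
tr_3960x = 22.97

delta = percent_faster(i9_10980xe, tr_3960x)
print(f"Threadripper 3960X advantage: {delta:.1f}%")  # prints: Threadripper 3960X advantage: 107.8%
```

The same ratio applied to a higher-is-better metric uses the larger value in the numerator, e.g. the 7-Zip MIPS results below give 152349 / 98104 - 1, roughly 55%.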

HEDT CPUs July 2020: combined summary table of all results in this comparison (Core i9 10980XE vs. Threadripper 3960X); the per-test results are presented individually below.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Core i9 10980XE: 1.428760 (SE +/- 0.008911, N = 3; runs min/avg/max 1.42/1.43/1.45; MIN: 1.38)
  Threadripper 3960X: 0.448066 (SE +/- 0.002149, N = 3; runs min/avg/max 0.44/0.45/0.45; MIN: 0.43)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
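
The SE figures in these results are standard errors of the mean across the N runs. A small sketch of the calculation; the three run values here are reconstructed from the reported min/avg/max, so the result only approximates the reported SE of 0.008911:

```python
import math
import statistics

def standard_error(samples: list[float]) -> float:
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# The three Core i9 10980XE runs, approximated from the reported min/avg/max
runs = [1.42, 1.43, 1.45]
print(f"SE +/- {standard_error(runs):.4f}, N = {len(runs)}")  # ~0.0088, close to the reported 0.008911
```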

oneDNN 1.5, Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Core i9 10980XE: 0.680155 (SE +/- 0.003916, N = 3; runs min/avg/max 0.67/0.68/0.69; MIN: 0.66)
  Threadripper 3960X: 1.960910 (SE +/- 0.001505, N = 3; runs min/avg/max 1.96/1.96/1.96; MIN: 1.88)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Core i9 10980XE: 0.367699 (SE +/- 0.004630, N = 3; runs min/avg/max 0.36/0.37/0.38; MIN: 0.34)
  Threadripper 3960X: 0.998713 (SE +/- 0.000520, N = 3; runs min/avg/max 1/1/1; MIN: 0.97)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5, Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Core i9 10980XE: 0.514084 (SE +/- 0.002493, N = 3; runs min/avg/max 0.51/0.51/0.52; MIN: 0.49)
  Threadripper 3960X: 1.183060 (SE +/- 0.017087, N = 14; runs min/avg/max 1.16/1.18/1.4; MIN: 1.13)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

m-queens 1.2, Time To Solve (Seconds, fewer is better):
  Core i9 10980XE: 47.73 (SE +/- 0.01, N = 3; runs min/avg/max 47.71/47.73/47.75)
  Threadripper 3960X: 22.97 (SE +/- 0.05, N = 3; runs min/avg/max 22.89/22.97/23.06)
  1. (CXX) g++ options: -fopenmp -O2 -march=native
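
m-queens solves the N-queens problem with OpenMP threading. The same strategy, fanning the first-row queen placements out across workers and summing their solution counts, can be sketched in Python with a process pool. This is an illustrative re-implementation, not the m-queens source:

```python
from concurrent.futures import ProcessPoolExecutor

def count_from(col: int, n: int) -> int:
    """Count completions of a board whose first queen sits at (0, col),
    using bitmasks for occupied columns and both diagonal directions."""
    full = (1 << n) - 1

    def solve(cols: int, diag1: int, diag2: int) -> int:
        if cols == full:          # all n queens placed
            return 1
        total = 0
        free = full & ~(cols | diag1 | diag2)
        while free:
            bit = free & -free    # lowest free square in this row
            free -= bit
            total += solve(cols | bit, ((diag1 | bit) << 1) & full, (diag2 | bit) >> 1)
        return total

    bit = 1 << col
    return solve(bit, (bit << 1) & full, bit >> 1)

def total_solutions(n: int) -> int:
    """Sum the per-first-column counts, computed in parallel worker processes."""
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(count_from, range(n), [n] * n))

if __name__ == "__main__":
    print(total_solutions(8))  # 92 solutions on the standard 8x8 board
```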

oneDNN

oneDNN 1.5, Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Core i9 10980XE: 0.458728 (SE +/- 0.000428, N = 3; runs min/avg/max 0.46/0.46/0.46)
  Threadripper 3960X: 4.713960 (SE +/- 0.005508, N = 3; runs min/avg/max 4.71/4.71/4.72; MIN: 4.61)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8, VGR Performance Metric (more is better):
  Core i9 10980XE: 212219
  Threadripper 3960X: 407079
  Notes: -lSM -lICE -lXi -lGLU -lXext -lXrender
  1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lGL -lGLdispatch -lX11 -lpthread -ldl -luuid -lm

N-Queens

This is an OpenMP-based test that solves the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.

N-Queens 1.0, Elapsed Time (Seconds, fewer is better):
  Core i9 10980XE: 8.641 (SE +/- 0.002, N = 3; runs min/avg/max 8.64/8.64/8.64)
  Threadripper 3960X: 4.524 (SE +/- 0.007, N = 3; runs min/avg/max 4.51/4.52/4.53)
  1. (CC) gcc options: -static -fopenmp -O3 -march=native

oneDNN

oneDNN 1.5, Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Core i9 10980XE: 7.19860 (SE +/- 0.05636, N = 3; runs min/avg/max 7.13/7.2/7.31; MIN: 6.86)
  Threadripper 3960X: 11.99610 (SE +/- 0.01572, N = 3; runs min/avg/max 11.96/12/12.01; MIN: 11.63)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5, Harness: IP Batch 1D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Core i9 10980XE: 2.19855 (SE +/- 0.01914, N = 3; runs min/avg/max 2.16/2.2/2.23; MIN: 2.08)
  Threadripper 3960X: 1.35796 (SE +/- 0.00461, N = 3; runs min/avg/max 1.35/1.36/1.36; MIN: 1.3)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

7-Zip Compression

This is a test of 7-Zip compression performance, using p7zip with its integrated benchmark feature (or upstream 7-Zip for the Windows x64 build). Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02, Compress Speed Test (MIPS, more is better):
  Core i9 10980XE: 98104 (SE +/- 468.74, N = 3; runs min/avg/max 97185/98104.33/98723)
  Threadripper 3960X: 152349 (SE +/- 170.94, N = 3; runs min/avg/max 152104/152349/152678)
  1. (CXX) g++ options: -pipe -lpthread

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine whose benchmark can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 9, Total Time (Nodes Per Second, more is better):
  Core i9 10980XE: 49652366 (SE +/- 113161.61, N = 3; runs min/avg/max 49452093/49652366.33/49843797)
  Threadripper 3960X: 75555027 (SE +/- 456523.74, N = 3; runs min/avg/max 74736388/75555026.67/76314510)
  1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++11 -pedantic -O3 -msse -msse3 -mpopcnt -flto

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4, Time To Compile (Seconds, fewer is better):
  Core i9 10980XE: 48.63 (SE +/- 0.70, N = 4; runs min/avg/max 47.9/48.63/50.74)
  Threadripper 3960X: 33.91 (SE +/- 0.58, N = 3; runs min/avg/max 33.29/33.91/35.06)

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video noise-reduction software, running on the CPU with optional GPU (OpenCL / CUDA) support. Learn more via the OpenBenchmarking.org test page.

NeatBench 5, Acceleration: CPU (FPS, more is better):
  Core i9 10980XE: 25.8 (SE +/- 0.38, N = 3; runs min/avg/max 25.2/25.83/26.5)
  Threadripper 3960X: 35.6 (SE +/- 0.18, N = 3; runs min/avg/max 35.3/35.63/35.9)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP LavaMD (Seconds, fewer is better):
  Core i9 10980XE: 114.47 (SE +/- 0.59, N = 3; runs min/avg/max 113.3/114.47/115.17)
  Threadripper 3960X: 83.30 (SE +/- 0.11, N = 3; runs min/avg/max 83.09/83.3/83.47)
  1. (CXX) g++ options: -O2 -lOpenCL

asmFish

This is a test of asmFish, an advanced chess engine benchmark written in assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (Nodes/second, more is better):
  Core i9 10980XE: 54209155 (SE +/- 693981.08, N = 3; runs min/avg/max 53045610/54209155/55446253)
  Threadripper 3960X: 74150577 (SE +/- 352685.57, N = 3; runs min/avg/max 73557114/74150576.67/74777479)

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC, Test: 3D Elevated Function In Random Colors, 100 Times (Seconds, fewer is better):
  Core i9 10980XE: 60.18 (SE +/- 0.00, N = 3; runs min/avg/max 60.18/60.18/60.18)
  Threadripper 3960X: 81.43 (SE +/- 0.16, N = 3; runs min/avg/max 81.22/81.43/81.75)
  1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools for simulating flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.

Open Porous Media, OPM Benchmark: Flow MPI Norne - Threads: 1 (Seconds, fewer is better):
  Core i9 10980XE: 406.91 (SE +/- 0.19, N = 3; runs min/avg/max 406.56/406.91/407.21)
  Threadripper 3960X: 302.59 (SE +/- 1.52, N = 3; runs min/avg/max 299.59/302.59/304.55)
  1. flow 2020.04

Rodinia

Rodinia 3.1, Test: OpenMP Leukocyte (Seconds, fewer is better):
  Core i9 10980XE: 64.34 (SE +/- 0.54, N = 3; runs min/avg/max 63.75/64.34/65.42)
  Threadripper 3960X: 48.36 (SE +/- 0.07, N = 3; runs min/avg/max 48.27/48.36/48.49)
  1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1, Test: OpenMP Streamcluster (Seconds, fewer is better):
  Core i9 10980XE: 14.55 (SE +/- 0.13, N = 15; runs min/avg/max 14.09/14.55/15.22)
  Threadripper 3960X: 19.10 (SE +/- 0.02, N = 3; runs min/avg/max 19.07/19.1/19.14)
  1. (CXX) g++ options: -O2 -lOpenCL

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100, WAV To MP3 (Seconds, fewer is better):
  Core i9 10980XE: 8.805 (SE +/- 0.028, N = 3; runs min/avg/max 8.77/8.81/8.86)
  Threadripper 3960X: 7.123 (SE +/- 0.015, N = 3; runs min/avg/max 7.1/7.12/7.15)
  Notes: -lncurses
  1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lm

Open Porous Media

Open Porous Media, OPM Benchmark: Flow MPI Norne - Threads: 2 (Seconds, fewer is better):
  Core i9 10980XE: 236.58 (SE +/- 0.07, N = 3; runs min/avg/max 236.43/236.57/236.65)
  Threadripper 3960X: 192.63 (SE +/- 0.17, N = 3; runs min/avg/max 192.36/192.63/192.95)
  1. flow 2020.04
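
Since Flow MPI Norne is run at several thread counts, the times can be turned into parallel speedup and efficiency figures. A small helper (names ours, for illustration), applied to the Core i9 10980XE's 1-thread and 2-thread times reported in this comparison:

```python
def speedup_and_efficiency(t_serial: float, t_parallel: float, workers: int) -> tuple[float, float]:
    """Classic parallel speedup S = T1 / Tp and efficiency E = S / workers."""
    s = t_serial / t_parallel
    return s, s / workers

# Core i9 10980XE, Flow MPI Norne at 1 vs. 2 threads (seconds)
s, e = speedup_and_efficiency(406.91, 236.58, 2)
print(f"speedup {s:.2f}x, efficiency {e:.0%}")  # prints: speedup 1.72x, efficiency 86%
```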

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3, Encoder Speed: 2 (Seconds, fewer is better):
  Core i9 10980XE: 40.22 (SE +/- 0.20, N = 3; runs min/avg/max 39.95/40.22/40.61)
  Threadripper 3960X: 32.84 (SE +/- 0.07, N = 3; runs min/avg/max 32.72/32.83/32.95)
  1. (CXX) g++ options: -O3 -fPIC

libavif avifenc 0.7.3, Encoder Speed: 0 (Seconds, fewer is better):
  Core i9 10980XE: 67.20 (SE +/- 0.09, N = 3; runs min/avg/max 67.04/67.2/67.34)
  Threadripper 3960X: 54.92 (SE +/- 0.16, N = 3; runs min/avg/max 54.64/54.92/55.2)
  1. (CXX) g++ options: -O3 -fPIC

Rodinia

Rodinia 3.1, Test: OpenMP CFD Solver (Seconds, fewer is better):
  Core i9 10980XE: 11.151 (SE +/- 0.050, N = 3; runs min/avg/max 11.07/11.15/11.24)
  Threadripper 3960X: 9.160 (SE +/- 0.071, N = 3; runs min/avg/max 9.04/9.16/9.29)
  1. (CXX) g++ options: -O2 -lOpenCL

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable, searchable, and copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 9.6.0+dfsg, Processing 60 Page PDF Document (Seconds, fewer is better):
  Core i9 10980XE: 19.28 (SE +/- 0.07, N = 3; runs min/avg/max 19.14/19.28/19.38)
  Threadripper 3960X: 16.13 (SE +/- 0.11, N = 3; runs min/avg/max 16/16.13/16.34)

Rodinia

Rodinia 3.1, Test: OpenMP HotSpot3D (Seconds, fewer is better):
  Core i9 10980XE: 97.93 (SE +/- 0.09, N = 3; runs min/avg/max 97.81/97.93/98.11)
  Threadripper 3960X: 83.11 (SE +/- 0.43, N = 3; runs min/avg/max 82.26/83.11/83.54)
  1. (CXX) g++ options: -O2 -lOpenCL

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 4 Two-Pass (Frames Per Second, more is better):
  Core i9 10980XE: 2.28 (SE +/- 0.00, N = 3; runs min/avg/max 2.28/2.28/2.28)
  Threadripper 3960X: 2.68 (SE +/- 0.00, N = 3; runs min/avg/max 2.68/2.68/2.68)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2, WAV To FLAC (Seconds, fewer is better):
  Core i9 10980XE: 9.029 (SE +/- 0.007, N = 5; runs min/avg/max 9.01/9.03/9.05)
  Threadripper 3960X: 7.688 (SE +/- 0.019, N = 5; runs min/avg/max 7.63/7.69/7.74)
  Notes: -logg
  1. (CXX) g++ options: -O2 -fvisibility=hidden -lm

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.
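As an illustration of what lzbench measures, here is a minimal Python sketch (not part of the test profile) that times in-memory one-shot compression and reports throughput in MB/s; zlib and lzma from the standard library stand in for the compressors lzbench bundles, and the synthetic buffer stands in for the kernel-source tarball:

```python
import time
import zlib
import lzma

def throughput_mb_s(compress, data, repeats=3):
    """Best-of-N one-shot compression throughput in MB/s, lzbench-style."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        compress(data)
        best = min(best, time.perf_counter() - start)
    return (len(data) / 1e6) / best

# Stand-in for the Linux kernel source tarball lzbench uses.
data = b"static int counter;\nint increment(void) { return ++counter; }\n" * 20000

for name, fn in (("zlib level 6", lambda d: zlib.compress(d, 6)),
                 ("xz preset 0", lambda d: lzma.compress(d, preset=0))):
    print(f"{name}: {throughput_mb_s(fn, data):8.1f} MB/s")
```

Taking the best of several repeats, as above, is the usual way to reduce scheduler noise in short in-memory runs.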

lzbench 1.8, Test: Zstd 8 - Process: Compression (MB/s, More Is Better)
  Core i9 10980XE:    95
  Threadripper 3960X: 109 (SE +/- 0.67, N = 3; Min: 108 / Avg: 108.67 / Max: 110)
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

AOM AV1

AOM AV1 2.0, Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better)
  Core i9 10980XE:    3.61 (SE +/- 0.01, N = 3; Min: 3.60 / Avg: 3.61 / Max: 3.62)
  Threadripper 3960X: 4.13 (SE +/- 0.00, N = 3; Min: 4.12 / Avg: 4.13 / Max: 4.13)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

lzbench

lzbench 1.8, Test: Libdeflate 1 - Process: Compression (MB/s, More Is Better)
  Core i9 10980XE:    231
  Threadripper 3960X: 264
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

AOM AV1

AOM AV1 2.0, Encoder Mode: Speed 8 Realtime (Frames Per Second, More Is Better)
  Core i9 10980XE:    33.96 (SE +/- 0.15, N = 3; Min: 33.66 / Avg: 33.96 / Max: 34.15)
  Threadripper 3960X: 38.70 (SE +/- 0.13, N = 3; Min: 38.43 / Avg: 38.70 / Max: 38.86)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5, Harness: IP Batch All - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Core i9 10980XE:    32.09 (SE +/- 0.08, N = 3; Min: 31.95 / Avg: 32.09 / Max: 32.21; MIN: 30.59)
  Threadripper 3960X: 28.25 (SE +/- 0.05, N = 3; Min: 28.15 / Avg: 28.25 / Max: 28.33; MIN: 27.9)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

AOM AV1 2.0, Encoder Mode: Speed 0 Two-Pass (Frames Per Second, More Is Better)
  Core i9 10980XE:    0.30 (SE +/- 0.00, N = 3; Min: 0.30 / Avg: 0.30 / Max: 0.30)
  Threadripper 3960X: 0.34 (SE +/- 0.00, N = 3; Min: 0.34 / Avg: 0.34 / Max: 0.34)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

lzbench

lzbench 1.8, Test: Zstd 8 - Process: Decompression (MB/s, More Is Better)
  Core i9 10980XE:    1465
  Threadripper 3960X: 1660 (SE +/- 1.33, N = 3; Min: 1657 / Avg: 1659.67 / Max: 1661)
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41, Time To Compile (Seconds, Fewer Is Better)
  Core i9 10980XE:    23.27 (SE +/- 0.01, N = 3; Min: 23.26 / Avg: 23.27 / Max: 23.29)
  Threadripper 3960X: 20.55 (SE +/- 0.05, N = 3; Min: 20.48 / Avg: 20.55 / Max: 20.63)

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools for simulating flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.

Open Porous Media, OPM Benchmark: Flow MPI Norne - Threads: 4 (Seconds, Fewer Is Better)
  Core i9 10980XE:    171.10 (SE +/- 0.12, N = 3; Min: 170.94 / Avg: 171.09 / Max: 171.32)
  Threadripper 3960X: 152.86 (SE +/- 0.37, N = 3; Min: 152.35 / Avg: 152.86 / Max: 153.58)
  1. flow 2020.04

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of autonomous-driving workloads. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite, Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, More Is Better)
  Core i9 10980XE:    893.54 (SE +/- 2.69, N = 3; Min: 889.47 / Avg: 893.54 / Max: 898.63)
  Threadripper 3960X: 989.85 (SE +/- 1.98, N = 3; Min: 987.48 / Avg: 989.85 / Max: 993.79)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Darmstadt Automotive Parallel Heterogeneous Suite, Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute, More Is Better)
  Core i9 10980XE:    21187.62 (SE +/- 169.25, N = 14; Min: 19477.42 / Avg: 21187.62 / Max: 21652.36)
  Threadripper 3960X: 23446.27 (SE +/- 263.10, N = 15; Min: 22807.27 / Avg: 23446.27 / Max: 26156.44)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

oneDNN

oneDNN 1.5, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Core i9 10980XE:    171.45 (SE +/- 1.11, N = 3; Min: 169.23 / Avg: 171.45 / Max: 172.66; MIN: 167.84)
  Threadripper 3960X: 189.10 (SE +/- 1.01, N = 3; Min: 187.15 / Avg: 189.10 / Max: 190.54; MIN: 185.19)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

lzbench

lzbench 1.8, Test: Brotli 0 - Process: Decompression (MB/s, More Is Better)
  Core i9 10980XE:    707 (SE +/- 0.58, N = 3; Min: 706 / Avg: 707 / Max: 708)
  Threadripper 3960X: 643
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

oneDNN

oneDNN 1.5, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Core i9 10980XE:    9.85692 (SE +/- 0.04742, N = 3; Min: 9.77 / Avg: 9.86 / Max: 9.93; MIN: 9.71)
  Threadripper 3960X: 9.03180 (SE +/- 0.03566, N = 3; Min: 8.97 / Avg: 9.03 / Max: 9.10; MIN: 8.88)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Darmstadt Automotive Parallel Heterogeneous Suite

Darmstadt Automotive Parallel Heterogeneous Suite, Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute, More Is Better)
  Core i9 10980XE:    1342.83 (SE +/- 2.51, N = 3; Min: 1339.65 / Avg: 1342.83 / Max: 1347.79)
  Threadripper 3960X: 1232.43 (SE +/- 1.06, N = 3; Min: 1230.49 / Avg: 1232.43 / Max: 1234.13)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Hugin

Hugin is an open-source, cross-platform panorama photo stitching package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin, Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better)
  Core i9 10980XE:    46.16 (SE +/- 0.54, N = 3; Min: 45.23 / Avg: 46.16 / Max: 47.10)
  Threadripper 3960X: 42.61 (SE +/- 0.38, N = 3; Min: 41.87 / Avg: 42.60 / Max: 43.15)

lzbench

lzbench 1.8, Test: Brotli 2 - Process: Decompression (MB/s, More Is Better)
  Core i9 10980XE:    817
  Threadripper 3960X: 762 (SE +/- 1.53, N = 3; Min: 759 / Avg: 762 / Max: 764)
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC, Test: 2D Function Plotting, 1000 Times (Seconds, Fewer Is Better)
  Core i9 10980XE:    145.64 (SE +/- 1.16, N = 3; Min: 143.67 / Avg: 145.64 / Max: 147.70)
  Threadripper 3960X: 154.86 (SE +/- 0.15, N = 3; Min: 154.63 / Avg: 154.86 / Max: 155.15)
  1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Open Porous Media

Open Porous Media, OPM Benchmark: Flow MPI Norne - Threads: 8 (Seconds, Fewer Is Better)
  Core i9 10980XE:    204.71 (SE +/- 0.08, N = 3; Min: 204.56 / Avg: 204.71 / Max: 204.86)
  Threadripper 3960X: 192.58 (SE +/- 0.29, N = 3; Min: 192.02 / Avg: 192.58 / Max: 193.00)
  1. flow 2020.04

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.
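TSCP's score is a nodes-per-second figure: the engine counts every position visited during a fixed search and divides by the elapsed time. The metric can be illustrated with a toy Python node counter over a uniform synthetic game tree (this is only the bookkeeping, not TSCP's actual chess search):

```python
import time

def search(depth, branching, stats):
    """Visit a uniform game tree, counting nodes the way an engine
    counts positions for its nodes-per-second figure."""
    stats["nodes"] += 1
    if depth == 0:
        return
    for _ in range(branching):
        search(depth - 1, branching, stats)

stats = {"nodes": 0}
start = time.perf_counter()
search(depth=8, branching=5, stats=stats)
elapsed = time.perf_counter() - start
print(f"{stats['nodes']} nodes in {elapsed:.3f}s = "
      f"{stats['nodes'] / elapsed:,.0f} nodes/sec")
```

For branching factor b and depth d the tree has (b^(d+1) - 1)/(b - 1) nodes, so depth 8 with branching 5 visits 488,281 nodes.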

TSCP 1.81, AI Chess Performance (Nodes Per Second, More Is Better)
  Core i9 10980XE:    1410806 (SE +/- 884.76, N = 5; Min: 1408639 / Avg: 1410806.2 / Max: 1412251)
  Threadripper 3960X: 1329102 (SE +/- 1285.03, N = 5; Min: 1327175 / Avg: 1329101.6 / Max: 1333602)
  1. (CC) gcc options: -O3 -march=native

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and a Linux networking stack stress test. The test runs on the local host but does require root permissions. It works by creating three namespaces: ns0 has a loopback device, while ns1 and ns2 each have a WireGuard device, and those two WireGuard devices send traffic through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time -- a pretty CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better)
  Core i9 10980XE:    243.29 (SE +/- 0.47, N = 3; Min: 242.38 / Avg: 243.29 / Max: 243.95)
  Threadripper 3960X: 229.22 (SE +/- 1.09, N = 3; Min: 227.07 / Avg: 229.22 / Max: 230.66)

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device Inference Score (Score, More Is Better)
  Core i9 10980XE:    1936
  Threadripper 3960X: 2054

Gzip Compression

This test measures the time needed to archive/compress two copies of the Linux 4.13 kernel source tree using Gzip compression. Learn more via the OpenBenchmarking.org test page.
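The archive/compress step can be approximated with Python's tarfile module. This sketch builds a tiny stand-in source tree (the real test uses two copies of the Linux 4.13 kernel tree) and times a .tar.gz archive containing two copies of it:

```python
import pathlib
import tarfile
import tempfile
import time

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "src"
    src.mkdir()
    for i in range(100):  # tiny stand-in for a kernel source tree
        (src / f"file{i}.c").write_text("int value;\n" * 200)

    out = pathlib.Path(tmp) / "out.tar.gz"
    start = time.perf_counter()
    with tarfile.open(out, "w:gz") as tar:
        tar.add(src, arcname="copy1")
        tar.add(src, arcname="copy2")  # the test archives two copies
    print(f"archived {out.stat().st_size} bytes in "
          f"{time.perf_counter() - start:.3f}s")
```

The "w:gz" mode streams the tar output through gzip, which is the single-threaded bottleneck this test measures.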

Gzip Compression, Linux Source Tree Archiving To .tar.gz (Seconds, Fewer Is Better)
  Core i9 10980XE:    32.18 (SE +/- 0.03, N = 3; Min: 32.13 / Avg: 32.18 / Max: 32.24)
  Threadripper 3960X: 33.79 (SE +/- 0.01, N = 3; Min: 33.78 / Avg: 33.79 / Max: 33.81)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3, Encoder Speed: 10 (Seconds, Fewer Is Better)
  Core i9 10980XE:    4.745 (SE +/- 0.007, N = 3; Min: 4.73 / Avg: 4.74 / Max: 4.76)
  Threadripper 3960X: 4.523 (SE +/- 0.008, N = 3; Min: 4.52 / Avg: 4.52 / Max: 4.54)
  1. (CXX) g++ options: -O3 -fPIC

lzbench

lzbench 1.8, Test: XZ 0 - Process: Compression (MB/s, More Is Better)
  Core i9 10980XE:    45
  Threadripper 3960X: 43
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

libavif avifenc

libavif avifenc 0.7.3, Encoder Speed: 8 (Seconds, Fewer Is Better)
  Core i9 10980XE:    4.899 (SE +/- 0.024, N = 3; Min: 4.87 / Avg: 4.90 / Max: 4.95)
  Threadripper 3960X: 4.685 (SE +/- 0.017, N = 3; Min: 4.66 / Avg: 4.69 / Max: 4.72)
  1. (CXX) g++ options: -O3 -fPIC

AOM AV1

AOM AV1 2.0, Encoder Mode: Speed 6 Realtime (Frames Per Second, More Is Better)
  Core i9 10980XE:    18.31 (SE +/- 0.02, N = 3; Min: 18.27 / Avg: 18.31 / Max: 18.33)
  Threadripper 3960X: 19.13 (SE +/- 0.13, N = 3; Min: 18.97 / Avg: 19.13 / Max: 19.38)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

lzbench

lzbench 1.8, Test: Crush 0 - Process: Decompression (MB/s, More Is Better)
  Core i9 10980XE:    532
  Threadripper 3960X: 510
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8, Test: Crush 0 - Process: Compression (MB/s, More Is Better)
  Core i9 10980XE:    116
  Threadripper 3960X: 121
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

AI Benchmark Alpha

AI Benchmark Alpha 0.1.2, Device Training Score (Score, More Is Better)
  Core i9 10980XE:    1547
  Threadripper 3960X: 1493

lzbench

lzbench 1.8, Test: Libdeflate 1 - Process: Decompression (MB/s, More Is Better)
  Core i9 10980XE:    1295 (SE +/- 0.58, N = 3; Min: 1294 / Avg: 1295 / Max: 1296)
  Threadripper 3960X: 1251 (SE +/- 2.40, N = 3; Min: 1248 / Avg: 1251.33 / Max: 1256)
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

G'MIC

G'MIC, Test: Plotting Isosurface Of A 3D Volume, 1000 Times (Seconds, Fewer Is Better)
  Core i9 10980XE:    18.07 (SE +/- 0.01, N = 3; Min: 18.04 / Avg: 18.07 / Max: 18.08)
  Threadripper 3960X: 18.71 (SE +/- 0.13, N = 3; Min: 18.44 / Avg: 18.71 / Max: 18.89)
  1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2, Elapsed Time (Nodes Per Second, More Is Better)
  Core i9 10980XE:    9226476 (SE +/- 22362.22, N = 3; Min: 9181809 / Avg: 9226475.67 / Max: 9250777)
  Threadripper 3960X: 8925339 (SE +/- 14360.71, N = 3; Min: 8903657 / Avg: 8925339.33 / Max: 8952493)
  1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

oneDNN

oneDNN 1.5, Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Core i9 10980XE:    1.71524 (SE +/- 0.00156, N = 3; Min: 1.71 / Avg: 1.72 / Max: 1.72; MIN: 1.68)
  Threadripper 3960X: 1.66015 (SE +/- 0.00518, N = 3; Min: 1.65 / Avg: 1.66 / Max: 1.67; MIN: 1.61)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
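For reference, the same operation can be reproduced with Python's lzma module; preset=9 corresponds to the level-9 compression used by the test, though the synthetic buffer here is only a small stand-in for the Ubuntu server image:

```python
import lzma
import time

# Small stand-in for ubuntu-16.04.3-server-i386.img.
data = (b"\x00" * 4096 + bytes(range(256))) * 512

start = time.perf_counter()
packed = lzma.compress(data, preset=9)
print(f"level 9: {len(data)} -> {len(packed)} bytes "
      f"in {time.perf_counter() - start:.3f}s")
assert lzma.decompress(packed) == data  # sanity: lossless round-trip
```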

XZ Compression 5.2.4, Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, Fewer Is Better)
  Core i9 10980XE:    19.29 (SE +/- 0.01, N = 3; Min: 19.28 / Avg: 19.29 / Max: 19.32)
  Threadripper 3960X: 19.92 (SE +/- 0.15, N = 3; Min: 19.76 / Avg: 19.92 / Max: 20.22)
  1. (CC) gcc options: -pthread -fvisibility=hidden -O2

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.25, Backend: BLAS (Nodes Per Second, More Is Better)
  Core i9 10980XE:    1068 (SE +/- 4.81, N = 3; Min: 1059 / Avg: 1068.33 / Max: 1075)
  Threadripper 3960X: 1099 (SE +/- 16.02, N = 4; Min: 1074 / Avg: 1099 / Max: 1146)
  1. (CXX) g++ options: -pthread

oneDNN

oneDNN 1.5, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Core i9 10980XE:    9.39221 (SE +/- 0.05740, N = 3; Min: 9.28 / Avg: 9.39 / Max: 9.47; MIN: 9.23)
  Threadripper 3960X: 9.13818 (SE +/- 0.05351, N = 3; Min: 9.08 / Avg: 9.14 / Max: 9.24; MIN: 8.95)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

Tesseract OCR 4.1.1, Time To OCR 7 Images (Seconds, Fewer Is Better)
  Core i9 10980XE:    23.47 (SE +/- 0.01, N = 3; Min: 23.46 / Avg: 23.47 / Max: 23.49)
  Threadripper 3960X: 24.11 (SE +/- 0.07, N = 3; Min: 23.98 / Avg: 24.11 / Max: 24.24)

oneDNN

oneDNN 1.5, Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Core i9 10980XE:    2.60917 (SE +/- 0.00436, N = 3; Min: 2.60 / Avg: 2.61 / Max: 2.62; MIN: 2.58)
  Threadripper 3960X: 2.55717 (SE +/- 0.01528, N = 3; Min: 2.54 / Avg: 2.56 / Max: 2.59; MIN: 2.5)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AI Benchmark Alpha

AI Benchmark Alpha 0.1.2, Device AI Score (Score, More Is Better)
  Core i9 10980XE:    3483
  Threadripper 3960X: 3547

Open Porous Media

Open Porous Media, OPM Benchmark: Flow MPI Norne - Threads: 16 (Seconds, Fewer Is Better)
  Core i9 10980XE:    322.18 (SE +/- 0.11, N = 3; Min: 321.98 / Avg: 322.18 / Max: 322.35)
  Threadripper 3960X: 316.60 (SE +/- 0.13, N = 3; Min: 316.37 / Avg: 316.60 / Max: 316.82)
  1. flow 2020.04

Open Porous Media, OPM Benchmark: Flow MPI Norne - Threads: 18 (Seconds, Fewer Is Better)
  Core i9 10980XE:    359.10 (SE +/- 0.11, N = 3; Min: 358.99 / Avg: 359.10 / Max: 359.33)
  Threadripper 3960X: 353.26 (SE +/- 0.19, N = 3; Min: 352.94 / Avg: 353.26 / Max: 353.61)
  1. flow 2020.04

lzbench

lzbench 1.8, Test: XZ 0 - Process: Decompression (MB/s, More Is Better)
  Core i9 10980XE:    129
  Threadripper 3960X: 131 (SE +/- 0.33, N = 3; Min: 130 / Avg: 130.67 / Max: 131)
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8, Test: Zstd 1 - Process: Decompression (MB/s, More Is Better)
  Core i9 10980XE:    1489
  Threadripper 3960X: 1505
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0, Mosaic of M17, K band, 1.5 deg x 1.5 deg (Seconds, Fewer Is Better)
  Core i9 10980XE:    71.57 (SE +/- 0.03, N = 3; Min: 71.51 / Avg: 71.57 / Max: 71.61)
  Threadripper 3960X: 72.25 (SE +/- 0.37, N = 3; Min: 71.52 / Avg: 72.25 / Max: 72.77)
  1. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

System XZ Decompression

This test measures the time to decompress a Linux kernel tarball using XZ. Learn more via the OpenBenchmarking.org test page.

System XZ Decompression (Seconds, Fewer Is Better)
  Core i9 10980XE:    3.350 (SE +/- 0.004, N = 3; Min: 3.34 / Avg: 3.35 / Max: 3.36)
  Threadripper 3960X: 3.379 (SE +/- 0.013, N = 3; Min: 3.36 / Avg: 3.38 / Max: 3.41)

lzbench

lzbench 1.8, Test: Zstd 1 - Process: Compression (MB/s, More Is Better)
  Core i9 10980XE:    545
  Threadripper 3960X: 549 (SE +/- 0.58, N = 3; Min: 548 / Avg: 549 / Max: 550)
  1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Brotli 2 - Process: CompressionCore i9 10980XEThreadripper 3960X50100150200250SE +/- 0.33, N = 32172181. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Brotli 2 - Process: CompressionCore i9 10980XEThreadripper 3960X4080120160200Min: 218 / Avg: 218.33 / Max: 2191. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Brotli 0 - Process: CompressionCore i9 10980XEThreadripper 3960X120240360480600SE +/- 0.33, N = 3SE +/- 0.88, N = 35395411. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
OpenBenchmarking.orgMB/s, More Is Betterlzbench 1.8Test: Brotli 0 - Process: CompressionCore i9 10980XEThreadripper 3960X100200300400500Min: 538 / Avg: 538.67 / Max: 539Min: 539 / Avg: 540.67 / Max: 5421. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Parallel BZIP2 Compression

This test measures the time needed to compress a file (a .tar package of the Linux kernel source code) using BZIP2 compression. Learn more via the OpenBenchmarking.org test page.
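The parallel trick behind tools like pbzip2 can be sketched as follows: split the input into 900 KB blocks (bzip2's maximum block size), compress each block independently in a worker, and concatenate the resulting streams, since concatenated bzip2 streams remain valid input for a normal decompressor. This is an illustrative sketch, not the benchmarked tool; a repetitive buffer stands in for the 256MB kernel .tar, and threads are used because CPython's bz2 codec releases the GIL while compressing.

```python
import bz2
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 900_000  # matches bzip2's largest (-9) block size

def parallel_bzip2(data: bytes) -> bytes:
    # Compress independent blocks concurrently, then join the streams.
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    with ThreadPoolExecutor() as pool:
        return b"".join(pool.map(lambda b: bz2.compress(b, 9), blocks))

data = b'static const char banner[] = "Linux";\n' * 200_000
packed = parallel_bzip2(data)
assert bz2.decompress(packed) == data  # multi-stream decode round-trips
print(f"{len(data)} -> {len(packed)} bytes")
```

The per-block split costs a little compression ratio (each block starts with an empty model) in exchange for near-linear scaling across cores.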

Parallel BZIP2 Compression 1.1.12 - 256MB File Compression (Seconds, fewer is better)
  Core i9 10980XE: 2.234 (SE +/- 0.006, N = 3)
  Compiler: (CXX) g++ options: -O2 -pthread -lbz2 -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library), and before that MKL-DNN, prior to being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Core i9 10980XE: 1.70524 (SE +/- 0.00114, N = 3; MIN: 1.61)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Core i9 10980XE: 10.81 (SE +/- 0.00, N = 3; MIN: 10.68)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Core i9 10980XE: 9.20655 (SE +/- 0.00358, N = 3; MIN: 9.02)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Core i9 10980XE: 7.85544 (SE +/- 0.00830, N = 3; MIN: 7.65)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: IP Batch All - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Core i9 10980XE: 63.48 (SE +/- 0.03, N = 3; MIN: 62.83)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: IP Batch 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Core i9 10980XE: 5.53233 (SE +/- 0.00112, N = 3; MIN: 5.46)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Core i9 10980XE: 56.84 (SE +/- 1.15, N = 15; Min: 52.7 / Max: 66.25; MIN: 50.94)
  Threadripper 3960X: 52.15 (SE +/- 0.11, N = 3; Min: 51.98 / Max: 52.37; MIN: 51.37)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

86 Results Shown

oneDNN:
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Deconvolution Batch deconv_3d - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  IP Batch 1D - u8s8f32 - CPU
m-queens
oneDNN
BRL-CAD
N-Queens
oneDNN:
  IP Batch All - u8s8f32 - CPU
  IP Batch 1D - f32 - CPU
7-Zip Compression
Stockfish
Timed Linux Kernel Compilation
NeatBench
Rodinia
asmFish
G'MIC
Open Porous Media
Rodinia:
  OpenMP Leukocyte
  OpenMP Streamcluster
LAME MP3 Encoding
Open Porous Media
libavif avifenc:
  2
  0
Rodinia
OCRMyPDF
Rodinia
AOM AV1
FLAC Audio Encoding
lzbench
AOM AV1
lzbench
AOM AV1
oneDNN
AOM AV1
lzbench
Timed Apache Compilation
Open Porous Media
Darmstadt Automotive Parallel Heterogeneous Suite:
  OpenMP - NDT Mapping
  OpenMP - Points2Image
oneDNN
lzbench
oneDNN
Darmstadt Automotive Parallel Heterogeneous Suite
Hugin
lzbench
G'MIC
Open Porous Media
TSCP
WireGuard + Linux Networking Stack Stress Test
AI Benchmark Alpha
Gzip Compression
libavif avifenc
lzbench
libavif avifenc
AOM AV1
lzbench:
  Crush 0 - Decompression
  Crush 0 - Compression
AI Benchmark Alpha
lzbench
G'MIC
Crafty
oneDNN
XZ Compression
LeelaChessZero
oneDNN
Tesseract OCR
oneDNN
AI Benchmark Alpha
Open Porous Media:
  Flow MPI Norne - 16
  Flow MPI Norne - 18
lzbench:
  XZ 0 - Decompression
  Zstd 1 - Decompression
Montage Astronomical Image Mosaic Engine
System XZ Decompression
lzbench:
  Zstd 1 - Compression
  Brotli 2 - Compression
  Brotli 0 - Compression
Parallel BZIP2 Compression
oneDNN:
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU
  Deconvolution Batch deconv_3d - bf16bf16bf16 - CPU
  Deconvolution Batch deconv_1d - bf16bf16bf16 - CPU
  Convolution Batch Shapes Auto - bf16bf16bf16 - CPU
  IP Batch All - bf16bf16bf16 - CPU
  IP Batch 1D - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - f32 - CPU