AMD TR 3960X July

AMD Ryzen Threadripper 3960X 24-Core testing with an MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS) motherboard and a Sapphire AMD Radeon RX 5500/5500M / Pro 5500M 4GB graphics card on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2007243-PTS-AMDTR39627

Result Identifier: Threadripper 3960X
Date Run: July 23 2020
Test Duration: 5 Hours, 22 Minutes


AMD TR 3960X July Benchmarks - System Information (OpenBenchmarking.org / Phoronix Test Suite)

Processor: AMD Ryzen Threadripper 3960X 24-Core @ 3.80GHz (24 Cores / 48 Threads)
Motherboard: MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 1000GB Sabrent Rocket 4.0 1TB
Graphics: Sapphire AMD Radeon RX 5500/5500M / Pro 5500M 4GB (1900/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: ASUS MG28U
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 20.04
Kernel: 5.4.0-39-generic (x86_64)
Desktop: GNOME Shell 3.36.1
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.6 Mesa 20.0.4 (LLVM 9.0.1)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x8301025
- OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-3ubuntu1)
- Python 3.8.2
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

AMD TR 3960X July - overview chart of all 74 benchmark results for the Threadripper 3960X (OpenBenchmarking.org); the individual results are detailed below.

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and the Linux networking stack under stress. The test runs on the local host but does require root permissions. It works by creating three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have a WireGuard device. Those two WireGuard devices send traffic through the loopback device of ns0, so the test ends up exercising encryption and decryption at the same time, a rather CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.
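
As a rough illustration of the namespace topology described above, a minimal Python sketch is shown below (assuming root privileges, iproute2, and wireguard-tools; the interface names are illustrative and key/peer configuration is omitted, so this is not the actual test harness):

    # Sketch of the three-namespace layout; names are illustrative only.
    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    for ns in ("ns0", "ns1", "ns2"):
        sh(f"ip netns add {ns}")
    sh("ip -n ns0 link set lo up")  # ns0 only carries the loopback path

    # Create each WireGuard device inside ns0 so its UDP socket stays bound
    # there, then move it into ns1/ns2. Traffic between the two peers is then
    # encrypted in one namespace, carried over ns0's loopback, and decrypted
    # in the other.
    for ns in ("ns1", "ns2"):
        sh(f"ip netns exec ns0 ip link add wg-{ns} type wireguard")
        sh(f"ip netns exec ns0 ip link set wg-{ns} netns {ns}")
        sh(f"ip -n {ns} link set wg-{ns} up")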

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better): Threadripper 3960X: 229.71 (SE +/- 0.77, N = 3)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 (Seconds, Fewer Is Better), Threadripper 3960X:
  Test: OpenMP LavaMD: 82.88 (SE +/- 0.27, N = 3)
  Test: OpenMP HotSpot3D: 83.15 (SE +/- 0.28, N = 3)
  Test: OpenMP Leukocyte: 48.36 (SE +/- 0.24, N = 3)
  Test: OpenMP CFD Solver: 9.157 (SE +/- 0.044, N = 3)
  Test: OpenMP Streamcluster: 19.26 (SE +/- 0.04, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build, Gradle Build: Reactor (Seconds, Fewer Is Better): Threadripper 3960X: 263.83 (SE +/- 5.52, N = 9)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 (ms, Fewer Is Better), Threadripper 3960X:
  Harness: IP Batch 1D - Data Type: f32 - Engine: CPU: 1.36891 (SE +/- 0.00673, N = 3, MIN: 1.31)
  Harness: IP Batch All - Data Type: f32 - Engine: CPU: 29.77 (SE +/- 1.27, N = 15, MIN: 27.26)
  Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU: 1.15942 (SE +/- 0.00142, N = 3, MIN: 1.13)
  Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU: 12.12 (SE +/- 0.03, N = 3, MIN: 11.66)
  Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU: 9.12966 (SE +/- 0.01269, N = 3, MIN: 8.94)
  Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU: 1.66113 (SE +/- 0.00603, N = 3, MIN: 1.6)
  Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU: 2.56798 (SE +/- 0.00628, N = 3, MIN: 2.51)
  Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU: 9.29947 (SE +/- 0.02795, N = 3, MIN: 9.14)
  Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU: 4.70915 (SE +/- 0.01141, N = 3, MIN: 4.6)
  Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU: 1.95367 (SE +/- 0.00033, N = 3, MIN: 1.87)
  Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU: 194.65 (SE +/- 3.26, N = 3, MIN: 187.94)
  Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU: 52.60 (SE +/- 0.19, N = 3, MIN: 51.51)
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU: 0.443972 (SE +/- 0.002889, N = 3, MIN: 0.43)
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU: 0.992073 (SE +/- 0.001769, N = 3, MIN: 0.97)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
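
The test profile essentially times CLI decode runs of sample AV1 clips; a minimal sketch of that idea is shown below (assuming the dav1d binary is on the PATH; the input file name is a placeholder, not the exact sample used by the test profile):

    # Time a single dav1d decode run; the clip name is a placeholder and the
    # decoded frames are written to a y4m file here for simplicity (the real
    # benchmark setup may discard the output instead).
    import subprocess, time

    clip = "sample_av1.ivf"  # placeholder AV1 bitstream
    start = time.perf_counter()
    subprocess.run(["dav1d", "-i", clip, "-o", "decoded.y4m"], check=True)
    print(f"Decoded {clip} in {time.perf_counter() - start:.2f} seconds")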

dav1d 0.7.0 (FPS, More Is Better), Threadripper 3960X:
  Video Input: Chimera 1080p: 720.90 (SE +/- 1.87, N = 3, MIN: 550.53 / MAX: 929.15)
  Video Input: Summer Nature 4K: 325.12 (SE +/- 0.34, N = 3, MIN: 197.07 / MAX: 346.22)
  Video Input: Summer Nature 1080p: 749.51 (SE +/- 2.27, N = 3, MIN: 470.14 / MAX: 827.52)
  Video Input: Chimera 1080p 10-bit: 130.31 (SE +/- 0.32, N = 3, MIN: 88.49 / MAX: 247.03)
1. (CC) gcc options: -pthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 (Frames Per Second, More Is Better), Threadripper 3960X:
  Encoder Mode: Speed 0 Two-Pass: 0.34 (SE +/- 0.00, N = 3)
  Encoder Mode: Speed 4 Two-Pass: 2.67 (SE +/- 0.00, N = 3)
  Encoder Mode: Speed 6 Realtime: 19.31 (SE +/- 0.10, N = 3)
  Encoder Mode: Speed 6 Two-Pass: 4.13 (SE +/- 0.01, N = 3)
  Encoder Mode: Speed 8 Realtime: 38.54 (SE +/- 0.12, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
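
Each result below corresponds to a different avifenc speed setting; a minimal sketch of timing one such encode is shown below (file names are placeholders; -s selects the encoder speed, with 0 being the slowest):

    # Time a single JPEG-to-AVIF encode at one speed setting.
    import subprocess, time

    start = time.perf_counter()
    subprocess.run(["avifenc", "-s", "8", "input.jpg", "output.avif"], check=True)
    print(f"Speed 8 encode took {time.perf_counter() - start:.2f} seconds")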

libavif avifenc 0.7.3 (Seconds, Fewer Is Better), Threadripper 3960X:
  Encoder Speed: 0: 54.16 (SE +/- 0.11, N = 3)
  Encoder Speed: 2: 32.48 (SE +/- 0.19, N = 3)
  Encoder Speed: 8: 4.611 (SE +/- 0.012, N = 3)
  Encoder Speed: 10: 4.473 (SE +/- 0.014, N = 3)
1. (CXX) g++ options: -O3 -fPIC

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.12, Time To Compile (Seconds, Fewer Is Better): Threadripper 3960X: 58.44 (SE +/- 0.09, N = 3)

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools for simulating the flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions are available at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.
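
The "Threads" values in the results below presumably map to the number of MPI ranks used to run the Flow simulator on the Norne case; a rough sketch of such an invocation is shown below (the deck file name is a placeholder and the rank mapping is an assumption, not taken from the test profile):

    # Launch the OPM Flow simulator over MPI with a chosen rank count.
    import subprocess

    ranks = 8
    deck = "NORNE.DATA"  # placeholder reservoir deck
    subprocess.run(["mpirun", "-np", str(ranks), "flow", deck], check=True)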

Open Porous Media (Seconds, Fewer Is Better), Threadripper 3960X:
  OPM Benchmark: Flow MPI Norne - Threads: 1: 321.29 (SE +/- 1.11, N = 3)
  OPM Benchmark: Flow MPI Norne - Threads: 2: 204.56 (SE +/- 0.62, N = 3)
  OPM Benchmark: Flow MPI Norne - Threads: 4: 153.97 (SE +/- 0.52, N = 3)
  OPM Benchmark: Flow MPI Norne - Threads: 8: 192.27 (SE +/- 0.31, N = 3)
  OPM Benchmark: Flow MPI Norne - Threads: 16: 316.45 (SE +/- 0.11, N = 3)
  OPM Benchmark: Flow MPI Norne - Threads: 24: 454.76 (SE +/- 0.17, N = 3)
1. flow 2020.04

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0, Mosaic of M17, K band, 1.5 deg x 1.5 deg (Seconds, Fewer Is Better): Threadripper 3960X: 72.19 (SE +/- 0.18, N = 3). 1. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, offering OpenCL / CUDA / OpenMP test cases for automotive workloads used to evaluate programming models in the context of vehicle autonomous-driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite (Test Cases Per Minute, More Is Better), Threadripper 3960X:
  Backend: OpenMP - Kernel: NDT Mapping: 985.87 (SE +/- 1.18, N = 3)
  Backend: OpenMP - Kernel: Points2Image: 23920.32 (SE +/- 265.82, N = 15)
  Backend: OpenMP - Kernel: Euclidean Cluster: 1233.28 (SE +/- 3.55, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 (Seconds, Fewer Is Better), Threadripper 3960X:
  Test: resize: 6.921 (SE +/- 0.066, N = 9)
  Test: rotate: 10.93 (SE +/- 0.05, N = 3)
  Test: auto-levels: 13.05 (SE +/- 0.03, N = 3)
  Test: unsharp-mask: 16.44 (SE +/- 0.02, N = 3)

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC (Seconds, Fewer Is Better), Threadripper 3960X:
  Test: 2D Function Plotting, 1000 Times: 156.26 (SE +/- 0.80, N = 3)
  Test: Plotting Isosurface Of A 3D Volume, 1000 Times: 19.25 (SE +/- 0.31, N = 3)
  Test: 3D Elevated Function In Random Colors, 100 Times: 81.28 (SE +/- 0.04, N = 3)
1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin, Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better): Threadripper 3960X: 42.53 (SE +/- 0.10, N = 3)

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable, searchable, and copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.
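
Since OCRMyPDF is written in Python, the benchmarked operation can be reproduced from its command line or its Python API; a minimal sketch of the latter is shown below (file names are placeholders):

    # Add a searchable text layer to a scanned PDF via OCRMyPDF's Python API;
    # roughly equivalent to running `ocrmypdf scanned.pdf searchable.pdf`.
    import ocrmypdf

    ocrmypdf.ocr("scanned.pdf", "searchable.pdf")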

OCRMyPDF 9.6.0+dfsg, Processing 60 Page PDF Document (Seconds, Fewer Is Better): Threadripper 3960X: 16.23 (SE +/- 0.11, N = 3)

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0 (Seconds, Fewer Is Better): Threadripper 3960X: 6.723 (SE +/- 0.069, N = 5)

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee, Total Benchmark Time (Seconds, Fewer Is Better): Threadripper 3960X: 45.54 (SE +/- 0.05, N = 3). 1. RawTherapee, version 5.8, command line.

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
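
The individual benchmarks reported below can also be run directly with the pyperformance command-line tool; a minimal sketch is shown below (the benchmark selection and output file name are illustrative):

    # Run a subset of the PyPerformance benchmarks and save the results.
    import subprocess

    subprocess.run(
        ["pyperformance", "run", "-b", "go,chaos,nbody", "-o", "results.json"],
        check=True,
    )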

PyPerformance 1.0.0 (Milliseconds, Fewer Is Better), Threadripper 3960X:
  Benchmark: go: 226
  Benchmark: 2to3: 290
  Benchmark: chaos: 102
  Benchmark: float: 98.2 (SE +/- 0.06, N = 3)
  Benchmark: nbody: 101
  Benchmark: pathlib: 16.0 (SE +/- 0.03, N = 3)
  Benchmark: raytrace: 430 (SE +/- 1.00, N = 3)
  Benchmark: json_loads: 22.2 (SE +/- 0.00, N = 3)
  Benchmark: crypto_pyaes: 100
  Benchmark: regex_compile: 157
  Benchmark: python_startup: 12.2 (SE +/- 0.00, N = 3)
  Benchmark: django_template: 44.5 (SE +/- 0.07, N = 3)
  Benchmark: pickle_pure_python: 439 (SE +/- 0.88, N = 3)

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video software on the CPU, with optional GPU (OpenCL / CUDA) support. Learn more via the OpenBenchmarking.org test page.

NeatBench 5, Acceleration: CPU (FPS, More Is Better): Threadripper 3960X: 35.1 (SE +/- 0.26, N = 3)

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
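
A minimal sketch of invoking the library is shown below (assuming the ai-benchmark package and TensorFlow are installed; the suite reports device inference, training, and combined AI scores like those listed here):

    # Run the AI Benchmark Alpha suite on the available device(s).
    from ai_benchmark import AIBenchmark

    benchmark = AIBenchmark()
    results = benchmark.run()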

AI Benchmark Alpha 0.1.2 (Score, More Is Better), Threadripper 3960X:
  Device Inference Score: 2072
  Device Training Score: 1513
  Device AI Score: 3585

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.
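
A minimal sketch of one such OCR pass via the tesseract command line is shown below (file names are placeholders):

    # OCR a single image; tesseract writes the recognized text to page.txt.
    import subprocess

    subprocess.run(["tesseract", "page.png", "page"], check=True)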

Tesseract OCR 4.1.1, Time To OCR 7 Images (Seconds, Fewer Is Better): Threadripper 3960X: 24.15 (SE +/- 0.13, N = 3)

74 Results Shown

WireGuard + Linux Networking Stack Stress Test
Rodinia:
  OpenMP LavaMD
  OpenMP HotSpot3D
  OpenMP Leukocyte
  OpenMP CFD Solver
  OpenMP Streamcluster
Java Gradle Build
oneDNN:
  IP Batch 1D - f32 - CPU
  IP Batch All - f32 - CPU
  IP Batch 1D - u8s8f32 - CPU
  IP Batch All - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch deconv_1d - f32 - CPU
  Deconvolution Batch deconv_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch deconv_1d - u8s8f32 - CPU
  Deconvolution Batch deconv_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
dav1d:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
AOM AV1:
  Speed 0 Two-Pass
  Speed 4 Two-Pass
  Speed 6 Realtime
  Speed 6 Two-Pass
  Speed 8 Realtime
libavif avifenc:
  0
  2
  8
  10
Build2
Open Porous Media:
  Flow MPI Norne - 1
  Flow MPI Norne - 2
  Flow MPI Norne - 4
  Flow MPI Norne - 8
  Flow MPI Norne - 16
  Flow MPI Norne - 24
Montage Astronomical Image Mosaic Engine
Darmstadt Automotive Parallel Heterogeneous Suite:
  OpenMP - NDT Mapping
  OpenMP - Points2Image
  OpenMP - Euclidean Cluster
GIMP:
  resize
  rotate
  auto-levels
  unsharp-mask
G'MIC:
  2D Function Plotting, 1000 Times
  Plotting Isosurface Of A 3D Volume, 1000 Times
  3D Elevated Function In Random Colors, 100 Times
Hugin
OCRMyPDF
GNU Octave Benchmark
RawTherapee
PyPerformance:
  go
  2to3
  chaos
  float
  nbody
  pathlib
  raytrace
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
  django_template
  pickle_pure_python
NeatBench
AI Benchmark Alpha:
  Device Inference Score
  Device Training Score
  Device AI Score
Tesseract OCR