Core i9 7980XE July

Intel Core i9-7980XE testing with an ASUS PRIME X299-A (2002 BIOS) and Gigabyte AMD Radeon 540/540X/550/550X / RX 540X/550/550X 2GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2007118-NE-COREI979879
Result Identifier: Core i9 7980XE - Run Date: July 10 2020 - Test Run Duration: 5 Hours, 56 Minutes


Core i9 7980XE July - OpenBenchmarking.org - Phoronix Test Suite

Processor: Intel Core i9-7980XE @ 4.20GHz (18 Cores / 36 Threads)
Motherboard: ASUS PRIME X299-A (2002 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 16GB
Disk: Samsung SSD 970 EVO 500GB
Graphics: Gigabyte AMD Radeon 540/540X/550/550X / RX 540X/550/550X 2GB (1206/1750MHz)
Audio: Realtek ALC1220
Monitor: LG Ultra HD
Network: Intel I219-V
OS: Ubuntu 20.04
Kernel: 5.4.0-24-generic (x86_64)
Desktop: GNOME Shell 3.36.1
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.6 Mesa 20.0.4 (LLVM 9.0.1)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 3840x2160

Core i9 7980XE July Benchmarks - System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave - CPU Microcode: 0x2000064
- OpenJDK Runtime Environment (build 11.0.7-ea+9-post-Ubuntu-1ubuntu1)
- Python 3.8.2
- Security mitigations: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

Core i9 7980XE July - Results Overview (110 tests; detailed per-test results follow below and are indexed at the end of this page).

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and the Linux networking stack under stress. The test runs on the local host but requires root permissions. It works by creating three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices whose traffic is sent through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time -- a fairly CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test - Seconds, Fewer Is Better. Core i9 7980XE: 263.90 (SE +/- 0.14, N = 3)
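For readers unfamiliar with the namespace arrangement described above, here is a minimal, hypothetical Python sketch of a similar three-namespace WireGuard topology (requires root plus the iproute2 and wireguard-tools packages); it is not the exact script the test profile executes.

```python
# Hypothetical sketch of the three-namespace layout described above, NOT the
# test profile's own script. Run as root with iproute2 and wireguard-tools.
import subprocess

def sh(cmd):
    # Run a shell command and raise if it fails.
    subprocess.run(cmd, shell=True, check=True)

# ns0 carries only a loopback device; ns1 and ns2 each get a WireGuard device.
for ns in ("ns0", "ns1", "ns2"):
    sh(f"ip netns add {ns}")
sh("ip -n ns0 link set lo up")

for i in (1, 2):
    # Create the interface inside ns0 so its UDP socket stays bound there,
    # then move it into ns1/ns2 -- the encrypted packets keep flowing over
    # ns0's loopback, which is what stresses encryption and decryption at once.
    sh(f"ip -n ns0 link add wg{i} type wireguard")
    sh(f"ip -n ns0 link set wg{i} netns ns{i}")
    sh(f"wg genkey > /tmp/wg{i}.key")
    sh(f"ip netns exec ns{i} wg set wg{i} private-key /tmp/wg{i}.key listen-port {51820 + i}")
    sh(f"ip -n ns{i} addr add 10.0.0.{i}/24 dev wg{i}")
    sh(f"ip -n ns{i} link set wg{i} up")

# The peers would then be cross-configured with
# 'wg set wgN peer <pubkey> endpoint 127.0.0.1:<port> allowed-ips 10.0.0.0/24'
# before pushing traffic between ns1 and ns2.
```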

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0 Beta 5 - Compressor: blosclz - MB/s, More Is Better. Core i9 7980XE: 7616.5 (SE +/- 12.68, N = 3). (CXX) g++ options: -rdynamic
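The result above comes from the C library's internal bench tool; purely for illustration, a blosclz round trip through the python-blosc bindings (an assumption, installed separately) looks like this.

```python
# Illustrative blosclz round trip via python-blosc (pip install blosc numpy);
# the figure above comes from the C library's own bench tool, not this code.
import numpy as np
import blosc

data = np.arange(1_000_000, dtype=np.int64).tobytes()
packed = blosc.compress(data, typesize=8, cname="blosclz", clevel=5)
assert blosc.decompress(packed) == data
print(f"compression ratio: {len(data) / len(packed):.1f}x")
```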

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.25 - Backend: BLAS - Nodes Per Second, More Is Better. Core i9 7980XE: 1270 (SE +/- 15.45, N = 3). (CXX) g++ options: -pthread

LeelaChessZero 0.25 - Backend: Eigen - Nodes Per Second, More Is Better. Core i9 7980XE: 1136 (SE +/- 11.02, N = 3)

LeelaChessZero 0.25 - Backend: Random - Nodes Per Second, More Is Better. Core i9 7980XE: 143457 (SE +/- 89.65, N = 3)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD - Seconds, Fewer Is Better. Core i9 7980XE: 137.90 (SE +/- 0.68, N = 3). (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP HotSpot3D - Seconds, Fewer Is Better. Core i9 7980XE: 103.09 (SE +/- 1.65, N = 3)

Rodinia 3.1 - Test: OpenMP Leukocyte - Seconds, Fewer Is Better. Core i9 7980XE: 65.64 (SE +/- 0.42, N = 3)

Rodinia 3.1 - Test: OpenMP CFD Solver - Seconds, Fewer Is Better. Core i9 7980XE: 12.01 (SE +/- 0.03, N = 3)

Rodinia 3.1 - Test: OpenMP Streamcluster - Seconds, Fewer Is Better. Core i9 7980XE: 13.93 (SE +/- 0.21, N = 4)

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build - Gradle Build: Reactor - Seconds, Fewer Is Better. Core i9 7980XE: 290.09 (SE +/- 3.66, N = 3)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 - MB/s, More Is Better. Core i9 7980XE: 5494.2 (SE +/- 6.24, N = 3). (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.4.5 - Compression Level: 19 - MB/s, More Is Better. Core i9 7980XE: 67.4 (SE +/- 0.18, N = 3)
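The two results above capture the usual speed-versus-ratio trade-off between levels 3 and 19. A rough single-threaded illustration using the zstandard Python bindings (an assumption; the benchmark itself drives the zstd C code against an Ubuntu ISO) is shown below.

```python
# Rough single-threaded look at the level 3 vs. level 19 trade-off measured
# above, using the 'zstandard' bindings (pip install zstandard). Any large
# local file works for illustration.
import time
import zstandard

data = open("/usr/bin/python3", "rb").read()
for level in (3, 19):
    cctx = zstandard.ZstdCompressor(level=level)
    start = time.perf_counter()
    packed = cctx.compress(data)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(data) / len(packed):.2f}x in {elapsed * 1000:.1f} ms")
```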

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 - Harness: IP Batch 1D - Data Type: f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 2.16606 (SE +/- 0.00252, N = 3; MIN: 2.09). (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 1.5 - Harness: IP Batch All - Data Type: f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 31.92 (SE +/- 0.08, N = 3; MIN: 30.65)

oneDNN 1.5 - Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 0.854161 (SE +/- 0.002363, N = 3; MIN: 0.83)

oneDNN 1.5 - Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 11.31 (SE +/- 0.00, N = 3; MIN: 11.12)

oneDNN 1.5 - Harness: IP Batch 1D - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 5.47487 (SE +/- 0.00365, N = 3; MIN: 5.38)

oneDNN 1.5 - Harness: IP Batch All - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 63.78 (SE +/- 0.01, N = 3; MIN: 63.2)

oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 8.83180 (SE +/- 0.00869, N = 3; MIN: 8.79)

oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 1.68834 (SE +/- 0.00633, N = 3; MIN: 1.66)

oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 2.58275 (SE +/- 0.00071, N = 3; MIN: 2.56)

oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 8.38915 (SE +/- 0.00810, N = 3; MIN: 8.32)

oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 1.03296 (SE +/- 0.00031, N = 3; MIN: 1.02)

oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 1.58575 (SE +/- 0.00697, N = 3; MIN: 1.55)

oneDNN 1.5 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 155.78 (SE +/- 0.05, N = 3; MIN: 154.54)

oneDNN 1.5 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 48.43 (SE +/- 0.18, N = 3; MIN: 46.68)

oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 8.08201 (SE +/- 0.02865, N = 3; MIN: 7.82)

oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 9.47018 (SE +/- 0.00514, N = 3; MIN: 9.31)

oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 10.92 (SE +/- 0.00, N = 3; MIN: 10.89)

oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 1.27760 (SE +/- 0.00065, N = 3; MIN: 1.24)

oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 0.463854 (SE +/- 0.000720, N = 3; MIN: 0.45)

oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better. Core i9 7980XE: 1.74922 (SE +/- 0.00295, N = 3; MIN: 1.65)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p - FPS, More Is Better. Core i9 7980XE: 352.44 (SE +/- 0.72, N = 3; MIN: 263.96 / MAX: 446.81). (CC) gcc options: -pthread

dav1d 0.7.0 - Video Input: Summer Nature 4K - FPS, More Is Better. Core i9 7980XE: 192.98 (SE +/- 1.47, N = 3; MIN: 129.21 / MAX: 213.92)

dav1d 0.7.0 - Video Input: Summer Nature 1080p - FPS, More Is Better. Core i9 7980XE: 333.88 (SE +/- 1.00, N = 3; MIN: 224.03 / MAX: 362.11)

dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit - FPS, More Is Better. Core i9 7980XE: 65.11 (SE +/- 0.15, N = 3; MIN: 46.73 / MAX: 133.93)

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 0 Two-Pass - Frames Per Second, More Is Better. Core i9 7980XE: 0.28 (SE +/- 0.00, N = 3). (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 4 Two-Pass - Frames Per Second, More Is Better. Core i9 7980XE: 2.06 (SE +/- 0.00, N = 3)

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime - Frames Per Second, More Is Better. Core i9 7980XE: 12.87 (SE +/- 0.05, N = 3)

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass - Frames Per Second, More Is Better. Core i9 7980XE: 3.20 (SE +/- 0.01, N = 3)

AOM AV1 2.0 - Encoder Mode: Speed 8 Realtime - Frames Per Second, More Is Better. Core i9 7980XE: 27.88 (SE +/- 0.08, N = 3)

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p - Frames Per Second, More Is Better. Core i9 7980XE: 0.121 (SE +/- 0.000, N = 3). (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p - Frames Per Second, More Is Better. Core i9 7980XE: 5.378 (SE +/- 0.015, N = 3)

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p - Frames Per Second, More Is Better. Core i9 7980XE: 40.58 (SE +/- 0.15, N = 3)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 - Tuning: VMAF Optimized - Input: Bosphorus 1080p - Frames Per Second, More Is Better. Core i9 7980XE: 306.68 (SE +/- 3.98, N = 5). (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.1 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p - Frames Per Second, More Is Better. Core i9 7980XE: 313.45 (SE +/- 1.88, N = 3)

SVT-VP9 0.1 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p - Frames Per Second, More Is Better. Core i9 7980XE: 252.14 (SE +/- 0.83, N = 3)

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3 - Scene: DLSC - M samples/sec, More Is Better. Core i9 7980XE: 2.63 (SE +/- 0.00, N = 3; MIN: 2.57 / MAX: 2.73)

LuxCoreRender 2.3 - Scene: Rainbow Colors and Prism - M samples/sec, More Is Better. Core i9 7980XE: 2.82 (SE +/- 0.01, N = 3; MIN: 2.76 / MAX: 2.85)

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02 - Compress Speed Test - MIPS, More Is Better. Core i9 7980XE: 92742 (SE +/- 444.03, N = 3). (CXX) g++ options: -pipe -lpthread
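The MIPS figure comes from p7zip's integrated benchmark mode, which can also be launched directly as sketched below (parsing of its output table is omitted).

```python
# p7zip's built-in benchmark ('7z b') is what produces the MIPS rating above;
# this simply launches it and lets its summary table print to stdout.
import subprocess

subprocess.run(["7z", "b"], check=True)
```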

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 0 - Seconds, Fewer Is Better. Core i9 7980XE: 75.46 (SE +/- 0.32, N = 3). (CXX) g++ options: -O3 -fPIC

libavif avifenc 0.7.3 - Encoder Speed: 2 - Seconds, Fewer Is Better. Core i9 7980XE: 45.14 (SE +/- 0.16, N = 3)

libavif avifenc 0.7.3 - Encoder Speed: 8 - Seconds, Fewer Is Better. Core i9 7980XE: 5.760 (SE +/- 0.012, N = 3)

libavif avifenc 0.7.3 - Encoder Speed: 10 - Seconds, Fewer Is Better. Core i9 7980XE: 5.578 (SE +/- 0.017, N = 3)
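A manual run at the slowest encoder speed would look roughly like the sketch below; the --speed flag and file names are assumptions for illustration, not the exact command line the test profile builds.

```python
# Hypothetical avifenc invocation at encoder speed 0 (file names are
# placeholders); the timings above come from the test profile's own wrapper.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["avifenc", "--speed", "0", "input.jpg", "output.avif"], check=True)
print(f"encoded in {time.perf_counter() - start:.1f} s")
```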

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile - Seconds, Fewer Is Better. Core i9 7980XE: 26.37 (SE +/- 0.03, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile - Seconds, Fewer Is Better. Core i9 7980XE: 53.70 (SE +/- 0.77, N = 4)
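In essence the test boils down to a default configuration followed by a fully parallel build; a hand-rolled equivalent (run from inside an extracted kernel source tree) might look like this.

```python
# Approximate equivalent of the timed kernel build: default configuration,
# then 'make' across all hardware threads. Run from a kernel source tree.
import os
import subprocess
import time

subprocess.run(["make", "defconfig"], check=True)
start = time.perf_counter()
subprocess.run(["make", f"-j{os.cpu_count()}"], check=True)
print(f"kernel built in {time.perf_counter() - start:.1f} s")
```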

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.12 - Time To Compile - Seconds, Fewer Is Better. Core i9 7980XE: 80.83 (SE +/- 0.16, N = 3)

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0 - Mosaic of M17, K band, 1.5 deg x 1.5 deg - Seconds, Fewer Is Better. Core i9 7980XE: 79.53 (SE +/- 0.04, N = 3). (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: NDT Mapping - Test Cases Per Minute, More Is Better. Core i9 7980XE: 831.40 (SE +/- 0.53, N = 3). (CXX) g++ options: -O3 -std=c++11 -fopenmp

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Points2Image - Test Cases Per Minute, More Is Better. Core i9 7980XE: 16792.00 (SE +/- 59.74, N = 3)

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Euclidean Cluster - Test Cases Per Minute, More Is Better. Core i9 7980XE: 1095.96 (SE +/- 0.27, N = 3)

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 2D Function Plotting, 1000 Times - Seconds, Fewer Is Better. Core i9 7980XE: 152.97 (SE +/- 1.09, N = 3). Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

G'MIC - Test: Plotting Isosurface Of A 3D Volume, 1000 Times - Seconds, Fewer Is Better. Core i9 7980XE: 19.56 (SE +/- 0.01, N = 3)

G'MIC - Test: 3D Elevated Function In Random Colors, 100 Times - Seconds, Fewer Is Better. Core i9 7980XE: 66.26 (SE +/- 0.03, N = 3)

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time - Seconds, Fewer Is Better. Core i9 7980XE: 51.01 (SE +/- 0.12, N = 3)

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs whose text is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 9.6.0+dfsg - Processing 60 Page PDF Document - Seconds, Fewer Is Better. Core i9 7980XE: 21.45 (SE +/- 0.08, N = 3)
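OCRMyPDF also exposes a Python API, so a single-document run equivalent in spirit to this benchmark can be scripted as below (file names are placeholders).

```python
# Minimal OCRMyPDF API call (pip install ocrmypdf; requires the tesseract
# binary to be installed). The benchmark above times a 60-page sample PDF.
import ocrmypdf

ocrmypdf.ocr("scanned_input.pdf", "searchable_output.pdf", deskew=True)
```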

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: MMAP - Bogo Ops/s, More Is Better. Core i9 7980XE: 340.06 (SE +/- 4.57, N = 3). (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: NUMA - Bogo Ops/s, More Is Better. Core i9 7980XE: 431.92 (SE +/- 1.46, N = 3)

Stress-NG 0.11.07 - Test: MEMFD - Bogo Ops/s, More Is Better. Core i9 7980XE: 1231.73 (SE +/- 0.44, N = 3)

Stress-NG 0.11.07 - Test: Atomic - Bogo Ops/s, More Is Better. Core i9 7980XE: 157273.14 (SE +/- 1376.15, N = 3)

Stress-NG 0.11.07 - Test: Crypto - Bogo Ops/s, More Is Better. Core i9 7980XE: 3324.10 (SE +/- 3.20, N = 3)

Stress-NG 0.11.07 - Test: Malloc - Bogo Ops/s, More Is Better. Core i9 7980XE: 200236691.66 (SE +/- 226623.48, N = 3)

Stress-NG 0.11.07 - Test: RdRand - Bogo Ops/s, More Is Better. Core i9 7980XE: 184488.27 (SE +/- 8.70, N = 3)

Stress-NG 0.11.07 - Test: Forking - Bogo Ops/s, More Is Better. Core i9 7980XE: 101101.44 (SE +/- 213.43, N = 3)

Stress-NG 0.11.07 - Test: SENDFILE - Bogo Ops/s, More Is Better. Core i9 7980XE: 220036.78 (SE +/- 188.09, N = 3)

Stress-NG 0.11.07 - Test: CPU Cache - Bogo Ops/s, More Is Better. Core i9 7980XE: 84.80 (SE +/- 0.67, N = 3)

Stress-NG 0.11.07 - Test: CPU Stress - Bogo Ops/s, More Is Better. Core i9 7980XE: 7478.44 (SE +/- 24.05, N = 3)

Stress-NG 0.11.07 - Test: Semaphores - Bogo Ops/s, More Is Better. Core i9 7980XE: 2559415.35 (SE +/- 1891.74, N = 3)

Stress-NG 0.11.07 - Test: Matrix Math - Bogo Ops/s, More Is Better. Core i9 7980XE: 76605.69 (SE +/- 124.14, N = 3)

Stress-NG 0.11.07 - Test: Vector Math - Bogo Ops/s, More Is Better. Core i9 7980XE: 118124.45 (SE +/- 7.10, N = 3)

Stress-NG 0.11.07 - Test: Memory Copying - Bogo Ops/s, More Is Better. Core i9 7980XE: 2888.18 (SE +/- 5.35, N = 3)

Stress-NG 0.11.07 - Test: Socket Activity - Bogo Ops/s, More Is Better. Core i9 7980XE: 12283.36 (SE +/- 45.95, N = 3)

Stress-NG 0.11.07 - Test: Context Switching - Bogo Ops/s, More Is Better. Core i9 7980XE: 13916721.75 (SE +/- 36438.21, N = 3)

Stress-NG 0.11.07 - Test: Glibc C String Functions - Bogo Ops/s, More Is Better. Core i9 7980XE: 2038933.93 (SE +/- 18327.82, N = 3)

Stress-NG 0.11.07 - Test: Glibc Qsort Data Sorting - Bogo Ops/s, More Is Better. Core i9 7980XE: 230.80 (SE +/- 1.04, N = 3)

Stress-NG 0.11.07 - Test: System V Message Passing - Bogo Ops/s, More Is Better. Core i9 7980XE: 9785911.51 (SE +/- 11670.73, N = 3)
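Each result above reports bogo ops/s for one stressor; an individual stressor can be reproduced from the command line, for example the CPU stressor across all threads, as sketched here.

```python
# Run stress-ng's CPU stressor on every hardware thread for 30 seconds and
# print its summary metrics ('--cpu 0' means one worker per online CPU).
import subprocess

subprocess.run(
    ["stress-ng", "--cpu", "0", "--timeout", "30s", "--metrics-brief"],
    check=True,
)
```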

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 4.10.07 - Mode: CPU - Ksamples, More Is Better. Core i9 7980XE: 26571 (SE +/- 134.10, N = 3)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.82 - Blend File: BMW27 - Compute: CPU-Only - Seconds, Fewer Is Better. Core i9 7980XE: 95.81 (SE +/- 0.22, N = 3)

Blender 2.82 - Blend File: Classroom - Compute: CPU-Only - Seconds, Fewer Is Better. Core i9 7980XE: 269.67 (SE +/- 0.21, N = 3)

Blender 2.82 - Blend File: Fishy Cat - Compute: CPU-Only - Seconds, Fewer Is Better. Core i9 7980XE: 147.76 (SE +/- 0.10, N = 3)

Blender 2.82 - Blend File: Barbershop - Compute: CPU-Only - Seconds, Fewer Is Better. Core i9 7980XE: 371.00 (SE +/- 0.37, N = 3)

Blender 2.82 - Blend File: Pabellon Barcelona - Compute: CPU-Only - Seconds, Fewer Is Better. Core i9 7980XE: 337.83 (SE +/- 0.60, N = 3)
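The Blender figures come from the test profile's own timing of Cycles renders; a comparable headless, CPU-only render of a single frame can be launched as below (the .blend file name is a placeholder).

```python
# Headless Blender render of frame 1 from a .blend file ('-b' = background,
# '-f 1' = render frame 1); the file name here is a placeholder.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["blender", "-b", "bmw27_cpu.blend", "-f", "1"], check=True)
print(f"frame rendered in {time.perf_counter() - start:.1f} s")
```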

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: go - Milliseconds, Fewer Is Better. Core i9 7980XE: 216 (SE +/- 0.33, N = 3)

PyPerformance 1.0.0 - Benchmark: 2to3 - Milliseconds, Fewer Is Better. Core i9 7980XE: 303

PyPerformance 1.0.0 - Benchmark: chaos - Milliseconds, Fewer Is Better. Core i9 7980XE: 93.0 (SE +/- 0.10, N = 3)

PyPerformance 1.0.0 - Benchmark: float - Milliseconds, Fewer Is Better. Core i9 7980XE: 97.7 (SE +/- 0.06, N = 3)

PyPerformance 1.0.0 - Benchmark: nbody - Milliseconds, Fewer Is Better. Core i9 7980XE: 111

PyPerformance 1.0.0 - Benchmark: pathlib - Milliseconds, Fewer Is Better. Core i9 7980XE: 17.6 (SE +/- 0.00, N = 3)

PyPerformance 1.0.0 - Benchmark: raytrace - Milliseconds, Fewer Is Better. Core i9 7980XE: 407 (SE +/- 0.33, N = 3)

PyPerformance 1.0.0 - Benchmark: json_loads - Milliseconds, Fewer Is Better. Core i9 7980XE: 21.7 (SE +/- 0.03, N = 3)

PyPerformance 1.0.0 - Benchmark: crypto_pyaes - Milliseconds, Fewer Is Better. Core i9 7980XE: 94.1 (SE +/- 0.03, N = 3)

PyPerformance 1.0.0 - Benchmark: regex_compile - Milliseconds, Fewer Is Better. Core i9 7980XE: 147

PyPerformance 1.0.0 - Benchmark: python_startup - Milliseconds, Fewer Is Better. Core i9 7980XE: 17.6 (SE +/- 0.06, N = 3)

PyPerformance 1.0.0 - Benchmark: django_template - Milliseconds, Fewer Is Better. Core i9 7980XE: 40.1 (SE +/- 0.06, N = 3)

PyPerformance 1.0.0 - Benchmark: pickle_pure_python - Milliseconds, Fewer Is Better. Core i9 7980XE: 375
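PyPerformance is built on the pyperf harness, so a custom micro-benchmark in the same style (illustrative only, not part of the suite) can be written as follows.

```python
# A pyperf-style micro-benchmark (pip install pyperf). PyPerformance's own
# benchmarks are prebuilt; this just shows the harness the suite relies on.
import pyperf

def square_sum():
    return sum(i * i for i in range(10_000))

runner = pyperf.Runner()
runner.bench_func("square_sum", square_sum)
```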

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video software on the CPU and optional GPU (OpenCL / CUDA) support. Learn more via the OpenBenchmarking.org test page.

NeatBench 5 - Acceleration: All - FPS, More Is Better. Core i9 7980XE: 25.9 (SE +/- 0.28, N = 15)

NeatBench 5 - Acceleration: CPU - FPS, More Is Better. Core i9 7980XE: 25.8 (SE +/- 0.33, N = 15)

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

Tesseract OCR 4.1.1 - Time To OCR 7 Images - Seconds, Fewer Is Better. Core i9 7980XE: 25.57 (SE +/- 0.08, N = 3)
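The test profile times the system tesseract binary over seven sample images; an equivalent single-image call through the pytesseract bindings (an assumption, installed separately) is shown below.

```python
# Single-image OCR through pytesseract (pip install pytesseract pillow),
# which wraps the same system tesseract binary the benchmark times.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("sample_page.png"), lang="eng")
print(text[:200])
```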

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric, More Is Better. Core i9 7980XE: 207537. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -lm

110 Results Shown

WireGuard + Linux Networking Stack Stress Test
C-Blosc
LeelaChessZero:
  BLAS
  Eigen
  Rand
Rodinia:
  OpenMP LavaMD
  OpenMP HotSpot3D
  OpenMP Leukocyte
  OpenMP CFD Solver
  OpenMP Streamcluster
Java Gradle Build
Zstd Compression:
  3
  19
oneDNN:
  IP Batch 1D - f32 - CPU
  IP Batch All - f32 - CPU
  IP Batch 1D - u8s8f32 - CPU
  IP Batch All - u8s8f32 - CPU
  IP Batch 1D - bf16bf16bf16 - CPU
  IP Batch All - bf16bf16bf16 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch deconv_1d - f32 - CPU
  Deconvolution Batch deconv_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch deconv_1d - u8s8f32 - CPU
  Deconvolution Batch deconv_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Convolution Batch Shapes Auto - bf16bf16bf16 - CPU
  Deconvolution Batch deconv_1d - bf16bf16bf16 - CPU
  Deconvolution Batch deconv_3d - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU
dav1d:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
AOM AV1:
  Speed 0 Two-Pass
  Speed 4 Two-Pass
  Speed 6 Realtime
  Speed 6 Two-Pass
  Speed 8 Realtime
SVT-AV1:
  Enc Mode 0 - 1080p
  Enc Mode 4 - 1080p
  Enc Mode 8 - 1080p
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
  Visual Quality Optimized - Bosphorus 1080p
LuxCoreRender:
  DLSC
  Rainbow Colors and Prism
7-Zip Compression
libavif avifenc:
  0
  2
  8
  10
Timed Apache Compilation
Timed Linux Kernel Compilation
Build2
Montage Astronomical Image Mosaic Engine
Darmstadt Automotive Parallel Heterogeneous Suite:
  OpenMP - NDT Mapping
  OpenMP - Points2Image
  OpenMP - Euclidean Cluster
G'MIC:
  2D Function Plotting, 1000 Times
  Plotting Isosurface Of A 3D Volume, 1000 Times
  3D Elevated Function In Rand Colors, 100 Times
Hugin
OCRMyPDF
Stress-NG:
  MMAP
  NUMA
  MEMFD
  Atomic
  Crypto
  Malloc
  RdRand
  Forking
  SENDFILE
  CPU Cache
  CPU Stress
  Semaphores
  Matrix Math
  Vector Math
  Memory Copying
  Socket Activity
  Context Switching
  Glibc C String Functions
  Glibc Qsort Data Sorting
  System V Message Passing
Chaos Group V-RAY
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
  Fishy Cat - CPU-Only
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only
PyPerformance:
  go
  2to3
  chaos
  float
  nbody
  pathlib
  raytrace
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
  django_template
  pickle_pure_python
NeatBench:
  All
  CPU
Tesseract OCR
BRL-CAD