Core i9 7960X Ubuntu 19.04 Intel Perf

Intel Core i9-7960X testing with an MSI X299 SLI PLUS (MS-7A93) v1.0 (1.A0 BIOS) and a Gigabyte AMD Radeon RX 550/550X 2GB on Ubuntu 19.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1910071-PTS-COREI97975
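
For scripted or repeated comparisons, the same command can also be driven from a small wrapper. The sketch below is a minimal example, assuming the phoronix-test-suite binary is installed and on the PATH; only the result ID 1910071-PTS-COREI97975 comes from this page, everything else is illustrative.

    # Minimal sketch: invoke the comparison command shown above from Python.
    # Assumes phoronix-test-suite is installed and available on the PATH.
    import subprocess

    RESULT_ID = "1910071-PTS-COREI97975"  # result file referenced on this page

    def compare_against_result(result_id: str) -> int:
        """Run 'phoronix-test-suite benchmark <result_id>' and return its exit code."""
        completed = subprocess.run(["phoronix-test-suite", "benchmark", result_id])
        return completed.returncode

    if __name__ == "__main__":
        raise SystemExit(compare_against_result(RESULT_ID))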
Result Identifier: Intel Core i9-7960X
Date Run: October 06 2019
Test Duration: 17 Hours, 10 Minutes


Core i9 7960X Ubuntu 19.04 Intel Perf - OpenBenchmarking.org / Phoronix Test Suite

Processor: Intel Core i9-7960X @ 4.40GHz (16 Cores / 32 Threads)
Motherboard: MSI X299 SLI PLUS (MS-7A93) v1.0 (1.A0 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 16384MB
Disk: 256GB INTEL SSDPEKKW256G8
Graphics: Gigabyte AMD Radeon RX 550/550X 2GB (1206/1750MHz)
Audio: Realtek ALC1220
Monitor: ASUS VP28U
Network: Intel I219-V + Intel I211
OS: Ubuntu 19.04
Kernel: 5.0.20-050020-generic (x86_64)
Desktop: GNOME Shell 3.32.0
Display Server: X Server 1.20.4
Display Driver: modesetting 1.20.4
OpenGL: 4.5 Mesa 19.0.2 (LLVM 8.0.0)
Compiler: GCC 8.3.0
File-System: ext4
Screen Resolution: 3840x2160

Core i9 7960X Ubuntu 19.04 Intel Perf Benchmarks - System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- Security mitigations: l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling

Overview: this result file contains 79 benchmark results for the Intel Core i9-7960X; each result is detailed individually below. (OpenBenchmarking.org)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.22.0 (Nodes Per Second, More Is Better):
  Backend: BLAS: 41.77 (SE +/- 0.96, N = 15)
  Backend: Random: 232472 (SE +/- 706.13, N = 3)
  1. (CXX) g++ options: -lpthread
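
Each result in this file is reported as an average with a standard error (SE) over N runs. As a reference for reading those figures, here is a small sketch of how a mean and standard error are conventionally computed from individual run samples; the sample values are made up for illustration, and the Phoronix Test Suite's exact aggregation (e.g. optional outlier removal) may differ.

    # Illustration of how an "average (SE +/- x, N = y)" figure is conventionally derived.
    # The sample values below are hypothetical, not taken from this result file.
    from math import sqrt
    from statistics import mean, stdev

    samples = [41.2, 42.5, 41.6]      # hypothetical per-run nodes-per-second readings
    n = len(samples)
    avg = mean(samples)               # reported result
    se = stdev(samples) / sqrt(n)     # standard error of the mean: sample std dev / sqrt(N)

    print(f"{avg:.2f} (SE +/- {se:.2f}, N = {n})")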

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode some sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.4.0 (FPS, More Is Better):
  Video Input: Chimera 1080p: 615.52 (SE +/- 2.03, N = 3; MIN: 472.11 / MAX: 771.39)
  Video Input: Summer Nature 4K: 198.10 (SE +/- 0.25, N = 3; MIN: 154.47 / MAX: 215.28)
  Video Input: Summer Nature 1080p: 551.38 (SE +/- 4.34, N = 3; MIN: 402.09 / MAX: 607.88)
  Video Input: Chimera 1080p 10-bit: 73.61 (SE +/- 0.12, N = 3; MIN: 45.22 / MAX: 169.11)
  1. (CC) gcc options: -pthread
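
To put decode throughput in perspective, an FPS figure can be converted to an average per-frame decode time and a real-time factor against the content's native frame rate. The sketch below uses the Chimera 1080p result above; the assumed 23.976 fps source rate is illustrative and not taken from this page.

    # Convert a decode-throughput figure (FPS) into per-frame time and a real-time factor.
    # 615.52 FPS is the Chimera 1080p result above; the 23.976 fps source rate is an assumption.
    decode_fps = 615.52
    source_fps = 23.976                     # assumed native frame rate of the clip

    per_frame_ms = 1000.0 / decode_fps      # average milliseconds spent decoding each frame
    realtime_factor = decode_fps / source_fps

    print(f"{per_frame_ms:.2f} ms/frame, {realtime_factor:.1f}x real time")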

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 2019-10-05 (FPS, More Is Better):
  Video Input: Chimera 1080p: 67.42 (SE +/- 0.02, N = 3)
  Video Input: Summer Nature 4K: 28.77 (SE +/- 0.01, N = 3)
  Video Input: Summer Nature 1080p: 105.33 (SE +/- 0.13, N = 3)
  Video Input: Chimera 1080p 10-bit: 23.28 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -lpthread

MKL-DNN DNNL

This is a test of Intel MKL-DNN (DNNL / Deep Neural Network Library), an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN DNNL 1.1 (ms, Fewer Is Better):
  Harness: IP Batch 1D - Data Type: f32: 5.06 (SE +/- 0.08, N = 3; MIN: 3.96)
  Harness: IP Batch All - Data Type: f32: 13.96 (SE +/- 0.04, N = 3; MIN: 13.56)
  Harness: IP Batch 1D - Data Type: u8s8f32: 0.83 (SE +/- 0.00, N = 3; MIN: 0.81)
  Harness: IP Batch All - Data Type: u8s8f32: 4.66 (SE +/- 0.06, N = 3; MIN: 4.33)
  Harness: IP Batch 1D - Data Type: bf16bf16bf16: 4.84 (SE +/- 0.00, N = 3; MIN: 4.75)
  Harness: IP Batch All - Data Type: bf16bf16bf16: 23.18 (SE +/- 0.16, N = 3; MIN: 8.71)
  Harness: Convolution Batch conv_3d - Data Type: f32: 11.97 (SE +/- 0.04, N = 3; MIN: 11.79)
  Harness: Convolution Batch conv_all - Data Type: f32: 1141.48 (SE +/- 0.28, N = 3; MIN: 1133.15)
  Harness: Convolution Batch conv_3d - Data Type: u8s8f32: 10300.57 (SE +/- 4.57, N = 3; MIN: 10286.2)
  Harness: Deconvolution Batch deconv_1d - Data Type: f32: 1.85 (SE +/- 0.00, N = 3; MIN: 1.81)
  Harness: Deconvolution Batch deconv_3d - Data Type: f32: 2.67 (SE +/- 0.01, N = 3; MIN: 2.63)
  Harness: Convolution Batch conv_alexnet - Data Type: f32: 131.43 (SE +/- 0.11, N = 3; MIN: 130.76)
  Harness: Convolution Batch conv_all - Data Type: u8s8f32: 5268.31 (SE +/- 17.66, N = 3; MIN: 5227.91)
  Harness: Deconvolution Batch deconv_all - Data Type: f32: 1025.28 (SE +/- 0.85, N = 3; MIN: 1018.19)
  Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32: 1.02 (SE +/- 0.00, N = 3; MIN: 1.01)
  Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32: 5754.60 (SE +/- 20.62, N = 3; MIN: 5726.5)
  Harness: Recurrent Neural Network Training - Data Type: f32: 143.48 (SE +/- 0.23, N = 3; MIN: 141.78)
  Harness: Convolution Batch conv_3d - Data Type: bf16bf16bf16: 21.93 (SE +/- 0.01, N = 3; MIN: 21.8)
  Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32: 88.13 (SE +/- 0.04, N = 3; MIN: 87.69)
  Harness: Convolution Batch conv_all - Data Type: bf16bf16bf16: 5351.41 (SE +/- 0.60, N = 3; MIN: 5342.17)
  Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32: 63.94 (SE +/- 0.02, N = 3; MIN: 63.36)
  Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16: 9.26 (SE +/- 0.01, N = 3; MIN: 9.2)
  Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16: 11.65 (SE +/- 0.01, N = 3; MIN: 11.53)
  Harness: Convolution Batch conv_alexnet - Data Type: bf16bf16bf16: 977.96 (SE +/- 0.19, N = 3; MIN: 976.78)
  Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32: 38.37 (SE +/- 0.41, N = 7; MIN: 37.63)
  Harness: Deconvolution Batch deconv_all - Data Type: bf16bf16bf16: 3723.19 (SE +/- 8.98, N = 3; MIN: 3707.07)
  Harness: Convolution Batch conv_googlenet_v3 - Data Type: bf16bf16bf16: 252.56 (SE +/- 0.05, N = 3; MIN: 252.07)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl
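
Since each harness is timed in milliseconds at several data types, the relative cost of one data type versus another is simply a ratio of the reported times. The sketch below computes those ratios for the conv_alexnet harness from the values above; note that this Skylake-X CPU has no AVX-512 BF16 instructions, so the bf16 path is presumably handled without native hardware support, which would be consistent with its much higher times.

    # Relative cost of data types for the conv_alexnet harness (times taken from the results above).
    # Lower time is better, so a ratio > 1 means that data type is slower than the f32 baseline.
    times_ms = {
        "f32": 131.43,
        "u8s8f32": 88.13,
        "bf16bf16bf16": 977.96,
    }

    baseline = times_ms["f32"]
    for dtype, t in times_ms.items():
        print(f"{dtype:>14}: {t:8.2f} ms  ({t / baseline:.2f}x relative to f32)")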

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 (FPS, More Is Better):
  Demo: San Miguel - Renderer: SciVis: 27.03 (SE +/- 0.00, N = 12; MIN: 25.64 / MAX: 27.78)
  Demo: XFrog Forest - Renderer: SciVis: 4.44 (SE +/- 0.00, N = 12; MIN: 4.29 / MAX: 4.48)
  Demo: San Miguel - Renderer: Path Tracer: 2.41 (SE +/- 0.00, N = 12; MIN: 2.37 / MAX: 2.43)
  Demo: NASA Streamlines - Renderer: SciVis: 35.71 (SE +/- 0.00, N = 12; MIN: 31.25)
  Demo: XFrog Forest - Renderer: Path Tracer: 2.44 (SE +/- 0.00, N = 12; MIN: 2.37 / MAX: 2.46)
  Demo: Magnetic Reconnection - Renderer: SciVis: 29.41 (SE +/- 0.00, N = 12; MIN: 28.57 / MAX: 30.3)
  Demo: NASA Streamlines - Renderer: Path Tracer: 6.62 (SE +/- 0.00, N = 12; MIN: 6.21 / MAX: 6.76)
  Demo: Magnetic Reconnection - Renderer: Path Tracer: 400.00 (SE +/- 21.82, N = 15; MIN: 333.33 / MAX: 500)

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2019-09-16 (Frames Per Second, More Is Better):
  AV1 Video Encoding: 0.11 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
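
At 0.11 frames per second it can be useful to translate encoder throughput into wall-clock time for a clip. The sketch below does that arithmetic; the clip length and frame rate are illustrative assumptions, not properties of the sample file used by the test profile.

    # Estimate wall-clock encode time from an encoder-throughput figure.
    # 0.11 FPS is the AOM AV1 result above; the clip length and frame rate are assumptions.
    encode_fps = 0.11
    clip_seconds = 60                      # hypothetical 1-minute clip
    clip_frame_rate = 30                   # hypothetical 30 fps source

    total_frames = clip_seconds * clip_frame_rate
    encode_hours = total_frames / encode_fps / 3600.0

    print(f"{total_frames} frames at {encode_fps} FPS -> ~{encode_hours:.1f} hours to encode")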

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.6.1 (Frames Per Second, More Is Better):
  Binary: Pathtracer - Model: Crown: 17.49 (SE +/- 0.02, N = 3; MIN: 17.07 / MAX: 17.7)
  Binary: Pathtracer ISPC - Model: Crown: 20.19 (SE +/- 0.02, N = 3; MIN: 20.01 / MAX: 20.51)
  Binary: Pathtracer - Model: Asian Dragon: 21.06 (SE +/- 0.03, N = 3; MIN: 20.97 / MAX: 21.24)
  Binary: Pathtracer - Model: Asian Dragon Obj: 18.92 (SE +/- 0.01, N = 3; MIN: 18.85 / MAX: 19.04)
  Binary: Pathtracer ISPC - Model: Asian Dragon: 26.56 (SE +/- 0.02, N = 3; MIN: 26.44 / MAX: 26.85)
  Binary: Pathtracer ISPC - Model: Asian Dragon Obj: 22.79 (SE +/- 0.02, N = 3; MIN: 22.67 / MAX: 23.04)

SVT-AV1

This is a test of SVT-AV1, the Intel Open Visual Cloud Scalable Video Technology CPU-based, multi-threaded video encoder for the AV1 video format, using a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.7 (Frames Per Second, More Is Better):
  Encoder Mode: Enc Mode 0 - Input: 1080p: 0.06 (SE +/- 0.00, N = 6)
  Encoder Mode: Enc Mode 4 - Input: 1080p: 4.84 (SE +/- 0.02, N = 3)
  Encoder Mode: Enc Mode 8 - Input: 1080p: 47.63 (SE +/- 0.18, N = 3)
  1. (CXX) g++ options: -fPIE -fPIC -pie

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.0.0 (Images / Sec, More Is Better):
  Scene: Memorial: 21.11 (SE +/- 0.27, N = 5)

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.2 (M samples/sec, More Is Better):
  Scene: DLSC: 2.61 (SE +/- 0.02, N = 10; MIN: 2.43 / MAX: 2.77)
  Scene: Rainbow Colors and Prism: 2.47 (SE +/- 0.04, N = 4; MIN: 2.4 / MAX: 2.61)

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 (Seconds, Fewer Is Better):
  Scene: Hair: 16.75 (SE +/- 0.02, N = 3)
  Scene: Water Caustic: 22.06 (SE +/- 0.05, N = 3)
  Scene: Non-Exponential: 6.45 (SE +/- 0.10, N = 15)
  Scene: Volumetric Caustic: 7.85 (SE +/- 0.02, N = 3)
  1. (CXX) g++ options: -std=c++0x -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -mfma -mbmi2 -mavx512f -mavx512vl -mavx512cd -mavx512dq -mavx512bw -mno-sse4a -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512pf -mno-avx512er -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -ljpeg -lpthread -ldl

PostgreSQL pgbench

This is a simple benchmark of PostgreSQL using pgbench. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 12.0 (TPS, More Is Better):
  Scaling: Mostly RAM - Test: Normal Load - Mode: Read Only: 65951.62 (SE +/- 622.22, N = 3)
  Scaling: Buffer Test - Test: Normal Load - Mode: Read Only: 449279.26 (SE +/- 410.00, N = 3)
  Scaling: Mostly RAM - Test: Normal Load - Mode: Read Write: 2995.34 (SE +/- 129.22, N = 9)
  Scaling: Buffer Test - Test: Normal Load - Mode: Read Write: 11378.35 (SE +/- 401.27, N = 12)
  Scaling: Mostly RAM - Test: Single Thread - Mode: Read Only: 4669.52 (SE +/- 10.07, N = 3)
  Scaling: Buffer Test - Test: Single Thread - Mode: Read Only: 28997.90 (SE +/- 65.94, N = 3)
  Scaling: Mostly RAM - Test: Single Thread - Mode: Read Write: 653.43 (SE +/- 137.21, N = 6)
  Scaling: Buffer Test - Test: Single Thread - Mode: Read Write: 703.16 (SE +/- 101.94, N = 15)
  Scaling: Mostly RAM - Test: Heavy Contention - Mode: Read Only: 66717.55 (SE +/- 268.17, N = 3)
  Scaling: Buffer Test - Test: Heavy Contention - Mode: Read Only: 444875.00 (SE +/- 842.20, N = 3)
  Scaling: Mostly RAM - Test: Heavy Contention - Mode: Read Write: 3287.60 (SE +/- 47.58, N = 3)
  Scaling: Buffer Test - Test: Heavy Contention - Mode: Read Write: 10793.99 (SE +/- 429.60, N = 15)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
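
A TPS figure can be translated into an approximate average per-transaction latency once the client count is known (average latency is roughly clients / TPS). The sketch below shows the conversion; the client count is a hypothetical value, since this page does not record the per-configuration client counts, while the TPS number is the Buffer Test / Normal Load / Read Write result above.

    # Approximate average per-transaction latency from a TPS figure: latency ~ clients / TPS.
    # 11378.35 TPS is the Buffer Test - Normal Load - Read Write result above;
    # the client count is a hypothetical assumption, not recorded on this page.
    tps = 11378.35
    clients = 64                            # assumed number of concurrent pgbench clients

    avg_latency_ms = clients / tps * 1000.0
    print(f"~{avg_latency_ms:.2f} ms average latency at {clients} clients and {tps} TPS")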

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.80 (Seconds, Fewer Is Better):
  Blend File: BMW27 - Compute: CPU-Only: 101.94 (SE +/- 0.05, N = 3)
  Blend File: Classroom - Compute: CPU-Only: 278.30 (SE +/- 0.11, N = 3)
  Blend File: Fishy Cat - Compute: CPU-Only: 154.04 (SE +/- 0.07, N = 3)
  Blend File: Barbershop - Compute: CPU-Only: 394.86 (SE +/- 0.38, N = 3)
  Blend File: Pabellon Barcelona - Compute: CPU-Only: 359.40 (SE +/- 0.34, N = 3)
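
Because Blender is reported in render seconds (lower is better), comparing against another system is a matter of inverting the ratio of times. The sketch below shows that conversion; the second system's time is a hypothetical placeholder for whatever you measure yourself, for example via the comparison command at the top of this page.

    # Turn two "Seconds, Fewer Is Better" Blender results into a relative-performance figure.
    # 101.94 s is the BMW27 CPU-Only result above; other_seconds is a hypothetical comparison value.
    this_result_seconds = 101.94
    other_seconds = 120.0                   # hypothetical time measured on another system

    speedup = other_seconds / this_result_seconds   # > 1 means this system rendered faster
    print(f"This system is {speedup:.2f}x the speed of the other system on BMW27")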

79 Results Shown

LeelaChessZero:
  BLAS
  Rand
dav1d:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
libgav1:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
MKL-DNN DNNL:
  IP Batch 1D - f32
  IP Batch All - f32
  IP Batch 1D - u8s8f32
  IP Batch All - u8s8f32
  IP Batch 1D - bf16bf16bf16
  IP Batch All - bf16bf16bf16
  Convolution Batch conv_3d - f32
  Convolution Batch conv_all - f32
  Convolution Batch conv_3d - u8s8f32
  Deconvolution Batch deconv_1d - f32
  Deconvolution Batch deconv_3d - f32
  Convolution Batch conv_alexnet - f32
  Convolution Batch conv_all - u8s8f32
  Deconvolution Batch deconv_all - f32
  Deconvolution Batch deconv_1d - u8s8f32
  Deconvolution Batch deconv_3d - u8s8f32
  Recurrent Neural Network Training - f32
  Convolution Batch conv_3d - bf16bf16bf16
  Convolution Batch conv_alexnet - u8s8f32
  Convolution Batch conv_all - bf16bf16bf16
  Convolution Batch conv_googlenet_v3 - f32
  Deconvolution Batch deconv_1d - bf16bf16bf16
  Deconvolution Batch deconv_3d - bf16bf16bf16
  Convolution Batch conv_alexnet - bf16bf16bf16
  Convolution Batch conv_googlenet_v3 - u8s8f32
  Deconvolution Batch deconv_all - bf16bf16bf16
  Convolution Batch conv_googlenet_v3 - bf16bf16bf16
OSPray:
  San Miguel - SciVis
  XFrog Forest - SciVis
  San Miguel - Path Tracer
  NASA Streamlines - SciVis
  XFrog Forest - Path Tracer
  Magnetic Reconnection - SciVis
  NASA Streamlines - Path Tracer
  Magnetic Reconnection - Path Tracer
AOM AV1
Embree:
  Pathtracer - Crown
  Pathtracer ISPC - Crown
  Pathtracer - Asian Dragon
  Pathtracer - Asian Dragon Obj
  Pathtracer ISPC - Asian Dragon
  Pathtracer ISPC - Asian Dragon Obj
SVT-AV1:
  Enc Mode 0 - 1080p
  Enc Mode 4 - 1080p
  Enc Mode 8 - 1080p
Intel Open Image Denoise
LuxCoreRender:
  DLSC
  Rainbow Colors and Prism
Tungsten Renderer:
  Hair
  Water Caustic
  Non-Exponential
  Volumetric Caustic
PostgreSQL pgbench:
  Mostly RAM - Normal Load - Read Only
  Buffer Test - Normal Load - Read Only
  Mostly RAM - Normal Load - Read Write
  Buffer Test - Normal Load - Read Write
  Mostly RAM - Single Thread - Read Only
  Buffer Test - Single Thread - Read Only
  Mostly RAM - Single Thread - Read Write
  Buffer Test - Single Thread - Read Write
  Mostly RAM - Heavy Contention - Read Only
  Buffer Test - Heavy Contention - Read Only
  Mostly RAM - Heavy Contention - Read Write
  Buffer Test - Heavy Contention - Read Write
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
  Fishy Cat - CPU-Only
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only