Core i9 7900X Linux

Intel Core i9-7900X testing with an ASRock X299 Extreme4 (P1.50 BIOS) motherboard and Zotac NVIDIA NVD9 1GB graphics on Ubuntu 18.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/1907163-HV-COREI979044.
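To reproduce or compare against this run, the Phoronix Test Suite can benchmark a local machine directly against the OpenBenchmarking.org result ID above. A minimal sketch, assuming the phoronix-test-suite package from the Ubuntu archive (the test-profile versions fetched today may differ from those used for this result):

    # Install the test suite (also available as a tarball from phoronix-test-suite.com)
    sudo apt-get install phoronix-test-suite

    # Run the same tests locally and merge the new numbers into a side-by-side
    # comparison against this published result
    phoronix-test-suite benchmark 1907163-HV-COREI979044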

Core i9 7900X Linux - System Details

Processor: Intel Core i9-7900X @ 4.50GHz (10 Cores / 20 Threads)
Motherboard: ASRock X299 Extreme4 (P1.50 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 16384MB
Disk: 120GB Force MP500
Graphics: Zotac NVIDIA NVD9 1GB
Audio: Realtek ALC1220
Monitor: VA2431
Network: Intel I219-V
OS: Ubuntu 18.04
Kernel: 4.18.0-25-generic (x86_64)
Desktop: GNOME Shell 3.28.4
Display Server: X Server 1.20.4
Display Driver: modesetting 1.20.4
OpenGL: 4.3 Mesa 19.0.2
Compiler: GCC 7.4.0
File-System: ext4
Screen Resolution: 1920x1080

Notes:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- Java: OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.04.1)
- Security: l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling
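The scaling governor, compiler configuration, and CPU mitigation status in the notes above come from standard Linux interfaces. A quick sketch for verifying the same settings on a comparable system, assuming a kernel recent enough to expose these sysfs entries:

    # Per-core CPUfreq scaling governor (reported above as intel_pstate powersave)
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c

    # Kernel-reported CPU vulnerability mitigations (l1tf, mds, meltdown, spectre, ...)
    grep . /sys/devices/system/cpu/vulnerabilities/*

    # Compiler version and configure flags (compare with the GCC 7.4.0 notes above)
    gcc -v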

Result Overview (Core i9 7900X)

Renaissance 0.9.0 (ms, fewer is better):
  Scala Dotty: 6474.09
  Apache Spark ALS: 5284.66
  Apache Spark Bayes: 3899.83
  Savina Reactors.IO: 24033.64
  Apache Spark PageRank: 20572.21
  In-Memory Database Shootout: 6991.81
  Akka Unbalanced Cobwebbed Tree: 11266.36

John The Ripper 1.9.0-jumbo-1 (Real C/S, more is better):
  Blowfish: 19400

MKL-DNN 2019-04-16 (ms, fewer is better):
  IP Batch 1D - f32: 16.33
  IP Batch All - f32: 109.39
  IP Batch 1D - u8s8u8s32: 4.06
  IP Batch 1D - u8s8f32s32: 3.97
  IP Batch All - u8s8u8s32: 44.17
  IP Batch All - u8s8f32s32: 44.24
  Convolution Batch conv_3d - f32: 17.40
  Convolution Batch conv_all - f32: 1608.56
  Deconvolution Batch deconv_1d - f32: 2.62
  Deconvolution Batch deconv_3d - f32: 3.48
  Convolution Batch conv_alexnet - f32: 194.31
  Deconvolution Batch deconv_all - f32: 1782.33
  Convolution Batch conv_3d - u8s8u8s32: 11695.67
  Convolution Batch conv_3d - u8s8f32s32: 11761.73
  Convolution Batch conv_all - u8s8u8s32: 6139.59
  Convolution Batch conv_all - u8s8f32s32: 6266.42
  Convolution Batch conv_googlenet_v3 - f32: 90.28
  Deconvolution Batch deconv_1d - u8s8u8s32: 1.45
  Deconvolution Batch deconv_3d - u8s8u8s32: 9916.53
  Convolution Batch conv_alexnet - u8s8u8s32: 128.50
  Deconvolution Batch deconv_1d - u8s8f32s32: 1.46
  Deconvolution Batch deconv_3d - u8s8f32s32: 9998.82
  Deconvolution Batch deconv_all - u8s8u8s32: 16488.13
  Convolution Batch conv_alexnet - u8s8f32s32: 131.20
  Convolution Batch conv_googlenet_v3 - u8s8u8s32: 51.50
  Convolution Batch conv_googlenet_v3 - u8s8f32s32: 54.84

SVT-AV1 0.5 (Frames Per Second, more is better):
  1080p 8-bit YUV To AV1 Video Encode: 34.48

Timed GCC Compilation 8.2 (Seconds, fewer is better):
  Time To Compile: 969.10

XZ Compression 5.2.4 (Seconds, fewer is better):
  Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9: 21.41

libjpeg-turbo tjbench 2.0.2 (Megapixels/sec, more is better):
  Decompression Throughput: 207.09

Appleseed 2.0 Beta (Seconds, fewer is better):
  Emily: 355.62
  Disney Material: 199.34
  Material Tester: 203.38

Renaissance

Test: Scala Dotty

Renaissance 0.9.0 - Test: Scala Dotty (ms, fewer is better) - Core i9 7900X: 6474.09 (SE +/- 38.32, N = 8)
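Each result below is the average of N runs together with its standard error. For readers post-processing their own runs, the SE figure is simply the sample standard deviation divided by sqrt(N); a small awk sketch over hypothetical per-run times (the values are illustrative, not the individual runs behind this result):

    # Mean and standard error of a whitespace-separated list of run times (ms)
    echo "6430 6512 6470 6488 6455 6501 6449 6487" | tr ' ' '\n' |
      awk '{ s += $1; ss += $1*$1; n++ }
           END { mean = s / n; var = (ss - n*mean*mean) / (n - 1);
                 printf "mean %.2f  SE +/- %.2f  N = %d\n", mean, sqrt(var/n), n }'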

Renaissance

Test: Apache Spark ALS

Renaissance 0.9.0 - Test: Apache Spark ALS (ms, fewer is better) - Core i9 7900X: 5284.66 (SE +/- 38.81, N = 8)

Renaissance

Test: Apache Spark Bayes

Renaissance 0.9.0 - Test: Apache Spark Bayes (ms, fewer is better) - Core i9 7900X: 3899.83 (SE +/- 29.82, N = 15)

Renaissance

Test: Savina Reactors.IO

Renaissance 0.9.0 - Test: Savina Reactors.IO (ms, fewer is better) - Core i9 7900X: 24033.64 (SE +/- 211.57, N = 8)

Renaissance

Test: Apache Spark PageRank

Renaissance 0.9.0 - Test: Apache Spark PageRank (ms, fewer is better) - Core i9 7900X: 20572.21 (SE +/- 116.99, N = 8)

Renaissance

Test: In-Memory Database Shootout

Renaissance 0.9.0 - Test: In-Memory Database Shootout (ms, fewer is better) - Core i9 7900X: 6991.81 (SE +/- 59.61, N = 8)

Renaissance

Test: Akka Unbalanced Cobwebbed Tree

Renaissance 0.9.0 - Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better) - Core i9 7900X: 11266.36 (SE +/- 113.88, N = 8)

John The Ripper

Test: Blowfish

John The Ripper 1.9.0-jumbo-1 - Test: Blowfish (Real C/S, more is better) - Core i9 7900X: 19400 (SE +/- 26.43, N = 3). (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2
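For a rough standalone sanity check outside the Phoronix Test Suite, John the Ripper's built-in benchmark can be pointed at the bcrypt (Blowfish-based) hash; a sketch assuming a jumbo build with OpenMP enabled (thread count and reported figures will vary by build):

    # Built-in benchmark of the bcrypt/Blowfish-based format; prints real and
    # virtual crypts-per-second (c/s) figures comparable to the value above
    john --test --format=bcrypt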

MKL-DNN

Harness: IP Batch 1D - Data Type: f32

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: f32 (ms, fewer is better) - Core i9 7900X: 16.33 (SE +/- 0.17, N = 3; MIN: 10). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch All - Data Type: f32

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: f32 (ms, fewer is better) - Core i9 7900X: 109.39 (SE +/- 0.41, N = 3; MIN: 74.69). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch 1D - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: u8s8u8s32 (ms, fewer is better) - Core i9 7900X: 4.06 (SE +/- 0.06, N = 3; MIN: 2). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch 1D - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: u8s8f32s32 (ms, fewer is better) - Core i9 7900X: 3.97 (SE +/- 0.01, N = 3; MIN: 2). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch All - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: u8s8u8s32 (ms, fewer is better) - Core i9 7900X: 44.17 (SE +/- 0.38, N = 3; MIN: 26.13). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch All - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: u8s8f32s32 (ms, fewer is better) - Core i9 7900X: 44.24 (SE +/- 0.20, N = 3; MIN: 24.73). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_3d - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: f32 (ms, fewer is better) - Core i9 7900X: 17.40 (SE +/- 0.02, N = 3; MIN: 17.17). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_all - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: f32 (ms, fewer is better) - Core i9 7900X: 1608.56 (SE +/- 3.19, N = 3; MIN: 1571.38). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 (ms, fewer is better) - Core i9 7900X: 2.62 (SE +/- 0.01, N = 3; MIN: 2.56). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 (ms, fewer is better) - Core i9 7900X: 3.48 (SE +/- 0.02, N = 3; MIN: 3.42). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_alexnet - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: f32 (ms, fewer is better) - Core i9 7900X: 194.31 (SE +/- 0.47, N = 3; MIN: 192.46). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_all - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_all - Data Type: f32 (ms, fewer is better) - Core i9 7900X: 1782.33 (SE +/- 6.81, N = 3; MIN: 1713.35). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_3d - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: u8s8u8s32 (ms, fewer is better) - Core i9 7900X: 11695.67 (SE +/- 3.23, N = 3; MIN: 11683.6). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_3d - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: u8s8f32s32 (ms, fewer is better) - Core i9 7900X: 11761.73 (SE +/- 13.04, N = 3; MIN: 11737). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_all - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: u8s8u8s32 (ms, fewer is better) - Core i9 7900X: 6139.59 (SE +/- 4.34, N = 3; MIN: 6128.34). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_all - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: u8s8f32s32 (ms, fewer is better) - Core i9 7900X: 6266.42 (SE +/- 4.43, N = 3; MIN: 6241.06). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32 (ms, fewer is better) - Core i9 7900X: 90.28 (SE +/- 0.23, N = 3; MIN: 87.69). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8u8s32 (ms, fewer is better) - Core i9 7900X: 1.45 (SE +/- 0.00, N = 3; MIN: 1.44). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8u8s32 (ms, fewer is better) - Core i9 7900X: 9916.53 (SE +/- 4.56, N = 3; MIN: 9909.1). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_alexnet - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: u8s8u8s32 (ms, fewer is better) - Core i9 7900X: 128.50 (SE +/- 0.18, N = 3; MIN: 127.76). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32s32 (ms, fewer is better) - Core i9 7900X: 1.46 (SE +/- 0.00, N = 3; MIN: 1.45). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32s32 (ms, fewer is better) - Core i9 7900X: 9998.82 (SE +/- 15.29, N = 3; MIN: 9982.39). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_all - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_all - Data Type: u8s8u8s32 (ms, fewer is better) - Core i9 7900X: 16488.13 (SE +/- 242.06, N = 3; MIN: 16200.2). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32s32 (ms, fewer is better) - Core i9 7900X: 131.20 (SE +/- 0.24, N = 3; MIN: 130.5). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8u8s32 (ms, fewer is better) - Core i9 7900X: 51.50 (SE +/- 0.05, N = 3; MIN: 51.21). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32s32 (ms, fewer is better) - Core i9 7900X: 54.84 (SE +/- 0.07, N = 3; MIN: 54.47). (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

SVT-AV1

1080p 8-bit YUV To AV1 Video Encode

SVT-AV1 0.5 - 1080p 8-bit YUV To AV1 Video Encode (Frames Per Second, more is better) - Core i9 7900X: 34.48 (SE +/- 0.08, N = 3). (CXX) g++ options: -O3 -pie -lpthread -lm
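The figure above is the encoder's reported average frame rate for a 1080p 8-bit YUV source. A minimal standalone invocation sketch of the SVT-AV1 reference application follows; the option names are those of the SvtAv1EncApp CLI and may differ between the 0.5 release and later versions, and the input file name is only a placeholder:

    # Encode a raw 1920x1080 8-bit YUV 4:2:0 file to an AV1 bitstream; the
    # application prints an average FPS summary when the encode completes
    SvtAv1EncApp -i input_1080p.yuv -w 1920 -h 1080 -b output.ivf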

Timed GCC Compilation

Time To Compile

Timed GCC Compilation 8.2 - Time To Compile (Seconds, fewer is better) - Core i9 7900X: 969.10 (SE +/- 5.28, N = 3)
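The Timed GCC Compilation profile measures a full build of the GCC 8.2 release sources. A rough manual equivalent is sketched below; the Phoronix Test Suite profile pins its own configure options and job count, so this is an approximation rather than the exact procedure:

    # Out-of-tree GCC build, timed; -j20 matches the 10-core/20-thread CPU used here
    tar xf gcc-8.2.0.tar.xz
    mkdir gcc-build && cd gcc-build
    ../gcc-8.2.0/configure --disable-multilib --enable-languages=c,c++
    time make -j20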

XZ Compression

Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, fewer is better) - Core i9 7900X: 21.41 (SE +/- 0.11, N = 3). (CC) gcc options: -pthread -fvisibility=hidden -O2
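A close manual approximation of this test is simply timing xz at level 9 on the same image; the test profile may add options of its own (for example forcing a single thread), so treat this as a sketch:

    # Compress the Ubuntu 16.04.3 server image at level 9, keeping the original
    # file so the run can be repeated; elapsed time corresponds to the result above
    time xz -9 -k ubuntu-16.04.3-server-i386.img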

libjpeg-turbo tjbench

Test: Decompression Throughput

libjpeg-turbo tjbench 2.0.2 - Test: Decompression Throughput (Megapixels/sec, more is better) - Core i9 7900X: 207.09 (SE +/- 0.21, N = 3). (CC) gcc options: -O3 -rdynamic

Appleseed

Scene: Emily

Appleseed 2.0 Beta - Scene: Emily (Seconds, fewer is better) - Core i9 7900X: 355.62

Appleseed

Scene: Disney Material

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, fewer is better) - Core i9 7900X: 199.34

Appleseed

Scene: Material Tester

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, fewer is better) - Core i9 7900X: 203.38


Phoronix Test Suite v10.8.4