Core i5 8400 + Ubuntu 19.04

Intel Core i5-8400 testing with an MSI Z370M MORTAR (MS-7B54) v1.0 (1.50 BIOS) motherboard and Intel UHD 630 graphics on Ubuntu 19.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1904195-HV-COREI584069
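
For a system without the Phoronix Test Suite installed, the comparison workflow is roughly the following; the apt package name is an assumption for Ubuntu/Debian, while the benchmark command itself is taken from this result file:

  # Install the Phoronix Test Suite (package name assumed for Ubuntu/Debian)
  sudo apt install phoronix-test-suite

  # Fetch this public result file and run the same tests locally for comparison
  phoronix-test-suite benchmark 1904195-HV-COREI584069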

Result Identifier: Intel Core i5-8400
Date Run: April 19 2019
Test Duration: 5 Hours, 25 Minutes


Processor: Intel Core i5-8400 @ 4.00GHz (6 Cores)
Motherboard: MSI Z370M MORTAR (MS-7B54) v1.0 (1.50 BIOS)
Chipset: Intel 8th Gen Core
Memory: 8192MB
Disk: 512GB INTEL SSDPEKNW512G8
Graphics: Intel UHD 630 (1050MHz)
Audio: Realtek ALC892
Monitor: DELL S2409W
Network: Intel I219-V
OS: Ubuntu 19.04
Kernel: 5.0.0-13-generic (x86_64)
Desktop: GNOME Shell 3.32.0
Display Server: X Server 1.20.4
Display Driver: modesetting 1.20.4
Compiler: GCC 8.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- Java: OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu1)
- Security: KPTI + __user pointer sanitization + Full generic retpoline IBPB: conditional IBRS_FW STIBP: disabled RSB filling + SSB disabled via prctl and seccomp + PTE Inversion; VMX: conditional cache flushes SMT disabled
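
The scaling governor and mitigation details above come from the automated system logs. One way to spot-check them on a comparable machine is through sysfs; the paths below are standard Linux locations rather than anything recorded in this result file:

  # CPU frequency scaling driver and governor (this system reported intel_pstate / powersave)
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

  # Kernel-reported CPU vulnerability mitigations (KPTI, retpoline, SSB, etc.)
  grep . /sys/devices/system/cpu/vulnerabilities/*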

Result Summary - Intel Core i5-8400

mkl-dnn: IP Batch 1D - f32: 5.31
mkl-dnn: IP Batch All - f32: 80.48
mkl-dnn: IP Batch 1D - u8s8u8s32: 2.90
mkl-dnn: IP Batch 1D - u8s8f32s32: 2.86
mkl-dnn: IP Batch All - u8s8u8s32: 45.49
mkl-dnn: IP Batch All - u8s8f32s32: 45.07
mkl-dnn: Convolution Batch conv_3d - f32: 26.94
mkl-dnn: Convolution Batch conv_all - f32: 3930.29
mkl-dnn: Deconvolution Batch deconv_1d - f32: 8.54
mkl-dnn: Deconvolution Batch deconv_3d - f32: 9.18
mkl-dnn: Convolution Batch conv_alexnet - f32: 506.46
mkl-dnn: Deconvolution Batch deconv_all - f32: 3544.47
mkl-dnn: Convolution Batch conv_3d - u8s8u8s32: 38761.90
mkl-dnn: Convolution Batch conv_3d - u8s8f32s32: 38763.30
mkl-dnn: Convolution Batch conv_all - u8s8u8s32: 40356.90
mkl-dnn: Convolution Batch conv_all - u8s8f32s32: 40099.90
mkl-dnn: Convolution Batch conv_googlenet_v3 - f32: 221.55
mkl-dnn: Deconvolution Batch deconv_1d - u8s8u8s32: 11094
compress-xz: Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9: 45.30
compress-zstd: Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19: 27.78
spec-jbb2015: SPECjbb2015-Composite max-jOPS: 6923
spec-jbb2015: SPECjbb2015-Composite critical-jOPS: 1191

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.
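
To rerun only this portion on another machine, the test profile can be invoked directly through the Phoronix Test Suite. The short profile name below is an assumption based on the mkl-dnn identifiers used in this result file, and the suite will typically prompt for the harness and data-type combinations shown below:

  # Install and run just the MKL-DNN test profile (profile name assumed from the result identifiers)
  phoronix-test-suite benchmark mkl-dnn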

MKL-DNN 2019-04-16 - Intel Core i5-8400 (ms, Fewer Is Better)

Harness: IP Batch 1D - Data Type: f32 - 5.31 (MIN: 5.26)
Harness: IP Batch All - Data Type: f32 - 80.48 (MIN: 79.98)
Harness: IP Batch 1D - Data Type: u8s8u8s32 - 2.90 (MIN: 2.87)
Harness: IP Batch 1D - Data Type: u8s8f32s32 - 2.86 (MIN: 2.84)
Harness: IP Batch All - Data Type: u8s8u8s32 - 45.49 (MIN: 45.39)
Harness: IP Batch All - Data Type: u8s8f32s32 - 45.07 (MIN: 44.96)
Harness: Convolution Batch conv_3d - Data Type: f32 - 26.94 (MIN: 26.77)
Harness: Convolution Batch conv_all - Data Type: f32 - 3930.29 (MIN: 3878.33)
Harness: Deconvolution Batch deconv_1d - Data Type: f32 - 8.54 (MIN: 8.52)
Harness: Deconvolution Batch deconv_3d - Data Type: f32 - 9.18 (MIN: 9.15)
Harness: Convolution Batch conv_alexnet - Data Type: f32 - 506.46 (MIN: 505.88)
Harness: Deconvolution Batch deconv_all - Data Type: f32 - 3544.47 (MIN: 3521.25)
Harness: Convolution Batch conv_3d - Data Type: u8s8u8s32 - 38761.90 (MIN: 38754.9)
Harness: Convolution Batch conv_3d - Data Type: u8s8f32s32 - 38763.30 (MIN: 38758)
Harness: Convolution Batch conv_all - Data Type: u8s8u8s32 - 40356.90 (MIN: 40331.1)
Harness: Convolution Batch conv_all - Data Type: u8s8f32s32 - 40099.90 (MIN: 40086.7)
Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32 - 221.55 (MIN: 218.55)
Harness: Deconvolution Batch deconv_1d - Data Type: u8s8u8s32 - 11094 (MIN: 11087.2)

All MKL-DNN tests built with: (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
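
Outside the test harness, a comparable hand-run measurement can be approximated with the xz tool itself; the exact options the test profile passes are not recorded in this file, so the invocation below is an assumption:

  # Time level-9 XZ compression of the sample image, keeping the original file
  time xz -9 -k ubuntu-16.04.3-server-i386.img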

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, Fewer Is Better): Intel Core i5-8400: 45.30
Built with: (CC) gcc options: -pthread -fvisibility=hidden -O2

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
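
As with XZ, the level-19 Zstd run can be approximated by hand; the flags below are an assumption rather than the profile's exact invocation:

  # Time level-19 Zstd compression of the sample image, writing to a separate output file
  time zstd -19 -o ubuntu-16.04.3-server-i386.img.zst ubuntu-16.04.3-server-i386.img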

Zstd Compression 1.3.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19 (Seconds, Fewer Is Better): Intel Core i5-8400: 27.78
Built with: (CC) gcc options: -O3 -pthread

SPECjbb 2015

This is a benchmark of SPECjbb 2015. For this test profile to work, you must have a valid license/copy of the SPECjbb 2015 ISO (SPECjbb2015-1.02.iso) in your Phoronix Test Suite download cache. Learn more via the OpenBenchmarking.org test page.
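
A minimal sketch of staging the licensed ISO for the Phoronix Test Suite, assuming the suite's default download-cache location under the home directory and a profile name matching the spec-jbb2015 identifiers above:

  # Place the licensed SPECjbb 2015 ISO where the test profile expects to find it
  mkdir -p ~/.phoronix-test-suite/download-cache
  cp SPECjbb2015-1.02.iso ~/.phoronix-test-suite/download-cache/

  # Then run the SPECjbb 2015 test profile
  phoronix-test-suite benchmark spec-jbb2015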

SPECjbb 2015 - SPECjbb2015-Composite max-jOPS (jOPS, More Is Better): Intel Core i5-8400: 6923

SPECjbb 2015 - SPECjbb2015-Composite critical-jOPS (jOPS, More Is Better): Intel Core i5-8400: 1191