Intel Core i5-8400 testing with a MSI Z370M MORTAR (MS-7B54) v1.0 (1.50 BIOS) and MSI Intel UHD 630 3GB on Ubuntu 19.04 via the Phoronix Test Suite.
Intel Core i5-8400 Processor: Intel Core i5-8400 @ 4.00GHz (6 Cores), Motherboard: MSI Z370M MORTAR (MS-7B54) v1.0 (1.50 BIOS), Chipset: Intel 8th Gen Core, Memory: 8GB, Disk: 512GB INTEL SSDPEKNW512G8, Graphics: MSI Intel UHD 630 3GB (1050MHz), Audio: Realtek ALC892, Monitor: VA2431, Network: Intel I219-V
OS: Ubuntu 19.04, Kernel: 5.0.0-38-generic (x86_64), Desktop: GNOME Shell 3.32.1, Display Server: X Server 1.20.4, Display Driver: modesetting 1.20.4, OpenGL: 4.5 Mesa 19.0.2, Compiler: GCC 10.0.1 20200409, File-System: ext4, Screen Resolution: 1920x1080
Compiler Notes: --disable-multilib --enable-checking=release --enable-languages=c,c++,fortran
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xca
Java Notes: OpenJDK Runtime Environment (build 11.0.5+10-post-Ubuntu-0ubuntu1.119.04)
Python Notes: Python 2.7.16 + Python 3.7.3
Security Notes: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT disabled + mds: Mitigation of Clear buffers; SMT disabled + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: disabled RSB filling + tsx_async_abort: Not affected
oneDNN MKL-DNN: This is a test of Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN), an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.
oneDNN MKL-DNN 1.3 (ms, fewer is better; compiled with g++ -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -lrt -ldl):
- Harness: Recurrent Neural Network Inference, Data Type: f32: 82.67 (SE +/- 1.15, N = 3, MIN: 79.73)
- Harness: Deconvolution Batch deconv_3d, Data Type: u8s8f32: 7.09739 (SE +/- 0.00135, N = 3, MIN: 7.06)
- Harness: Deconvolution Batch deconv_3d, Data Type: f32: 9.05623 (SE +/- 0.00391, N = 3, MIN: 8.99)
- Harness: Deconvolution Batch deconv_1d, Data Type: u8s8f32: 288.27 (SE +/- 3.78, N = 3, MIN: 284.02)
- Harness: IP Batch All, Data Type: u8s8f32: 48.24 (SE +/- 0.02, N = 3, MIN: 47.87)
- Harness: Deconvolution Batch deconv_1d, Data Type: f32: 6.77680 (SE +/- 0.00150, N = 3, MIN: 6.73)
- Harness: IP Batch 1D, Data Type: u8s8f32: 3.60241 (SE +/- 0.00275, N = 3, MIN: 3.58)
- Harness: IP Batch All, Data Type: f32: 80.64 (SE +/- 0.11, N = 3, MIN: 79.6)
- Harness: IP Batch 1D, Data Type: f32: 5.34365 (SE +/- 0.01008, N = 3, MIN: 5.27)
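As a hedged sketch, a run like the following should reproduce the oneDNN timings above via the Phoronix Test Suite. The profile name "mkl-dnn" is an assumption based on the naming at the time of this report; the profile was later republished as "onednn", so check openbenchmarking.org for the current name.

```shell
#!/bin/sh
# Sketch: re-run the oneDNN/MKL-DNN harness via the Phoronix Test Suite.
# The profile name "mkl-dnn" is an assumption; it was later republished
# as "onednn". Guarded so the script is a no-op without PTS installed.
run_onednn() {
    if command -v phoronix-test-suite >/dev/null 2>&1; then
        # Non-interactive batch run of the test profile
        phoronix-test-suite batch-benchmark mkl-dnn
    else
        echo "phoronix-test-suite not installed; skipping"
    fi
}
run_onednn
```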
RawTherapee: RawTherapee is a cross-platform, open-source, multi-threaded raw image processing program. Learn more via the OpenBenchmarking.org test page.
RawTherapee 5.5, Total Benchmark Time: 78.28 seconds (SE +/- 0.02, N = 3; fewer is better). Run via the RawTherapee 5.5 command line.
An advanced, cross-platform program for developing raw photos.
Website: http://www.rawtherapee.com/
Documentation: http://rawpedia.rawtherapee.com/
Forum: https://discuss.pixls.us/c/software/rawtherapee
Code and bug reports: https://github.com/Beep6581/RawTherapee
Symbols:
<Chevrons> indicate parameters you can change.
[Square brackets] mean the parameter is optional.
The pipe symbol | indicates a choice of one or the other.
The dash symbol - denotes a range of possible values from one to the other.
Usage:
rawtherapee-cli -c <dir>|<files> Convert files in batch with default parameters.
rawtherapee-cli <other options> -c <dir>|<files> Convert files in batch with your own settings.
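The two usage forms above can be sketched as follows; the directory paths are placeholders, and the commands only run if rawtherapee-cli is actually installed.

```shell
#!/bin/sh
# Sketch of the two rawtherapee-cli usage forms (placeholder paths).
# Guarded so the script is a no-op on systems without RawTherapee.
if command -v rawtherapee-cli >/dev/null 2>&1; then
    # Batch-convert every raw file in a folder with default parameters:
    rawtherapee-cli -c ~/Pictures/raws
    # Same input, saved as quality-92 JPEGs into a chosen output folder
    # (note that -c must remain the last option):
    rawtherapee-cli -j92 -O ~/Pictures/converted -c ~/Pictures/raws
else
    echo "rawtherapee-cli not installed; skipping"
fi
```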
Options:
rawtherapee-cli [-o <output>|-O <output>] [-q] [-a] [-s|-S] [-p <one.pp3> [-p <two.pp3> ...] ] [-d] [ -j[1-100] -js<1-3> | -t[z] -b<8|16|16f|32> | -n -b<8|16> ] [-Y] [-f] -c <input>
-c <files> Specify one or more input files or folders.
When specifying folders, RawTherapee will look for image file types which comply
with the selected extensions (see also '-a').
-c must be the last option.
-o <file>|<dir> Set output file or folder.
Saves output file alongside input file if -o is not specified.
-O <file>|<dir> Set output file or folder and copy pp3 file into it.
Saves output file alongside input file if -O is not specified.
-q Quick-start mode. Skips loading cached files to speed up start time.
-a Process all supported image file types when specifying a folder, even those
not currently selected in Preferences > File Browser > Parsed Extensions.
-s Use the existing sidecar file to build the processing parameters,
e.g. for photo.raw there should be a photo.raw.pp3 file in the same folder.
If the sidecar file does not exist, neutral values will be used.
-S Like -s but skip if the sidecar file does not exist.
-p <file.pp3> Specify processing profile to be used for all conversions.
You can specify as many sets of "-p <file.pp3>" options as you like,
each will be built on top of the previous one, as explained below.
-d Use the default raw or non-raw processing profile as set in
Preferences > Image Processing > Default Processing Profile.
-j[1-100] Specify output to be JPEG (default, if -t and -n are not set).
Optionally, specify compression 1-100 (default value: 92).
-js<1-3> Specify the JPEG chroma subsampling parameter, where:
1 = Best compression: 2x2, 1x1, 1x1 (4:2:0)
Chroma halved vertically and horizontally.
2 = Balanced (default): 2x1, 1x1, 1x1 (4:2:2)
Chroma halved horizontally.
3 = Best quality: 1x1, 1x1, 1x1 (4:4:4)
No chroma subsampling.
-b<8|16|16f|32> Specify bit depth per channel.
8 = 8-bit integer. Applies to JPEG, PNG and TIFF. Default for JPEG and PNG.
16 = 16-bit integer. Applies to TIFF and PNG. Default for TIFF.
16f = 16-bit float. Applies to TIFF.
32 = 32-bit float. Applies to TIFF.
-t[z] Specify output to be TIFF.
Uncompressed by default, or deflate compression with 'z'.
-n Specify output to be compressed PNG.
Compression is hard-coded to PNG_FILTER_PAETH, Z_RLE.
-Y Overwrite output if present.
-f Use the custom fast-export processing pipeline.
Your pp3 files can be incomplete; RawTherapee will build the final values as follows:
1- A new processing profile is created using neutral values.
2- If the "-d" option is set, the values are overridden by those found in
the default raw or non-raw processing profile.
3- If one or more "-p" options are set, the values are overridden by those
found in these processing profiles.
4- If the "-s" or "-S" options are set, the values are finally overridden by those
found in the sidecar files.
The processing profiles are processed in the order specified on the command line.
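The layering order described above can be sketched in a single invocation: neutral values first, then the default profile (-d), then explicit profiles (-p, applied in order), then the sidecar file (-s). All file names here are placeholders.

```shell
#!/bin/sh
# Sketch of pp3 layering: -d (default profile), then base.pp3, then
# sharpen.pp3 on top, then any photo.raw.pp3 sidecar (-s). Output is a
# quality-92 JPEG with 4:4:4 chroma (-js3). Placeholder file names;
# guarded so the script is a no-op without RawTherapee installed.
if command -v rawtherapee-cli >/dev/null 2>&1; then
    rawtherapee-cli -d -p base.pp3 -p sharpen.pp3 -s \
        -j92 -js3 -O ./out -c ./photo.raw
else
    echo "rawtherapee-cli not installed; skipping"
fi
```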
oneDNN MKL-DNN 1.3, Harness: Recurrent Neural Network Training, Data Type: f32: 311.07 ms (SE +/- 4.70, N = 3, MIN: 299.74; fewer is better). Compiled with g++ -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -lrt -ldl.
Intel MPI Benchmarks 2019.3 (compiled with g++ -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi):
- Test: IMB-MPI1 Exchange, Average Mbytes/sec (more is better): 2475.88 (SE +/- 4.42, N = 3, MAX: 8228.63)
- Test: IMB-MPI1 Exchange, Average usec (fewer is better): 300.16 (SE +/- 0.75, N = 3, MIN: 0.37, MAX: 4505.51)
- Test: IMB-MPI1 PingPong, Average Mbytes/sec (more is better): 1571.47 (SE +/- 5.18, N = 3, MIN: 9.54, MAX: 4205)
- Test: IMB-MPI1 Sendrecv, Average Mbytes/sec (more is better): 1877.39 (SE +/- 5.34, N = 3, MAX: 8111.79)
- Test: IMB-MPI1 Sendrecv, Average usec (fewer is better): 202.04 (SE +/- 0.57, N = 3, MIN: 0.25, MAX: 3537.31)
dav1d 0.6.0 (FPS, more is better; compiled with gcc -pthread):
- Video Input: Summer Nature 4K: 96.49 (SE +/- 0.12, N = 3, MIN: 90.33, MAX: 109.4)
- Video Input: Summer Nature 1080p: 310.96 (SE +/- 0.78, N = 3, MIN: 277.26, MAX: 340.6)
- Video Input: Chimera 1080p 10-bit: 86.41 (SE +/- 0.04, N = 3, MIN: 60.32, MAX: 181.57)
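A decode run in the spirit of the dav1d FPS results above might look like this hypothetical sketch; the input file name is a placeholder, and the thread flags match the 0.6.x CLI (newer releases consolidate them into a single --threads option).

```shell
#!/bin/sh
# Hypothetical dav1d decode-speed sketch (placeholder input file; the
# --framethreads/--tilethreads flags are from the 0.6.x CLI and may
# differ in newer releases). Guarded so the script is a no-op without
# dav1d installed.
if command -v dav1d >/dev/null 2>&1; then
    dav1d --framethreads 6 --tilethreads 2 \
          -i summer_nature_4k.ivf -o /dev/null
else
    echo "dav1d not installed; skipping"
fi
```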
Embree 3.9.0 (frames per second, more is better):
- Binary: Pathtracer ISPC, Model: Crown: 5.6885 (SE +/- 0.0093, N = 3, MIN: 5.65, MAX: 5.77)
- Binary: Pathtracer, Model: Asian Dragon: 6.0623 (SE +/- 0.0107, N = 3, MIN: 6.01, MAX: 6.15)
- Binary: Pathtracer, Model: Asian Dragon Obj: 5.6231 (SE +/- 0.0143, N = 3, MIN: 5.58, MAX: 5.68)
- Binary: Pathtracer ISPC, Model: Asian Dragon: 7.2586 (SE +/- 0.0391, N = 3, MIN: 7.16, MAX: 7.43)
- Binary: Pathtracer ISPC, Model: Asian Dragon Obj: 6.5315 (SE +/- 0.0027, N = 3, MIN: 6.51, MAX: 6.59)
LuxCoreRender 2.3, Scene: Rainbow Colors and Prism: 0.95 M samples/sec (SE +/- 0.01, N = 3, MIN: 0.93, MAX: 1.01; more is better).
YafaRay: YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.
YafaRay 3.4.1, Total Time For Sample Scene: 270.62 seconds (SE +/- 0.39, N = 3; fewer is better). Compiled with g++ -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lboost_system -lboost_filesystem -lboost_locale.
LevelDB 1.22 (compiled with g++ -O3 -lsnappy -lpthread):
- Benchmark: Fill Sync: 0.3 MB/s (SE +/- 0.00, N = 3; more is better)
- Benchmark: Fill Sync: 2211.61 microseconds per op (SE +/- 5.99, N = 3; fewer is better)
- Benchmark: Overwrite: 29.3 MB/s (SE +/- 0.34, N = 6; more is better)
- Benchmark: Overwrite: 22.69 microseconds per op (SE +/- 0.26, N = 6; fewer is better)
- Benchmark: Random Fill: 29.7 MB/s (SE +/- 0.39, N = 15; more is better)
- Benchmark: Random Fill: 22.35 microseconds per op (SE +/- 0.30, N = 15; fewer is better)
- Benchmark: Random Read: 2.478 microseconds per op (SE +/- 0.007, N = 3; fewer is better)
- Benchmark: Seek Random: 3.072 microseconds per op (SE +/- 0.021, N = 3; fewer is better)
- Benchmark: Random Delete: 21.04 microseconds per op (SE +/- 0.18, N = 3; fewer is better)
- Benchmark: Sequential Fill: 32.3 MB/s (SE +/- 0.20, N = 3; more is better)
- Benchmark: Sequential Fill: 20.57 microseconds per op (SE +/- 0.13, N = 3; fewer is better)
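The same workloads can be exercised directly with LevelDB's bundled db_bench tool, as in this sketch. db_bench is built from the LevelDB source tree and is assumed here to be on PATH; the workload names are db_bench's own.

```shell
#!/bin/sh
# Sketch: run LevelDB's db_bench with the workloads reported above
# (fillseq = sequential fill, fillrandom = random fill, etc.).
# Guarded so the script is a no-op when db_bench is not on PATH.
if command -v db_bench >/dev/null 2>&1; then
    db_bench --benchmarks=fillseq,fillsync,fillrandom,overwrite,readrandom,seekrandom,deleterandom
else
    echo "db_bench not installed; skipping"
fi
```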
Testing initiated at 10 April 2020 18:33 by user phoronix.