Core i3 7100

Intel Core i3-7100 testing with a Gigabyte B250M-DS3H-CF (F9 BIOS) and Intel HD 630 3GB on Ubuntu 17.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1910039-AS-COREI371066
Result Identifier: Core i3 7100
Date: October 02 2019
Test Duration: 15 Hours, 29 Minutes


Processor: Intel Core i3-7100 @ 3.90GHz (2 Cores / 4 Threads)
Motherboard: Gigabyte B250M-DS3H-CF (F9 BIOS)
Chipset: Intel Xeon E3-1200 v6/7th + B250
Memory: 8192MB
Disk: 250GB Western Digital WDS250G1B0A-
Graphics: Intel HD 630 3GB (1100MHz)
Audio: Realtek ALC887-VD
Monitor: DELL S2409W
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 17.10
Kernel: 4.20.0-999-generic (x86_64) 20181202
Desktop: GNOME Shell 3.26.1
Display Server: X Server + Wayland
OpenGL: 4.5 Mesa 17.2.2
Compiler: GCC 7.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- Security details: l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable / meltdown: Mitigation of PTI / spec_store_bypass: Vulnerable / spectre_v1: Mitigation of __user pointer sanitization / spectre_v2: Mitigation of Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling

Core i3 7100 Benchmarks - results overview (89 results); the individual graphs and values are detailed below.

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: San Miguel - Renderer: Path Tracer (FPS, More Is Better)
Core i3 7100: 0.32 (SE +/- 0.00, N = 9)

OSPray 1.8.5 - Demo: XFrog Forest - Renderer: Path Tracer (FPS, More Is Better)
Core i3 7100: 0.34 (SE +/- 0.00, N = 9); MIN: 0.25

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, More Is Better)
Core i3 7100: 109.62 (SE +/- 0.09, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, More Is Better)
Core i3 7100: 1123.51 (SE +/- 84.53, N = 9)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: u8s8u8s32 (ms, Fewer Is Better)
Core i3 7100: 64990.70 (SE +/- 122.89, N = 3); MIN: 64064.4
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: u8s8f32s32 (ms, Fewer Is Better)
Core i3 7100: 62695.90 (SE +/- 100.95, N = 3); MIN: 61710.2
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_all - Data Type: u8s8u8s32 (ms, Fewer Is Better)
Core i3 7100: 78463.50 (SE +/- 45.81, N = 3); MIN: 77629.8
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: NASA Streamlines - Renderer: Path Tracer (FPS, More Is Better)
Core i3 7100: 0.93 (SE +/- 0.00, N = 9); MIN: 0.92 / MAX: 0.94

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: f32 (ms, Fewer Is Better)
Core i3 7100: 10944.90 (SE +/- 2.56, N = 3); MIN: 10916.9
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, More Is Better)
Core i3 7100: 5415.62 (SE +/- 6.90, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_all - Data Type: f32 (ms, Fewer Is Better)
Core i3 7100: 9092.16 (SE +/- 64.75, N = 3); MIN: 8837.38
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: XFrog Forest - Renderer: SciVis (FPS, More Is Better)
Core i3 7100: 0.62 (SE +/- 0.00, N = 3); MIN: 0.61

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: u8s8f32s32 (ms, Fewer Is Better)
Core i3 7100: 54833.57 (SE +/- 20.42, N = 3); MIN: 54484.3
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: u8s8u8s32 (ms, Fewer Is Better)
Core i3 7100: 54702.20 (SE +/- 75.43, N = 3); MIN: 54138
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32s32 (ms, Fewer Is Better)
Core i3 7100: 44941.87 (SE +/- 62.42, N = 3); MIN: 44772.1
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8u8s32 (ms, Fewer Is Better)
Core i3 7100: 44471.47 (SE +/- 91.52, N = 3); MIN: 44309.8
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.6.1 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
Core i3 7100: 2.28 (SE +/- 0.01, N = 3); MIN: 2.26 / MAX: 2.31

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.8.1 - vpxenc VP9 1080p Video Encode (Frames Per Second, More Is Better)
Core i3 7100: 46.19 (SE +/- 0.06, N = 3)
1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.6.1 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Core i3 7100: 2.58 (SE +/- 0.00, N = 3); MIN: 2.56 / MAX: 2.61

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.2 - Scene: DLSC (M samples/sec, More Is Better)
Core i3 7100: 0.44 (SE +/- 0.00, N = 12); MIN: 0.42 / MAX: 0.45

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: f32 (ms, Fewer Is Better)
Core i3 7100: 325.34 (SE +/- 3.52, N = 15); MIN: 230.21
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.6.1 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
Core i3 7100: 2.70 (SE +/- 0.00, N = 3); MIN: 2.68 / MAX: 2.73

Embree 3.6.1 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Core i3 7100: 3.00 (SE +/- 0.00, N = 3); MIN: 2.98 / MAX: 3.03

Embree 3.6.1 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
Core i3 7100: 2.90 (SE +/- 0.00, N = 3); MIN: 2.89 / MAX: 2.93

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better)
Core i3 7100: 10447.25 (SE +/- 4.83, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: NASA Streamlines - Renderer: SciVis (FPS, More Is Better)
Core i3 7100: 4.59 (SE +/- 0.00, N = 12); MIN: 4.26 / MAX: 4.63

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: u8s8f32s32 (ms, Fewer Is Better)
Core i3 7100: 185.97 (SE +/- 1.88, N = 11); MIN: 132.58
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.6.1 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
Core i3 7100: 3.50 (SE +/- 0.00, N = 3); MIN: 3.48 / MAX: 3.54

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32s32 (ms, Fewer Is Better)
Core i3 7100: 27422.73 (SE +/- 38.52, N = 3); MIN: 27254.7
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8u8s32 (ms, Fewer Is Better)
Core i3 7100: 26867.37 (SE +/- 15.59, N = 3); MIN: 26724.7
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: San Miguel - Renderer: SciVis (FPS, More Is Better)
Core i3 7100: 3.80 (SE +/- 0.00, N = 6); MIN: 3.76 / MAX: 3.83

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8u8s32 (ms, Fewer Is Better)
Core i3 7100: 1261.58 (SE +/- 10.15, N = 3); MIN: 1185.59
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32s32 (ms, Fewer Is Better)
Core i3 7100: 1089.28 (SE +/- 2.06, N = 3); MIN: 1032.36
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

x265

This is a simple test of the x265 encoder run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

x265 3.1.2 - H.265 1080p Video Encoding (Frames Per Second, More Is Better)
Core i3 7100: 4.44 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32 (ms, Fewer Is Better)
Core i3 7100: 610.52 (SE +/- 0.14, N = 3); MIN: 607.75
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Hair (Seconds, Fewer Is Better)
Core i3 7100: 122.72 (SE +/- 0.15, N = 3)
1. (CXX) g++ options: -std=c++0x -march=skylake -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -mfma -mbmi2 -mno-sse4a -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -ljpeg -lpthread -ldl

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.7 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, More Is Better)
Core i3 7100: 0.75 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -fPIE -fPIC -pie

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s, More Is Better)
Core i3 7100: 3595.51 (SE +/- 1.45, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s, More Is Better)
Core i3 7100: 109.42 (SE +/- 0.11, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Water Caustic (Seconds, Fewer Is Better)
Core i3 7100: 69.09 (SE +/- 0.16, N = 3)
1. (CXX) g++ options: -std=c++0x -march=skylake -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -mfma -mbmi2 -mno-sse4a -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -ljpeg -lpthread -ldl

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.0.0 - Scene: Memorial (Images / Sec, More Is Better)
Core i3 7100: 1.96 (SE +/- 0.00, N = 4)

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, More Is Better)
Core i3 7100: 2283.94 (SE +/- 1.02, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, More Is Better)
Core i3 7100: 28
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -ljpeg -lXext -lX11 -llzma -lz -lm -lpthread

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.2 - Scene: Rainbow Colors and Prism (M samples/sec, More Is Better)
Core i3 7100: 0.46 (SE +/- 0.00, N = 3); MIN: 0.44 / MAX: 0.52

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better)
Core i3 7100: 42
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -ljpeg -lXext -lX11 -llzma -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute, More Is Better)
Core i3 7100: 44
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -ljpeg -lXext -lX11 -llzma -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute, More Is Better)
Core i3 7100: 82
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -ljpeg -lXext -lX11 -llzma -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute, More Is Better)
Core i3 7100: 208
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -ljpeg -lXext -lX11 -llzma -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute, More Is Better)
Core i3 7100: 439
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -ljpeg -lXext -lX11 -llzma -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, More Is Better)
Core i3 7100: 783
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -ljpeg -lXext -lX11 -llzma -lz -lm -lpthread

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: Magnetic Reconnection - Renderer: SciVis (FPS, More Is Better)
Core i3 7100: 3.79 (SE +/- 0.02, N = 3); MIN: 3.62 / MAX: 3.86

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
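
As context for the SET, GET, LPOP, SADD, and LPUSH results below, here is a minimal sketch that issues those same command types from a client. It assumes a Redis server listening on localhost:6379 and the third-party redis-py package, neither of which is part of this test profile.

# Minimal sketch of the command types covered by this Redis test profile.
# Assumes a local Redis server and the redis-py client (pip install redis).
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

r.set("bench:key", "value")           # SET
value = r.get("bench:key")            # GET
r.lpush("bench:list", "a", "b", "c")  # LPUSH
item = r.lpop("bench:list")           # LPOP
r.sadd("bench:set", "member")         # SADD

print(value, item)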

Redis 5.0.5 - Test: SET (Requests Per Second, More Is Better)
Core i3 7100: 1902278.37 (SE +/- 51807.03, N = 15)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 5.0.5 - Test: GET (Requests Per Second, More Is Better)
Core i3 7100: 2796221.67 (SE +/- 32519.51, N = 15)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 5.0.5 - Test: LPOP (Requests Per Second, More Is Better)
Core i3 7100: 2901776.75 (SE +/- 46576.94, N = 15)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.4.1 - 1080p 8-bit YUV To HEVC Video Encode (Frames Per Second, More Is Better)
Core i3 7100: 11.67 (SE +/- 0.03, N = 3)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: u8s8u8s32 (ms, Fewer Is Better)
Core i3 7100: 193.50 (SE +/- 2.06, N = 3); MIN: 135.05
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 5.0.5 - Test: SADD (Requests Per Second, More Is Better)
Core i3 7100: 2201235.27 (SE +/- 64754.86, N = 12)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: f32 (ms, Fewer Is Better)
Core i3 7100: 65.33 (SE +/- 0.02, N = 3); MIN: 64.99
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Volumetric Caustic (Seconds, Fewer Is Better)
Core i3 7100: 38.86 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -std=c++0x -march=skylake -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -mfma -mbmi2 -mno-sse4a -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -ljpeg -lpthread -ldl

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.7 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, More Is Better)
Core i3 7100: 8.39 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -fPIE -fPIC -pie

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 5.0.5 - Test: LPUSH (Requests Per Second, More Is Better)
Core i3 7100: 1754090.21 (SE +/- 19041.96, N = 10)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.
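
For a rough sense of what these microbenchmarks report (per-call latency of libm routines in nanoseconds), here is an illustrative sketch. Note that it times CPython's math module, which dispatches to the system libm, rather than running glibc's own benchtests, so the absolute numbers will not match the results below.

# Illustrative per-call latency measurement for a few libm-backed functions,
# reported in nanoseconds per call (Python call overhead included).
import math
import timeit

FUNCS = {"exp": math.exp, "sin": math.sin, "cos": math.cos, "sqrt": math.sqrt}
CALLS = 1_000_000

for name, fn in FUNCS.items():
    seconds = timeit.timeit(lambda: fn(0.75), number=CALLS)
    print(f"{name}: {seconds / CALLS * 1e9:.1f} ns per call")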

glibc bench 1.0 - Benchmark: exp (nanoseconds, Fewer Is Better)
Core i3 7100: 19259.33 (SE +/- 1.44, N = 3)

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Non-Exponential (Seconds, Fewer Is Better)
Core i3 7100: 29.56 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -std=c++0x -march=skylake -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -mfma -mbmi2 -mno-sse4a -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -ljpeg -lpthread -ldl

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: u8s8u8s32 (ms, Fewer Is Better)
Core i3 7100: 2186.74 (SE +/- 11.37, N = 3); MIN: 2101.12
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32s32 (ms, Fewer Is Better)
Core i3 7100: 2004.57 (SE +/- 11.44, N = 3); MIN: 1914.63
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, More Is Better)
Core i3 7100: 7781.74 (SE +/- 4.75, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 (ms, Fewer Is Better)
Core i3 7100: 22.49 (SE +/- 0.04, N = 3); MIN: 22.28
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: sincos (nanoseconds, Fewer Is Better)
Core i3 7100: 31579.83 (SE +/- 8.73, N = 3)

glibc bench 1.0 - Benchmark: sin (nanoseconds, Fewer Is Better)
Core i3 7100: 31809.27 (SE +/- 14.05, N = 3)

glibc bench 1.0 - Benchmark: cos (nanoseconds, Fewer Is Better)
Core i3 7100: 31865.60 (SE +/- 16.75, N = 3)

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: f32 (ms, Fewer Is Better)
Core i3 7100: 1451.86 (SE +/- 1.96, N = 3); MIN: 1445.4
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: u8s8u8s32 (ms, Fewer Is Better)
Core i3 7100: 12.01 (SE +/- 0.12, N = 3); MIN: 11.14
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: u8s8f32s32 (ms, Fewer Is Better)
Core i3 7100: 12.09 (SE +/- 0.03, N = 3); MIN: 11.04
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: f32 (ms, Fewer Is Better)
Core i3 7100: 25.60 (SE +/- 0.06, N = 3); MIN: 18.73
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: sqrt (nanoseconds, Fewer Is Better)
Core i3 7100: 1.84 (SE +/- 0.00, N = 4)

glibc bench 1.0 - Benchmark: sinh (nanoseconds, Fewer Is Better)
Core i3 7100: 15.03 (SE +/- 0.24, N = 4)

glibc bench 1.0 - Benchmark: log2 (nanoseconds, Fewer Is Better)
Core i3 7100: 9.53 (SE +/- 0.00, N = 4)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 2019-09-09 - 1080p 8-bit YUV To VP9 Video Encode (Frames Per Second, More Is Better)
Core i3 7100: 53.46 (SE +/- 0.37, N = 3)
1. (CC) gcc options: -fPIE -fPIC -flto -O3 -O2 -pie -rdynamic -lpthread -lrt -lm

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: tanh (nanoseconds, Fewer Is Better)
Core i3 7100: 13.90 (SE +/- 0.00, N = 3)

glibc bench 1.0 - Benchmark: pthread_once (nanoseconds, Fewer Is Better)
Core i3 7100: 1.84 (SE +/- 0.00, N = 3)

glibc bench 1.0 - Benchmark: ffsll (nanoseconds, Fewer Is Better)
Core i3 7100: 2.09 (SE +/- 0.00, N = 3)

glibc bench 1.0 - Benchmark: atanh (nanoseconds, Fewer Is Better)
Core i3 7100: 12.86 (SE +/- 0.00, N = 3)

glibc bench 1.0 - Benchmark: asinh (nanoseconds, Fewer Is Better)
Core i3 7100: 19.55 (SE +/- 0.02, N = 3)

glibc bench 1.0 - Benchmark: modf (nanoseconds, Fewer Is Better)
Core i3 7100: 2.36 (SE +/- 0.00, N = 3)

glibc bench 1.0 - Benchmark: ffs (nanoseconds, Fewer Is Better)
Core i3 7100: 1.84 (SE +/- 0.00, N = 3)

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo. Learn more via the OpenBenchmarking.org test page.
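
As a rough illustration of the Megapixels/sec decompression metric reported below, the following sketch decodes a JPEG repeatedly with Pillow (which commonly wraps libjpeg-turbo) and computes a throughput figure; "sample.jpg" is a placeholder path and this is not the tjbench tool itself.

# Rough illustration of a JPEG decompression-throughput measurement,
# reported in Megapixels/sec as tjbench does. "sample.jpg" is a placeholder.
import io
import time

from PIL import Image

with open("sample.jpg", "rb") as f:
    jpeg_bytes = f.read()

iterations = 50
start = time.perf_counter()
for _ in range(iterations):
    img = Image.open(io.BytesIO(jpeg_bytes))
    img.load()  # force the full decode
elapsed = time.perf_counter() - start

megapixels_per_image = img.width * img.height / 1e6
print(f"{megapixels_per_image * iterations / elapsed:.1f} Megapixels/sec")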

libjpeg-turbo tjbench 2.0.2 - Test: Decompression Throughput (Megapixels/sec, More Is Better)
Core i3 7100: 180.08 (SE +/- 0.03, N = 3)
1. (CC) gcc options: -O3 -rdynamic

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: Magnetic Reconnection - Renderer: Path Tracer (FPS, More Is Better)
Core i3 7100: 50; MIN: 45.45 / MAX: 55.56

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video software running on the CPU, with optional GPU (OpenCL / CUDA) acceleration support. Learn more via the OpenBenchmarking.org test page.

NeatBench 5 - Acceleration: CPU (FPS, More Is Better)
Core i3 7100: 4.61 (SE +/- 0.06, N = 3)

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 (ms, Fewer Is Better)
Core i3 7100: 23.69 (SE +/- 0.01, N = 3); MIN: 23.52
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

89 Results Shown

OSPray:
  San Miguel - Path Tracer
  XFrog Forest - Path Tracer
NAS Parallel Benchmarks:
  EP.D
  FT.C
MKL-DNN:
  Convolution Batch conv_all - u8s8u8s32
  Convolution Batch conv_all - u8s8f32s32
  Deconvolution Batch deconv_all - u8s8u8s32
OSPray
MKL-DNN
NAS Parallel Benchmarks
MKL-DNN
OSPray
MKL-DNN:
  Convolution Batch conv_3d - u8s8f32s32
  Convolution Batch conv_3d - u8s8u8s32
  Deconvolution Batch deconv_3d - u8s8f32s32
  Deconvolution Batch deconv_3d - u8s8u8s32
Embree
VP9 libvpx Encoding
Embree
LuxCoreRender
MKL-DNN
Embree:
  Pathtracer ISPC - Crown
  Pathtracer ISPC - Asian Dragon Obj
  Pathtracer - Asian Dragon
NAS Parallel Benchmarks
OSPray
MKL-DNN
Embree
MKL-DNN:
  Deconvolution Batch deconv_1d - u8s8f32s32
  Deconvolution Batch deconv_1d - u8s8u8s32
OSPray
MKL-DNN:
  Convolution Batch conv_googlenet_v3 - u8s8u8s32
  Convolution Batch conv_googlenet_v3 - u8s8f32s32
x265
MKL-DNN
Tungsten Renderer
SVT-AV1
NAS Parallel Benchmarks:
  SP.B
  EP.C
Tungsten Renderer
Intel Open Image Denoise
NAS Parallel Benchmarks
GraphicsMagick
LuxCoreRender
GraphicsMagick:
  Noise-Gaussian
  Enhanced
  Swirl
  Resizing
  HWB Color Space
  Rotate
OSPray
Redis:
  SET
  GET
  LPOP
SVT-HEVC
MKL-DNN
Redis
MKL-DNN
Tungsten Renderer
SVT-AV1
Redis
glibc bench
Tungsten Renderer
MKL-DNN:
  Convolution Batch conv_alexnet - u8s8u8s32
  Convolution Batch conv_alexnet - u8s8f32s32
NAS Parallel Benchmarks
MKL-DNN
glibc bench:
  sincos
  sin
  cos
MKL-DNN:
  Convolution Batch conv_alexnet - f32
  IP Batch 1D - u8s8u8s32
  IP Batch 1D - u8s8f32s32
  IP Batch 1D - f32
glibc bench:
  sqrt
  sinh
  log2
SVT-VP9
glibc bench:
  tanh
  pthread_once
  ffsll
  atanh
  asinh
  modf
  ffs
libjpeg-turbo tjbench
OSPray
NeatBench
MKL-DNN