ddds

Intel Core i7-1280P testing with an MSI MS-14C6 (E14C6IMS.115 BIOS) and MSI Intel ADL GT2 15GB on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2310187-NE-DDDS2145567&grs&sro.
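
This comparison can be reproduced or inspected locally with the Phoronix Test Suite using the OpenBenchmarking.org result ID above. The commands below are a sketch, assuming a working phoronix-test-suite installation with network access:

    # Download this result file for local viewing
    phoronix-test-suite clone 2310187-NE-DDDS2145567

    # Run the same test selection on local hardware and compare against this result
    phoronix-test-suite benchmark 2310187-NE-DDDS2145567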

System configuration (identical for runs a, b, and c):

Processor: Intel Core i7-1280P @ 4.70GHz (14 Cores / 20 Threads)
Motherboard: MSI MS-14C6 (E14C6IMS.115 BIOS)
Chipset: Intel Alder Lake PCH
Memory: 16GB
Disk: 1024GB Micron_3400_MTFDKBA1T0TFH
Graphics: MSI Intel ADL GT2 15GB (1450MHz)
Audio: Realtek ALC274
Network: Intel Alder Lake-P PCH CNVi WiFi
OS: Ubuntu 23.10
Kernel: 6.3.0-7-generic (x86_64)
Desktop: GNOME Shell
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 23.1.7-1ubuntu1
OpenCL: OpenCL 3.0
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-nEN1TP/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-nEN1TP/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x42c - Thermald 2.5.4

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; RSB filling; PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result summary: three runs (a, b, c) were benchmarked across oneDNN 3.3, Embree 4.3, OpenVKL 2.0.0, Intel Open Image Denoise 2.1, easyWave r34, and FluidX3D 2.9. Per-test results for each run are listed in the sections below.

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 1.71431 (MIN: 1.43)
b: 3.08209 (MIN: 1.45)
c: 2.25098 (MIN: 1.44)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Binary: Pathtracer - Model: Asian Dragon

OpenBenchmarking.org - Embree 4.3 - Frames Per Second, More Is Better
a: 9.4849 (MIN: 9.06 / MAX: 12.83)
b: 7.3744 (MIN: 7.09 / MAX: 12.73)
c: 6.2796 (MIN: 6.14 / MAX: 12.66)

Embree

Binary: Pathtracer - Model: Crown

OpenBenchmarking.org - Embree 4.3 - Frames Per Second, More Is Better
a: 7.5237 (MIN: 7.19 / MAX: 10.5)
b: 5.8001 (MIN: 5.61 / MAX: 10.44)
c: 4.9874 (MIN: 4.87 / MAX: 10.42)

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

OpenBenchmarking.org - Embree 4.3 - Frames Per Second, More Is Better
a: 8.5685 (MIN: 8.24 / MAX: 11.63)
b: 6.5378 (MIN: 6.45 / MAX: 7.88)
c: 5.6884 (MIN: 5.61 / MAX: 5.77)

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 11.45740 (MIN: 4.49)
b: 9.14804 (MIN: 4.54)
c: 7.77802 (MIN: 4.42)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Crown

OpenBenchmarking.org - Embree 4.3 - Frames Per Second, More Is Better
a: 7.7035 (MIN: 7.39 / MAX: 10.79)
b: 5.8382 (MIN: 5.65 / MAX: 10.63)
c: 5.3040 (MIN: 5.19 / MAX: 7.1)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

OpenBenchmarking.org - Embree 4.3 - Frames Per Second, More Is Better
a: 10.1820 (MIN: 9.65 / MAX: 13.61)
b: 7.8561 (MIN: 7.45 / MAX: 13.59)
c: 7.1952 (MIN: 6.95 / MAX: 13.62)

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 2.27679 (MIN: 1.7)
b: 3.14049 (MIN: 1.82)
c: 2.49493 (MIN: 1.7)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

OpenBenchmarking.org - Embree 4.3 - Frames Per Second, More Is Better
a: 8.8968 (MIN: 8.47 / MAX: 11.92)
b: 6.8812 (MIN: 6.52 / MAX: 11.81)
c: 6.8098 (MIN: 6.54 / MAX: 11.9)

OpenVKL

Benchmark: vklBenchmarkCPU Scalar

OpenBenchmarking.org - OpenVKL 2.0.0 - Items / Sec, More Is Better
a: 74 (MIN: 5 / MAX: 1304)
b: 57 (MIN: 4 / MAX: 1044)
c: 58 (MIN: 4 / MAX: 1064)

Intel Open Image Denoise

Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only

OpenBenchmarking.org - Intel Open Image Denoise 2.1 - Images / Sec, More Is Better
a: 0.10
b: 0.08
c: 0.08

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 5.63896 (MIN: 5.04)
b: 6.98825 (MIN: 5.08)
c: 5.85137 (MIN: 5.05)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only

OpenBenchmarking.org - Intel Open Image Denoise 2.1 - Images / Sec, More Is Better
a: 0.21
b: 0.17
c: 0.17

OpenVKL

Benchmark: vklBenchmarkCPU ISPC

OpenBenchmarking.org - OpenVKL 2.0.0 - Items / Sec, More Is Better
a: 161 (MIN: 11 / MAX: 2260)
b: 132 (MIN: 8 / MAX: 1862)
c: 131 (MIN: 8 / MAX: 1855)

Intel Open Image Denoise

Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only

OpenBenchmarking.org - Intel Open Image Denoise 2.1 - Images / Sec, More Is Better
a: 0.22
b: 0.18
c: 0.19

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200

OpenBenchmarking.org - easyWave r34 - Seconds, Fewer Is Better
a: 215.08
b: 240.10
c: 255.60
1. (CXX) g++ options: -O3 -fopenmp

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 2.72830 (MIN: 2.22)
b: 2.57218 (MIN: 2.25)
c: 3.01476 (MIN: 2.29)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 9056.79 (MIN: 8880.02)
b: 10044.60 (MIN: 9728.53)
c: 9116.27 (MIN: 8924.49)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 4772.43 (MIN: 4570.93)
b: 5236.44 (MIN: 5033.54)
c: 4733.46 (MIN: 4556.25)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400

OpenBenchmarking.org - easyWave r34 - Seconds, Fewer Is Better
a: 555.28
b: 593.05
c: 605.18
1. (CXX) g++ options: -O3 -fopenmp

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 10.05 (MIN: 6.08)
b: 10.25 (MIN: 6.02)
c: 10.95 (MIN: 6.04)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 9189.71 (MIN: 8912.94)
b: 9853.70 (MIN: 9658.57)
c: 9946.69 (MIN: 9715.98)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 8810.16 (MIN: 8604.63)
b: 9201.00 (MIN: 8796.3)
c: 9513.20 (MIN: 9231.4)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 4847.73 (MIN: 4646.81)
b: 5161.73 (MIN: 4954.49)
c: 5159.09 (MIN: 4935.51)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 7.79179 (MIN: 7.19)
b: 7.84281 (MIN: 7.26)
c: 8.28649 (MIN: 7.22)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVKL

Benchmark: vklBenchmarkGPU Intel oneAPI SYCL

OpenBenchmarking.org - OpenVKL 2.0.0 - Items / Sec, More Is Better
a: 137 (MIN: 1 / MAX: 5867)
b: 139 (MIN: 1 / MAX: 5263)
c: 145 (MIN: 1 / MAX: 5589)

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240

OpenBenchmarking.org - easyWave r34 - Seconds, Fewer Is Better
a: 10.322
b: 9.919
c: 9.803
1. (CXX) g++ options: -O3 -fopenmp

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 4517.55 (MIN: 4343.84)
b: 4625.90 (MIN: 4438.23)
c: 4623.65 (MIN: 4461.7)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

FluidX3D

Test: FP32-FP16C

OpenBenchmarking.org - FluidX3D 2.9 - MLUPs/s, More Is Better
a: 609
b: 616
c: 618

FluidX3D

Test: FP32-FP32

OpenBenchmarking.org - FluidX3D 2.9 - MLUPs/s, More Is Better
a: 369
b: 365
c: 368

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 8.75450 (MIN: 7.93)
b: 8.83358 (MIN: 8.02)
c: 8.76650 (MIN: 7.99)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

FluidX3D

Test: FP32-FP16S

OpenBenchmarking.org - FluidX3D 2.9 - MLUPs/s, More Is Better
a: 646
b: 649
c: 644

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 3.66772 (MIN: 3.23)
b: 3.68241 (MIN: 3.24)
c: 3.68788 (MIN: 3.23)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 3.3 - ms, Fewer Is Better
a: 9.51050 (MIN: 8.58)
b: 9.55980 (MIN: 8.72)
c: 9.55125 (MIN: 8.67)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl


Phoronix Test Suite v10.8.5