dd

AMD Ryzen 7 7840U testing with a PHX Ray_PEU (V1.04 BIOS) and AMD Phoenix1 512MB on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2310248-NE-DD557351896&grt&sor.

System Details (configurations a, b, and c were all run on the same system):

Processor: AMD Ryzen 7 7840U @ 5.29GHz (8 Cores / 16 Threads)
Motherboard: PHX Ray_PEU (V1.04 BIOS)
Chipset: AMD Device 14e8
Memory: 16GB
Disk: 1024GB Micron_3400_MTFDKBA1T0TFH
Graphics: AMD Phoenix1 512MB (2700/800MHz)
Audio: AMD Rembrandt Radeon HD Audio
Network: MEDIATEK MT7922 802.11ax PCI
OS: Ubuntu 23.10
Kernel: 6.5.0-with-patch2 (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 23.3~git2309080600.fd297e~oibaf~m (git-fd297ec 2023-09-08 mantic-oibaf-ppa) (LLVM 15.0.7 DRM 3.54)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3200x2000

Kernel Details: Transparent Huge Pages: madvise

Compiler Details:
- a: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-4ZOwl4/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-4ZOwl4/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- b: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-4ZOwl4/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-4ZOwl4/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- c: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - CPU Microcode: 0xa704101

Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Summary (values listed as a / b / c; units and sort direction as in the detailed results below):

easywave: e2Asean Grid + BengkuluSept2007 Source - 240: 11.979 / 11.929 / 11.981
easywave: e2Asean Grid + BengkuluSept2007 Source - 1200: 236.372 / 235.936 / 236.144
easywave: e2Asean Grid + BengkuluSept2007 Source - 2400: 584.728 / 585.354 / 579.986
embree: Pathtracer - Crown: 7.1781 / 7.2714 / 7.1882
embree: Pathtracer ISPC - Crown: 7.4556 / 7.4331 / 7.4151
embree: Pathtracer - Asian Dragon: 8.6173 / 8.764 / 8.733
embree: Pathtracer - Asian Dragon Obj: 7.8348 / 7.8132 / 7.7511
embree: Pathtracer ISPC - Asian Dragon: 9.5152 / 9.5178 / 9.466
embree: Pathtracer ISPC - Asian Dragon Obj: 8.1114 / 8.0696 / 8.0388
oidn: RT.hdr_alb_nrm.3840x2160 - CPU-Only: 0.22 / 0.22 / 0.22
oidn: RT.ldr_alb_nrm.3840x2160 - CPU-Only: 0.22 / 0.22 / 0.22
oidn: RTLightmap.hdr.4096x4096 - CPU-Only: 0.11 / 0.11 / 0.11
onednn: IP Shapes 1D - f32 - CPU: 8.86916 / 8.99248 / 8.66246
onednn: IP Shapes 3D - f32 - CPU: 5.6112 / 5.57229 / 5.59833
onednn: IP Shapes 1D - u8s8f32 - CPU: 1.2811 / 1.25823 / 1.1559
onednn: IP Shapes 3D - u8s8f32 - CPU: 2.22678 / 2.22532 / 2.1978
onednn: IP Shapes 1D - bf16bf16bf16 - CPU: 2.75717 / 2.75383 / 2.42519
onednn: IP Shapes 3D - bf16bf16bf16 - CPU: 3.79236 / 3.75671 / 3.62443
onednn: Convolution Batch Shapes Auto - f32 - CPU: 15.0173 / 14.9408 / 15.0946
onednn: Deconvolution Batch shapes_1d - f32 - CPU: 10.8682 / 10.4152 / 9.6233
onednn: Deconvolution Batch shapes_3d - f32 - CPU: 7.60937 / 7.675 / 6.36614
onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU: 14.2507 / 14.5776 / 14.2659
onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU: 1.52901 / 1.67745 / 1.29785
onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU: 1.92455 / 2.01069 / 1.55587
onednn: Recurrent Neural Network Training - f32 - CPU: 4898.17 / 4967.81 / 3876.53
onednn: Recurrent Neural Network Inference - f32 - CPU: 2514.18 / 2555.43 / 2127.79
onednn: Recurrent Neural Network Training - u8s8f32 - CPU: 4919.23 / 4965.28 / 4376.69
onednn: Convolution Batch Shapes Auto - bf16bf16bf16 - CPU: 6.76831 / 6.74215 / 6.72684
onednn: Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU: 9.93332 / 9.73785 / 8.70755
onednn: Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU: 5.17232 / 5.22381 / 4.42411
onednn: Recurrent Neural Network Inference - u8s8f32 - CPU: 2506.07 / 2555.28 / 2320.01
onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU: 4956.7 / 4925.48 / 4574.04
onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU: 2520.79 / 2524.94 / 2382.55
openvkl: vklBenchmarkCPU ISPC: 170 / 171 / 171
openvkl: vklBenchmarkCPU Scalar: 67 / 67 / 69
quantlib: Multi-Threaded: 28605 / 28768.7 / 28299.9
quantlib: Single-Threaded: 4133 / 4229.3 / 4074.6

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240

easyWave r34 - Seconds, Fewer Is Better:
b: 11.93
a: 11.98
c: 11.98
1. (CXX) g++ options: -O3 -fopenmp
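
easyWave is an OpenMP-parallelized tsunami propagation code, and these runs simply report wall-clock seconds for a grid simulation built with -O3 -fopenmp. As rough orientation only, the following is a minimal C++/OpenMP sketch of a timed, parallel stencil sweep of the sort such a solver spends its time in; the grid size, step count, and update rule are illustrative assumptions, and this is not easyWave's actual source.

#include <omp.h>
#include <vector>
#include <cstdio>

int main() {
    // Illustrative grid and step count only; easyWave's real grids and physics differ.
    const int nx = 2000, ny = 2000, steps = 100;
    std::vector<double> h(nx * ny, 0.0), h_new(nx * ny, 0.0);
    h[(ny / 2) * nx + nx / 2] = 1.0;              // single initial disturbance

    const double t0 = omp_get_wtime();
    for (int s = 0; s < steps; ++s) {
        // OpenMP-parallel sweep: each interior point relaxes toward its neighbors' mean.
        #pragma omp parallel for
        for (int j = 1; j < ny - 1; ++j)
            for (int i = 1; i < nx - 1; ++i)
                h_new[j * nx + i] = 0.25 * (h[j * nx + i - 1] + h[j * nx + i + 1]
                                          + h[(j - 1) * nx + i] + h[(j + 1) * nx + i]);
        h.swap(h_new);
    }
    const double t1 = omp_get_wtime();

    std::printf("wall time: %.3f s\n", t1 - t0);  // the metric reported above: seconds, fewer is better
    return 0;
}

Built in the same spirit as the benchmark, e.g. g++ -O3 -fopenmp sweep.cpp.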

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200

easyWave r34 - Seconds, Fewer Is Better:
b: 235.94
c: 236.14
a: 236.37
1. (CXX) g++ options: -O3 -fopenmp

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400

easyWave r34 - Seconds, Fewer Is Better:
c: 579.99
a: 584.73
b: 585.35
1. (CXX) g++ options: -O3 -fopenmp

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better:
b: 7.2714 (MIN: 7.2 / MAX: 7.51)
c: 7.1882 (MIN: 7.11 / MAX: 7.5)
a: 7.1781 (MIN: 7.09 / MAX: 7.6)
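
The Embree results come from Embree 4.3's bundled pathtracer binaries (scalar and ISPC variants) rendering the Crown and Asian Dragon scenes. For orientation, here is a minimal sketch against the Embree 4 C++ API showing the core operation those renderers issue millions of times per frame: build an acceleration structure over some geometry and intersect a ray with it. The one-triangle scene and the ray values are arbitrary assumptions of this sketch, not anything from the benchmark scenes.

#include <embree4/rtcore.h>
#include <limits>
#include <cstdio>

int main() {
    RTCDevice device = rtcNewDevice(nullptr);          // default CPU device
    RTCScene scene = rtcNewScene(device);

    // One triangle: 3 vertices and 1 index triple.
    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* vb = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX, 0,
                                                RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
    vb[0] = 0.f; vb[1] = 0.f; vb[2] = 0.f;
    vb[3] = 1.f; vb[4] = 0.f; vb[5] = 0.f;
    vb[6] = 0.f; vb[7] = 1.f; vb[8] = 0.f;
    unsigned* ib = (unsigned*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_INDEX, 0,
                                                      RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
    ib[0] = 0; ib[1] = 1; ib[2] = 2;
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);                             // builds the BVH

    // Cast a single ray through the triangle.
    RTCRayHit rayhit;
    rayhit.ray.org_x = 0.2f; rayhit.ray.org_y = 0.2f; rayhit.ray.org_z = -1.f;
    rayhit.ray.dir_x = 0.f;  rayhit.ray.dir_y = 0.f;  rayhit.ray.dir_z = 1.f;
    rayhit.ray.tnear = 0.f;
    rayhit.ray.tfar  = std::numeric_limits<float>::infinity();
    rayhit.ray.mask  = -1;
    rayhit.ray.flags = 0;
    rayhit.ray.time  = 0.f;
    rayhit.hit.geomID = RTC_INVALID_GEOMETRY_ID;
    rayhit.hit.instID[0] = RTC_INVALID_GEOMETRY_ID;
    rtcIntersect1(scene, &rayhit);                     // Embree 4: no separate intersect context needed

    std::printf("hit: %s (t = %f)\n",
                rayhit.hit.geomID != RTC_INVALID_GEOMETRY_ID ? "yes" : "no",
                rayhit.ray.tfar);

    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return 0;
}

A real path tracer layers sampling and shading on top of this loop; linking is typically against libembree4 (e.g. g++ -O3 minimal.cpp -lembree4).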

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better:
a: 7.4556 (MIN: 7.37 / MAX: 7.79)
b: 7.4331 (MIN: 7.34 / MAX: 7.7)
c: 7.4151 (MIN: 7.34 / MAX: 7.69)

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better:
b: 8.7640 (MIN: 8.67 / MAX: 9.04)
c: 8.7330 (MIN: 8.64 / MAX: 9.06)
a: 8.6173 (MIN: 8.57 / MAX: 8.75)

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.3 - Frames Per Second, More Is Better:
a: 7.8348 (MIN: 7.8 / MAX: 7.98)
b: 7.8132 (MIN: 7.74 / MAX: 8.1)
c: 7.7511 (MIN: 7.7 / MAX: 7.89)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better:
b: 9.5178 (MIN: 9.39 / MAX: 9.87)
a: 9.5152 (MIN: 9.42 / MAX: 9.86)
c: 9.4660 (MIN: 9.39 / MAX: 9.76)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.3 - Frames Per Second, More Is Better:
a: 8.1114 (MIN: 8.07 / MAX: 8.27)
b: 8.0696 (MIN: 7.97 / MAX: 8.36)
c: 8.0388 (MIN: 7.99 / MAX: 8.16)

Intel Open Image Denoise

Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only

Intel Open Image Denoise 2.1 - Images / Sec, More Is Better:
c: 0.22
b: 0.22
a: 0.22
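
The Open Image Denoise runs push a 3840x2160 ray-traced frame, plus albedo and normal auxiliary images, through the CPU backend of OIDN 2.1's "RT" filter and report denoised images per second. As a minimal sketch of that API (not the benchmark's harness), the following sets up an RT filter on a CPU device with color/albedo/normal inputs and an HDR output; the buffers are left unfilled since the point here is only the call sequence, and the 3-float-per-pixel layout is an assumption of this sketch.

#include <OpenImageDenoise/oidn.hpp>
#include <iostream>

int main() {
    const int width = 3840, height = 2160;              // matches the 3840x2160 runs above
    const size_t bytes = size_t(width) * height * 3 * sizeof(float);

    oidn::DeviceRef device = oidn::newDevice(oidn::DeviceType::CPU);  // "CPU-Only"
    device.commit();

    // Device buffers for the noisy beauty image, auxiliary features, and the output.
    oidn::BufferRef color  = device.newBuffer(bytes);
    oidn::BufferRef albedo = device.newBuffer(bytes);
    oidn::BufferRef normal = device.newBuffer(bytes);
    oidn::BufferRef output = device.newBuffer(bytes);

    oidn::FilterRef filter = device.newFilter("RT");     // generic ray-tracing denoiser
    filter.setImage("color",  color,  oidn::Format::Float3, width, height);
    filter.setImage("albedo", albedo, oidn::Format::Float3, width, height);
    filter.setImage("normal", normal, oidn::Format::Float3, width, height);
    filter.setImage("output", output, oidn::Format::Float3, width, height);
    filter.set("hdr", true);                             // as in the RT.hdr_alb_nrm run
    filter.commit();
    filter.execute();

    const char* err = nullptr;
    if (device.getError(err) != oidn::Error::None)
        std::cerr << "OIDN error: " << err << std::endl;
    return 0;
}

Linking would typically be against libOpenImageDenoise (e.g. g++ -O3 denoise.cpp -lOpenImageDenoise).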

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only

Intel Open Image Denoise 2.1 - Images / Sec, More Is Better:
c: 0.22
b: 0.22
a: 0.22

Intel Open Image Denoise

Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only

Intel Open Image Denoise 2.1 - Images / Sec, More Is Better:
c: 0.11
b: 0.11
a: 0.11

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 8.66246 (MIN: 7.18)
a: 8.86916 (MIN: 7.17)
b: 8.99248 (MIN: 8.17)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
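
The oneDNN "IP Shapes" harnesses time inner-product (fully connected) primitives over a batch of shapes; the result above is the f32 CPU variant. For orientation only, here is a minimal sketch using the oneDNN 3.x C++ API to create and execute a single f32 inner-product forward primitive on the CPU engine. The tensor sizes and fill values are arbitrary assumptions and do not correspond to the benchmark's actual shape set.

#include <oneapi/dnnl/dnnl.hpp>
#include <vector>

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    // Toy fully connected layer: 16 samples, 64 inputs, 32 outputs (illustrative only).
    const memory::dim N = 16, IC = 64, OC = 32;
    memory::desc src_md({N, IC}, memory::data_type::f32, memory::format_tag::nc);
    memory::desc wei_md({OC, IC}, memory::data_type::f32, memory::format_tag::oi);
    memory::desc dst_md({N, OC}, memory::data_type::f32, memory::format_tag::nc);

    std::vector<float> src(N * IC, 1.f), wei(OC * IC, 0.5f), dst(N * OC, 0.f);
    memory src_mem(src_md, eng, src.data());
    memory wei_mem(wei_md, eng, wei.data());
    memory dst_mem(dst_md, eng, dst.data());

    // oneDNN 3.x style: the primitive descriptor is created directly from the engine.
    auto ip_pd = inner_product_forward::primitive_desc(
        eng, prop_kind::forward_inference, src_md, wei_md, dst_md);
    inner_product_forward(ip_pd).execute(strm, {{DNNL_ARG_SRC, src_mem},
                                                {DNNL_ARG_WEIGHTS, wei_mem},
                                                {DNNL_ARG_DST, dst_mem}});
    strm.wait();
    return 0;
}

The benchmark itself times many such shapes and reports milliseconds (fewer is better); linking is typically against libdnnl (e.g. g++ -O3 ip.cpp -ldnnl).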

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
b: 5.57229 (MIN: 5.48)
c: 5.59833 (MIN: 5.5)
a: 5.61120 (MIN: 5.48)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 1.15590 (MIN: 0.9)
b: 1.25823 (MIN: 1.01)
a: 1.28110 (MIN: 1.02)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 2.19780 (MIN: 2.06)
b: 2.22532 (MIN: 2.04)
a: 2.22678 (MIN: 2.03)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 2.42519 (MIN: 2.15)
b: 2.75383 (MIN: 2.29)
a: 2.75717 (MIN: 2.4)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 3.62443 (MIN: 3.4)
b: 3.75671 (MIN: 3.37)
a: 3.79236 (MIN: 3.42)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
b: 14.94 (MIN: 14.72)
a: 15.02 (MIN: 14.65)
c: 15.09 (MIN: 14.85)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 9.6233 (MIN: 5.21)
b: 10.4152 (MIN: 6.45)
a: 10.8682 (MIN: 6.35)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 6.36614 (MIN: 4.81)
a: 7.60937 (MIN: 6.03)
b: 7.67500 (MIN: 6.03)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
a: 14.25 (MIN: 14.01)
c: 14.27 (MIN: 13.71)
b: 14.58 (MIN: 14.19)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 1.29785 (MIN: 1.08)
a: 1.52901 (MIN: 1.26)
b: 1.67745 (MIN: 1.3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 1.55587 (MIN: 1.16)
a: 1.92455 (MIN: 1.46)
b: 2.01069 (MIN: 1.46)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 3876.53 (MIN: 3780.33)
a: 4898.17 (MIN: 4855.43)
b: 4967.81 (MIN: 4924.41)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 2127.79 (MIN: 2040.61)
a: 2514.18 (MIN: 2466.17)
b: 2555.43 (MIN: 2502.86)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 4376.69 (MIN: 4320.61)
a: 4919.23 (MIN: 4873.66)
b: 4965.28 (MIN: 4916)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 6.72684 (MIN: 6.42)
b: 6.74215 (MIN: 6.46)
a: 6.76831 (MIN: 6.43)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 8.70755 (MIN: 7.2)
b: 9.73785 (MIN: 8.44)
a: 9.93332 (MIN: 8.67)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 4.42411 (MIN: 3.56)
a: 5.17232 (MIN: 3.57)
b: 5.22381 (MIN: 3.56)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 2320.01 (MIN: 2157.6)
a: 2506.07 (MIN: 2464.71)
b: 2555.28 (MIN: 2505.86)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 4574.04 (MIN: 4522.29)
b: 4925.48 (MIN: 4884.66)
a: 4956.70 (MIN: 4902.84)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better:
c: 2382.55 (MIN: 2326.78)
a: 2520.79 (MIN: 2471.71)
b: 2524.94 (MIN: 2468.28)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVKL

Benchmark: vklBenchmarkCPU ISPC

OpenVKL 2.0.0 - Items / Sec, More Is Better:
c: 171 (MIN: 12 / MAX: 2390)
b: 171 (MIN: 12 / MAX: 2399)
a: 170 (MIN: 12 / MAX: 2389)

OpenVKL

Benchmark: vklBenchmarkCPU Scalar

OpenVKL 2.0.0 - Items / Sec, More Is Better:
c: 69 (MIN: 5 / MAX: 1266)
b: 67 (MIN: 5 / MAX: 1194)
a: 67 (MIN: 5 / MAX: 1197)

QuantLib

Configuration: Multi-Threaded

QuantLib 1.32 - MFLOPS, More Is Better:
b: 28768.7
a: 28605.0
c: 28299.9
1. (CXX) g++ options: -O3 -march=native -fPIE -pie

QuantLib

Configuration: Single-Threaded

QuantLib 1.32 - MFLOPS, More Is Better:
b: 4229.3
a: 4133.0
c: 4074.6
1. (CXX) g++ options: -O3 -march=native -fPIE -pie
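
QuantLib's bundled benchmark aggregates a set of instrument-pricing test cases into a single MFLOPS figure, run here both single- and multi-threaded. For orientation only, here is a minimal, self-contained QuantLib C++ sketch of the kind of term-structure machinery those cases rest on: it builds a flat 3% yield curve and queries a one-year discount factor. The evaluation date, rate, and day-count convention are illustrative assumptions of this sketch, not benchmark parameters.

#include <ql/quantlib.hpp>
#include <iostream>

int main() {
    using namespace QuantLib;

    // Fix the evaluation date (arbitrary choice for this sketch).
    Date today(24, October, 2023);
    Settings::instance().evaluationDate() = today;

    // Flat 3% continuously compounded curve with an ACT/365F day count.
    Handle<YieldTermStructure> curve(
        ext::make_shared<FlatForward>(today, 0.03, Actual365Fixed()));

    // Discount factor one year out; the real benchmark cases price options, bonds, etc.
    std::cout << "1y discount factor: "
              << curve->discount(today + Period(1, Years)) << std::endl;
    return 0;
}

Linking would typically be against libQuantLib (e.g. g++ -O3 curve.cpp -lQuantLib), in the same spirit as the -O3 -march=native build flags noted above.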


Phoronix Test Suite v10.8.5