5600x s: AMD Ryzen 5 5600X 6-Core testing with an ASUS TUF GAMING B550M-PLUS (WI-FI) (1216 BIOS) motherboard and an XFX AMD Radeon R9 285/380 2GB graphics card on Ubuntu 21.04, via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2103295-IB-5600XS92352&gru .
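For readers wanting to reproduce a comparable run, the sketch below drives the Phoronix Test Suite from Python. It is a minimal sketch under stated assumptions: the test-profile names pts/sysbench and pts/onednn are inferred from the tests shown in this result file, and the exact profile versions used here are not recorded in this export.

```python
# Minimal sketch: kick off comparable benchmark runs with the Phoronix Test Suite.
# Assumes phoronix-test-suite is installed and on PATH; the profile names
# "pts/sysbench" and "pts/onednn" are assumptions inferred from this result file.
import subprocess

def run_pts(profile: str) -> None:
    # "benchmark" installs the profile if needed and then runs it interactively;
    # for unattended runs, batch-setup followed by batch-benchmark is the usual route.
    subprocess.run(["phoronix-test-suite", "benchmark", profile], check=True)

if __name__ == "__main__":
    for profile in ("pts/sysbench", "pts/onednn"):
        run_pts(profile)
```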
5600x s

Processor: AMD Ryzen 5 5600X 6-Core @ 3.70GHz (6 Cores / 12 Threads)
Motherboard: ASUS TUF GAMING B550M-PLUS (WI-FI) (1216 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: 1000GB Western Digital WD_BLACK SN850 1TB
Graphics: XFX AMD Radeon R9 285/380 2GB (918/1375MHz)
Audio: AMD Tonga HDMI Audio
Monitor: LG Ultra HD
Network: Realtek RTL8125 2.5GbE + Intel Wi-Fi 6 AX200
OS: Ubuntu 21.04
Kernel: 5.10.0-14-generic (x86_64)
Desktop: GNOME Shell 3.38.3
Display Server: X Server 1.20.9 + Wayland
OpenGL: 4.6 Mesa 20.3.4 (LLVM 11.0.1)
Compiler: GCC 10.2.1 20210306
File-System: ext4
Screen Resolution: 3840x2160

Runs 1, 2, and 3 all used the identical configuration above.

Kernel Details: Transparent Huge Pages: madvise
Environment Details: DEBUGINFOD_URLS=
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Gd1agl/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Gd1agl/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0xa201009
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling; srbds: Not affected; tsx_async_abort: Not affected
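The Processor Details and Security Details above come from standard Linux kernel interfaces, so they can be re-checked on the test machine with a small sketch like the one below. The sysfs paths are the stock kernel locations, not anything specific to this result file; how the Phoronix Test Suite itself gathers these values is not shown in this export.

```python
# Minimal sketch: read the scaling governor and CPU vulnerability mitigations
# reported in the Processor Details / Security Details sections above.
from pathlib import Path

def scaling_governor(cpu: int = 0) -> str:
    # Per-CPU cpufreq governor, e.g. "schedutil" on this system.
    return Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_governor").read_text().strip()

def vulnerabilities() -> dict[str, str]:
    # One file per CVE class (itlb_multihit, l1tf, mds, meltdown, ...).
    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    return {p.name: p.read_text().strip() for p in sorted(vuln_dir.iterdir())}

if __name__ == "__main__":
    print("Scaling governor:", scaling_governor())
    for name, status in vulnerabilities().items():
        print(f"{name}: {status}")
```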
5600x s

Test                                                                   Run 1       Run 2       Run 3
Sysbench: CPU (Events/sec)                                             35668.15    35670.71    35672.42
Sysbench: RAM / Memory (MiB/sec)                                       22058.62    21958.50    22037.31
oneDNN: IP Shapes 1D - f32 - CPU (ms)                                  4.21332     4.19463     4.19329
oneDNN: IP Shapes 3D - f32 - CPU (ms)                                  10.3621     10.3024     10.3047
oneDNN: IP Shapes 1D - u8s8f32 - CPU (ms)                              1.79837     1.79867     1.79549
oneDNN: IP Shapes 3D - u8s8f32 - CPU (ms)                              1.76426     1.75512     1.75632
oneDNN: Convolution Batch Shapes Auto - f32 - CPU (ms)                 21.0331     20.9934     20.9591
oneDNN: Deconvolution Batch shapes_1d - f32 - CPU (ms)                 6.54830     6.68868     7.23395
oneDNN: Deconvolution Batch shapes_3d - f32 - CPU (ms)                 7.43471     7.44044     7.43837
oneDNN: Convolution Batch Shapes Auto - u8s8f32 - CPU (ms)             16.9850     16.9944     16.9593
oneDNN: Deconvolution Batch shapes_1d - u8s8f32 - CPU (ms)             2.34701     2.34952     2.34982
oneDNN: Deconvolution Batch shapes_3d - u8s8f32 - CPU (ms)             3.86561     3.88981     3.87551
oneDNN: Recurrent Neural Network Training - f32 - CPU (ms)             3845.26     3847.97     3843.00
oneDNN: Recurrent Neural Network Inference - f32 - CPU (ms)            2202.60     2197.83     2200.55
oneDNN: Recurrent Neural Network Training - u8s8f32 - CPU (ms)         3852.59     3849.16     3855.04
oneDNN: Recurrent Neural Network Inference - u8s8f32 - CPU (ms)        2199.43     2198.47     2202.12
oneDNN: Matrix Multiply Batch Shapes Transformer - f32 - CPU (ms)      2.50845     2.49156     2.47878
oneDNN: Recurrent Neural Network Training - bf16bf16bf16 - CPU (ms)    3853.33     3850.92     3847.38
oneDNN: Recurrent Neural Network Inference - bf16bf16bf16 - CPU (ms)   2198.80     2197.21     2199.59
oneDNN: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU (ms)  3.09227     3.09458     3.09158

For the Sysbench results (Events/sec, MiB/sec), more is better; for the oneDNN results (ms), fewer is better.
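The detailed results below appear to report each run as an average over N internal trials together with a standard error, i.e. the usual SE = s / sqrt(N). The sketch below shows that arithmetic; it uses the three Sysbench CPU run averages from the table above purely as example input, whereas the SE figures printed in the result graphs are computed within each run over its own trials.

```python
# Minimal sketch of the mean / standard-error arithmetic behind the
# "SE +/- x, N = y" figures in the results below (SE = sample stdev / sqrt(N)).
# Example input: the three Sysbench CPU run averages from the overview table.
import statistics
from math import sqrt

def mean_and_se(samples: list[float]) -> tuple[float, float]:
    mean = statistics.fmean(samples)
    se = statistics.stdev(samples) / sqrt(len(samples))
    return mean, se

if __name__ == "__main__":
    runs = [35668.15, 35670.71, 35672.42]  # Sysbench CPU, runs 1-3
    mean, se = mean_and_se(runs)
    print(f"mean = {mean:.2f} events/sec, SE = +/- {se:.2f}, N = {len(runs)}")
```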
Sysbench 1.0.20 - Test: CPU (Events Per Second; more is better)
Run 1: 35668.15 (SE +/- 1.25, N = 3)
Run 2: 35670.71 (SE +/- 0.52, N = 3)
Run 3: 35672.42 (SE +/- 1.94, N = 3)
(CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
Sysbench 1.0.20 - Test: RAM / Memory (MiB/sec; more is better)
Run 1: 22058.62 (SE +/- 3.35, N = 3)
Run 2: 21958.50 (SE +/- 49.01, N = 3)
Run 3: 22037.31 (SE +/- 22.76, N = 3)
(CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
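Sysbench can also be invoked directly to approximate these two workloads, as in the sketch below. The thread count of 12 mirrors the 6-core/12-thread CPU above and is an assumption; this export does not record the exact sysbench arguments the test profile used, so treat the commands as an approximation rather than the profile's precise invocation.

```python
# Minimal sketch: run sysbench's CPU and memory workloads directly.
# --threads=12 is an assumption matching the 6C/12T Ryzen 5 5600X; the exact
# arguments used by the test profile are not recorded in this export.
import subprocess

def run_sysbench(test: str, threads: int = 12) -> str:
    result = subprocess.run(
        ["sysbench", test, f"--threads={threads}", "run"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    for test in ("cpu", "memory"):
        print(run_sysbench(test))
```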
oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 4.21332 (SE +/- 0.00872, N = 3, MIN: 3.86)
Run 2: 4.19463 (SE +/- 0.00276, N = 3, MIN: 3.9)
Run 3: 4.19329 (SE +/- 0.00469, N = 3, MIN: 3.89)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 10.36 (SE +/- 0.02, N = 3, MIN: 10.18)
Run 2: 10.30 (SE +/- 0.02, N = 3, MIN: 10.16)
Run 3: 10.30 (SE +/- 0.01, N = 3, MIN: 10.12)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Run 1: 1.79837 (SE +/- 0.00273, N = 3, MIN: 1.72)
Run 2: 1.79867 (SE +/- 0.00435, N = 3, MIN: 1.72)
Run 3: 1.79549 (SE +/- 0.00339, N = 3, MIN: 1.72)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Run 1: 1.76426 (SE +/- 0.01303, N = 3, MIN: 1.62)
Run 2: 1.75512 (SE +/- 0.00162, N = 3, MIN: 1.59)
Run 3: 1.75632 (SE +/- 0.00854, N = 3, MIN: 1.6)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 21.03 (SE +/- 0.00, N = 3, MIN: 19.52)
Run 2: 20.99 (SE +/- 0.04, N = 3, MIN: 19.53)
Run 3: 20.96 (SE +/- 0.02, N = 3, MIN: 19.52)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 6.54830 (SE +/- 0.14848, N = 15, MIN: 5.31)
Run 2: 6.68868 (SE +/- 0.12559, N = 15, MIN: 5.3)
Run 3: 7.23395 (SE +/- 0.31077, N = 12, MIN: 5.33)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 7.43471 (SE +/- 0.00220, N = 3, MIN: 7.22)
Run 2: 7.44044 (SE +/- 0.00275, N = 3, MIN: 7.21)
Run 3: 7.43837 (SE +/- 0.00747, N = 3, MIN: 7.24)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Run 1: 16.99 (SE +/- 0.04, N = 3, MIN: 15.84)
Run 2: 16.99 (SE +/- 0.21, N = 3, MIN: 15.66)
Run 3: 16.96 (SE +/- 0.19, N = 15, MIN: 15.46)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Run 1: 2.34701 (SE +/- 0.00198, N = 3, MIN: 2.27)
Run 2: 2.34952 (SE +/- 0.00265, N = 3, MIN: 2.25)
Run 3: 2.34982 (SE +/- 0.00389, N = 3, MIN: 2.25)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Run 1: 3.86561 (SE +/- 0.01510, N = 3, MIN: 3.61)
Run 2: 3.88981 (SE +/- 0.01672, N = 3, MIN: 3.64)
Run 3: 3.87551 (SE +/- 0.01281, N = 3, MIN: 3.66)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 3845.26 (SE +/- 0.83, N = 3, MIN: 3834.69)
Run 2: 3847.97 (SE +/- 1.69, N = 3, MIN: 3832.17)
Run 3: 3843.00 (SE +/- 3.44, N = 3, MIN: 3830.07)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 2202.60 (SE +/- 0.76, N = 3, MIN: 2195.91)
Run 2: 2197.83 (SE +/- 3.37, N = 3, MIN: 2188.28)
Run 3: 2200.55 (SE +/- 1.93, N = 3, MIN: 2191.04)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Run 1: 3852.59 (SE +/- 3.32, N = 3, MIN: 3836.73)
Run 2: 3849.16 (SE +/- 1.46, N = 3, MIN: 3834.01)
Run 3: 3855.04 (SE +/- 2.25, N = 3, MIN: 3843.44)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Run 1: 2199.43 (SE +/- 3.48, N = 3, MIN: 2186.56)
Run 2: 2198.47 (SE +/- 3.46, N = 3, MIN: 2188.64)
Run 3: 2202.12 (SE +/- 1.81, N = 3, MIN: 2188.91)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 2.50845 (SE +/- 0.00313, N = 3, MIN: 2.45)
Run 2: 2.49156 (SE +/- 0.00013, N = 3, MIN: 2.44)
Run 3: 2.47878 (SE +/- 0.00392, N = 3, MIN: 2.43)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
Run 1: 3853.33 (SE +/- 1.28, N = 3, MIN: 3843.66)
Run 2: 3850.92 (SE +/- 1.80, N = 3, MIN: 3841.15)
Run 3: 3847.38 (SE +/- 1.40, N = 3, MIN: 3832.5)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
Run 1: 2198.80 (SE +/- 1.00, N = 3, MIN: 2191.61)
Run 2: 2197.21 (SE +/- 1.58, N = 3, MIN: 2188.31)
Run 3: 2199.59 (SE +/- 1.09, N = 3, MIN: 2192.65)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Run 1: 3.09227 (SE +/- 0.00181, N = 3, MIN: 2.97)
Run 2: 3.09458 (SE +/- 0.00482, N = 3, MIN: 2.98)
Run 3: 3.09158 (SE +/- 0.00201, N = 3, MIN: 2.98)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
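Since all three result sets came from the same configuration, the spread between them is a quick sanity check on run-to-run variance. The sketch below flags, for a few of the oneDNN harnesses in the overview table, the best run and the relative spread (max/min - 1); only the Deconvolution Batch shapes_1d f32 case stands out, at roughly 10%, consistent with its larger standard errors above. The data is copied from this result file; extending the dictionary to the remaining harnesses follows the same pattern.

```python
# Minimal sketch: compare the three runs for selected oneDNN harnesses from the
# overview table. oneDNN results are times in ms, so lower is better.
RESULTS = {
    "IP Shapes 1D - f32": (4.21332, 4.19463, 4.19329),
    "Deconvolution Batch shapes_1d - f32": (6.54830, 6.68868, 7.23395),
    "Recurrent Neural Network Training - f32": (3845.26, 3847.97, 3843.00),
    # ... remaining harnesses from the overview table can be added the same way.
}

for harness, runs in RESULTS.items():
    best_run = min(range(len(runs)), key=lambda i: runs[i]) + 1  # 1-based run index
    spread = max(runs) / min(runs) - 1                           # relative spread
    print(f"{harness}: best = run {best_run}, spread = {spread:.1%}")
```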
Phoronix Test Suite v10.8.5