3700X More march: AMD Ryzen 7 3700X 8-Core testing with a Gigabyte A320M-S2H-CF (F52a BIOS) and HIS AMD Radeon HD 7750/8740 / R7 250E 1GB on Ubuntu 20.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2103170-IB-3700XMORE69&sro .
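For anyone wanting to rerun this comparison locally, the Phoronix Test Suite can typically take the OpenBenchmarking.org result ID directly and benchmark against it. The sketch below is only a thin Python wrapper around that CLI; it assumes phoronix-test-suite is installed and on PATH, and that the "benchmark <result-id>" subcommand behaves as it usually does for public results.

```python
# Sketch: rerun this result's tests locally for comparison, assuming the
# phoronix-test-suite CLI is installed. "benchmark <result-id>" is the usual
# way to install the same tests and compare against a public result.
import subprocess
import sys

RESULT_ID = "2103170-IB-3700XMORE69"  # ID from the openbenchmarking.org URL above

try:
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)
except FileNotFoundError:
    sys.exit("phoronix-test-suite not found on PATH")
except subprocess.CalledProcessError as e:
    sys.exit(f"phoronix-test-suite exited with status {e.returncode}")
```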
3700X More march - System Details (identical for runs 1, 2 and 3)

  Processor:         AMD Ryzen 7 3700X 8-Core @ 3.60GHz (8 Cores / 16 Threads)
  Motherboard:       Gigabyte A320M-S2H-CF (F52a BIOS)
  Chipset:           AMD Starship/Matisse
  Memory:            8GB
  Disk:              240GB TOSHIBA RC100
  Graphics:          HIS AMD Radeon HD 7750/8740 / R7 250E 1GB
  Audio:             AMD Oland/Hainan/Cape
  Monitor:           DELL S2409W
  Network:           Realtek RTL8111/8168/8411
  OS:                Ubuntu 20.04
  Kernel:            5.8.1-050801-generic (x86_64)
  Desktop:           GNOME Shell 3.36.4
  Display Server:    X Server 1.20.9
  OpenGL:            4.5 Mesa 20.0.8 (LLVM 10.0.0)
  Compiler:          GCC 9.3.0
  File-System:       ext4
  Screen Resolution: 1920x1080

  Kernel Details:    Transparent Huge Pages: madvise
  Compiler Details:  --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x8701021
  Python Details:    Python 3.8.5
  Security Details:  itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
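As a rough cross-check of the hardware/software table above, the following Python sketch reads a few of the same fields (CPU model, thread count, kernel, scaling governor) from standard Linux interfaces. It is only an illustration of where those values come from on a stock Ubuntu install, not the Phoronix Test Suite's own detection code.

```python
# Sketch: read a few of the system details shown above from standard Linux
# interfaces. Illustrative only; not how the Phoronix Test Suite gathers
# its system table.
import platform

def cpu_model_and_threads():
    model, threads = "unknown", 0
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("model name") and model == "unknown":
                model = line.split(":", 1)[1].strip()
            elif line.startswith("processor"):
                threads += 1
    return model, threads

def scaling_governor(cpu=0):
    path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_governor"
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "unavailable"

if __name__ == "__main__":
    model, threads = cpu_model_and_threads()
    print(f"Processor:         {model} ({threads} threads)")
    print(f"Kernel:            {platform.release()} ({platform.machine()})")
    print(f"Scaling Governor:  {scaling_governor()}")
```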
3700X More march - Result Overview (runs 1, 2, 3; no run-3 result is recorded for the tests after oneDNN RNN Training f32)

  Test / Configuration | Run 1 | Run 2 | Run 3
  incompact3d: input.i3d 129 Cells Per Direction | 40.3466771 | 40.4793879 | 40.7185669
  incompact3d: input.i3d 192 Cells Per Direction | 318.211354 | 316.528941 | 317.006622
  svt-hevc: 1 - Bosphorus 1080p | 7.63 | 7.59 | 7.61
  svt-hevc: 7 - Bosphorus 1080p | 106.19 | 105.83 | 105.92
  svt-hevc: 10 - Bosphorus 1080p | 209.25 | 209.21 | 209.45
  svt-vp9: VMAF Optimized - Bosphorus 1080p | 136.60 | 134.18 | 135.52
  svt-vp9: PSNR/SSIM Optimized - Bosphorus 1080p | 143.24 | 143.51 | 143.34
  svt-vp9: Visual Quality Optimized - Bosphorus 1080p | 114.39 | 114.45 | 114.17
  build-mesa: Time To Compile | 55.883 | 55.559 | 55.758
  onednn: IP Shapes 1D - f32 - CPU | 5.56715 | 5.53204 | 5.53438
  onednn: IP Shapes 3D - f32 - CPU | 10.2333 | 10.7400 | 10.9520
  onednn: IP Shapes 1D - u8s8f32 - CPU | 2.63018 | 2.62524 | 11.59989
  onednn: IP Shapes 3D - u8s8f32 - CPU | 2.38623 | 2.68827 | 2.46096
  onednn: Convolution Batch Shapes Auto - f32 - CPU | 23.0380 | 23.0943 | 22.9538
  onednn: Deconvolution Batch shapes_1d - f32 - CPU | 8.99245 | 8.70508 | 8.71411
  onednn: Deconvolution Batch shapes_3d - f32 - CPU | 6.69724 | 6.74862 | 6.73558
  onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU | 21.0584 | 21.522 | 21.2447
  onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU | 3.63976 | 3.64478 | 3.64264
  onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU | 4.62287 | 4.64339 | 4.63664
  onednn: Recurrent Neural Network Training - f32 - CPU | 3899.22 | 3936.82 | 3811.13
  onednn: Recurrent Neural Network Inference - f32 - CPU | 2758.75 | 2830.69 | -
  onednn: Recurrent Neural Network Training - u8s8f32 - CPU | 3873.42 | 3938.80 | -
  onednn: Recurrent Neural Network Inference - u8s8f32 - CPU | 2746.85 | 2819.51 | -
  onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU | 4.82779 | 4.94477 | -
  onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU | 3858.11 | 3919.20 | -
  onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU | 2743.14 | 2801.03 | -
  onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU | 3.05660 | 3.06767 | -
  sysbench: RAM / Memory | 10289.34 | 10276.90 | -
  sysbench: CPU | 17383.33 | 17341.94 | -
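One quick sanity check on the table above is the run-to-run spread of each result. The Python sketch below computes the mean and relative spread for a handful of entries copied from the table; the selection and helper layout are ad hoc.

```python
# Sketch: run-to-run spread for a few results from the overview table.
# Values copied from the table; relative spread = (max - min) / mean.
results = {
    "incompact3d: 129 Cells Per Direction (s)": [40.3466771, 40.4793879, 40.7185669],
    "svt-hevc: 10 - Bosphorus 1080p (FPS)":     [209.25, 209.21, 209.45],
    "onednn: IP Shapes 1D - u8s8f32 (ms)":      [2.63018, 2.62524, 11.59989],
    "sysbench: CPU (events/sec)":               [17383.33, 17341.94],
}

for name, runs in results.items():
    avg = sum(runs) / len(runs)
    spread = (max(runs) - min(runs)) / avg * 100.0
    print(f"{name:45s} mean={avg:10.3f}  spread={spread:6.1f}%")
```

Most of the results track each other closely across the runs; the oneDNN IP Shapes 1D u8s8f32 figure from run 3 is the clear outlier, and its per-run breakdown further down (a large standard error over N = 12 samples) points to unstable samples rather than a consistent slowdown.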
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, Fewer Is Better)
  Run 1: 40.35 (SE +/- 0.04, N = 3)
  Run 2: 40.48 (SE +/- 0.05, N = 3)
  Run 3: 40.72 (SE +/- 0.22, N = 3)
  (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 192 Cells Per Direction (Seconds, Fewer Is Better)
  Run 1: 318.21 (SE +/- 1.26, N = 3)
  Run 2: 316.53 (SE +/- 0.22, N = 3)
  Run 3: 317.01 (SE +/- 0.35, N = 3)
  (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Run 1: 7.63 (SE +/- 0.02, N = 3)
  Run 2: 7.59 (SE +/- 0.01, N = 3)
  Run 3: 7.61 (SE +/- 0.03, N = 3)
  (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Run 1: 106.19 (SE +/- 0.04, N = 3)
  Run 2: 105.83 (SE +/- 0.29, N = 3)
  Run 3: 105.92 (SE +/- 0.22, N = 3)
  (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Run 1: 209.25 (SE +/- 0.31, N = 3)
  Run 2: 209.21 (SE +/- 0.26, N = 3)
  Run 3: 209.45 (SE +/- 0.49, N = 3)
  (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Run 1: 136.60 (SE +/- 1.75, N = 3)
  Run 2: 134.18 (SE +/- 4.27, N = 12)
  Run 3: 135.52 (SE +/- 3.07, N = 12)
  (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Run 1: 143.24 (SE +/- 0.20, N = 3)
  Run 2: 143.51 (SE +/- 0.39, N = 3)
  Run 3: 143.34 (SE +/- 0.23, N = 3)
  (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Run 1: 114.39 (SE +/- 0.20, N = 3)
  Run 2: 114.45 (SE +/- 0.10, N = 3)
  Run 3: 114.17 (SE +/- 0.46, N = 3)
  (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
Timed Mesa Compilation 21.0 - Time To Compile (Seconds, Fewer Is Better)
  Run 1: 55.88 (SE +/- 0.01, N = 3)
  Run 2: 55.56 (SE +/- 0.10, N = 3)
  Run 3: 55.76 (SE +/- 0.06, N = 3)
oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 5.56715 (SE +/- 0.02182, N = 3, MIN: 5.4)
  Run 2: 5.53204 (SE +/- 0.01225, N = 3, MIN: 5.37)
  Run 3: 5.53438 (SE +/- 0.00906, N = 3, MIN: 5.4)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 10.23 (SE +/- 0.03, N = 3, MIN: 9.78)
  Run 2: 10.74 (SE +/- 0.02, N = 3, MIN: 10.43)
  Run 3: 10.95 (SE +/- 0.01, N = 3, MIN: 10.66)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 2.63018 (SE +/- 0.00406, N = 3, MIN: 2.58)
  Run 2: 2.62524 (SE +/- 0.00360, N = 3, MIN: 2.58)
  Run 3: 11.59989 (SE +/- 5.08322, N = 12, MIN: 2.57)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
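The run 3 figure above is worth a second look: an 11.6 ms average with SE +/- 5.08 over 12 samples, yet a minimum of about 2.57 ms in line with runs 1 and 2, is the pattern produced by a few very slow samples rather than a uniformly slower run. The sketch below shows how such outliers inflate the mean and the standard error (taken here as s / sqrt(N)) while leaving MIN untouched; the sample values are hypothetical, since the raw per-sample timings are not part of this export.

```python
# Sketch: how a handful of outlier samples inflate the mean and SE while
# leaving MIN untouched. The sample values are hypothetical, not the actual
# oneDNN timings behind run 3.
from statistics import mean, stdev
from math import sqrt

samples_ms = [2.6, 2.6, 2.7, 2.6, 2.6, 2.7, 2.6, 2.6, 2.6, 2.7, 55.0, 60.0]

n = len(samples_ms)
se = stdev(samples_ms) / sqrt(n)   # standard error of the mean
print(f"N = {n}, mean = {mean(samples_ms):.2f} ms, "
      f"SE = +/- {se:.2f} ms, MIN = {min(samples_ms):.2f} ms")
```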
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 2.38623 (SE +/- 0.01088, N = 3, MIN: 2.3)
  Run 2: 2.68827 (SE +/- 0.00780, N = 3, MIN: 2.61)
  Run 3: 2.46096 (SE +/- 0.00446, N = 3, MIN: 2.4)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 23.04 (SE +/- 0.00, N = 3, MIN: 22.72)
  Run 2: 23.09 (SE +/- 0.03, N = 3, MIN: 22.81)
  Run 3: 22.95 (SE +/- 0.01, N = 3, MIN: 22.74)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 8.99245 (SE +/- 0.01522, N = 3, MIN: 5.17)
  Run 2: 8.70508 (SE +/- 0.16196, N = 15, MIN: 5.13)
  Run 3: 8.71411 (SE +/- 0.21403, N = 15, MIN: 5.16)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 6.69724 (SE +/- 0.00606, N = 3, MIN: 6.62)
  Run 2: 6.74862 (SE +/- 0.01474, N = 3, MIN: 6.64)
  Run 3: 6.73558 (SE +/- 0.00795, N = 3, MIN: 6.65)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 21.06 (SE +/- 0.04, N = 3, MIN: 20.68)
  Run 2: 21.52 (SE +/- 0.02, N = 3, MIN: 21.31)
  Run 3: 21.24 (SE +/- 0.03, N = 3, MIN: 20.94)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 3.63976 (SE +/- 0.00803, N = 3, MIN: 3.47)
  Run 2: 3.64478 (SE +/- 0.00448, N = 3, MIN: 3.47)
  Run 3: 3.64264 (SE +/- 0.01166, N = 3, MIN: 3.47)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 4.62287 (SE +/- 0.00500, N = 3, MIN: 4.43)
  Run 2: 4.64339 (SE +/- 0.00974, N = 3, MIN: 4.47)
  Run 3: 4.63664 (SE +/- 0.00935, N = 3, MIN: 4.45)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 3899.22 (SE +/- 2.05, N = 3, MIN: 3886.77)
  Run 2: 3936.82 (SE +/- 19.48, N = 3, MIN: 3902.02)
  Run 3: 3811.13 (SE +/- 4.52, N = 3, MIN: 3793.22)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 2758.75 (SE +/- 12.62, N = 3, MIN: 2706.38)
  Run 2: 2830.69 (SE +/- 7.42, N = 3, MIN: 2793.58)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 3873.42 (SE +/- 16.07, N = 3, MIN: 3842.1)
  Run 2: 3938.80 (SE +/- 6.78, N = 3, MIN: 3920.91)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 2746.85 (SE +/- 11.92, N = 3, MIN: 2702.15)
  Run 2: 2819.51 (SE +/- 7.65, N = 3, MIN: 2784.96)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 4.82779 (SE +/- 0.01300, N = 3, MIN: 4.66)
  Run 2: 4.94477 (SE +/- 0.01312, N = 3, MIN: 4.77)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 3858.11 (SE +/- 13.76, N = 3, MIN: 3834.74)
  Run 2: 3919.20 (SE +/- 2.50, N = 3, MIN: 3903.26)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 2743.14 (SE +/- 16.15, N = 3, MIN: 2706.62)
  Run 2: 2801.03 (SE +/- 8.10, N = 3, MIN: 2771.6)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Run 1: 3.05660 (SE +/- 0.00195, N = 3, MIN: 2.98)
  Run 2: 3.06767 (SE +/- 0.00076, N = 3, MIN: 3)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Sysbench 1.0.20 - Test: RAM / Memory (MiB/sec, More Is Better)
  Run 1: 10289.34 (SE +/- 4.79, N = 3)
  Run 2: 10276.90 (SE +/- 23.22, N = 3)
  (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
Sysbench 1.0.20 - Test: CPU (Events Per Second, More Is Better)
  Run 1: 17383.33 (SE +/- 5.91, N = 3)
  Run 2: 17341.94 (SE +/- 33.37, N = 3)
  (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
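Sysbench reports its CPU score in events per second, where each event is a fixed block of prime-verification work. The loop below is a deliberately crude Python analogue meant only to make that unit concrete; it is not sysbench's implementation, it is single-threaded, and the 10,000 upper bound is an assumed default rather than a setting recorded in this result.

```python
# Sketch: a crude analogue of sysbench's "CPU" workload, where one event is
# a pass of trial-division primality checks up to a bound. Not sysbench's
# actual code; the 10,000 bound is assumed, not taken from this result.
import time

def one_event(max_prime=10_000):
    count = 0
    for n in range(3, max_prime + 1):
        d = 2
        while d * d <= n:
            if n % d == 0:
                break
            d += 1
        else:               # no divisor found: n is prime
            count += 1
    return count

start = time.perf_counter()
events = 0
while time.perf_counter() - start < 2.0:   # run for roughly 2 seconds
    one_event()
    events += 1
elapsed = time.perf_counter() - start
print(f"{events / elapsed:.2f} events/sec (single-threaded Python)")
```

Since sysbench spreads the same work across threads and runs compiled code, absolute numbers from this sketch are not comparable with the roughly 17,000 events/sec reported above.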
Phoronix Test Suite v10.8.5