3700X More march

AMD Ryzen 7 3700X 8-Core testing with a Gigabyte A320M-S2H-CF (F52a BIOS) and HIS AMD Radeon HD 7750/8740 / R7 250E 1GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2103170-IB-3700XMORE69
Run  Date           Test Duration
1    March 17 2021  1 Hour, 3 Minutes
2    March 17 2021  1 Hour, 8 Minutes
3    March 17 2021  43 Minutes
                    Average: 58 Minutes



3700X More march - System Details (identical configuration across runs 1, 2, and 3)

Processor:          AMD Ryzen 7 3700X 8-Core @ 3.60GHz (8 Cores / 16 Threads)
Motherboard:        Gigabyte A320M-S2H-CF (F52a BIOS)
Chipset:            AMD Starship/Matisse
Memory:             8GB
Disk:               240GB TOSHIBA RC100
Graphics:           HIS AMD Radeon HD 7750/8740 / R7 250E 1GB
Audio:              AMD Oland/Hainan/Cape
Monitor:            DELL S2409W
Network:            Realtek RTL8111/8168/8411
OS:                 Ubuntu 20.04
Kernel:             5.8.1-050801-generic (x86_64)
Desktop:            GNOME Shell 3.36.4
Display Server:     X Server 1.20.9
OpenGL:             4.5 Mesa 20.0.8 (LLVM 10.0.0)
Compiler:           GCC 9.3.0
File-System:        ext4
Screen Resolution:  1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x8701021
Python Details: Python 3.8.5
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; relative performance of runs 1-3, spread roughly 100%-111%): oneDNN, Timed Mesa Compilation, SVT-VP9, Xcompact3d Incompact3d, SVT-HEVC

3700X More march - Results Summary ("-" = test not run for that result)

Test / Harness (units)                                                 Run 1        Run 2        Run 3
onednn: IP Shapes 1D - f32 - CPU (ms)                                  5.56715      5.53204      5.53438
onednn: IP Shapes 3D - f32 - CPU (ms)                                  10.2333      10.7400      10.9520
onednn: IP Shapes 1D - u8s8f32 - CPU (ms)                              2.63018      2.62524      11.59989
onednn: IP Shapes 3D - u8s8f32 - CPU (ms)                              2.38623      2.68827      2.46096
onednn: Convolution Batch Shapes Auto - f32 - CPU (ms)                 23.0380      23.0943      22.9538
onednn: Deconvolution Batch shapes_1d - f32 - CPU (ms)                 8.99245      8.70508      8.71411
onednn: Deconvolution Batch shapes_3d - f32 - CPU (ms)                 6.69724      6.74862      6.73558
onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU (ms)             21.0584      21.522       21.2447
onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU (ms)             3.63976      3.64478      3.64264
onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU (ms)             4.62287      4.64339      4.63664
onednn: Recurrent Neural Network Training - f32 - CPU (ms)             3899.22      3936.82      3811.13
onednn: Recurrent Neural Network Inference - f32 - CPU (ms)            2758.75      2830.69      -
onednn: Recurrent Neural Network Training - u8s8f32 - CPU (ms)         3873.42      3938.80      -
onednn: Recurrent Neural Network Inference - u8s8f32 - CPU (ms)        2746.85      2819.51      -
onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU (ms)      4.82779      4.94477      -
onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU (ms)    3858.11      3919.20      -
onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU (ms)   2743.14      2801.03      -
onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU (ms)  3.05660      3.06767      -
svt-hevc: 1 - Bosphorus 1080p (FPS)                                    7.63         7.59         7.61
svt-hevc: 7 - Bosphorus 1080p (FPS)                                    106.19       105.83       105.92
svt-hevc: 10 - Bosphorus 1080p (FPS)                                   209.25       209.21       209.45
svt-vp9: VMAF Optimized - Bosphorus 1080p (FPS)                        136.60       134.18       135.52
svt-vp9: PSNR/SSIM Optimized - Bosphorus 1080p (FPS)                   143.24       143.51       143.34
svt-vp9: Visual Quality Optimized - Bosphorus 1080p (FPS)              114.39       114.45       114.17
sysbench: RAM / Memory (MiB/sec)                                       10289.34     10276.90     -
sysbench: CPU (events/sec)                                             17383.33     17341.94     -
build-mesa: Time To Compile (seconds)                                  55.883       55.559       55.758
incompact3d: input.i3d 129 Cells Per Direction (seconds)               40.3466771   40.4793879   40.7185669
incompact3d: input.i3d 192 Cells Per Direction (seconds)               318.211354   316.528941   317.006622
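The Result Overview percentages are presumably obtained by normalizing each test to its best result and aggregating per run with a geometric mean. A sketch of that aggregation over an illustrative subset of the oneDNN results (not the full result set):

```python
import math

def geomean(values):
    """Geometric mean via logs, robust to wide value ranges."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Normalize each test so the best (lowest, for "fewer is better") run = 1.0,
# then aggregate the ratios per run. Two tests shown for illustration.
tests = {                        # run1, run2, run3 (ms, fewer is better)
    "IP Shapes 1D f32": (5.56715, 5.53204, 5.53438),
    "IP Shapes 3D f32": (10.2333, 10.7400, 10.9520),
}
runs = [[], [], []]
for results in tests.values():
    best = min(results)
    for i, r in enumerate(results):
        runs[i].append(r / best)   # 1.0 = fastest run for this test

for i, ratios in enumerate(runs, start=1):
    print(f"Run {i}: {geomean(ratios):.3f}x of fastest")
```

On the full result set this reproduces the overall spread the overview chart summarizes.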

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and, before that, MKL-DNN; it was rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
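The "total perf time" benchdnn reports amounts to running a primitive repeatedly and accumulating elapsed milliseconds. A toy stand-in for that measurement loop (pure Python with a placeholder workload, not real oneDNN primitives):

```python
import time

def bench(op, iters=100):
    """Run `op` repeatedly and return total elapsed time in milliseconds."""
    start = time.perf_counter()
    for _ in range(iters):
        op()
    return (time.perf_counter() - start) * 1000.0

# Stand-in workload: a small f32-style inner product.
a = [float(i) for i in range(256)]
b = [float(i % 7) for i in range(256)]
total_ms = bench(lambda: sum(x * y for x, y in zip(a, b)))
print(f"total perf time: {total_ms:.3f} ms")
```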

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 5.56715  (SE +/- 0.02182, N = 3, MIN: 5.4)
  2: 5.53204  (SE +/- 0.01225, N = 3, MIN: 5.37)
  3: 5.53438  (SE +/- 0.00906, N = 3, MIN: 5.4)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
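The "SE +/- x, N = y" annotations are the standard error of the mean over N samples, i.e. the sample standard deviation divided by sqrt(N). A quick sketch with hypothetical raw timings (the page publishes only the mean and SE, not the samples):

```python
import math
import statistics

def standard_error(samples):
    """SE of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical three timings (ms) for one harness run.
timings = [5.545, 5.589, 5.567]
mean = statistics.mean(timings)
print(f"{mean:.5f} ms, SE +/- {standard_error(timings):.5f}, N = {len(timings)}")
```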

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 10.23  (SE +/- 0.03, N = 3, MIN: 9.78)
  2: 10.74  (SE +/- 0.02, N = 3, MIN: 10.43)
  3: 10.95  (SE +/- 0.01, N = 3, MIN: 10.66)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  1: 2.63018   (SE +/- 0.00406, N = 3, MIN: 2.58)
  2: 2.62524   (SE +/- 0.00360, N = 3, MIN: 2.58)
  3: 11.59989  (SE +/- 5.08322, N = 12, MIN: 2.57)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  1: 2.38623  (SE +/- 0.01088, N = 3, MIN: 2.3)
  2: 2.68827  (SE +/- 0.00780, N = 3, MIN: 2.61)
  3: 2.46096  (SE +/- 0.00446, N = 3, MIN: 2.4)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 23.04  (SE +/- 0.00, N = 3, MIN: 22.72)
  2: 23.09  (SE +/- 0.03, N = 3, MIN: 22.81)
  3: 22.95  (SE +/- 0.01, N = 3, MIN: 22.74)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 8.99245  (SE +/- 0.01522, N = 3, MIN: 5.17)
  2: 8.70508  (SE +/- 0.16196, N = 15, MIN: 5.13)
  3: 8.71411  (SE +/- 0.21403, N = 15, MIN: 5.16)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 6.69724  (SE +/- 0.00606, N = 3, MIN: 6.62)
  2: 6.74862  (SE +/- 0.01474, N = 3, MIN: 6.64)
  3: 6.73558  (SE +/- 0.00795, N = 3, MIN: 6.65)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  1: 21.06  (SE +/- 0.04, N = 3, MIN: 20.68)
  2: 21.52  (SE +/- 0.02, N = 3, MIN: 21.31)
  3: 21.24  (SE +/- 0.03, N = 3, MIN: 20.94)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  1: 3.63976  (SE +/- 0.00803, N = 3, MIN: 3.47)
  2: 3.64478  (SE +/- 0.00448, N = 3, MIN: 3.47)
  3: 3.64264  (SE +/- 0.01166, N = 3, MIN: 3.47)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  1: 4.62287  (SE +/- 0.00500, N = 3, MIN: 4.43)
  2: 4.64339  (SE +/- 0.00974, N = 3, MIN: 4.47)
  3: 4.63664  (SE +/- 0.00935, N = 3, MIN: 4.45)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 3899.22  (SE +/- 2.05, N = 3, MIN: 3886.77)
  2: 3936.82  (SE +/- 19.48, N = 3, MIN: 3902.02)
  3: 3811.13  (SE +/- 4.52, N = 3, MIN: 3793.22)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 2758.75  (SE +/- 12.62, N = 3, MIN: 2706.38)
  2: 2830.69  (SE +/- 7.42, N = 3, MIN: 2793.58)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 3873.42  (SE +/- 16.07, N = 3, MIN: 3842.1)
  2: 3938.80  (SE +/- 6.78, N = 3, MIN: 3920.91)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 2746.85  (SE +/- 11.92, N = 3, MIN: 2702.15)
  2: 2819.51  (SE +/- 7.65, N = 3, MIN: 2784.96)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 4.82779  (SE +/- 0.01300, N = 3, MIN: 4.66)
  2: 4.94477  (SE +/- 0.01312, N = 3, MIN: 4.77)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 3858.11  (SE +/- 13.76, N = 3, MIN: 3834.74)
  2: 3919.20  (SE +/- 2.50, N = 3, MIN: 3903.26)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 2743.14  (SE +/- 16.15, N = 3, MIN: 2706.62)
  2: 2801.03  (SE +/- 8.10, N = 3, MIN: 2771.6)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 3.05660  (SE +/- 0.00195, N = 3, MIN: 2.98)
  2: 3.06767  (SE +/- 0.00076, N = 3, MIN: 3)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  1: 7.63  (SE +/- 0.02, N = 3)
  2: 7.59  (SE +/- 0.01, N = 3)
  3: 7.61  (SE +/- 0.03, N = 3)
  (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  1: 106.19  (SE +/- 0.04, N = 3)
  2: 105.83  (SE +/- 0.29, N = 3)
  3: 105.92  (SE +/- 0.22, N = 3)
  (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  1: 209.25  (SE +/- 0.31, N = 3)
  2: 209.21  (SE +/- 0.26, N = 3)
  3: 209.45  (SE +/- 0.49, N = 3)
  (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  1: 136.60  (SE +/- 1.75, N = 3)
  2: 134.18  (SE +/- 4.27, N = 12)
  3: 135.52  (SE +/- 3.07, N = 12)
  (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  1: 143.24  (SE +/- 0.20, N = 3)
  2: 143.51  (SE +/- 0.39, N = 3)
  3: 143.34  (SE +/- 0.23, N = 3)
  (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  1: 114.39  (SE +/- 0.20, N = 3)
  2: 114.45  (SE +/- 0.10, N = 3)
  3: 114.17  (SE +/- 0.46, N = 3)
  (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
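Reportedly, sysbench's CPU test scores events per second, where each event verifies primes up to a bound (the `--cpu-max-prime` option) by trial division. A simplified Python sketch of one such event (not the actual C implementation):

```python
import math

def cpu_event(max_prime=10000):
    """One sysbench-style CPU event: count primes in [3, max_prime] by trial division."""
    primes = 0
    for c in range(3, max_prime + 1):
        is_prime = True
        for divisor in range(2, math.isqrt(c) + 1):
            if c % divisor == 0:
                is_prime = False
                break
        if is_prime:
            primes += 1
    return primes

# The benchmark's events/sec is how many such passes complete per second
# across all worker threads.
print(cpu_event(100))
```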

Sysbench 1.0.20 - Test: RAM / Memory (MiB/sec, more is better; runs 1-2 only)
  1: 10289.34  (SE +/- 4.79, N = 3)
  2: 10276.90  (SE +/- 23.22, N = 3)
  (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm

Sysbench 1.0.20 - Test: CPU (Events Per Second, more is better; runs 1-2 only)
  1: 17383.33  (SE +/- 5.91, N = 3)
  2: 17341.94  (SE +/- 33.37, N = 3)
  (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test covers just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.
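The "Time To Compile" result is just the wall-clock time of the build commands. A minimal sketch of that timing pattern; the meson/ninja invocations shown in comments are hypothetical placeholders, and the executed command is a trivial stand-in:

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Wall-clock a command, returning (elapsed_seconds, returncode)."""
    start = time.perf_counter()
    rc = subprocess.run(cmd).returncode
    return time.perf_counter() - start, rc

# Hypothetical Mesa build sequence (paths are placeholders):
#   time_command(["meson", "setup", "build", "mesa-src/"])
#   elapsed, rc = time_command(["ninja", "-C", "build"])
elapsed, rc = time_command([sys.executable, "-c", "pass"])  # trivial stand-in
print(f"Time To Compile: {elapsed:.2f} seconds (exit {rc})")
```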

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, fewer is better)
  1: 55.88  (SE +/- 0.01, N = 3)
  2: 55.56  (SE +/- 0.10, N = 3)
  3: 55.76  (SE +/- 0.06, N = 3)

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with any number of scalar transport equations. Learn more via the OpenBenchmarking.org test page.
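As a flavor of the finite-difference machinery behind these "cells per direction" inputs, here is a second-order central difference for a derivative on a uniform grid. This is illustrative only; Incompact3d itself uses higher-order compact schemes, but the cost intuition is the same, since refining the grid in all three directions grows the work cubically:

```python
import math

def central_diff(f, x, h=1e-5):
    """Second-order central difference approximation of f'(x) with spacing h."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# du/dx of sin(x) should approximate cos(x); smaller h (finer grid)
# shrinks the O(h^2) truncation error.
approx = central_diff(math.sin, 1.0)
print(approx, math.cos(1.0))
```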

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, fewer is better)
  1: 40.35  (SE +/- 0.04, N = 3)
  2: 40.48  (SE +/- 0.05, N = 3)
  3: 40.72  (SE +/- 0.22, N = 3)
  (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 192 Cells Per Direction (Seconds, fewer is better)
  1: 318.21  (SE +/- 1.26, N = 3)
  2: 316.53  (SE +/- 0.22, N = 3)
  3: 317.01  (SE +/- 0.35, N = 3)
  (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi