sysbench oneDNN Ryzen 9 5950X

AMD Ryzen 9 5950X 16-Core testing with an ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3204 BIOS) and llvmpipe on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2103140-PTS-SYSBENCH79
Run Management

Run   Date            Test Duration
1     March 13 2021   40 Minutes
2     March 13 2021   39 Minutes
3     March 13 2021   39 Minutes
4     March 13 2021   40 Minutes
5     March 13 2021   40 Minutes



sysbench oneDNN Ryzen 9 5950X - System Details (identical for runs 1-5)

Processor:          AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
Motherboard:        ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3204 BIOS)
Chipset:            AMD Starship/Matisse
Memory:             32GB
Disk:               2000GB Corsair Force MP600 + 2000GB
Graphics:           llvmpipe
Audio:              AMD Device ab28
Network:            Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS:                 Ubuntu 20.10
Kernel:             5.10.23-051023-generic (x86_64)
Desktop:            GNOME Shell 3.38.2
Display Server:     X Server 1.20.9
OpenGL:             4.5 Mesa 21.1.0-devel (git-684f97d 2021-03-12 groovy-oibaf-ppa) (LLVM 11.0.1 256 bits)
Vulkan:             1.0.168
Compiler:           GCC 10.2.0
File-System:        ext4
Screen Resolution:  3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0xa201009
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline, IBPB: conditional, IBRS_FW, STIBP: always-on, RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): relative performance of runs 1-5 across the oneDNN and Sysbench sub-tests, spanning roughly 100% to 109% of the slowest result in each test. Per-test results follow below.

Combined results table: all 20 sub-test results for runs 1-5 (oneDNN 2.1.2 harnesses IP Shapes 1D/3D, Recurrent Neural Network Training/Inference, Convolution Batch Shapes Auto, Deconvolution Batch shapes_1d/shapes_3d, and Matrix Multiply Batch Shapes Transformer, at the f32, u8s8f32, and bf16bf16bf16 data types as applicable; Sysbench 1.0.20 CPU and RAM / Memory). The individual values are repeated in the per-test results below.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, using its built-in benchdnn benchmarking functionality. The reported result is the total benchmark time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and, before that, MKL-DNN, prior to being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 2: 8.61122   (SE +/- 0.03495, N = 3, MIN: 8.21)
  Run 3: 8.61285   (SE +/- 0.05813, N = 3, MIN: 8.13)
  Run 1: 8.68256   (SE +/- 0.02162, N = 3, MIN: 8.14)
  Run 4: 8.81145   (SE +/- 0.05920, N = 3, MIN: 8.33)
  Run 5: 8.89636   (SE +/- 0.03275, N = 3, MIN: 8.5)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
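Each result above is the mean of N repeated samples together with its standard error (SE). As a quick illustration of how such SE figures are derived, a minimal sketch; the three sample timings here are made up, since the raw per-sample data behind the published results is not included in this file:

```python
import statistics

def mean_and_se(samples):
    """Return the mean and standard error (sample stdev / sqrt(n)) of a list of timings."""
    n = len(samples)
    mean = statistics.fmean(samples)
    se = statistics.stdev(samples) / n ** 0.5
    return mean, se

# Hypothetical trio of timings (ms) for one harness run.
timings = [8.58, 8.61, 8.64]
mean, se = mean_and_se(timings)
print(f"{mean:.5f} ms (SE +/- {se:.5f}, N = {len(timings)})")
```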

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 5: 1714.95   (SE +/- 4.01, N = 3, MIN: 1698.74)
  Run 4: 1721.03   (SE +/- 5.23, N = 3, MIN: 1705.22)
  Run 2: 1722.24   (SE +/- 7.22, N = 3, MIN: 1693.61)
  Run 3: 1727.29   (SE +/- 13.99, N = 3, MIN: 1698.14)
  Run 1: 1755.70   (SE +/- 11.93, N = 3, MIN: 1720.03)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 3: 2676.92   (SE +/- 12.00, N = 3, MIN: 2645.36)
  Run 5: 2691.73   (SE +/- 9.51, N = 3, MIN: 2662.92)
  Run 2: 2692.66   (SE +/- 1.98, N = 3, MIN: 2678.17)
  Run 1: 2713.47   (SE +/- 4.23, N = 3, MIN: 2695.46)
  Run 4: 2729.23   (SE +/- 9.17, N = 3, MIN: 2694.95)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 3: 0.454507   (SE +/- 0.002018, N = 3, MIN: 0.41)
  Run 2: 0.456507   (SE +/- 0.002258, N = 3, MIN: 0.42)
  Run 4: 0.458384   (SE +/- 0.002373, N = 3, MIN: 0.43)
  Run 5: 0.459646   (SE +/- 0.001381, N = 3, MIN: 0.42)
  Run 1: 0.461770   (SE +/- 0.004852, N = 3, MIN: 0.42)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 3: 1713.75   (SE +/- 2.22, N = 3, MIN: 1692.03)
  Run 2: 1715.76   (SE +/- 0.54, N = 3, MIN: 1703.59)
  Run 4: 1716.20   (SE +/- 2.84, N = 3, MIN: 1701.07)
  Run 5: 1736.71   (SE +/- 9.03, N = 3, MIN: 1699.5)
  Run 1: 1740.78   (SE +/- 8.56, N = 3, MIN: 1706.77)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 3: 18.53   (SE +/- 0.01, N = 3, MIN: 18.14)
  Run 1: 18.57   (SE +/- 0.01, N = 3, MIN: 18.16)
  Run 2: 18.65   (SE +/- 0.05, N = 3, MIN: 18.1)
  Run 4: 18.72   (SE +/- 0.00, N = 3, MIN: 18.33)
  Run 5: 18.80   (SE +/- 0.04, N = 3, MIN: 18.29)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 3: 2695.28   (SE +/- 10.44, N = 3, MIN: 2671.19)
  Run 4: 2695.62   (SE +/- 11.59, N = 3, MIN: 2673.04)
  Run 2: 2705.25   (SE +/- 8.00, N = 3, MIN: 2682.75)
  Run 5: 2708.38   (SE +/- 10.27, N = 3, MIN: 2680.82)
  Run 1: 2725.57   (SE +/- 10.35, N = 3, MIN: 2696.49)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 4: 1.05688   (SE +/- 0.00105, N = 3, MIN: 0.97)
  Run 5: 1.05775   (SE +/- 0.00119, N = 3, MIN: 0.97)
  Run 3: 1.05853   (SE +/- 0.00161, N = 3, MIN: 0.97)
  Run 2: 1.06278   (SE +/- 0.00465, N = 3, MIN: 0.98)
  Run 1: 1.06852   (SE +/- 0.00181, N = 3, MIN: 0.97)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 2: 1727.54   (SE +/- 0.49, N = 3, MIN: 1714.36)
  Run 5: 1733.88   (SE +/- 6.52, N = 3, MIN: 1708.5)
  Run 4: 1736.05   (SE +/- 9.45, N = 3, MIN: 1713.46)
  Run 3: 1739.58   (SE +/- 12.37, N = 3, MIN: 1717.31)
  Run 1: 1746.00   (SE +/- 12.65, N = 3, MIN: 1710.33)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 3: 0.633926   (SE +/- 0.000478, N = 3, MIN: 0.6)
  Run 4: 0.636019   (SE +/- 0.000994, N = 3, MIN: 0.61)
  Run 5: 0.636844   (SE +/- 0.000627, N = 3, MIN: 0.61)
  Run 2: 0.638707   (SE +/- 0.000454, N = 3, MIN: 0.61)
  Run 1: 0.639642   (SE +/- 0.000949, N = 3, MIN: 0.61)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 1: 17.04   (SE +/- 0.02, N = 3, MIN: 16.37)
  Run 2: 17.04   (SE +/- 0.01, N = 3, MIN: 16.63)
  Run 5: 17.10   (SE +/- 0.02, N = 3, MIN: 16.56)
  Run 4: 17.10   (SE +/- 0.01, N = 3, MIN: 16.53)
  Run 3: 17.19   (SE +/- 0.18, N = 3, MIN: 16.53)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 4: 1.40654   (SE +/- 0.00102, N = 3, MIN: 1.3)
  Run 3: 1.40724   (SE +/- 0.00206, N = 3, MIN: 1.3)
  Run 2: 1.41025   (SE +/- 0.00138, N = 3, MIN: 1.31)
  Run 5: 1.41415   (SE +/- 0.00141, N = 3, MIN: 1.32)
  Run 1: 1.41824   (SE +/- 0.00092, N = 3, MIN: 1.31)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 5: 3.92950   (SE +/- 0.03409, N = 3, MIN: 3.72)
  Run 3: 3.93489   (SE +/- 0.01623, N = 3, MIN: 3.72)
  Run 4: 3.94331   (SE +/- 0.01060, N = 3, MIN: 3.74)
  Run 2: 3.95023   (SE +/- 0.00752, N = 3, MIN: 3.76)
  Run 1: 3.95846   (SE +/- 0.01694, N = 3, MIN: 3.72)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 5: 1.66973   (SE +/- 0.00358, N = 3, MIN: 1.59)
  Run 2: 1.67465   (SE +/- 0.00176, N = 3, MIN: 1.6)
  Run 3: 1.67638   (SE +/- 0.00123, N = 3, MIN: 1.59)
  Run 1: 1.67774   (SE +/- 0.00014, N = 3, MIN: 1.62)
  Run 4: 1.67959   (SE +/- 0.00545, N = 3, MIN: 1.59)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 4: 0.830406   (SE +/- 0.001726, N = 3, MIN: 0.75)
  Run 2: 0.831855   (SE +/- 0.002629, N = 3, MIN: 0.75)
  Run 5: 0.832645   (SE +/- 0.002188, N = 3, MIN: 0.75)
  Run 3: 0.833777   (SE +/- 0.003750, N = 3, MIN: 0.76)
  Run 1: 0.834899   (SE +/- 0.001499, N = 3, MIN: 0.75)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: CPU (events per second, more is better; OpenBenchmarking.org)
  Run 3: 91295.96   (SE +/- 84.52, N = 3)
  Run 4: 91225.86   (SE +/- 87.78, N = 3)
  Run 2: 91133.12   (SE +/- 77.39, N = 3)
  Run 5: 91042.94   (SE +/- 86.89, N = 3)
  Run 1: 90905.95   (SE +/- 88.94, N = 3)
1. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
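The five CPU scores land within half a percent of one another. A quick check of the best-to-worst spread, computed directly from the published events-per-second values:

```python
# Published Sysbench 1.0.20 CPU scores (events/sec), keyed by run number.
cpu_scores = {
    1: 90905.95,
    2: 91133.12,
    3: 91295.96,
    4: 91225.86,
    5: 91042.94,
}

# Best-to-worst spread as a percentage of the slowest run.
spread_pct = (max(cpu_scores.values()) / min(cpu_scores.values()) - 1) * 100
print(f"best-to-worst spread: {spread_pct:.2f}%")  # roughly 0.43%
```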

oneDNN


oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 4: 3.56211   (SE +/- 0.00236, N = 3, MIN: 3.41)
  Run 2: 3.56735   (SE +/- 0.00034, N = 3, MIN: 3.43)
  Run 3: 3.56892   (SE +/- 0.00441, N = 3, MIN: 3.42)
  Run 5: 3.57154   (SE +/- 0.00116, N = 3, MIN: 3.44)
  Run 1: 3.57674   (SE +/- 0.00466, N = 3, MIN: 3.43)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Sysbench


Sysbench 1.0.20 - Test: RAM / Memory (MiB/sec, more is better; OpenBenchmarking.org)
  Run 5: 14868.16   (SE +/- 0.96, N = 3)
  Run 4: 14845.30   (SE +/- 17.12, N = 3)
  Run 2: 14841.97   (SE +/- 20.27, N = 3)
  Run 3: 14820.94   (SE +/- 31.43, N = 3)
  Run 1: 14807.78   (SE +/- 13.87, N = 3)
1. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm

oneDNN


oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 5: 2699.18   (SE +/- 4.86, N = 3, MIN: 2678.94)
  Run 2: 2700.71   (SE +/- 5.61, N = 3, MIN: 2685.06)
  Run 4: 2702.76   (SE +/- 7.92, N = 3, MIN: 2678.89)
  Run 3: 2706.44   (SE +/- 11.04, N = 3, MIN: 2680.57)
  Run 1: 2708.58   (SE +/- 4.11, N = 3, MIN: 2692.02)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better; OpenBenchmarking.org)
  Run 4: 4.58698   (SE +/- 0.27001, N = 15, MIN: 2.9)
  Run 5: 4.70312   (SE +/- 0.23987, N = 15, MIN: 2.91)
  Run 3: 4.81443   (SE +/- 0.25214, N = 12, MIN: 2.87)
  Run 1: 4.94662   (SE +/- 0.26241, N = 15, MIN: 2.89)
  Run 2: 5.14019   (SE +/- 0.32359, N = 12, MIN: 2.91)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
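This last harness is by far the noisiest in the set: it needed 12-15 samples per run, and its standard error sits around 5-6% of the mean, versus well under 1% for the other sub-tests. A quick computation of the relative SE per run, using only the published means and SEs:

```python
# Published Deconvolution Batch shapes_1d f32 results: run -> (mean ms, SE ms).
results = {
    4: (4.58698, 0.27001),
    5: (4.70312, 0.23987),
    3: (4.81443, 0.25214),
    1: (4.94662, 0.26241),
    2: (5.14019, 0.32359),
}

# Relative SE (SE as a percentage of the mean) indicates how noisy each run was.
for run, (mean, se) in sorted(results.items()):
    print(f"run {run}: {100 * se / mean:.1f}% relative SE")
```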