sysbench oneDNN Ryzen 9 5950X

AMD Ryzen 9 5950X 16-Core testing with an ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3204 BIOS) and llvmpipe on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2103140-PTS-SYSBENCH79

Result Identifier / Date / Test Duration
1 - March 13 2021 - 40 Minutes
2 - March 13 2021 - 39 Minutes
3 - March 13 2021 - 39 Minutes
4 - March 13 2021 - 40 Minutes
5 - March 13 2021 - 40 Minutes



sysbench oneDNN Ryzen 9 5950X - System Details (identical across runs 1-5)
Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3204 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 2000GB Corsair Force MP600 + 2000GB
Graphics: llvmpipe
Audio: AMD Device ab28
Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 20.10
Kernel: 5.10.23-051023-generic (x86_64)
Desktop: GNOME Shell 3.38.2
Display Server: X Server 1.20.9
OpenGL: 4.5 Mesa 21.1.0-devel (git-684f97d 2021-03-12 groovy-oibaf-ppa) (LLVM 11.0.1 256 bits)
Vulkan: 1.0.168
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 3840x2160

OpenBenchmarking.org notes:
Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201009
Security Details - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): relative performance of runs 1-5, plotted on a 100% to 109% scale, across the 18 oneDNN and 2 Sysbench sub-tests detailed below.

sysbench oneDNN Ryzen 9 5950X - Detailed Results (OpenBenchmarking.org; oneDNN times in ms, fewer is better; Sysbench scores, more is better)

Test (unit)                                                            Run 1      Run 2      Run 3      Run 4      Run 5
oneDNN: Deconvolution Batch shapes_1d - f32 - CPU (ms)                 4.94662    5.14019    4.81443    4.58698    4.70312
Sysbench: CPU (Events Per Second)                                      90905.95   91133.12   91295.96   91225.86   91042.94
oneDNN: Recurrent Neural Network Training - f32 - CPU (ms)             2713.47    2692.66    2676.92    2729.23    2691.73
oneDNN: Recurrent Neural Network Training - bf16bf16bf16 - CPU (ms)    2708.58    2700.71    2706.44    2702.76    2699.18
oneDNN: Recurrent Neural Network Training - u8s8f32 - CPU (ms)         2725.57    2705.25    2695.28    2695.62    2708.38
oneDNN: Recurrent Neural Network Inference - bf16bf16bf16 - CPU (ms)   1740.78    1715.76    1713.75    1716.20    1736.71
oneDNN: Recurrent Neural Network Inference - f32 - CPU (ms)            1755.70    1722.24    1727.29    1721.03    1714.95
oneDNN: Recurrent Neural Network Inference - u8s8f32 - CPU (ms)        1746.00    1727.54    1739.58    1736.05    1733.88
oneDNN: Deconvolution Batch shapes_1d - u8s8f32 - CPU (ms)             1.06852    1.06278    1.05853    1.05688    1.05775
oneDNN: IP Shapes 1D - f32 - CPU (ms)                                  3.95846    3.95023    3.93489    3.94331    3.92950
oneDNN: IP Shapes 1D - u8s8f32 - CPU (ms)                              0.834899   0.831855   0.833777   0.830406   0.832645
oneDNN: Matrix Multiply Batch Shapes Transformer - f32 - CPU (ms)      0.639642   0.638707   0.633926   0.636019   0.636844
oneDNN: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU (ms)  1.41824    1.41025    1.40724    1.40654    1.41415
oneDNN: IP Shapes 3D - f32 - CPU (ms)                                  8.68256    8.61122    8.61285    8.81145    8.89636
oneDNN: IP Shapes 3D - u8s8f32 - CPU (ms)                              0.461770   0.456507   0.454507   0.458384   0.459646
Sysbench: RAM / Memory (MiB/sec)                                       14807.78   14841.97   14820.94   14845.30   14868.16
oneDNN: Convolution Batch Shapes Auto - u8s8f32 - CPU (ms)             18.5695    18.6472    18.5344    18.7203    18.7988
oneDNN: Convolution Batch Shapes Auto - f32 - CPU (ms)                 17.0363    17.0447    17.1883    17.0967    17.0955
oneDNN: Deconvolution Batch shapes_3d - f32 - CPU (ms)                 3.57674    3.56735    3.56892    3.56211    3.57154
oneDNN: Deconvolution Batch shapes_3d - u8s8f32 - CPU (ms)             1.67774    1.67465    1.67638    1.67959    1.66973

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
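
For readers unfamiliar with the library, the sketch below shows, in rough terms, what timing a oneDNN primitive on the CPU engine looks like in C++. It is a minimal illustration, not the benchdnn harness that produced these results: the inner-product layer sizes, fill values, and 100-iteration loop are arbitrary choices for the example, and only the dnnl.hpp API calls themselves come from oneDNN.

    // onednn_timing_sketch.cpp - minimal, illustrative timing of one oneDNN primitive.
    // Assumes oneDNN (libdnnl) 2.x is installed; layer sizes are arbitrary example values.
    #include <algorithm>
    #include <chrono>
    #include <iostream>
    #include "dnnl.hpp"

    int main() {
        using namespace dnnl;

        engine eng(engine::kind::cpu, 0);   // CPU engine, as in these results
        stream s(eng);

        // A small f32 inner-product (fully connected) layer: 128x512 -> 128x256.
        const memory::dim N = 128, IC = 512, OC = 256;
        memory::desc src_md({N, IC}, memory::data_type::f32, memory::format_tag::nc);
        memory::desc wei_md({OC, IC}, memory::data_type::f32, memory::format_tag::oi);
        memory::desc dst_md({N, OC}, memory::data_type::f32, memory::format_tag::nc);

        auto ip_pd = inner_product_forward::primitive_desc(
                inner_product_forward::desc(prop_kind::forward_inference,
                                            src_md, wei_md, dst_md), eng);
        auto ip = inner_product_forward(ip_pd);

        memory src_mem(src_md, eng), wei_mem(wei_md, eng), dst_mem(dst_md, eng);
        std::fill_n(static_cast<float *>(src_mem.get_data_handle()), N * IC, 1.0f);
        std::fill_n(static_cast<float *>(wei_mem.get_data_handle()), OC * IC, 0.5f);

        const int iters = 100;
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < iters; ++i)
            ip.execute(s, {{DNNL_ARG_SRC, src_mem},
                           {DNNL_ARG_WEIGHTS, wei_mem},
                           {DNNL_ARG_DST, dst_mem}});
        s.wait();
        auto t1 = std::chrono::steady_clock::now();

        std::cout << "avg time per execution: "
                  << std::chrono::duration<double, std::milli>(t1 - t0).count() / iters
                  << " ms (dst[0] = "
                  << static_cast<float *>(dst_mem.get_data_handle())[0] << ")\n";
        return 0;
    }

Something along the lines of g++ -O3 -std=c++11 onednn_timing_sketch.cpp -ldnnl would build it; benchdnn instead drives many such primitives (convolution, deconvolution, RNN, inner product, matrix multiply) with the harness and data-type combinations listed in the graphs below.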

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 4: 4.58698  (SE +/- 0.27001, N = 15, MIN: 2.9)
  Run 5: 4.70312  (SE +/- 0.23987, N = 15, MIN: 2.91)
  Run 3: 4.81443  (SE +/- 0.25214, N = 12, MIN: 2.87)
  Run 1: 4.94662  (SE +/- 0.26241, N = 15, MIN: 2.89)
  Run 2: 5.14019  (SE +/- 0.32359, N = 12, MIN: 2.91)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
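
To make the "events per second" figure from the CPU sub-test more concrete, here is a small, hypothetical C++ sketch in the same spirit: each event verifies the primes up to a fixed limit, and the total completed across all threads in a fixed interval is divided by the elapsed time. It only illustrates that kind of metric and is not the actual LuaJIT-driven sysbench implementation; the thread count, 20000 prime limit, and 5-second duration are arbitrary example values.

    // sysbench_cpu_style_sketch.cpp - counts prime-verification "events" per second
    // across threads; an illustration only, not sysbench itself.
    #include <algorithm>
    #include <atomic>
    #include <chrono>
    #include <cmath>
    #include <iostream>
    #include <thread>
    #include <vector>

    // One unit of per-event work: trial-division primality check.
    static bool is_prime(unsigned n) {
        if (n < 2) return false;
        for (unsigned d = 2; d <= static_cast<unsigned>(std::sqrt(n)); ++d)
            if (n % d == 0) return false;
        return true;
    }

    int main() {
        const unsigned threads = std::max(1u, std::thread::hardware_concurrency());
        const unsigned prime_limit = 20000;            // arbitrary example limit
        const auto run_for = std::chrono::seconds(5);  // arbitrary example duration

        std::atomic<bool> stop{false};
        std::atomic<unsigned long long> events{0};
        std::atomic<unsigned long long> sink{0};       // keeps the work observable

        std::vector<std::thread> pool;
        for (unsigned t = 0; t < threads; ++t)
            pool.emplace_back([&] {
                while (!stop.load(std::memory_order_relaxed)) {
                    unsigned found = 0;
                    for (unsigned n = 3; n <= prime_limit; ++n)
                        if (is_prime(n)) ++found;
                    sink.fetch_add(found, std::memory_order_relaxed);
                    events.fetch_add(1, std::memory_order_relaxed);
                }
            });

        std::this_thread::sleep_for(run_for);
        stop = true;
        for (auto &th : pool) th.join();

        const double secs = std::chrono::duration<double>(run_for).count();
        std::cout << threads << " threads: "
                  << static_cast<double>(events.load()) / secs
                  << " events per second\n";
        return 0;
    }

Built with something like g++ -O2 -pthread sysbench_cpu_style_sketch.cpp; the real tool scripts its workloads through LuaJIT and exposes knobs such as --threads and --cpu-max-prime, which the Phoronix Test Suite drives for these results.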

Sysbench 1.0.20 - Test: CPU (OpenBenchmarking.org; Events Per Second, more is better)
  Run 3: 91295.96  (SE +/- 84.52, N = 3)
  Run 4: 91225.86  (SE +/- 87.78, N = 3)
  Run 2: 91133.12  (SE +/- 77.39, N = 3)
  Run 5: 91042.94  (SE +/- 86.89, N = 3)
  Run 1: 90905.95  (SE +/- 88.94, N = 3)
1. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 3: 2676.92  (SE +/- 12.00, N = 3, MIN: 2645.36)
  Run 5: 2691.73  (SE +/- 9.51, N = 3, MIN: 2662.92)
  Run 2: 2692.66  (SE +/- 1.98, N = 3, MIN: 2678.17)
  Run 1: 2713.47  (SE +/- 4.23, N = 3, MIN: 2695.46)
  Run 4: 2729.23  (SE +/- 9.17, N = 3, MIN: 2694.95)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 5: 2699.18  (SE +/- 4.86, N = 3, MIN: 2678.94)
  Run 2: 2700.71  (SE +/- 5.61, N = 3, MIN: 2685.06)
  Run 4: 2702.76  (SE +/- 7.92, N = 3, MIN: 2678.89)
  Run 3: 2706.44  (SE +/- 11.04, N = 3, MIN: 2680.57)
  Run 1: 2708.58  (SE +/- 4.11, N = 3, MIN: 2692.02)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 3: 2695.28  (SE +/- 10.44, N = 3, MIN: 2671.19)
  Run 4: 2695.62  (SE +/- 11.59, N = 3, MIN: 2673.04)
  Run 2: 2705.25  (SE +/- 8.00, N = 3, MIN: 2682.75)
  Run 5: 2708.38  (SE +/- 10.27, N = 3, MIN: 2680.82)
  Run 1: 2725.57  (SE +/- 10.35, N = 3, MIN: 2696.49)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 3: 1713.75  (SE +/- 2.22, N = 3, MIN: 1692.03)
  Run 2: 1715.76  (SE +/- 0.54, N = 3, MIN: 1703.59)
  Run 4: 1716.20  (SE +/- 2.84, N = 3, MIN: 1701.07)
  Run 5: 1736.71  (SE +/- 9.03, N = 3, MIN: 1699.5)
  Run 1: 1740.78  (SE +/- 8.56, N = 3, MIN: 1706.77)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 5: 1714.95  (SE +/- 4.01, N = 3, MIN: 1698.74)
  Run 4: 1721.03  (SE +/- 5.23, N = 3, MIN: 1705.22)
  Run 2: 1722.24  (SE +/- 7.22, N = 3, MIN: 1693.61)
  Run 3: 1727.29  (SE +/- 13.99, N = 3, MIN: 1698.14)
  Run 1: 1755.70  (SE +/- 11.93, N = 3, MIN: 1720.03)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 2: 1727.54  (SE +/- 0.49, N = 3, MIN: 1714.36)
  Run 5: 1733.88  (SE +/- 6.52, N = 3, MIN: 1708.5)
  Run 4: 1736.05  (SE +/- 9.45, N = 3, MIN: 1713.46)
  Run 3: 1739.58  (SE +/- 12.37, N = 3, MIN: 1717.31)
  Run 1: 1746.00  (SE +/- 12.65, N = 3, MIN: 1710.33)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 4: 1.05688  (SE +/- 0.00105, N = 3, MIN: 0.97)
  Run 5: 1.05775  (SE +/- 0.00119, N = 3, MIN: 0.97)
  Run 3: 1.05853  (SE +/- 0.00161, N = 3, MIN: 0.97)
  Run 2: 1.06278  (SE +/- 0.00465, N = 3, MIN: 0.98)
  Run 1: 1.06852  (SE +/- 0.00181, N = 3, MIN: 0.97)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 5: 3.92950  (SE +/- 0.03409, N = 3, MIN: 3.72)
  Run 3: 3.93489  (SE +/- 0.01623, N = 3, MIN: 3.72)
  Run 4: 3.94331  (SE +/- 0.01060, N = 3, MIN: 3.74)
  Run 2: 3.95023  (SE +/- 0.00752, N = 3, MIN: 3.76)
  Run 1: 3.95846  (SE +/- 0.01694, N = 3, MIN: 3.72)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 4: 0.830406  (SE +/- 0.001726, N = 3, MIN: 0.75)
  Run 2: 0.831855  (SE +/- 0.002629, N = 3, MIN: 0.75)
  Run 5: 0.832645  (SE +/- 0.002188, N = 3, MIN: 0.75)
  Run 3: 0.833777  (SE +/- 0.003750, N = 3, MIN: 0.76)
  Run 1: 0.834899  (SE +/- 0.001499, N = 3, MIN: 0.75)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 3: 0.633926  (SE +/- 0.000478, N = 3, MIN: 0.6)
  Run 4: 0.636019  (SE +/- 0.000994, N = 3, MIN: 0.61)
  Run 5: 0.636844  (SE +/- 0.000627, N = 3, MIN: 0.61)
  Run 2: 0.638707  (SE +/- 0.000454, N = 3, MIN: 0.61)
  Run 1: 0.639642  (SE +/- 0.000949, N = 3, MIN: 0.61)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 4: 1.40654  (SE +/- 0.00102, N = 3, MIN: 1.3)
  Run 3: 1.40724  (SE +/- 0.00206, N = 3, MIN: 1.3)
  Run 2: 1.41025  (SE +/- 0.00138, N = 3, MIN: 1.31)
  Run 5: 1.41415  (SE +/- 0.00141, N = 3, MIN: 1.32)
  Run 1: 1.41824  (SE +/- 0.00092, N = 3, MIN: 1.31)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 2: 8.61122  (SE +/- 0.03495, N = 3, MIN: 8.21)
  Run 3: 8.61285  (SE +/- 0.05813, N = 3, MIN: 8.13)
  Run 1: 8.68256  (SE +/- 0.02162, N = 3, MIN: 8.14)
  Run 4: 8.81145  (SE +/- 0.05920, N = 3, MIN: 8.33)
  Run 5: 8.89636  (SE +/- 0.03275, N = 3, MIN: 8.5)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 3: 0.454507  (SE +/- 0.002018, N = 3, MIN: 0.41)
  Run 2: 0.456507  (SE +/- 0.002258, N = 3, MIN: 0.42)
  Run 4: 0.458384  (SE +/- 0.002373, N = 3, MIN: 0.43)
  Run 5: 0.459646  (SE +/- 0.001381, N = 3, MIN: 0.42)
  Run 1: 0.461770  (SE +/- 0.004852, N = 3, MIN: 0.42)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: RAM / Memory (OpenBenchmarking.org; MiB/sec, more is better)
  Run 5: 14868.16  (SE +/- 0.96, N = 3)
  Run 4: 14845.30  (SE +/- 17.12, N = 3)
  Run 2: 14841.97  (SE +/- 20.27, N = 3)
  Run 3: 14820.94  (SE +/- 31.43, N = 3)
  Run 1: 14807.78  (SE +/- 13.87, N = 3)
1. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 3: 18.53  (SE +/- 0.01, N = 3, MIN: 18.14)
  Run 1: 18.57  (SE +/- 0.01, N = 3, MIN: 18.16)
  Run 2: 18.65  (SE +/- 0.05, N = 3, MIN: 18.1)
  Run 4: 18.72  (SE +/- 0.00, N = 3, MIN: 18.33)
  Run 5: 18.80  (SE +/- 0.04, N = 3, MIN: 18.29)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 1: 17.04  (SE +/- 0.02, N = 3, MIN: 16.37)
  Run 2: 17.04  (SE +/- 0.01, N = 3, MIN: 16.63)
  Run 5: 17.10  (SE +/- 0.02, N = 3, MIN: 16.56)
  Run 4: 17.10  (SE +/- 0.01, N = 3, MIN: 16.53)
  Run 3: 17.19  (SE +/- 0.18, N = 3, MIN: 16.53)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 4: 3.56211  (SE +/- 0.00236, N = 3, MIN: 3.41)
  Run 2: 3.56735  (SE +/- 0.00034, N = 3, MIN: 3.43)
  Run 3: 3.56892  (SE +/- 0.00441, N = 3, MIN: 3.42)
  Run 5: 3.57154  (SE +/- 0.00116, N = 3, MIN: 3.44)
  Run 1: 3.57674  (SE +/- 0.00466, N = 3, MIN: 3.43)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, fewer is better)
  Run 5: 1.66973  (SE +/- 0.00358, N = 3, MIN: 1.59)
  Run 2: 1.67465  (SE +/- 0.00176, N = 3, MIN: 1.6)
  Run 3: 1.67638  (SE +/- 0.00123, N = 3, MIN: 1.59)
  Run 1: 1.67774  (SE +/- 0.00014, N = 3, MIN: 1.62)
  Run 4: 1.67959  (SE +/- 0.00545, N = 3, MIN: 1.59)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl