baseline

AMD Ryzen 7 1700 Eight-Core testing with an ASUS PRIME B350-PLUS (5007 BIOS) and an eVGA NVIDIA GeForce GTX 1080 Ti 11GB on Arch rolling via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2007193-NI-BASELINE202&grs.
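
To compare another system against this run, the Phoronix Test Suite can fetch a public result file directly by its OpenBenchmarking ID (a minimal sketch; assumes network access to OpenBenchmarking.org):

    phoronix-test-suite benchmark 2007193-NI-BASELINE202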

baseline — system configuration (mainlinelinuxai)

  Processor:         AMD Ryzen 7 1700 Eight-Core @ 3.00GHz (8 Cores / 16 Threads)
  Motherboard:       ASUS PRIME B350-PLUS (5007 BIOS)
  Chipset:           AMD 17h
  Memory:            32GB
  Disk:              Samsung SSD 960 EVO 250GB + 3001GB Seagate ST3000DM008-2DM1 + 2000GB Western Digital WD20EFRX-68E + 2 x 2000GB Seagate ST2000DM008-2FR1
  Graphics:          eVGA NVIDIA GeForce GTX 1080 Ti 11GB (1556/5508MHz)
  Audio:             NVIDIA GP102 HDMI Audio
  Monitor:           U2777B
  Network:           Realtek RTL8111/8168/8411
  OS:                Arch rolling
  Kernel:            5.7.8-arch1-1 (x86_64)
  Display Server:    X Server 1.20.8
  Display Driver:    NVIDIA 450.57
  OpenGL:            4.6.0
  Compiler:          GCC 10.1.0 + Clang 10.0.0 + LLVM 10.0.0 + ICC + CUDA 10.2
  File-System:       ext4
  Screen Resolution: 11520x2160

OpenBenchmarking.org notes:
  - Compiler: --disable-libssp --disable-libstdcxx-pch --disable-libunwind-exceptions --disable-werror --enable-__cxa_atexit --enable-cet=auto --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-gnu-indirect-function --enable-gnu-unique-object --enable-install-libiberty --enable-languages=c,c++,ada,fortran,go,lto,objc,obj-c++,d --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-isl --with-linker-hash-style=gnu
  - Processor: Scaling Governor: acpi-cpufreq schedutil - CPU Microcode: 0x8001138
  - Python: Python 3.8.3
  - Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

baseline — result summary (mainlinelinuxai)

  onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU:  6.74795 ms
  onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU:      5.96066 ms
  onednn: Recurrent Neural Network Inference - f32 - CPU:            282.763 ms
  onednn: Deconvolution Batch deconv_3d - u8s8f32 - CPU:             16.1617 ms
  onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU:             21.8465 ms
  onednn: Deconvolution Batch deconv_3d - f32 - CPU:                 18.1529 ms
  onednn: Deconvolution Batch deconv_1d - f32 - CPU:                 12.2483 ms
  onednn: Convolution Batch Shapes Auto - f32 - CPU:                 21.0595 ms
  onednn: IP Batch All - u8s8f32 - CPU:                              91.6060 ms
  onednn: IP Batch 1D - u8s8f32 - CPU:                               7.81265 ms
  onednn: IP Batch All - f32 - CPU:                                  137.040 ms
  onednn: IP Batch 1D - f32 - CPU:                                   10.8498 ms
  build-ffmpeg: Time To Compile:                                     91.237 seconds
  onednn: Recurrent Neural Network Training - f32 - CPU:             1064.854 ms
  onednn: Deconvolution Batch deconv_1d - u8s8f32 - CPU:             17.8937 ms

All results are reported as milliseconds (oneDNN) or seconds (FFmpeg compile); fewer is better in every case.

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 6.74795 (SE +/- 0.08744, N = 5; MIN: 5.32)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
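
The oneDNN numbers in this result come from the Phoronix Test Suite oneDNN test profile; a standalone run of just that profile would look roughly like the following (a sketch, assuming the profile is published as pts/onednn in the current test repository):

    phoronix-test-suite benchmark pts/onednn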

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 5.96066 (SE +/- 0.02692, N = 3; MIN: 4.84)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 282.76 (SE +/- 4.57, N = 3; MIN: 229.75)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 16.16 (SE +/- 0.17, N = 15; MIN: 13.05)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 21.85 (SE +/- 0.13, N = 3; MIN: 19.29)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 18.15 (SE +/- 0.22, N = 6; MIN: 16.86)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 12.25 (SE +/- 0.16, N = 4; MIN: 10.14)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 21.06 (SE +/- 0.16, N = 3; MIN: 17.92)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 91.61 (SE +/- 0.48, N = 3; MIN: 84.73)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 7.81265 (SE +/- 0.03612, N = 3; MIN: 6.61)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Batch All - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 137.04 (SE +/- 0.35, N = 3; MIN: 125.73)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Batch 1D - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 10.85 (SE +/- 0.12, N = 3; MIN: 8.88)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Timed FFmpeg Compilation

Time To Compile

OpenBenchmarking.org - Timed FFmpeg Compilation 4.2.2 - Seconds, fewer is better
mainlinelinuxai: 91.24 (SE +/- 0.22, N = 3)
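
The compile-time figure above corresponds to the build-ffmpeg entry in the summary table; it can likewise be run on its own (a sketch, assuming the pts/build-ffmpeg profile name):

    phoronix-test-suite benchmark pts/build-ffmpeg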

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 1064.85 (SE +/- 60.97, N = 12; MIN: 682.79)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - oneDNN 1.5 - ms, fewer is better
mainlinelinuxai: 17.89 (SE +/- 1.01, N = 13; MIN: 13.03)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl


Phoronix Test Suite v10.8.4