2 x Intel Xeon Gold 5220R testing with a TYAN S7106 (V2.01.B40 BIOS) and llvmpipe on Ubuntu 20.04 via the Phoronix Test Suite.

r1:

    Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads), Motherboard: TYAN S7106 (V2.01.B40 BIOS), Chipset: Intel Sky Lake-E DMI3 Registers, Memory: 94GB, Disk: 500GB Samsung SSD 860, Graphics: llvmpipe, Monitor: VE228, Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE

    OS: Ubuntu 20.04, Kernel: 5.9.0-050900rc6-generic (x86_64) 20200920, Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Display Driver: modesetting 1.20.8, OpenGL: 3.3 Mesa 20.0.4 (LLVM 9.0.1 256 bits), Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1080

oneDNN 1.5 - Engine: CPU (ms; lower is better)
    Harness: IP Batch 1D - Data Type: f32 ... 1.75730
    Harness: IP Batch All - Data Type: f32 ... 27.35
    Harness: IP Batch 1D - Data Type: u8s8f32 ... 1.82044
    Harness: IP Batch All - Data Type: u8s8f32 ... 6.75257
    Harness: IP Batch 1D - Data Type: bf16bf16bf16 ... 5.69301
    Harness: IP Batch All - Data Type: bf16bf16bf16 ... 51.32
    Harness: Convolution Batch Shapes Auto - Data Type: f32 ... 7.44916
    Harness: Deconvolution Batch deconv_1d - Data Type: f32 ... 1.96077
    Harness: Deconvolution Batch deconv_3d - Data Type: f32 ... 2.70059
    Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 ... 7.05069
    Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 ... 0.551345
    Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 ... 0.691067
    Harness: Recurrent Neural Network Training - Data Type: f32 ... 224.99
    Harness: Recurrent Neural Network Inference - Data Type: f32 ... 80.44
    Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 ... 6.39772
    Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 ... 7.39126
    Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16 ... 9.47970
    Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 ... 0.525698
    Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 ... 0.296070
    Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 ... 1.45436

AOM AV1 2.0 (Frames Per Second; higher is better)
    Encoder Mode: Speed 0 Two-Pass ... 0.27
    Encoder Mode: Speed 4 Two-Pass ... 1.93
    Encoder Mode: Speed 6 Realtime ... 10.96
    Encoder Mode: Speed 6 Two-Pass ... 2.95
    Encoder Mode: Speed 8 Realtime ... 23.41

Timed Linux Kernel Compilation 5.4 (Seconds; lower is better)
    Time To Compile ... 38.43

Timed LLVM Compilation 10.0 (Seconds; lower is better)
    Time To Compile ... 285.57

Blender 2.90 - Compute: CPU-Only (Seconds; lower is better)
    Blend File: BMW27 ... 60.49
    Blend File: Classroom ... 173.91
    Blend File: Fishy Cat ... 84.22
    Blend File: Barbershop ... 249.29
    Blend File: Pabellon Barcelona ... 193.19

Kripke 1.2.4 (FoM; higher is better)
    Throughput ... 46087163

OpenCV 4.4 (ms; lower is better)
    Test: DNN - Deep Neural Network ... 10566

InfluxDB 1.8.2 (val/sec; higher is better)
    Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 ... 726686.3
    Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 ... 1296920.4
    Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 ... 1363127.8