Renoir March
AMD Ryzen 7 4700U testing with a LENOVO LNVNB161216 (DTCN18WWV1.04 BIOS) and AMD Renoir integrated graphics (512MB) on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2304016-NE-RENOIRMAR40&sor&grt.
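A minimal sketch (not part of the original export) of how this comparison could be reproduced locally against the OpenBenchmarking.org result ID in the URL above, assuming the phoronix-test-suite command is installed and on PATH; PTS will prompt interactively for test selection and result saving, and exact behavior may vary by version.

import subprocess

# Result ID taken from the openbenchmarking.org URL above.
RESULT_ID = "2304016-NE-RENOIRMAR40"

# "phoronix-test-suite benchmark <result-id>" downloads the reference result,
# runs the same tests locally, and merges the new numbers for side-by-side comparison.
subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)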
Apache HTTP Server - Concurrent Requests: 100
Apache HTTP Server - Concurrent Requests: 200
Apache HTTP Server - Concurrent Requests: 500
Apache HTTP Server - Concurrent Requests: 1000
Blender - Blend File: BMW27 - Compute: CPU-Only
Blender - Blend File: Classroom - Compute: CPU-Only
Blender - Blend File: Fishy Cat - Compute: CPU-Only
Blender - Blend File: Barbershop - Compute: CPU-Only
Blender - Blend File: Pabellon Barcelona - Compute: CPU-Only
Memcached - Set To Get Ratio: 1:5
Memcached - Set To Get Ratio: 1:10
Memcached - Set To Get Ratio: 1:100
nginx - Connections: 100
nginx - Connections: 200
nginx - Connections: 500
nginx - Connections: 1000
oneDNN - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
oneDNN - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
oneDNN - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
oneDNN - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
TensorFlow - Device: CPU - Batch Size: 16 - Model: AlexNet
TensorFlow - Device: CPU - Batch Size: 32 - Model: AlexNet
TensorFlow - Device: CPU - Batch Size: 64 - Model: AlexNet
TensorFlow - Device: CPU - Batch Size: 16 - Model: GoogLeNet
TensorFlow - Device: CPU - Batch Size: 16 - Model: ResNet-50
TensorFlow - Device: CPU - Batch Size: 32 - Model: GoogLeNet
TensorFlow - Device: CPU - Batch Size: 32 - Model: ResNet-50
TensorFlow - Device: CPU - Batch Size: 64 - Model: GoogLeNet
TensorFlow - Device: CPU - Batch Size: 64 - Model: ResNet-50
Phoronix Test Suite v10.8.5