nlp-benchmarks
m7i.2xlarge NLP benchmarking

c6i.2xlarge:

  Processor: Intel Xeon Platinum 8375C (4 Cores / 8 Threads), Motherboard: Amazon EC2 c6i.2xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 1 x 16GB DDR4-3200MT/s, Disk: 215GB Amazon Elastic Block Store, Network: Amazon Elastic

  OS: Amazon Linux 2023, Kernel: 6.1.61-85.141.amzn2023.x86_64 (x86_64), Compiler: GCC 11.4.1 20230605, File-System: xfs, System Layer: amazon

nlp-benchmarks:

  Processor: Intel Xeon Platinum 8488C (4 Cores / 8 Threads), Motherboard: Amazon EC2 m7i-flex.2xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 1 x 32GB 4800MT/s, Disk: 215GB Amazon Elastic Block Store, Network: Amazon Elastic

  OS: Amazon Linux 2023, Kernel: 6.1.72-96.166.amzn2023.x86_64 (x86_64), Compiler: GCC 11.4.1 20230605, File-System: xfs, System Layer: amazon

oneDNN 3.3
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
ms < Lower Is Better
c6i.2xlarge .... 5.93669 |=====================================================
nlp-benchmarks . 5.84261 |====================================================

oneDNN 3.3
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
ms < Lower Is Better
c6i.2xlarge .... 12.66180 |====================================================
nlp-benchmarks . 8.10007 |=================================

oneDNN 3.3
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
ms < Lower Is Better
c6i.2xlarge .... 8.09803 |=====================================================
nlp-benchmarks . 7.94878 |====================================================

oneDNN 3.3
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
c6i.2xlarge .... 6.24391 |==============================================
nlp-benchmarks . 7.19462 |=====================================================

oneDNN 3.3
Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
c6i.2xlarge .... 2.038360 |====================================================
nlp-benchmarks . 1.016327 |==========================

oneDNN 3.3
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
c6i.2xlarge .... 1.772860 |====================================================
nlp-benchmarks . 1.003269 |=============================

oneDNN 3.3
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
ms < Lower Is Better
c6i.2xlarge .... 2496.84 |=====================================================
nlp-benchmarks . 2382.46 |===================================================

oneDNN 3.3
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
c6i.2xlarge .... 33.19200 |====================================================
nlp-benchmarks . 3.68993 |======

oneDNN 3.3
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
c6i.2xlarge .... 46.69860 |====================================================
nlp-benchmarks . 1.76350 |==

oneDNN 3.3
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
c6i.2xlarge .... 34.71660 |====================================================
nlp-benchmarks . 3.11075 |=====

oneDNN 3.3
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
ms < Lower Is Better
c6i.2xlarge .... 2492.29 |=====================================================
nlp-benchmarks . 2389.64 |===================================================

oneDNN 3.3
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
ms < Lower Is Better
c6i.2xlarge .... 2501.50 |=====================================================
nlp-benchmarks . 2318.31 |=================================================

Numpy Benchmark
Score > Higher Is Better
c6i.2xlarge .... 374.99 |==============================================
nlp-benchmarks . 438.25 |======================================================

PyTorch 2.1
Device: CPU - Batch Size: 1 - Model: ResNet-50
batches/sec > Higher Is Better
c6i.2xlarge .... 26.78 |=================================================
nlp-benchmarks . 29.89 |=======================================================

PyTorch 2.1
Device: CPU - Batch Size: 1 - Model: ResNet-152
batches/sec > Higher Is Better
c6i.2xlarge .... 10.57 |================================================
nlp-benchmarks . 12.10 |=======================================================

PyTorch 2.1
Device: CPU - Batch Size: 16 - Model: ResNet-50
batches/sec > Higher Is Better
c6i.2xlarge .... 15.96 |===============================================
nlp-benchmarks . 18.84 |=======================================================

PyTorch 2.1
Device: CPU - Batch Size: 32 - Model: ResNet-50
batches/sec > Higher Is Better
c6i.2xlarge .... 15.81 |===============================================
nlp-benchmarks . 18.34 |=======================================================

PyTorch 2.1
Device: CPU - Batch Size: 16 - Model: ResNet-152
batches/sec > Higher Is Better
c6i.2xlarge .... 6.36 |================================================
nlp-benchmarks . 7.36 |========================================================

PyTorch 2.1
Device: CPU - Batch Size: 32 - Model: ResNet-152
batches/sec > Higher Is Better
c6i.2xlarge .... 6.38 |===============================================
nlp-benchmarks . 7.55 |========================================================

PyTorch 2.1
Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
c6i.2xlarge .... 7.99 |==================================================
nlp-benchmarks . 8.99 |========================================================

PyTorch 2.1
Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
c6i.2xlarge .... 4.06 |==========================================
nlp-benchmarks . 5.38 |========================================================

PyTorch 2.1
Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l
batches/sec > Higher Is Better
c6i.2xlarge .... 4.04 |=========================================
nlp-benchmarks . 5.47 |========================================================

OpenVINO 2023.2.dev
Model: Face Detection FP16 - Device: CPU
FPS > Higher Is Better
c6i.2xlarge .... 1.77 |============
nlp-benchmarks . 8.43 |========================================================

OpenVINO 2023.2.dev
Model: Face Detection FP16 - Device: CPU
ms < Lower Is Better
c6i.2xlarge .... 2252.04 |=====================================================
nlp-benchmarks . 474.46 |===========

OpenVINO 2023.2.dev
Model: Face Detection FP16-INT8 - Device: CPU
FPS > Higher Is Better
c6i.2xlarge .... 6.53 |======================
nlp-benchmarks . 16.57 |=======================================================

OpenVINO 2023.2.dev
Model: Face Detection FP16-INT8 - Device: CPU
ms < Lower Is Better
c6i.2xlarge .... 610.59 |======================================================
nlp-benchmarks . 241.21 |=====================

OpenVINO 2023.2.dev
Model: Machine Translation EN To DE FP16 - Device: CPU
FPS > Higher Is Better
c6i.2xlarge .... 22.26 |=======================
nlp-benchmarks . 53.87 |=======================================================

OpenVINO 2023.2.dev
Model: Machine Translation EN To DE FP16 - Device: CPU
ms < Lower Is Better
c6i.2xlarge .... 179.53 |======================================================
nlp-benchmarks . 74.20 |======================

PyBench 2018-02-16
Total For Average Test Times
Milliseconds < Lower Is Better
c6i.2xlarge .... 1000 |========================================================
nlp-benchmarks . 736 |=========================================
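The raw pairs above can be hard to compare across lower-is-better (ms) and higher-is-better (FPS, batches/sec) metrics. The sketch below, using a handful of values transcribed from the results, normalizes each pair into a single "times faster on nlp-benchmarks" ratio; the `speedup` helper and the benchmark selection are illustrative, not part of the Phoronix Test Suite output.

```python
# Illustrative only: convert result pairs from the charts above into the
# factor by which nlp-benchmarks (m7i) outperforms c6i.2xlarge.
# Values are transcribed from the results; this script did not run the benchmarks.

def speedup(c6i, m7i, lower_is_better=True):
    """How many times faster nlp-benchmarks is than c6i.2xlarge for one pair."""
    return c6i / m7i if lower_is_better else m7i / c6i

# name: (c6i.2xlarge, nlp-benchmarks, lower_is_better)
results = {
    "oneDNN Deconv shapes_1d bf16 (ms)":      (46.69860, 1.76350, True),
    "oneDNN Conv Auto bf16 (ms)":             (33.19200, 3.68993, True),
    "OpenVINO Face Detection FP16 (FPS)":     (1.77, 8.43, False),
    "PyTorch ResNet-50 bs32 (batches/sec)":   (15.81, 18.34, False),
    "PyBench (ms)":                           (1000, 736, True),
}

for name, (c6i, m7i, lib) in results.items():
    print(f"{name}: {speedup(c6i, m7i, lib):.2f}x")
```

The large ratios are concentrated in the bf16 oneDNN harnesses, consistent with the two instance generations differing in bf16 support, while the scalar Python workload (PyBench) shows a much smaller gap.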