zn

2 x AMD EPYC 7773X 64-Core testing with an AMD DAYTONA_X (RYM1009B BIOS) motherboard and ASPEED graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2305011-NE-ZN838339286
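The reproduction step above can be sketched as a short shell session; the result ID comes from this file, and the exact prompts the Phoronix Test Suite shows may vary by version:

```shell
# Fetch this result file and run the same test selection locally;
# the benchmark subcommand installs any missing test profiles before
# running and merges your numbers into the comparison.
phoronix-test-suite benchmark 2305011-NE-ZN838339286
```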
Test suites represented in this result file:

  Timed Code Compilation (6 tests)
  C/C++ Compiler Tests (8 tests)
  CPU Massive (10 tests)
  Creator Workloads (10 tests)
  Cryptography (2 tests)
  Database Test Suite (3 tests)
  Encoding (4 tests)
  Fortran Tests (2 tests)
  Game Development (4 tests)
  HPC - High Performance Computing (3 tests)
  Common Kernel Benchmarks (3 tests)
  Machine Learning (2 tests)
  Multi-Core (14 tests)
  NVIDIA GPU Compute (2 tests)
  Intel oneAPI (2 tests)
  OpenMPI Tests (2 tests)
  Programmer / Developer System Benchmarks (6 tests)
  Python Tests (8 tests)
  Server (6 tests)
  Server CPU Tests (6 tests)
  Video Encoding (3 tests)
  Common Workstation Benchmarks (2 tests)

Test Runs

  Identifier   Run Date        Test Duration
  a            April 30 2023   4 Hours, 57 Minutes
  b            April 30 2023   4 Hours, 22 Minutes
  c            April 30 2023   4 Hours, 54 Minutes
  Average                      4 Hours, 44 Minutes



Recorded results

ClickHouse 22.12.3.5, 100M Rows Hits Dataset (higher is better), is the only test with numeric values recorded in this export:

  Run                      a                b                c
  First Run / Cold Cache   269.18555502841  401.11194403483  398.91295452052
  Second Run               273.4517094512   413.04274425496  403.79950240121
  Third Run                274.77250446555  404.68549312686  410.26370384578

Tests executed without recorded values in this export (result identifiers a, b, c unless noted):

  QuantLib 1.30 (higher is better)
  SPECFEM3D 4.0 - Models: Mount St. Helens, Layered Halfspace, Tomographic Model, Homogeneous Halfspace, Water-layered Halfspace (lower is better)
  John The Ripper 2023.03.14 - Tests: bcrypt, WPA PSK, Blowfish, HMAC-SHA512, MD5 (higher is better)
  Embree 4.0.1 - Binaries: Pathtracer, Pathtracer ISPC - Models: Crown, Asian Dragon (higher is better)
  SVT-AV1 1.5 - Encoder Mode Presets: 4, 8, 12, 13 - Inputs: Bosphorus 4K, Bosphorus 1080p (higher is better)
  uvg266 0.4.1 - Video Inputs: Bosphorus 4K, Bosphorus 1080p - Presets: Slow, Medium, Very Fast, Super Fast, Ultra Fast (higher is better)
  VVenC 1.8 - Video Inputs: Bosphorus 4K, Bosphorus 1080p - Presets: Fast, Faster (higher is better)
  OpenVKL 1.3.1 - Benchmarks: vklBenchmark ISPC, vklBenchmark Scalar (higher is better)
  Timed FFmpeg Compilation 6.0 - Time To Compile (lower is better)
  Timed Godot Game Engine Compilation 4.0 - Time To Compile (lower is better)
  Timed Linux Kernel Compilation 6.1 - Builds: defconfig, allmodconfig (lower is better)
  Timed LLVM Compilation 16.0 - Build Systems: Ninja, Unix Makefiles (lower is better)
  Timed Node.js Compilation 19.8.1 - Time To Compile (lower is better)
  Build2 0.15 - Time To Compile (lower is better)
  Opus Codec Encoding 1.4 - WAV To Opus Encode (lower is better)
  OpenSSL 3.1 - Algorithms: SHA256, SHA512, RSA4096 (listed twice), ChaCha20, AES-128-GCM, AES-256-GCM, ChaCha20-Poly1305 (higher is better)
  CockroachDB 22.2 - Workloads: MoVR; KV with 10%, 50%, 60%, 95% Reads - Concurrency: 128, 256, 512 each (higher is better)
  GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (higher is better)
  TensorFlow 2.12 - Device: CPU - Model: ResNet-50 - Batch Sizes: 16, 32, 64, 256, 512 (higher is better)
  Neural Magic DeepSparse 1.3.2 - Models: NLP Document Classification (oBERT base uncased on IMDB), NLP Sentiment Analysis (80% Pruned Quantized BERT Base Uncased), NLP Question Answering (BERT base uncased SQuaD 12layer Pruned90), CV Detection (YOLOv5s COCO), CV Classification (ResNet-50 ImageNet), NLP Text Classification (DistilBERT mnli), CV Segmentation (90% Pruned YOLACT Pruned), NLP Text Classification (BERT base uncased SST2), NLP Token Classification (BERT base uncased conll2003); each under Asynchronous Multi-Stream and Synchronous Single-Stream scenarios, with throughput (higher is better) and latency (lower is better) results
  Google Draco 1.5.6 - Models: Lion, Church Facade (lower is better)
  Blender 3.5 - Blend Files: BMW27, Classroom, Fishy Cat, Barbershop, Pabellon Barcelona - Compute: CPU-Only (lower is better)
  DeepRec - Models: BST, DIN, PLE, DLRM, MMOE, DCNv2 - Data Types: BF16, FP32 (higher is better)
  PETSc 3.19 - Test: Streams (higher is better)
  RocksDB 8.0 - Tests: Random Read, Update Random, Read While Writing, Read Random Write Random (higher is better)
  nginx 1.23.2 - Connections: 100, 200, 500, 1000 (higher is better)
  Apache HTTP Server 2.4.56 - Concurrent Requests: 100, 200, 500, 1000 (higher is better)
  BRL-CAD 7.34 - VGR Performance Metric (higher is better; identifiers a and c only)
  Intel TensorFlow 2.12 - Models: resnet50, inceptionv4, mobilenetv1, each as fp32 and int8 pretrained models - Batch Sizes: 1, 16, 32, 64, 96, 256, 512, 960 (higher is better; identifiers a and c only)