axs2mlperf

This is a facilitator for the MLPerf Inference Benchmark Suite that leverages the axs2mlperf Docker container build, currently facilitating ResNet-50 reference model inference CPU benchmarks. See reference information at https://github.com/krai/axs2mlperf/blob/master/demo/README.md



Test: ResNet50 MLPerf Reference Model

OpenBenchmarking.org metrics for this test profile configuration, based on 29 public results since 12 May 2023, with the latest data as of 19 May 2023.

Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely; this overview is intended to offer only general guidance on performance expectations.

| Component | Details | Percentile Rank | # Compatible Public Results | Samples Per Second (Average) |
|---|---|---|---|---|
| Zen 4 | 16 Cores / 32 Threads | 84th | 3 | 152 ± 7 |
| Zen 4 | 16 Cores / 32 Threads | 84th | 4 | 149 ± 19 |
| Mid-Tier | | 75th | | < 121 |
| Zen 4 | 192 Cores / 384 Threads | 64th | 5 | 85 ± 2 |
| Median | | 50th | | 53 |
| Zen 3 | 8 Cores / 16 Threads | 39th | 4 | 42 ± 5 |
| Low-Tier | | 25th | | < 28 |
| Tiger Lake | 4 Cores / 8 Threads | 12th | 3 | 26 ± 1 |
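
Each row reports two statistics: an average throughput with a ± spread across that component's compatible public results, and a percentile rank relative to the full pool of public results for this configuration. The sketch below shows one plausible way such figures could be derived; the exact aggregation OpenBenchmarking.org uses is not documented here, the numbers are made up for illustration, and the `percentile_rank` helper is hypothetical.

```python
from statistics import mean, stdev

# Illustrative (made-up) samples-per-second values: one component's
# compatible public results, and the full pool of public results.
component_results = [88.0, 90.0, 86.0]
all_results = [26, 31, 42, 45, 47, 53, 58, 64, 70, 75,
               81, 86, 88, 90, 95, 104, 118, 126, 140, 152]

def percentile_rank(results, pool):
    """Share (0-100) of the pool that this component's average meets or exceeds."""
    avg = mean(results)
    return 100.0 * sum(1 for r in pool if r <= avg) / len(pool)

avg = mean(component_results)
spread = stdev(component_results)
print(f"Samples Per Second (Average): {avg:.0f} +/- {spread:.0f}")
print(f"Percentile Rank: ~{percentile_rank(component_results, all_results):.0f}th")
```

With the illustrative inputs above this prints an average of 88 +/- 2 samples per second at roughly the 65th percentile, mirroring the shape of the table rows without reproducing any specific entry.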