AMD EPYC 7343 16-Core testing with a Supermicro H12SSL-i v1.02 (2.4 BIOS) and astdrmfb on AlmaLinux 9.1 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2304307-NE-EPYCLAST283
HTML result view exported from: https://openbenchmarking.org/result/2304307-NE-EPYCLAST283&grw&sor .
epyc last - Tested configurations: a, b, c, d (identical system details)

Processor: AMD EPYC 7343 16-Core @ 3.20GHz (16 Cores / 32 Threads)
Motherboard: Supermicro H12SSL-i v1.02 (2.4 BIOS)
Memory: 8 x 64 GB DDR4-3200MT/s Samsung M393A8G40AB2-CWE
Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07
Graphics: astdrmfb
Monitor: DELL E207WFP
OS: AlmaLinux 9.1
Kernel: 5.14.0-162.12.1.el9_1.x86_64 (x86_64)
Compiler: GCC 11.3.1 20220421
File-System: ext4
Screen Resolution: 1680x1050

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Disk Details: NONE / relatime,rw,stripe=32 / raid1 nvme1n1p3[0] nvme0n1p3[1]; Block Size: 4096
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa001173
Python Details: Python 3.9.14
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
QuantLib 1.30 (MFLOPS, More Is Better)
b: 3206.1, a: 3202.1, c: 3200.7, d: 3192.7 (SE +/- 1.35, N = 3)
1. (CXX) g++ options: -O3 -march=native -fPIE -pie
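Each result in this file is a mean over N runs together with its standard error (SE). As a reminder of how such SE figures are derived, here is a minimal Python sketch; the three per-run samples below are hypothetical (the export does not include raw run data), chosen only so their mean matches the published QuantLib score for configuration b:

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical per-run MFLOPS scores for one configuration; the published
# figure would be their mean, reported alongside the SE.
runs = [3204.0, 3206.5, 3207.8]
print(round(statistics.mean(runs), 1))       # published score
print(round(standard_error(runs), 2))        # reported uncertainty
```

With only N = 3 runs the SE is a rough uncertainty estimate, which is why many of the sub-1% gaps between a, b, c, and d below are within noise.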
SVT-AV1 1.5 (Frames Per Second, More Is Better)
Encoder Mode: Preset 4 - Input: Bosphorus 4K: c: 3.791, d: 3.784, b: 3.782, a: 3.766 (SE +/- 0.019, N = 3)
Encoder Mode: Preset 8 - Input: Bosphorus 4K: a: 52.58, d: 52.57, c: 52.52, b: 51.98 (SE +/- 0.19, N = 3)
Encoder Mode: Preset 12 - Input: Bosphorus 4K: b: 175.68, d: 174.84, a: 174.52, c: 172.70 (SE +/- 0.56, N = 3)
Encoder Mode: Preset 13 - Input: Bosphorus 4K: b: 160.66, a: 160.50, c: 160.14, d: 159.52 (SE +/- 0.85, N = 3)
Encoder Mode: Preset 4 - Input: Bosphorus 1080p: d: 9.280, c: 9.121, b: 9.064, a: 9.027 (SE +/- 0.031, N = 3)
Encoder Mode: Preset 8 - Input: Bosphorus 1080p: c: 96.33, a: 95.93, b: 95.71, d: 95.65 (SE +/- 0.42, N = 3)
Encoder Mode: Preset 12 - Input: Bosphorus 1080p: a: 547.50, b: 542.03, d: 539.59, c: 535.68 (SE +/- 0.64, N = 3)
Encoder Mode: Preset 13 - Input: Bosphorus 1080p: a: 548.01, d: 547.43, c: 545.86, b: 542.63 (SE +/- 0.34, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
d: 1560758.0, c: 1552035.6, a: 1547894.4, b: 1545780.8 (SE +/- 5338.36, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 1 (ms, Fewer Is Better)
b: 30.48, c: 30.65, d: 30.72, a: 30.82 (SE +/- 0.10, N = 3)

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
a: 1602099.9, d: 1600346.9, c: 1599391.9, b: 1593776.4 (SE +/- 3918.66, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 32 (images/sec, More Is Better)
d: 53.27, c: 53.11, a: 53.10, b: 52.91 (SE +/- 0.09, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 32 (images/sec, More Is Better)
d: 117.89, b: 117.81, a: 117.00, c: 116.93 (SE +/- 0.81, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 960 (images/sec, More Is Better)
c: 2137.20, a: 2133.18, d: 2132.29, b: 2128.10 (SE +/- 2.58, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 96 (images/sec, More Is Better)
d: 52.00, c: 51.90, a: 51.83, b: 51.72 (SE +/- 0.07, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 256 (images/sec, More Is Better)
d: 119.45, c: 119.33, b: 119.18, a: 119.18 (SE +/- 0.27, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 512 (images/sec, More Is Better)
a: 976.58, c: 976.22, b: 974.84, d: 974.69 (SE +/- 0.75, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 96 (images/sec, More Is Better)
a: 990.35, d: 988.72, c: 988.15, b: 986.72 (SE +/- 1.26, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 96 (images/sec, More Is Better)
c: 2087.39, a: 2083.55, b: 2081.87, d: 2071.18 (SE +/- 2.02, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 64 (images/sec, More Is Better)
c: 170.60, b: 170.52, a: 170.51, d: 170.26 (SE +/- 0.12, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 96 (images/sec, More Is Better)
c: 169.97, a: 169.73, b: 169.65, d: 169.31 (SE +/- 0.11, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 16 (images/sec, More Is Better)
b: 171.63, c: 171.05, d: 169.33, a: 168.74 (SE +/- 0.83, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 32 (images/sec, More Is Better)
b: 174.30, c: 174.26, a: 174.04, d: 172.27 (SE +/- 0.31, N = 3)
Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 1 (ms, Fewer Is Better)
a: 4.508, b: 4.509, d: 4.596, c: 4.622 (SE +/- 0.032, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 1 (images/sec, More Is Better)
a: 221.86, b: 221.77, d: 217.59, c: 216.37 (SE +/- 1.58, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 1 (ms, Fewer Is Better)
d: 12.47, c: 12.50, b: 12.54, a: 12.61 (SE +/- 0.01, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 96 (images/sec, More Is Better)
c: 119.82, b: 119.10, a: 118.25, d: 118.23 (SE +/- 0.42, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 16 (images/sec, More Is Better)
c: 348.37, b: 347.93, a: 346.02, d: 344.87 (SE +/- 1.45, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 32 (images/sec, More Is Better)
b: 984.41, c: 982.66, d: 981.61, a: 981.32 (SE +/- 1.14, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 32 (images/sec, More Is Better)
b: 361.36, c: 357.71, d: 357.04, a: 356.32 (SE +/- 0.47, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 32 (images/sec, More Is Better)
b: 2110.19, a: 2056.18, c: 2037.86, d: 2033.57 (SE +/- 19.33, N = 7)

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 64 (images/sec, More Is Better)
d: 365.31, a: 365.10, c: 365.00, b: 364.04 (SE +/- 0.43, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 512 (images/sec, More Is Better)
b: 51.87, c: 51.77, d: 51.76, a: 51.76 (SE +/- 0.05, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 96 (images/sec, More Is Better)
b: 373.72, a: 373.13, c: 372.93, d: 372.93 (SE +/- 0.27, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 960 (images/sec, More Is Better)
b: 120.94, c: 120.74, a: 120.74, d: 120.57 (SE +/- 0.16, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 256 (images/sec, More Is Better)
d: 168.16, c: 168.06, b: 168.00, a: 167.97 (SE +/- 0.12, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 256 (images/sec, More Is Better)
b: 2106.54, d: 2091.60, a: 2090.97, c: 2028.82 (SE +/- 14.60, N = 3)
Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 512 (images/sec, More Is Better)
c: 168.79, b: 168.64, a: 168.37, d: 168.37 (SE +/- 0.21, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 1 (images/sec, More Is Better)
d: 80.22, c: 79.99, b: 79.78, a: 79.28 (SE +/- 0.09, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 960 (images/sec, More Is Better)
c: 170.09, b: 169.73, d: 169.34, a: 168.72 (SE +/- 0.30, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 16 (images/sec, More Is Better)
c: 113.63, d: 113.36, a: 113.31, b: 111.63 (SE +/- 0.30, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 256 (images/sec, More Is Better)
a: 382.09, c: 381.06, b: 380.65, d: 380.02 (SE +/- 0.54, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 64 (images/sec, More Is Better)
d: 119.83, b: 118.93, a: 118.48, c: 117.61 (SE +/- 0.54, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 512 (images/sec, More Is Better)
b: 385.55, c: 385.50, d: 384.26, a: 383.62 (SE +/- 0.21, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 16 (images/sec, More Is Better)
c: 933.76, a: 932.19, d: 931.22, b: 929.58 (SE +/- 0.39, N = 3)

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 960 (images/sec, More Is Better)
c: 392.38, b: 392.07, a: 391.68, d: 391.38 (SE +/- 0.57, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 64 (images/sec, More Is Better)
c: 999.93, d: 999.92, a: 998.43, b: 997.73 (SE +/- 0.55, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 1 (images/sec, More Is Better)
b: 33.16, a: 32.23, c: 32.06, d: 31.86 (SE +/- 0.26, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 16 (images/sec, More Is Better)
a: 2003.49, b: 2002.09, c: 1989.00, d: 1984.38 (SE +/- 14.27, N = 3)
SQLite 3.41.2 - Threads / Copies: 2 (Seconds, Fewer Is Better)
d: 2.039, b: 2.041, c: 2.106, a: 2.150 (SE +/- 0.004, N = 3)
1. (CC) gcc options: -O2 -lz -lm
Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 64 (images/sec, More Is Better)
b: 2120.77, a: 2112.33, d: 2091.66, c: 2063.78 (SE +/- 10.67, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 1 (images/sec, More Is Better)
b: 69.16, d: 69.07, a: 69.04, c: 69.01 (SE +/- 0.07, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 256 (images/sec, More Is Better)
b: 52.07, c: 52.06, d: 52.04, a: 51.92 (SE +/- 0.02, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 1 (ms, Fewer Is Better)
d: 14.38, b: 14.42, c: 14.43, a: 14.44 (SE +/- 0.03, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 960 (images/sec, More Is Better)
b: 51.78, a: 51.76, d: 51.62, c: 51.59 (SE +/- 0.11, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 1 (images/sec, More Is Better)
b: 1048.20, c: 1046.79, d: 1046.40, a: 1045.59 (SE +/- 0.98, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 512 (images/sec, More Is Better)
d: 120.56, c: 119.91, a: 119.90, b: 119.61 (SE +/- 0.14, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 1 (images/sec, More Is Better)
d: 1934.32, c: 1933.58, a: 1933.37, b: 1932.47 (SE +/- 0.61, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 256 (images/sec, More Is Better)
a: 1001.61, c: 1001.43, d: 1001.30, b: 1000.28 (SE +/- 0.11, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 16 (images/sec, More Is Better)
b: 53.47, d: 53.22, a: 53.20, c: 52.99 (SE +/- 0.07, N = 3)

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 960 (images/sec, More Is Better)
d: 984.36, a: 983.76, b: 982.71, c: 982.46 (SE +/- 0.13, N = 3)
SQLite 3.41.2 - Threads / Copies: 4 (Seconds, Fewer Is Better)
d: 2.712, c: 2.904, b: 2.938, a: 3.178 (SE +/- 0.030, N = 15)
1. (CC) gcc options: -O2 -lz -lm

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 512 (images/sec, More Is Better)
b: 2179.09, d: 2171.79, a: 2170.09, c: 2161.98 (SE +/- 2.76, N = 3)

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 64 (images/sec, More Is Better)
a: 52.44, b: 52.40, d: 51.97, c: 51.72 (SE +/- 0.12, N = 3)
SQLite 3.41.2 - Threads / Copies: 8 (Seconds, Fewer Is Better)
d: 3.761, b: 3.856, c: 3.966, a: 5.065 (SE +/- 0.038, N = 3)

SQLite 3.41.2 - Threads / Copies: 16 (Seconds, Fewer Is Better)
d: 6.075, b: 6.209, c: 7.198, a: 8.476 (SE +/- 0.163, N = 13)

SQLite 3.41.2 - Threads / Copies: 32 (Seconds, Fewer Is Better)
a: 11.32, d: 11.60, c: 11.74, b: 11.81 (SE +/- 0.05, N = 3)
1. (CC) gcc options: -O2 -lz -lm
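The SQLite results show the widest spread between the four runs of any test in this file, so they are a convenient place to quantify run-to-run variation. A minimal Python sketch, using the published seconds values copied from the tables above (lower is better):

```python
# SQLite 3.41.2 results (Seconds, Fewer Is Better), copied from the tables
# above: threads/copies -> {configuration: seconds}.
sqlite_seconds = {
    2:  {"a": 2.150, "b": 2.041, "c": 2.106, "d": 2.039},
    4:  {"a": 3.178, "b": 2.938, "c": 2.904, "d": 2.712},
    8:  {"a": 5.065, "b": 3.856, "c": 3.966, "d": 3.761},
    16: {"a": 8.476, "b": 6.209, "c": 7.198, "d": 6.075},
    32: {"a": 11.32, "b": 11.81, "c": 11.74, "d": 11.60},
}

for threads, results in sqlite_seconds.items():
    fastest = min(results, key=results.get)
    slowest = max(results, key=results.get)
    # How much slower the slowest run is relative to the fastest, in percent.
    spread = (results[slowest] / results[fastest] - 1) * 100
    print(f"{threads:>2} threads/copies: fastest={fastest}, "
          f"slowest={slowest}, spread={spread:.1f}%")
```

At 8 and 16 threads/copies configuration a trails the others by over 30%, while at 32 the ordering reverses and a is fastest; with single-run SEs this large, the mid-range gap is the only one clearly outside noise.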
Phoronix Test Suite v10.8.4