epyc last

AMD EPYC 7343 16-Core testing with a Supermicro H12SSL-i v1.02 (2.4 BIOS) and astdrmfb on AlmaLinux 9.1 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2304307-NE-EPYCLAST283&sor.
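For readers who want to reproduce or compare against these numbers, the Phoronix Test Suite can typically re-run the same test selection directly from the OpenBenchmarking.org result ID above and compare a fresh local run against runs a-d. A minimal sketch, assuming phoronix-test-suite is installed and has network access:

    # Re-run the tests from this result and compare against the a/b/c/d runs
    phoronix-test-suite benchmark 2304307-NE-EPYCLAST283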

epyc last - system configuration (identical for runs a, b, c and d):

Processor: AMD EPYC 7343 16-Core @ 3.20GHz (16 Cores / 32 Threads)
Motherboard: Supermicro H12SSL-i v1.02 (2.4 BIOS)
Memory: 8 x 64 GB DDR4-3200MT/s Samsung M393A8G40AB2-CWE
Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07
Graphics: astdrmfb
Monitor: DELL E207WFP
OS: AlmaLinux 9.1
Kernel: 5.14.0-162.12.1.el9_1.x86_64 (x86_64)
Compiler: GCC 11.3.1 20220421
File-System: ext4
Screen Resolution: 1680x1050

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Disk Details: NONE / relatime,rw,stripe=32 / raid1 nvme1n1p3[0] nvme0n1p3[1] - Block Size: 4096
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa001173
Python Details: Python 3.9.14
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
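The notable host configuration points above (Transparent Huge Pages set to always, the acpi-cpufreq performance governor, and the two NVMe partitions in a RAID1 ext4 volume with a 4096-byte block size) could be recreated roughly as follows. This is a hedged sketch, not taken from the result file: the array path /dev/md0 is an illustrative assumption, and the commands assume root privileges with mdadm and cpupower available.

    # Transparent Huge Pages: always (as reported under Kernel Details)
    echo always > /sys/kernel/mm/transparent_hugepage/enabled
    # Performance scaling governor (as reported under Processor Details)
    cpupower frequency-set -g performance
    # RAID1 across the two NVMe partitions, then ext4 with 4096-byte blocks
    # (/dev/md0 is a placeholder device name)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3
    mkfs.ext4 -b 4096 /dev/md0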

The combined summary table covers the following tests, each executed on configurations a, b, c and d; the full per-test results are listed below.

- SQLite 3.41.2: Threads / Copies 2, 4, 8, 16 and 32
- QuantLib 1.30
- SVT-AV1 1.5: Presets 4, 8, 12 and 13 at Bosphorus 4K and Bosphorus 1080p
- Intel TensorFlow 2.12: resnet50, inceptionv4 and mobilenetv1 models (fp32 and int8) at batch sizes 1 through 960
- InfluxDB 1.8.2: 4 and 64 concurrent streams (batch size 10000, tags 2,5000,1, 10000 points per series)

SQLite

Threads / Copies: 2

SQLite 3.41.2 - Threads / Copies: 2 - Seconds, Fewer Is Better
d: 2.039, b: 2.041, c: 2.106, a: 2.150 (SE +/- 0.004, N = 3)
1. (CC) gcc options: -O2 -lz -lm

SQLite

Threads / Copies: 4

SQLite 3.41.2 - Threads / Copies: 4 - Seconds, Fewer Is Better
d: 2.712, c: 2.904, b: 2.938, a: 3.178 (SE +/- 0.030, N = 15)
1. (CC) gcc options: -O2 -lz -lm

SQLite

Threads / Copies: 8

SQLite 3.41.2 - Threads / Copies: 8 - Seconds, Fewer Is Better
d: 3.761, b: 3.856, c: 3.966, a: 5.065 (SE +/- 0.038, N = 3)
1. (CC) gcc options: -O2 -lz -lm

SQLite

Threads / Copies: 16

SQLite 3.41.2 - Threads / Copies: 16 - Seconds, Fewer Is Better
d: 6.075, b: 6.209, c: 7.198, a: 8.476 (SE +/- 0.163, N = 13)
1. (CC) gcc options: -O2 -lz -lm

SQLite

Threads / Copies: 32

SQLite 3.41.2 - Threads / Copies: 32 - Seconds, Fewer Is Better
a: 11.32, d: 11.60, c: 11.74, b: 11.81 (SE +/- 0.05, N = 3)
1. (CC) gcc options: -O2 -lz -lm

QuantLib

QuantLib 1.30 - MFLOPS, More Is Better
b: 3206.1, a: 3202.1, c: 3200.7, d: 3192.7 (SE +/- 1.35, N = 3)
1. (CXX) g++ options: -O3 -march=native -fPIE -pie

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.5 - Encoder Mode: Preset 4 - Input: Bosphorus 4K - Frames Per Second, More Is Better
c: 3.791, d: 3.784, b: 3.782, a: 3.766 (SE +/- 0.019, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.5 - Encoder Mode: Preset 8 - Input: Bosphorus 4K - Frames Per Second, More Is Better
a: 52.58, d: 52.57, c: 52.52, b: 51.98 (SE +/- 0.19, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.5 - Encoder Mode: Preset 12 - Input: Bosphorus 4K - Frames Per Second, More Is Better
b: 175.68, d: 174.84, a: 174.52, c: 172.70 (SE +/- 0.56, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.5 - Encoder Mode: Preset 13 - Input: Bosphorus 4K - Frames Per Second, More Is Better
b: 160.66, a: 160.50, c: 160.14, d: 159.52 (SE +/- 0.85, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.5 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p - Frames Per Second, More Is Better
d: 9.280, c: 9.121, b: 9.064, a: 9.027 (SE +/- 0.031, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.5 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p - Frames Per Second, More Is Better
c: 96.33, a: 95.93, b: 95.71, d: 95.65 (SE +/- 0.42, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.5 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p - Frames Per Second, More Is Better
a: 547.50, b: 542.03, d: 539.59, c: 535.68 (SE +/- 0.64, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

SVT-AV1 1.5 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p - Frames Per Second, More Is Better
a: 548.01, d: 547.43, c: 545.86, b: 542.63 (SE +/- 0.34, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 1 - images/sec, More Is Better
d: 80.22, c: 79.99, b: 79.78, a: 79.28 (SE +/- 0.09, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 1 - ms, Fewer Is Better
d: 12.47, c: 12.50, b: 12.54, a: 12.61 (SE +/- 0.01, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 1 - images/sec, More Is Better
a: 221.86, b: 221.77, d: 217.59, c: 216.37 (SE +/- 1.58, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 1 - ms, Fewer Is Better
a: 4.508, b: 4.509, d: 4.596, c: 4.622 (SE +/- 0.032, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 16

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 16 - images/sec, More Is Better
b: 171.63, c: 171.05, d: 169.33, a: 168.74 (SE +/- 0.83, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 32

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 32 - images/sec, More Is Better
b: 174.30, c: 174.26, a: 174.04, d: 172.27 (SE +/- 0.31, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 64

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 64 - images/sec, More Is Better
c: 170.60, b: 170.52, a: 170.51, d: 170.26 (SE +/- 0.12, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 96

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 96 - images/sec, More Is Better
c: 169.97, a: 169.73, b: 169.65, d: 169.31 (SE +/- 0.11, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 16

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 16 - images/sec, More Is Better
c: 348.37, b: 347.93, a: 346.02, d: 344.87 (SE +/- 1.45, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 32

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 32 - images/sec, More Is Better
b: 361.36, c: 357.71, d: 357.04, a: 356.32 (SE +/- 0.47, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 64

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 64 - images/sec, More Is Better
d: 365.31, a: 365.10, c: 365.00, b: 364.04 (SE +/- 0.43, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 96

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 96 - images/sec, More Is Better
b: 373.72, a: 373.13, c: 372.93, d: 372.93 (SE +/- 0.27, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 256

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 256 - images/sec, More Is Better
d: 168.16, c: 168.06, b: 168.00, a: 167.97 (SE +/- 0.12, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 512

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 512 - images/sec, More Is Better
c: 168.79, b: 168.64, a: 168.37, d: 168.37 (SE +/- 0.21, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 960

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 960 - images/sec, More Is Better
c: 170.09, b: 169.73, d: 169.34, a: 168.72 (SE +/- 0.30, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 256

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 256 - images/sec, More Is Better
a: 382.09, c: 381.06, b: 380.65, d: 380.02 (SE +/- 0.54, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 512

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 512 - images/sec, More Is Better
b: 385.55, c: 385.50, d: 384.26, a: 383.62 (SE +/- 0.21, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 960

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 960 - images/sec, More Is Better
c: 392.38, b: 392.07, a: 391.68, d: 391.38 (SE +/- 0.57, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 1 - images/sec, More Is Better
b: 33.16, a: 32.23, c: 32.06, d: 31.86 (SE +/- 0.26, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 1 - ms, Fewer Is Better
b: 30.48, c: 30.65, d: 30.72, a: 30.82 (SE +/- 0.10, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 1 - images/sec, More Is Better
b: 69.16, d: 69.07, a: 69.04, c: 69.01 (SE +/- 0.07, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 1 - ms, Fewer Is Better
d: 14.38, b: 14.42, c: 14.43, a: 14.44 (SE +/- 0.03, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 1 - images/sec, More Is Better
b: 1048.20, c: 1046.79, d: 1046.40, a: 1045.59 (SE +/- 0.98, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 1 - images/sec, More Is Better
d: 1934.32, c: 1933.58, a: 1933.37, b: 1932.47 (SE +/- 0.61, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 16

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 16 - images/sec, More Is Better
b: 53.47, d: 53.22, a: 53.20, c: 52.99 (SE +/- 0.07, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 32

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 32 - images/sec, More Is Better
d: 53.27, c: 53.11, a: 53.10, b: 52.91 (SE +/- 0.09, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 64

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 64 - images/sec, More Is Better
a: 52.44, b: 52.40, d: 51.97, c: 51.72 (SE +/- 0.12, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 96

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 96 - images/sec, More Is Better
d: 52.00, c: 51.90, a: 51.83, b: 51.72 (SE +/- 0.07, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 16

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 16 - images/sec, More Is Better
c: 113.63, d: 113.36, a: 113.31, b: 111.63 (SE +/- 0.30, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 32

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 32 - images/sec, More Is Better
d: 117.89, b: 117.81, a: 117.00, c: 116.93 (SE +/- 0.81, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 64

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 64 - images/sec, More Is Better
d: 119.83, b: 118.93, a: 118.48, c: 117.61 (SE +/- 0.54, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 96

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 96 - images/sec, More Is Better
c: 119.82, b: 119.10, a: 118.25, d: 118.23 (SE +/- 0.42, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 16

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 16 - images/sec, More Is Better
c: 933.76, a: 932.19, d: 931.22, b: 929.58 (SE +/- 0.39, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 32

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 32 - images/sec, More Is Better
b: 984.41, c: 982.66, d: 981.61, a: 981.32 (SE +/- 1.14, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 64

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 64 - images/sec, More Is Better
c: 999.93, d: 999.92, a: 998.43, b: 997.73 (SE +/- 0.55, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 96

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 96 - images/sec, More Is Better
a: 990.35, d: 988.72, c: 988.15, b: 986.72 (SE +/- 1.26, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 16

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 16 - images/sec, More Is Better
a: 2003.49, b: 2002.09, c: 1989.00, d: 1984.38 (SE +/- 14.27, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 32

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 32 - images/sec, More Is Better
b: 2110.19, a: 2056.18, c: 2037.86, d: 2033.57 (SE +/- 19.33, N = 7)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 64

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 64 - images/sec, More Is Better
b: 2120.77, a: 2112.33, d: 2091.66, c: 2063.78 (SE +/- 10.67, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 96

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 96 - images/sec, More Is Better
c: 2087.39, a: 2083.55, b: 2081.87, d: 2071.18 (SE +/- 2.02, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 256

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 256 - images/sec, More Is Better
b: 52.07, c: 52.06, d: 52.04, a: 51.92 (SE +/- 0.02, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 512

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 512 - images/sec, More Is Better
b: 51.87, c: 51.77, d: 51.76, a: 51.76 (SE +/- 0.05, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 960

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 960 - images/sec, More Is Better
b: 51.78, a: 51.76, d: 51.62, c: 51.59 (SE +/- 0.11, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 256

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 256 - images/sec, More Is Better
d: 119.45, c: 119.33, b: 119.18, a: 119.18 (SE +/- 0.27, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 512

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 512 - images/sec, More Is Better
d: 120.56, c: 119.91, a: 119.90, b: 119.61 (SE +/- 0.14, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 960

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 960 - images/sec, More Is Better
b: 120.94, c: 120.74, a: 120.74, d: 120.57 (SE +/- 0.16, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 256

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 256 - images/sec, More Is Better
a: 1001.61, c: 1001.43, d: 1001.30, b: 1000.28 (SE +/- 0.11, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 512

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 512 - images/sec, More Is Better
a: 976.58, c: 976.22, b: 974.84, d: 974.69 (SE +/- 0.75, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 960

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 960 - images/sec, More Is Better
d: 984.36, a: 983.76, b: 982.71, c: 982.46 (SE +/- 0.13, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 256

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 256 - images/sec, More Is Better
b: 2106.54, d: 2091.60, a: 2090.97, c: 2028.82 (SE +/- 14.60, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 512

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 512 - images/sec, More Is Better
b: 2179.09, d: 2171.79, a: 2170.09, c: 2161.98 (SE +/- 2.76, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 960

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 960 - images/sec, More Is Better
c: 2137.20, a: 2133.18, d: 2132.29, b: 2128.10 (SE +/- 2.58, N = 3)

InfluxDB

Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - val/sec, More Is Better
d: 1560758.0, c: 1552035.6, a: 1547894.4, b: 1545780.8 (SE +/- 5338.36, N = 3)

InfluxDB

Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - val/sec, More Is Better
a: 1602099.9, d: 1600346.9, c: 1599391.9, b: 1593776.4 (SE +/- 3918.66, N = 3)


Phoronix Test Suite v10.8.5