epyc last

AMD EPYC 7343 16-Core testing with a Supermicro H12SSL-i v1.02 (2.4 BIOS) and astdrmfb on AlmaLinux 9.1 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2304307-NE-EPYCLAST283
Jump To Table - Results

View

Do Not Show Noisy Results
Do Not Show Results With Incomplete Data
Do Not Show Results With Little Change/Spread
List Notable Results

Limit displaying results to tests within:

Database Test Suite 2 Tests
Server 2 Tests

Statistics

Show Overall Harmonic Mean(s)
Show Overall Geometric Mean
Show Geometric Means Per-Suite/Category
Show Wins / Losses Counts (Pie Chart)
Normalize Results
Remove Outliers Before Calculating Averages

Graph Settings

Force Line Graphs Where Applicable
Convert To Scalar Where Applicable
Prefer Vertical Bar Graphs

Multi-Way Comparison

Condense Multi-Option Tests Into Single Result Graphs

Table

Show Detailed System Result Table

Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
a
April 30 2023
  7 Hours, 46 Minutes
b
April 30 2023
  2 Hours, 34 Minutes
c
April 30 2023
  2 Hours, 34 Minutes
d
April 30 2023
  2 Hours, 34 Minutes
Invert Hiding All Results Option
  3 Hours, 52 Minutes

Only show results where is faster than
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


epyc lastOpenBenchmarking.orgPhoronix Test SuiteAMD EPYC 7343 16-Core @ 3.20GHz (16 Cores / 32 Threads)Supermicro H12SSL-i v1.02 (2.4 BIOS)8 x 64 GB DDR4-3200MT/s Samsung M393A8G40AB2-CWE2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07astdrmfbDELL E207WFPAlmaLinux 9.15.14.0-162.12.1.el9_1.x86_64 (x86_64)GCC 11.3.1 20220421ext41680x1050ProcessorMotherboardMemoryDiskGraphicsMonitorOSKernelCompilerFile-SystemScreen ResolutionEpyc Last BenchmarksSystem Logs- Transparent Huge Pages: always- --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl - NONE / relatime,rw,stripe=32 / raid1 nvme1n1p3[0] nvme0n1p3[1] Block Size: 4096 - Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa001173 - Python 3.9.14- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

abcdResult OverviewPhoronix Test Suite100%104%109%113%118%SQLiteInfluxDBQuantLibIntel TensorFlowSVT-AV1

epyc lastsqlite: 2sqlite: 4sqlite: 8sqlite: 16sqlite: 32quantlib: svt-av1: Preset 4 - Bosphorus 4Ksvt-av1: Preset 8 - Bosphorus 4Ksvt-av1: Preset 12 - Bosphorus 4Ksvt-av1: Preset 13 - Bosphorus 4Ksvt-av1: Preset 4 - Bosphorus 1080psvt-av1: Preset 8 - Bosphorus 1080psvt-av1: Preset 12 - Bosphorus 1080psvt-av1: Preset 13 - Bosphorus 1080pintel-tensorflow: resnet50_fp32_pretrained_model - 1intel-tensorflow: resnet50_fp32_pretrained_model - 1intel-tensorflow: resnet50_int8_pretrained_model - 1intel-tensorflow: resnet50_int8_pretrained_model - 1intel-tensorflow: resnet50_fp32_pretrained_model - 16intel-tensorflow: resnet50_fp32_pretrained_model - 32intel-tensorflow: resnet50_fp32_pretrained_model - 64intel-tensorflow: resnet50_fp32_pretrained_model - 96intel-tensorflow: resnet50_int8_pretrained_model - 16intel-tensorflow: resnet50_int8_pretrained_model - 32intel-tensorflow: resnet50_int8_pretrained_model - 64intel-tensorflow: resnet50_int8_pretrained_model - 96intel-tensorflow: resnet50_fp32_pretrained_model - 256intel-tensorflow: resnet50_fp32_pretrained_model - 512intel-tensorflow: resnet50_fp32_pretrained_model - 960intel-tensorflow: resnet50_int8_pretrained_model - 256intel-tensorflow: resnet50_int8_pretrained_model - 512intel-tensorflow: resnet50_int8_pretrained_model - 960intel-tensorflow: inceptionv4_fp32_pretrained_model - 1intel-tensorflow: inceptionv4_fp32_pretrained_model - 1intel-tensorflow: inceptionv4_int8_pretrained_model - 1intel-tensorflow: inceptionv4_int8_pretrained_model - 1intel-tensorflow: mobilenetv1_fp32_pretrained_model - 1intel-tensorflow: mobilenetv1_int8_pretrained_model - 1intel-tensorflow: inceptionv4_fp32_pretrained_model - 16intel-tensorflow: inceptionv4_fp32_pretrained_model - 32intel-tensorflow: inceptionv4_fp32_pretrained_model - 64intel-tensorflow: inceptionv4_fp32_pretrained_model - 96intel-tensorflow: inceptionv4_int8_pretrained_model - 16intel-tensorflow: inceptionv4_int8_pretrained_model - 32intel-tensorflow: inceptionv4_int8_pretrained_model - 64intel-tensorflow: inceptionv4_int8_pretrained_model - 96intel-tensorflow: mobilenetv1_fp32_pretrained_model - 16intel-tensorflow: mobilenetv1_fp32_pretrained_model - 32intel-tensorflow: mobilenetv1_fp32_pretrained_model - 64intel-tensorflow: mobilenetv1_fp32_pretrained_model - 96intel-tensorflow: mobilenetv1_int8_pretrained_model - 16intel-tensorflow: mobilenetv1_int8_pretrained_model - 32intel-tensorflow: mobilenetv1_int8_pretrained_model - 64intel-tensorflow: mobilenetv1_int8_pretrained_model - 96intel-tensorflow: inceptionv4_fp32_pretrained_model - 256intel-tensorflow: inceptionv4_fp32_pretrained_model - 512intel-tensorflow: inceptionv4_fp32_pretrained_model - 960intel-tensorflow: inceptionv4_int8_pretrained_model - 256intel-tensorflow: inceptionv4_int8_pretrained_model - 512intel-tensorflow: inceptionv4_int8_pretrained_model - 960intel-tensorflow: mobilenetv1_fp32_pretrained_model - 256intel-tensorflow: mobilenetv1_fp32_pretrained_model - 512intel-tensorflow: mobilenetv1_fp32_pretrained_model - 960intel-tensorflow: mobilenetv1_int8_pretrained_model - 256intel-tensorflow: mobilenetv1_int8_pretrained_model - 512intel-tensorflow: mobilenetv1_int8_pretrained_model - 960influxdb: 4 - 10000 - 2,5000,1 - 10000influxdb: 64 - 10000 - 2,5000,1 - 10000abcd2.1503.1785.0658.47611.3213202.13.76652.582174.519160.5019.02795.925547.498548.00779.28312.613221.8554.508168.744174.040170.509169.726346.017356.317365.095373.126167.968168.371168.721382.093383.615391.68032.2330.81769.0414.4361045.591933.3753.2053.1052.4451.83113.31117.00118.48118.25932.19981.32998.43990.352003.492056.182112.332083.5551.9251.7651.76119.18119.90120.741001.61976.58983.762090.972170.092133.181547894.41602099.92.0412.9383.8566.20911.8113206.13.78251.982175.683160.6649.06495.707542.028542.62579.77612.535221.7694.509171.632174.303170.516169.649347.926361.362364.043373.723167.998168.641169.729380.652385.546392.0733.1630.47569.1614.4151048.21932.4753.4752.9152.4051.72111.63117.81118.93119.097759734929.58984.41997.73986.722002.092110.192120.772081.8752.0751.8751.78119.18119.61120.941000.28974.84982.712106.542179.092128.11545780.81593776.42.1062.9043.9667.19811.7433200.73.79152.523172.696160.1399.12196.329535.678545.86479.98812.502216.3694.622171.049174.255170.603169.971348.365357.714365.003372.933168.055168.788170.093381.064385.5392.38332.0630.64769.0114.4261046.791933.5852.9953.1151.7251.90113.63116.93117.61119.82933.76982.66999.93988.1519892037.862063.782087.3952.0651.7751.59119.33119.91120.741001.43976.22982.462028.822161.982137.21552035.61599391.92.0392.7123.7616.07511.5983192.73.78452.572174.836159.5219.2895.648539.588547.43280.21812.466217.5934.596169.333172.269170.261169.31344.868357.035365.305372.928168.16168.365169.338380.015384.26391.38331.8630.72369.0714.3771046.41934.3253.2253.2751.9752.00113.36117.89119.83118.23931.22981.61999.92988.721984.382033.572091.662071.1852.0451.7651.62119.45120.56120.571001.3974.69984.362091.62171.792132.2915607581600346.9OpenBenchmarking.org

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.

Threads / Copies: 1

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

d: The test run did not produce a result.

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite 3.41.2Threads / Copies: 2dbca0.48380.96761.45141.93522.419SE +/- 0.004, N = 32.0392.0412.1062.1501. (CC) gcc options: -O2 -lz -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite 3.41.2Threads / Copies: 2dbca246810Min: 2.14 / Avg: 2.15 / Max: 2.161. (CC) gcc options: -O2 -lz -lm

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite 3.41.2Threads / Copies: 4dcba0.71511.43022.14532.86043.5755SE +/- 0.030, N = 152.7122.9042.9383.1781. (CC) gcc options: -O2 -lz -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite 3.41.2Threads / Copies: 4dcba246810Min: 2.92 / Avg: 3.18 / Max: 3.361. (CC) gcc options: -O2 -lz -lm

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite 3.41.2Threads / Copies: 8dbca1.13962.27923.41884.55845.698SE +/- 0.038, N = 33.7613.8563.9665.0651. (CC) gcc options: -O2 -lz -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite 3.41.2Threads / Copies: 8dbca246810Min: 5 / Avg: 5.06 / Max: 5.131. (CC) gcc options: -O2 -lz -lm

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite 3.41.2Threads / Copies: 16dbca246810SE +/- 0.163, N = 136.0756.2097.1988.4761. (CC) gcc options: -O2 -lz -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite 3.41.2Threads / Copies: 16dbca3691215Min: 6.7 / Avg: 8.48 / Max: 8.971. (CC) gcc options: -O2 -lz -lm

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite 3.41.2Threads / Copies: 32adcb3691215SE +/- 0.05, N = 311.3211.6011.7411.811. (CC) gcc options: -O2 -lz -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite 3.41.2Threads / Copies: 32adcb3691215Min: 11.24 / Avg: 11.32 / Max: 11.411. (CC) gcc options: -O2 -lz -lm

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.30bacd7001400210028003500SE +/- 1.35, N = 33206.13202.13200.73192.71. (CXX) g++ options: -O3 -march=native -fPIE -pie
OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.30bacd6001200180024003000Min: 3199.5 / Avg: 3202.07 / Max: 3204.11. (CXX) g++ options: -O3 -march=native -fPIE -pie

SVT-AV1

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 4 - Input: Bosphorus 4Kcdba0.8531.7062.5593.4124.265SE +/- 0.019, N = 33.7913.7843.7823.7661. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 4 - Input: Bosphorus 4Kcdba246810Min: 3.74 / Avg: 3.77 / Max: 3.81. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 8 - Input: Bosphorus 4Kadcb1224364860SE +/- 0.19, N = 352.5852.5752.5251.981. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 8 - Input: Bosphorus 4Kadcb1122334455Min: 52.21 / Avg: 52.58 / Max: 52.841. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 12 - Input: Bosphorus 4Kbdac4080120160200SE +/- 0.56, N = 3175.68174.84174.52172.701. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 12 - Input: Bosphorus 4Kbdac306090120150Min: 173.42 / Avg: 174.52 / Max: 175.221. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 13 - Input: Bosphorus 4Kbacd4080120160200SE +/- 0.85, N = 3160.66160.50160.14159.521. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 13 - Input: Bosphorus 4Kbacd306090120150Min: 159.02 / Avg: 160.5 / Max: 161.971. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 4 - Input: Bosphorus 1080pdcba3691215SE +/- 0.031, N = 39.2809.1219.0649.0271. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 4 - Input: Bosphorus 1080pdcba3691215Min: 8.97 / Avg: 9.03 / Max: 9.071. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 8 - Input: Bosphorus 1080pcabd20406080100SE +/- 0.42, N = 396.3395.9395.7195.651. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 8 - Input: Bosphorus 1080pcabd20406080100Min: 95.27 / Avg: 95.93 / Max: 96.71. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 12 - Input: Bosphorus 1080pabdc120240360480600SE +/- 0.64, N = 3547.50542.03539.59535.681. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 12 - Input: Bosphorus 1080pabdc100200300400500Min: 546.25 / Avg: 547.5 / Max: 548.391. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 13 - Input: Bosphorus 1080padcb120240360480600SE +/- 0.34, N = 3548.01547.43545.86542.631. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.5Encoder Mode: Preset 13 - Input: Bosphorus 1080padcb100200300400500Min: 547.58 / Avg: 548.01 / Max: 548.681. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Intel TensorFlow

Intel optimized version of TensorFlow with benchmarks of Intel AI models and configurable batch sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 1dcba20406080100SE +/- 0.09, N = 380.2279.9979.7879.28
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 1dcba1530456075Min: 79.17 / Avg: 79.28 / Max: 79.45

OpenBenchmarking.orgms, Fewer Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 1dcba3691215SE +/- 0.01, N = 312.4712.5012.5412.61
OpenBenchmarking.orgms, Fewer Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 1dcba48121620Min: 12.59 / Avg: 12.61 / Max: 12.63

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 1abdc50100150200250SE +/- 1.58, N = 3221.86221.77217.59216.37
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 1abdc4080120160200Min: 218.74 / Avg: 221.85 / Max: 223.81

OpenBenchmarking.orgms, Fewer Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 1abdc1.042.083.124.165.2SE +/- 0.032, N = 34.5084.5094.5964.622
OpenBenchmarking.orgms, Fewer Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 1abdc246810Min: 4.47 / Avg: 4.51 / Max: 4.57

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 16bcda4080120160200SE +/- 0.83, N = 3171.63171.05169.33168.74
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 16bcda306090120150Min: 167.09 / Avg: 168.74 / Max: 169.73

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 32bcad4080120160200SE +/- 0.31, N = 3174.30174.26174.04172.27
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 32bcad306090120150Min: 173.45 / Avg: 174.04 / Max: 174.47

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 64cbad4080120160200SE +/- 0.12, N = 3170.60170.52170.51170.26
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 64cbad306090120150Min: 170.37 / Avg: 170.51 / Max: 170.74

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 96cabd4080120160200SE +/- 0.11, N = 3169.97169.73169.65169.31
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 96cabd306090120150Min: 169.53 / Avg: 169.73 / Max: 169.91

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 16cbad80160240320400SE +/- 1.45, N = 3348.37347.93346.02344.87
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 16cbad60120180240300Min: 343.12 / Avg: 346.02 / Max: 347.59

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 32bcda80160240320400SE +/- 0.47, N = 3361.36357.71357.04356.32
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 32bcda60120180240300Min: 355.77 / Avg: 356.32 / Max: 357.25

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 64dacb80160240320400SE +/- 0.43, N = 3365.31365.10365.00364.04
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 64dacb70140210280350Min: 364.23 / Avg: 365.1 / Max: 365.59

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 96bacd80160240320400SE +/- 0.27, N = 3373.72373.13372.93372.93
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 96bacd70140210280350Min: 372.64 / Avg: 373.13 / Max: 373.57

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 256dcba4080120160200SE +/- 0.12, N = 3168.16168.06168.00167.97
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 256dcba306090120150Min: 167.74 / Avg: 167.97 / Max: 168.14

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 512cbad4080120160200SE +/- 0.21, N = 3168.79168.64168.37168.37
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 512cbad306090120150Min: 168.02 / Avg: 168.37 / Max: 168.74

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 960cbda4080120160200SE +/- 0.30, N = 3170.09169.73169.34168.72
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_fp32_pretrained_model - Batch Size: 960cbda306090120150Min: 168.13 / Avg: 168.72 / Max: 169.12

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 256acbd80160240320400SE +/- 0.54, N = 3382.09381.06380.65380.02
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 256acbd70140210280350Min: 381.01 / Avg: 382.09 / Max: 382.69

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 512bcda80160240320400SE +/- 0.21, N = 3385.55385.50384.26383.62
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 512bcda70140210280350Min: 383.19 / Avg: 383.62 / Max: 383.84

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 960cbad90180270360450SE +/- 0.57, N = 3392.38392.07391.68391.38
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: resnet50_int8_pretrained_model - Batch Size: 960cbad70140210280350Min: 390.55 / Avg: 391.68 / Max: 392.41

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 1bacd816243240SE +/- 0.26, N = 333.1632.2332.0631.86
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 1bacd714212835Min: 31.92 / Avg: 32.23 / Max: 32.74

OpenBenchmarking.orgms, Fewer Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 1bcda714212835SE +/- 0.10, N = 330.4830.6530.7230.82
OpenBenchmarking.orgms, Fewer Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 1bcda714212835Min: 30.63 / Avg: 30.82 / Max: 30.96

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 1bdac1530456075SE +/- 0.07, N = 369.1669.0769.0469.01
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 1bdac1326395265Min: 68.9 / Avg: 69.04 / Max: 69.14

OpenBenchmarking.orgms, Fewer Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 1dbca48121620SE +/- 0.03, N = 314.3814.4214.4314.44
OpenBenchmarking.orgms, Fewer Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 1dbca48121620Min: 14.39 / Avg: 14.44 / Max: 14.5

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 1bcda2004006008001000SE +/- 0.98, N = 31048.201046.791046.401045.59
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 1bcda2004006008001000Min: 1043.67 / Avg: 1045.59 / Max: 1046.92

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 1dcab400800120016002000SE +/- 0.61, N = 31934.321933.581933.371932.47
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 1dcab30060090012001500Min: 1932.38 / Avg: 1933.37 / Max: 1934.48

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 16bdac1224364860SE +/- 0.07, N = 353.4753.2253.2052.99
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 16bdac1122334455Min: 53.12 / Avg: 53.2 / Max: 53.33

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 32dcab1224364860SE +/- 0.09, N = 353.2753.1153.1052.91
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 32dcab1122334455Min: 52.93 / Avg: 53.1 / Max: 53.26

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 64abdc1224364860SE +/- 0.12, N = 352.4452.4051.9751.72
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 64abdc1122334455Min: 52.22 / Avg: 52.44 / Max: 52.64

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 96dcab1224364860SE +/- 0.07, N = 352.0051.9051.8351.72
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 96dcab1020304050Min: 51.76 / Avg: 51.83 / Max: 51.97

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 16cdab306090120150SE +/- 0.30, N = 3113.63113.36113.31111.63
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 16cdab20406080100Min: 112.79 / Avg: 113.31 / Max: 113.81

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 32dbac306090120150SE +/- 0.81, N = 3117.89117.81117.00116.93
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 32dbac20406080100Min: 115.43 / Avg: 117 / Max: 118.11

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 64dbac306090120150SE +/- 0.54, N = 3119.83118.93118.48117.61
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 64dbac20406080100Min: 117.43 / Avg: 118.48 / Max: 119.25

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 96cbad306090120150SE +/- 0.42, N = 3119.82119.10118.25118.23
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 96cbad20406080100Min: 117.57 / Avg: 118.25 / Max: 119.01

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 16cadb2004006008001000SE +/- 0.39, N = 3933.76932.19931.22929.58
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 16cadb160320480640800Min: 931.44 / Avg: 932.19 / Max: 932.73

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 32bcda2004006008001000SE +/- 1.14, N = 3984.41982.66981.61981.32
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 32bcda2004006008001000Min: 979.13 / Avg: 981.32 / Max: 982.99

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 64cdab2004006008001000SE +/- 0.55, N = 3999.93999.92998.43997.73
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 64cdab2004006008001000Min: 997.72 / Avg: 998.43 / Max: 999.52

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 96adcb2004006008001000SE +/- 1.26, N = 3990.35988.72988.15986.72
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 96adcb2004006008001000Min: 989.05 / Avg: 990.35 / Max: 992.87

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 16abcd400800120016002000SE +/- 14.27, N = 32003.492002.091989.001984.38
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 16abcd30060090012001500Min: 1975.06 / Avg: 2003.49 / Max: 2019.85

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 32bacd5001000150020002500SE +/- 19.33, N = 72110.192056.182037.862033.57
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 32bacd400800120016002000Min: 1976.44 / Avg: 2056.18 / Max: 2111.88

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 64badc5001000150020002500SE +/- 10.67, N = 32120.772112.332091.662063.78
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 64badc400800120016002000Min: 2091 / Avg: 2112.33 / Max: 2123.58

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 96cabd400800120016002000SE +/- 2.02, N = 32087.392083.552081.872071.18
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 96cabd400800120016002000Min: 2081.06 / Avg: 2083.55 / Max: 2087.56

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 256bcda1224364860SE +/- 0.02, N = 352.0752.0652.0451.92
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 256bcda1020304050Min: 51.87 / Avg: 51.92 / Max: 51.94

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 512bcda1224364860SE +/- 0.05, N = 351.8751.7751.7651.76
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 512bcda1020304050Min: 51.67 / Avg: 51.76 / Max: 51.85

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 960badc1224364860SE +/- 0.11, N = 351.7851.7651.6251.59
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_fp32_pretrained_model - Batch Size: 960badc1020304050Min: 51.6 / Avg: 51.76 / Max: 51.98

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 256dcba306090120150SE +/- 0.27, N = 3119.45119.33119.18119.18
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 256dcba20406080100Min: 118.68 / Avg: 119.18 / Max: 119.61

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 512dcab306090120150SE +/- 0.14, N = 3120.56119.91119.90119.61
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 512dcab20406080100Min: 119.63 / Avg: 119.9 / Max: 120.06

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 960bcad306090120150SE +/- 0.16, N = 3120.94120.74120.74120.57
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: inceptionv4_int8_pretrained_model - Batch Size: 960bcad20406080100Min: 120.55 / Avg: 120.74 / Max: 121.05

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 256acdb2004006008001000SE +/- 0.11, N = 31001.611001.431001.301000.28
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 256acdb2004006008001000Min: 1001.38 / Avg: 1001.61 / Max: 1001.75

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 512acbd2004006008001000SE +/- 0.75, N = 3976.58976.22974.84974.69
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 512acbd2004006008001000Min: 975.24 / Avg: 976.58 / Max: 977.85

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 960dabc2004006008001000SE +/- 0.13, N = 3984.36983.76982.71982.46
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_fp32_pretrained_model - Batch Size: 960dabc2004006008001000Min: 983.6 / Avg: 983.76 / Max: 984.01

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 256bdac5001000150020002500SE +/- 14.60, N = 32106.542091.602090.972028.82
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 256bdac400800120016002000Min: 2061.77 / Avg: 2090.97 / Max: 2106.06

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 512bdac5001000150020002500SE +/- 2.76, N = 32179.092171.792170.092161.98
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 512bdac400800120016002000Min: 2164.81 / Avg: 2170.09 / Max: 2174.1

OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 960cadb5001000150020002500SE +/- 2.58, N = 32137.202133.182132.292128.10
OpenBenchmarking.orgimages/sec, More Is BetterIntel TensorFlow 2.12Model: mobilenetv1_int8_pretrained_model - Batch Size: 960cadb400800120016002000Min: 2128.12 / Avg: 2133.18 / Max: 2136.59

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000dcab300K600K900K1200K1500KSE +/- 5338.36, N = 31560758.01552035.61547894.41545780.8
OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000dcab300K600K900K1200K1500KMin: 1538524 / Avg: 1547894.43 / Max: 1557011.4

OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000adcb300K600K900K1200K1500KSE +/- 3918.66, N = 31602099.91600346.91599391.91593776.4
OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000adcb300K600K900K1200K1500KMin: 1594384.6 / Avg: 1602099.87 / Max: 1607150.7

68 Results Shown

SQLite:
  2
  4
  8
  16
  32
QuantLib
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
  Preset 4 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
Intel TensorFlow:
  resnet50_fp32_pretrained_model - 1:
    images/sec
    ms
  resnet50_int8_pretrained_model - 1:
    images/sec
    ms
  resnet50_fp32_pretrained_model - 16:
    images/sec
  resnet50_fp32_pretrained_model - 32:
    images/sec
  resnet50_fp32_pretrained_model - 64:
    images/sec
  resnet50_fp32_pretrained_model - 96:
    images/sec
  resnet50_int8_pretrained_model - 16:
    images/sec
  resnet50_int8_pretrained_model - 32:
    images/sec
  resnet50_int8_pretrained_model - 64:
    images/sec
  resnet50_int8_pretrained_model - 96:
    images/sec
  resnet50_fp32_pretrained_model - 256:
    images/sec
  resnet50_fp32_pretrained_model - 512:
    images/sec
  resnet50_fp32_pretrained_model - 960:
    images/sec
  resnet50_int8_pretrained_model - 256:
    images/sec
  resnet50_int8_pretrained_model - 512:
    images/sec
  resnet50_int8_pretrained_model - 960:
    images/sec
  inceptionv4_fp32_pretrained_model - 1:
    images/sec
    ms
  inceptionv4_int8_pretrained_model - 1:
    images/sec
    ms
  mobilenetv1_fp32_pretrained_model - 1:
    images/sec
  mobilenetv1_int8_pretrained_model - 1:
    images/sec
  inceptionv4_fp32_pretrained_model - 16:
    images/sec
  inceptionv4_fp32_pretrained_model - 32:
    images/sec
  inceptionv4_fp32_pretrained_model - 64:
    images/sec
  inceptionv4_fp32_pretrained_model - 96:
    images/sec
  inceptionv4_int8_pretrained_model - 16:
    images/sec
  inceptionv4_int8_pretrained_model - 32:
    images/sec
  inceptionv4_int8_pretrained_model - 64:
    images/sec
  inceptionv4_int8_pretrained_model - 96:
    images/sec
  mobilenetv1_fp32_pretrained_model - 16:
    images/sec
  mobilenetv1_fp32_pretrained_model - 32:
    images/sec
  mobilenetv1_fp32_pretrained_model - 64:
    images/sec
  mobilenetv1_fp32_pretrained_model - 96:
    images/sec
  mobilenetv1_int8_pretrained_model - 16:
    images/sec
  mobilenetv1_int8_pretrained_model - 32:
    images/sec
  mobilenetv1_int8_pretrained_model - 64:
    images/sec
  mobilenetv1_int8_pretrained_model - 96:
    images/sec
  inceptionv4_fp32_pretrained_model - 256:
    images/sec
  inceptionv4_fp32_pretrained_model - 512:
    images/sec
  inceptionv4_fp32_pretrained_model - 960:
    images/sec
  inceptionv4_int8_pretrained_model - 256:
    images/sec
  inceptionv4_int8_pretrained_model - 512:
    images/sec
  inceptionv4_int8_pretrained_model - 960:
    images/sec
  mobilenetv1_fp32_pretrained_model - 256:
    images/sec
  mobilenetv1_fp32_pretrained_model - 512:
    images/sec
  mobilenetv1_fp32_pretrained_model - 960:
    images/sec
  mobilenetv1_int8_pretrained_model - 256:
    images/sec
  mobilenetv1_int8_pretrained_model - 512:
    images/sec
  mobilenetv1_int8_pretrained_model - 960:
    images/sec
InfluxDB:
  4 - 10000 - 2,5000,1 - 10000
  64 - 10000 - 2,5000,1 - 10000