epyc last

AMD EPYC 7343 16-Core testing with a Supermicro H12SSL-i v1.02 (2.4 BIOS) and astdrmfb on AlmaLinux 9.1 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2304307-NE-EPYCLAST283

Test Runs

  Run  Date           Test Duration
  a    April 30 2023  7 Hours, 46 Minutes
  b    April 30 2023  2 Hours, 34 Minutes
  c    April 30 2023  2 Hours, 34 Minutes
  d    April 30 2023  2 Hours, 34 Minutes

  Average test duration: 3 Hours, 52 Minutes
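The 3 Hours, 52 Minutes figure is the mean test duration over the four runs, which can be checked with a few lines of arithmetic:

```python
# Per-run test durations from this result file, as (hours, minutes).
runs = {"a": (7, 46), "b": (2, 34), "c": (2, 34), "d": (2, 34)}

total_min = sum(h * 60 + m for h, m in runs.values())
avg_min = total_min // len(runs)  # 928 // 4 = 232 minutes
print(f"Average: {avg_min // 60} Hours, {avg_min % 60} Minutes")  # Average: 3 Hours, 52 Minutes
```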



Epyc Last Benchmarks — OpenBenchmarking.org / Phoronix Test Suite

System Under Test
  Processor:          AMD EPYC 7343 16-Core @ 3.20GHz (16 Cores / 32 Threads)
  Motherboard:        Supermicro H12SSL-i v1.02 (2.4 BIOS)
  Memory:             8 x 64 GB DDR4-3200MT/s Samsung M393A8G40AB2-CWE
  Disk:               2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07
  Graphics:           astdrmfb
  Monitor:            DELL E207WFP
  OS:                 AlmaLinux 9.1
  Kernel:             5.14.0-162.12.1.el9_1.x86_64 (x86_64)
  Compiler:           GCC 11.3.1 20220421
  File-System:        ext4
  Screen Resolution:  1680x1050

System Logs
  - Transparent Huge Pages: always
  - Compiler configure: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
  - Disk details: NONE / relatime,rw,stripe=32 / raid1 nvme1n1p3[0] nvme0n1p3[1]; Block Size: 4096
  - Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa001173; Python 3.9.14
  - Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

InfluxDB

This is a benchmark of InfluxDB, the open-source time-series database optimized for fast, high-availability storage for IoT and other use cases. The InfluxDB test profile uses InfluxDB Inch to facilitate the benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — InfluxDB 1.8.2 (val/sec, more is better)

Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (SE +/- 5338.36, N = 3)
  a: 1547894.4
  b: 1545780.8
  c: 1552035.6
  d: 1560758.0

Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (SE +/- 3918.66, N = 3)
  a: 1602099.9
  b: 1593776.4
  c: 1599391.9
  d: 1600346.9
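Every result in this file carries an "SE +/- ..., N = ..." annotation. As a minimal sketch of how such a figure is derived (assuming the conventional standard error of the mean, s / sqrt(N); the per-run samples below are hypothetical, since the raw samples are not included in this report):

```python
import math
import statistics

# Hypothetical per-run throughput samples (val/sec); the actual raw samples
# behind the reported "SE +/- ..." figures are not part of this result file.
samples = [1550000.0, 1556000.0, 1562000.0]

mean = statistics.mean(samples)
# Standard error of the mean: sample standard deviation over sqrt(N).
se = statistics.stdev(samples) / math.sqrt(len(samples))
print(f"mean = {mean:.1f}, SE +/- {se:.2f}, N = {len(samples)}")
```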

Intel TensorFlow

An Intel-optimized build of TensorFlow with benchmarks of Intel AI models and configurable batch sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Intel TensorFlow 2.12 (images/sec, more is better; rows marked "latency" are ms, fewer is better; N = 3 unless noted)

Model: resnet50_fp32_pretrained_model
  Batch Size 1:            a: 79.28     b: 79.78     c: 79.99     d: 80.22     (SE +/- 0.09)
  Batch Size 1 (latency):  a: 12.61     b: 12.54     c: 12.50     d: 12.47     (SE +/- 0.01)
  Batch Size 16:           a: 168.74    b: 171.63    c: 171.05    d: 169.33    (SE +/- 0.83)
  Batch Size 32:           a: 174.04    b: 174.30    c: 174.26    d: 172.27    (SE +/- 0.31)
  Batch Size 64:           a: 170.51    b: 170.52    c: 170.60    d: 170.26    (SE +/- 0.12)
  Batch Size 96:           a: 169.73    b: 169.65    c: 169.97    d: 169.31    (SE +/- 0.11)
  Batch Size 256:          a: 167.97    b: 168.00    c: 168.06    d: 168.16    (SE +/- 0.12)
  Batch Size 512:          a: 168.37    b: 168.64    c: 168.79    d: 168.37    (SE +/- 0.21)
  Batch Size 960:          a: 168.72    b: 169.73    c: 170.09    d: 169.34    (SE +/- 0.30)

Model: resnet50_int8_pretrained_model
  Batch Size 1:            a: 221.86    b: 221.77    c: 216.37    d: 217.59    (SE +/- 1.58)
  Batch Size 1 (latency):  a: 4.508     b: 4.509     c: 4.622     d: 4.596     (SE +/- 0.032)
  Batch Size 16:           a: 346.02    b: 347.93    c: 348.37    d: 344.87    (SE +/- 1.45)
  Batch Size 32:           a: 356.32    b: 361.36    c: 357.71    d: 357.04    (SE +/- 0.47)
  Batch Size 64:           a: 365.10    b: 364.04    c: 365.00    d: 365.31    (SE +/- 0.43)
  Batch Size 96:           a: 373.13    b: 373.72    c: 372.93    d: 372.93    (SE +/- 0.27)
  Batch Size 256:          a: 382.09    b: 380.65    c: 381.06    d: 380.02    (SE +/- 0.54)
  Batch Size 512:          a: 383.62    b: 385.55    c: 385.50    d: 384.26    (SE +/- 0.21)
  Batch Size 960:          a: 391.68    b: 392.07    c: 392.38    d: 391.38    (SE +/- 0.57)

Model: inceptionv4_fp32_pretrained_model
  Batch Size 1:            a: 32.23     b: 33.16     c: 32.06     d: 31.86     (SE +/- 0.26)
  Batch Size 1 (latency):  a: 30.82     b: 30.48     c: 30.65     d: 30.72     (SE +/- 0.10)
  Batch Size 16:           a: 53.20     b: 53.47     c: 52.99     d: 53.22     (SE +/- 0.07)
  Batch Size 32:           a: 53.10     b: 52.91     c: 53.11     d: 53.27     (SE +/- 0.09)
  Batch Size 64:           a: 52.44     b: 52.40     c: 51.72     d: 51.97     (SE +/- 0.12)
  Batch Size 96:           a: 51.83     b: 51.72     c: 51.90     d: 52.00     (SE +/- 0.07)
  Batch Size 256:          a: 51.92     b: 52.07     c: 52.06     d: 52.04     (SE +/- 0.02)
  Batch Size 512:          a: 51.76     b: 51.87     c: 51.77     d: 51.76     (SE +/- 0.05)
  Batch Size 960:          a: 51.76     b: 51.78     c: 51.59     d: 51.62     (SE +/- 0.11)

Model: inceptionv4_int8_pretrained_model
  Batch Size 1:            a: 69.04     b: 69.16     c: 69.01     d: 69.07     (SE +/- 0.07)
  Batch Size 1 (latency):  a: 14.44     b: 14.42     c: 14.43     d: 14.38     (SE +/- 0.03)
  Batch Size 16:           a: 113.31    b: 111.63    c: 113.63    d: 113.36    (SE +/- 0.30)
  Batch Size 32:           a: 117.00    b: 117.81    c: 116.93    d: 117.89    (SE +/- 0.81)
  Batch Size 64:           a: 118.48    b: 118.93    c: 117.61    d: 119.83    (SE +/- 0.54)
  Batch Size 96:           a: 118.25    b: 119.10    c: 119.82    d: 118.23    (SE +/- 0.42)
  Batch Size 256:          a: 119.18    b: 119.18    c: 119.33    d: 119.45    (SE +/- 0.27)
  Batch Size 512:          a: 119.90    b: 119.61    c: 119.91    d: 120.56    (SE +/- 0.14)
  Batch Size 960:          a: 120.74    b: 120.94    c: 120.74    d: 120.57    (SE +/- 0.16)

Model: mobilenetv1_fp32_pretrained_model
  Batch Size 1:            a: 1045.59   b: 1048.20   c: 1046.79   d: 1046.40   (SE +/- 0.98)
  Batch Size 16:           a: 932.19    b: 929.58    c: 933.76    d: 931.22    (SE +/- 0.39)
  Batch Size 32:           a: 981.32    b: 984.41    c: 982.66    d: 981.61    (SE +/- 1.14)
  Batch Size 64:           a: 998.43    b: 997.73    c: 999.93    d: 999.92    (SE +/- 0.55)
  Batch Size 96:           a: 990.35    b: 986.72    c: 988.15    d: 988.72    (SE +/- 1.26)
  Batch Size 256:          a: 1001.61   b: 1000.28   c: 1001.43   d: 1001.30   (SE +/- 0.11)
  Batch Size 512:          a: 976.58    b: 974.84    c: 976.22    d: 974.69    (SE +/- 0.75)
  Batch Size 960:          a: 983.76    b: 982.71    c: 982.46    d: 984.36    (SE +/- 0.13)

Model: mobilenetv1_int8_pretrained_model
  Batch Size 1:            a: 1933.37   b: 1932.47   c: 1933.58   d: 1934.32   (SE +/- 0.61)
  Batch Size 16:           a: 2003.49   b: 2002.09   c: 1989.00   d: 1984.38   (SE +/- 14.27)
  Batch Size 32:           a: 2056.18   b: 2110.19   c: 2037.86   d: 2033.57   (SE +/- 19.33, N = 7)
  Batch Size 64:           a: 2112.33   b: 2120.77   c: 2063.78   d: 2091.66   (SE +/- 10.67)
  Batch Size 96:           a: 2083.55   b: 2081.87   c: 2087.39   d: 2071.18   (SE +/- 2.02)
  Batch Size 256:          a: 2090.97   b: 2106.54   c: 2028.82   d: 2091.60   (SE +/- 14.60)
  Batch Size 512:          a: 2170.09   b: 2179.09   c: 2161.98   d: 2171.79   (SE +/- 2.76)
  Batch Size 960:          a: 2133.18   b: 2128.10   c: 2137.20   d: 2132.29   (SE +/- 2.58)
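The batch-size-1 resnet50 charts report both throughput (images/sec) and latency (ms), and at batch size 1 these are approximate reciprocals of each other. A quick sanity check on run d's fp32 numbers (80.22 images/sec against 12.47 ms; note the inceptionv4 pair does not track the reciprocal this closely, so this is a spot check rather than a general rule):

```python
# Batch-size-1 resnet50_fp32 figures from run d.
throughput = 80.22   # images/sec
latency_ms = 12.47   # reported per-image latency, ms

# 1000 ms divided by images/sec gives ms per image.
derived = 1000 / throughput
print(round(derived, 2))  # 12.47 — matches the reported latency
```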

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk-management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports a QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — QuantLib 1.30 (MFLOPS, more is better; SE +/- 1.35, N = 3)
  a: 3202.1
  b: 3206.1
  c: 3200.7
  d: 3192.7
  (CXX) g++ options: -O3 -march=native -fPIE -pie

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.
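The shape of this workload (not the actual test profile, which is part of the Phoronix Test Suite) can be sketched as follows: each "copy" performs a fixed number of insertions into its own indexed database, and the result is the wall-clock time for all copies to finish. Row counts and copy counts here are illustrative:

```python
import os
import sqlite3
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

# Illustrative sizes; the real test profile uses a pre-defined insertion
# count and scales copies up to the machine's CPU thread count.
ROWS, COPIES = 1000, 4

def run_copy(i):
    """One 'copy': insert ROWS rows into its own indexed database."""
    path = os.path.join(tempfile.gettempdir(), f"sqlite_bench_{i}.db")
    if os.path.exists(path):
        os.remove(path)
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE t (k INTEGER, v TEXT)")
    con.execute("CREATE INDEX idx_k ON t (k)")  # indexed, per the description
    with con:  # one transaction for the batch of insertions
        con.executemany("INSERT INTO t VALUES (?, ?)",
                        ((n, f"row-{n}") for n in range(ROWS)))
    count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    con.close()
    os.remove(path)
    return count

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=COPIES) as ex:
    counts = list(ex.map(run_copy, range(COPIES)))
elapsed = time.perf_counter() - start
print(f"{COPIES} copies x {ROWS} rows in {elapsed:.3f}s")
```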

Threads / Copies: 1

a, b, c, d: The test run did not produce a result.

OpenBenchmarking.org — SQLite 3.41.2 (Seconds, fewer is better; (CC) gcc options: -O2 -lz -lm)
  Threads / Copies: 2    a: 2.150    b: 2.041    c: 2.106    d: 2.039    (SE +/- 0.004, N = 3)
  Threads / Copies: 4    a: 3.178    b: 2.938    c: 2.904    d: 2.712    (SE +/- 0.030, N = 15)
  Threads / Copies: 8    a: 5.065    b: 3.856    c: 3.966    d: 3.761    (SE +/- 0.038, N = 3)
  Threads / Copies: 16   a: 8.476    b: 6.209    c: 7.198    d: 6.075    (SE +/- 0.163, N = 13)
  Threads / Copies: 32   a: 11.32    b: 11.81    c: 11.74    d: 11.60    (SE +/- 0.05, N = 3)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — SVT-AV1 1.5 (Frames Per Second, more is better; N = 3; (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq)
  Preset 4 - Bosphorus 4K      a: 3.766     b: 3.782     c: 3.791     d: 3.784     (SE +/- 0.019)
  Preset 8 - Bosphorus 4K      a: 52.58     b: 51.98     c: 52.52     d: 52.57     (SE +/- 0.19)
  Preset 12 - Bosphorus 4K     a: 174.52    b: 175.68    c: 172.70    d: 174.84    (SE +/- 0.56)
  Preset 13 - Bosphorus 4K     a: 160.50    b: 160.66    c: 160.14    d: 159.52    (SE +/- 0.85)
  Preset 4 - Bosphorus 1080p   a: 9.027     b: 9.064     c: 9.121     d: 9.280     (SE +/- 0.031)
  Preset 8 - Bosphorus 1080p   a: 95.93     b: 95.71     c: 96.33     d: 95.65     (SE +/- 0.42)
  Preset 12 - Bosphorus 1080p  a: 547.50    b: 542.03    c: 535.68    d: 539.59    (SE +/- 0.64)
  Preset 13 - Bosphorus 1080p  a: 548.01    b: 542.63    c: 545.86    d: 547.43    (SE +/- 0.34)

68 Results Shown

InfluxDB:
  4 - 10000 - 2,5000,1 - 10000
  64 - 10000 - 2,5000,1 - 10000
Intel TensorFlow:
  resnet50_fp32_pretrained_model - 1:
    images/sec
    ms
  resnet50_int8_pretrained_model - 1:
    images/sec
    ms
  resnet50_fp32_pretrained_model - 16:
    images/sec
  resnet50_fp32_pretrained_model - 32:
    images/sec
  resnet50_fp32_pretrained_model - 64:
    images/sec
  resnet50_fp32_pretrained_model - 96:
    images/sec
  resnet50_int8_pretrained_model - 16:
    images/sec
  resnet50_int8_pretrained_model - 32:
    images/sec
  resnet50_int8_pretrained_model - 64:
    images/sec
  resnet50_int8_pretrained_model - 96:
    images/sec
  resnet50_fp32_pretrained_model - 256:
    images/sec
  resnet50_fp32_pretrained_model - 512:
    images/sec
  resnet50_fp32_pretrained_model - 960:
    images/sec
  resnet50_int8_pretrained_model - 256:
    images/sec
  resnet50_int8_pretrained_model - 512:
    images/sec
  resnet50_int8_pretrained_model - 960:
    images/sec
  inceptionv4_fp32_pretrained_model - 1:
    images/sec
    ms
  inceptionv4_int8_pretrained_model - 1:
    images/sec
    ms
  mobilenetv1_fp32_pretrained_model - 1:
    images/sec
  mobilenetv1_int8_pretrained_model - 1:
    images/sec
  inceptionv4_fp32_pretrained_model - 16:
    images/sec
  inceptionv4_fp32_pretrained_model - 32:
    images/sec
  inceptionv4_fp32_pretrained_model - 64:
    images/sec
  inceptionv4_fp32_pretrained_model - 96:
    images/sec
  inceptionv4_int8_pretrained_model - 16:
    images/sec
  inceptionv4_int8_pretrained_model - 32:
    images/sec
  inceptionv4_int8_pretrained_model - 64:
    images/sec
  inceptionv4_int8_pretrained_model - 96:
    images/sec
  mobilenetv1_fp32_pretrained_model - 16:
    images/sec
  mobilenetv1_fp32_pretrained_model - 32:
    images/sec
  mobilenetv1_fp32_pretrained_model - 64:
    images/sec
  mobilenetv1_fp32_pretrained_model - 96:
    images/sec
  mobilenetv1_int8_pretrained_model - 16:
    images/sec
  mobilenetv1_int8_pretrained_model - 32:
    images/sec
  mobilenetv1_int8_pretrained_model - 64:
    images/sec
  mobilenetv1_int8_pretrained_model - 96:
    images/sec
  inceptionv4_fp32_pretrained_model - 256:
    images/sec
  inceptionv4_fp32_pretrained_model - 512:
    images/sec
  inceptionv4_fp32_pretrained_model - 960:
    images/sec
  inceptionv4_int8_pretrained_model - 256:
    images/sec
  inceptionv4_int8_pretrained_model - 512:
    images/sec
  inceptionv4_int8_pretrained_model - 960:
    images/sec
  mobilenetv1_fp32_pretrained_model - 256:
    images/sec
  mobilenetv1_fp32_pretrained_model - 512:
    images/sec
  mobilenetv1_fp32_pretrained_model - 960:
    images/sec
  mobilenetv1_int8_pretrained_model - 256:
    images/sec
  mobilenetv1_int8_pretrained_model - 512:
    images/sec
  mobilenetv1_int8_pretrained_model - 960:
    images/sec
QuantLib
SQLite:
  2
  4
  8
  16
  32
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
  Preset 4 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p