epyc last

AMD EPYC 7343 16-Core testing with a Supermicro H12SSL-i v1.02 (2.4 BIOS) and astdrmfb on AlmaLinux 9.1 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2304307-NE-EPYCLAST283

Run Details

Result Identifier | Date Run      | Test Duration
a                 | April 30 2023 | 7 Hours, 46 Minutes
b                 | April 30 2023 | 2 Hours, 34 Minutes
c                 | April 30 2023 | 2 Hours, 34 Minutes
d                 | April 30 2023 | 2 Hours, 34 Minutes
Average           |               | 3 Hours, 52 Minutes


epyc last - OpenBenchmarking.org - Phoronix Test Suite

Processor: AMD EPYC 7343 16-Core @ 3.20GHz (16 Cores / 32 Threads)
Motherboard: Supermicro H12SSL-i v1.02 (2.4 BIOS)
Memory: 8 x 64 GB DDR4-3200MT/s Samsung M393A8G40AB2-CWE
Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07
Graphics: astdrmfb
Monitor: DELL E207WFP
OS: AlmaLinux 9.1
Kernel: 5.14.0-162.12.1.el9_1.x86_64 (x86_64)
Compiler: GCC 11.3.1 20220421
File-System: ext4
Screen Resolution: 1680x1050

Epyc Last Benchmarks - System Logs
- Transparent Huge Pages: always
- --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
- NONE / relatime,rw,stripe=32 / raid1 nvme1n1p3[0] nvme0n1p3[1] Block Size: 4096
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa001173
- Python 3.9.14
- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

[Result Overview chart: relative performance of runs a, b, c, and d (normalized to 100%) across SQLite, InfluxDB, QuantLib, Intel TensorFlow, and SVT-AV1.]

[Per-test results spreadsheet omitted: the values for runs a, b, c, and d across all 68 results are listed in the individual result entries below.]

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark is used here to report the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.
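
As a point of reference for what the library itself does (separate from the built-in benchmark this profile runs), below is a minimal sketch using the QuantLib Python bindings to price a European call under Black-Scholes. The spot, strike, rate, volatility, and dates are arbitrary illustrative values, not anything drawn from this result file.

import QuantLib as ql

# Hypothetical evaluation date and market data, chosen only for illustration.
today = ql.Date(30, 4, 2023)
ql.Settings.instance().evaluationDate = today

spot = ql.QuoteHandle(ql.SimpleQuote(100.0))
rates = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.03, ql.Actual365Fixed()))
vol = ql.BlackVolTermStructureHandle(
    ql.BlackConstantVol(today, ql.TARGET(), 0.20, ql.Actual365Fixed()))

# Black-Scholes process without dividends.
process = ql.BlackScholesProcess(spot, rates, vol)

# One-year European call struck at 105, priced with the analytic engine.
option = ql.VanillaOption(ql.PlainVanillaPayoff(ql.Option.Call, 105.0),
                          ql.EuropeanExercise(ql.Date(30, 4, 2024)))
option.setPricingEngine(ql.AnalyticEuropeanEngine(process))
print(f"NPV: {option.NPV():.4f}")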

QuantLib 1.30 (MFLOPS, More Is Better): d: 3192.7 | c: 3200.7 | a: 3202.1 | b: 3206.1  [SE +/- 1.35, N = 3]
1. (CXX) g++ options: -O3 -march=native -fPIE -pie

SVT-AV1

SVT-AV1 1.5 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 3.766 | b: 3.782 | d: 3.784 | c: 3.791  [SE +/- 0.019, N = 3]
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.5 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better): b: 51.98 | c: 52.52 | d: 52.57 | a: 52.58  [SE +/- 0.19, N = 3]
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.5 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better): c: 172.70 | a: 174.52 | d: 174.84 | b: 175.68  [SE +/- 0.56, N = 3]
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.5 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better): d: 159.52 | c: 160.14 | a: 160.50 | b: 160.66  [SE +/- 0.85, N = 3]
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.5 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 9.027 | b: 9.064 | c: 9.121 | d: 9.280  [SE +/- 0.031, N = 3]
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.5 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): d: 95.65 | b: 95.71 | a: 95.93 | c: 96.33  [SE +/- 0.42, N = 3]
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.5 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): c: 535.68 | d: 539.59 | b: 542.03 | a: 547.50  [SE +/- 0.64, N = 3]
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.5 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): b: 542.63 | c: 545.86 | d: 547.43 | a: 548.01  [SE +/- 0.34, N = 3]
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
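
The actual harness is InfluxDB Inch, but the parameters in the result titles (concurrent streams, batch size, tag cardinality, points per series) describe an ordinary batched-write workload. The sketch below, assuming a local InfluxDB 1.x server and the influxdb Python client, only mirrors those knobs for illustration; the host, database name, measurement layout, and point counts are assumptions, not what Inch does internally.

import random
import time
from datetime import datetime, timedelta, timezone
from concurrent.futures import ThreadPoolExecutor

from influxdb import InfluxDBClient  # InfluxDB 1.x Python client

STREAMS = 4              # "Concurrent Streams: 4"
BATCH_SIZE = 10_000      # "Batch Size: 10000"
BATCHES_PER_STREAM = 10  # illustrative; the real tool writes far more points

def write_stream(stream_id):
    """One writer stream pushing batches of points with a 2,5000,1-like tag shape."""
    client = InfluxDBClient(host="localhost", port=8086, database="bench")
    for _ in range(BATCHES_PER_STREAM):
        now = datetime.now(timezone.utc)
        batch = [{
            "measurement": "m0",
            "time": (now + timedelta(microseconds=i)).isoformat(),
            "tags": {"t0": str(stream_id % 2),           # cardinality 2
                     "t1": str(random.randrange(5000)),  # cardinality 5000
                     "t2": "0"},                         # cardinality 1
            "fields": {"v0": random.random()},
        } for i in range(BATCH_SIZE)]
        client.write_points(batch)
    return BATCHES_PER_STREAM * BATCH_SIZE

admin = InfluxDBClient(host="localhost", port=8086)
admin.create_database("bench")

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=STREAMS) as pool:
    total = sum(pool.map(write_stream, range(STREAMS)))
print(f"{total / (time.perf_counter() - start):.0f} values/sec")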

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better): b: 1545780.8 | a: 1547894.4 | c: 1552035.6 | d: 1560758.0  [SE +/- 5338.36, N = 3]

Intel TensorFlow

Benchmarks of the Intel-optimized version of TensorFlow, running Intel AI models with configurable batch sizes. Learn more via the OpenBenchmarking.org test page.
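
The results below come from Intel's pretrained fp32/int8 models driven by this test profile. As a rough illustration of how an images/sec figure is produced for a given batch size, here is a sketch using stock tf.keras; a randomly initialized ResNet50 stands in for the actual Intel Model Zoo graphs, and the input size, warmup, and iteration counts are arbitrary assumptions.

import time
import numpy as np
import tensorflow as tf

def throughput(model, batch_size, warmup=3, iters=10):
    """Run timed forward passes on random data and return images/sec."""
    images = np.random.rand(batch_size, 224, 224, 3).astype(np.float32)
    for _ in range(warmup):  # let any graph tracing / caching settle first
        model.predict(images, verbose=0)
    start = time.perf_counter()
    for _ in range(iters):
        model.predict(images, verbose=0)
    elapsed = time.perf_counter() - start
    return batch_size * iters / elapsed

# Randomly initialized stand-in; the benchmark uses Intel's pretrained frozen graphs.
model = tf.keras.applications.ResNet50(weights=None)
for bs in (1, 16, 32, 64):
    print(f"batch {bs}: {throughput(model, bs):.1f} images/sec")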

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 1 (ms, Fewer Is Better): a: 30.82 | d: 30.72 | c: 30.65 | b: 30.48  [SE +/- 0.10, N = 3]

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better): b: 1593776.4 | c: 1599391.9 | d: 1600346.9 | a: 1602099.9  [SE +/- 3918.66, N = 3]

Intel TensorFlow

Benchmarks of the Intel-optimized version of TensorFlow, running Intel AI models with configurable batch sizes. Learn more via the OpenBenchmarking.org test page.

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 32 (images/sec, More Is Better): b: 52.91 | a: 53.10 | c: 53.11 | d: 53.27  [SE +/- 0.09, N = 3]

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.

Threads / Copies: 1

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

d: The test run did not produce a result.
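
For a sense of the workload shape described above, the sketch below times a configurable number of concurrent copies, each inserting rows into its own indexed SQLite database. The row count, schema, and one-database-file-per-copy layout are illustrative assumptions rather than the exact workload the test profile runs.

import os
import sqlite3
import time
from concurrent.futures import ThreadPoolExecutor

ROWS = 50_000  # hypothetical per-copy insertion count

def insert_copy(copy_id):
    """One copy: insert ROWS rows into its own indexed database file."""
    path = f"bench-{copy_id}.db"
    if os.path.exists(path):
        os.remove(path)
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
    con.execute("CREATE INDEX idx_payload ON t (payload)")
    cur = con.cursor()
    for i in range(ROWS):
        cur.execute("INSERT INTO t VALUES (?, ?)", (i, f"copy{copy_id}-row{i}"))
    con.commit()
    con.close()

def run(copies):
    """Time `copies` concurrent insert workloads, as in Threads / Copies: N."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=copies) as pool:
        list(pool.map(insert_copy, range(copies)))
    return time.perf_counter() - start

if __name__ == "__main__":
    for copies in (2, 4, 8, 16, 32):
        print(f"{copies} copies: {run(copies):.2f} s")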

Intel TensorFlow

Benchmarks of the Intel-optimized version of TensorFlow, running Intel AI models with configurable batch sizes. Learn more via the OpenBenchmarking.org test page.

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 32 (images/sec, More Is Better): c: 116.93 | a: 117.00 | b: 117.81 | d: 117.89  [SE +/- 0.81, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 960 (images/sec, More Is Better): b: 2128.10 | d: 2132.29 | a: 2133.18 | c: 2137.20  [SE +/- 2.58, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 96 (images/sec, More Is Better): b: 51.72 | a: 51.83 | c: 51.90 | d: 52.00  [SE +/- 0.07, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 256 (images/sec, More Is Better): a: 119.18 | b: 119.18 | c: 119.33 | d: 119.45  [SE +/- 0.27, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 512 (images/sec, More Is Better): d: 974.69 | b: 974.84 | c: 976.22 | a: 976.58  [SE +/- 0.75, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 96 (images/sec, More Is Better): b: 986.72 | c: 988.15 | d: 988.72 | a: 990.35  [SE +/- 1.26, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 96 (images/sec, More Is Better): d: 2071.18 | b: 2081.87 | a: 2083.55 | c: 2087.39  [SE +/- 2.02, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 64 (images/sec, More Is Better): d: 170.26 | a: 170.51 | b: 170.52 | c: 170.60  [SE +/- 0.12, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 96 (images/sec, More Is Better): d: 169.31 | b: 169.65 | a: 169.73 | c: 169.97  [SE +/- 0.11, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 16 (images/sec, More Is Better): a: 168.74 | d: 169.33 | c: 171.05 | b: 171.63  [SE +/- 0.83, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 32 (images/sec, More Is Better): d: 172.27 | a: 174.04 | c: 174.26 | b: 174.30  [SE +/- 0.31, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 1 (ms, Fewer Is Better): c: 4.622 | d: 4.596 | b: 4.509 | a: 4.508  [SE +/- 0.032, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 1 (images/sec, More Is Better): c: 216.37 | d: 217.59 | b: 221.77 | a: 221.86  [SE +/- 1.58, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 1 (ms, Fewer Is Better): a: 12.61 | b: 12.54 | c: 12.50 | d: 12.47  [SE +/- 0.01, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 96 (images/sec, More Is Better): d: 118.23 | a: 118.25 | b: 119.10 | c: 119.82  [SE +/- 0.42, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 16 (images/sec, More Is Better): d: 344.87 | a: 346.02 | b: 347.93 | c: 348.37  [SE +/- 1.45, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 32 (images/sec, More Is Better): a: 981.32 | d: 981.61 | c: 982.66 | b: 984.41  [SE +/- 1.14, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 32 (images/sec, More Is Better): a: 356.32 | d: 357.04 | c: 357.71 | b: 361.36  [SE +/- 0.47, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 32 (images/sec, More Is Better): d: 2033.57 | c: 2037.86 | a: 2056.18 | b: 2110.19  [SE +/- 19.33, N = 7]

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 64 (images/sec, More Is Better): b: 364.04 | c: 365.00 | a: 365.10 | d: 365.31  [SE +/- 0.43, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 512 (images/sec, More Is Better): a: 51.76 | d: 51.76 | c: 51.77 | b: 51.87  [SE +/- 0.05, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 96 (images/sec, More Is Better): d: 372.93 | c: 372.93 | a: 373.13 | b: 373.72  [SE +/- 0.27, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 960 (images/sec, More Is Better): d: 120.57 | a: 120.74 | c: 120.74 | b: 120.94  [SE +/- 0.16, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 256 (images/sec, More Is Better): a: 167.97 | b: 168.00 | c: 168.06 | d: 168.16  [SE +/- 0.12, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 256 (images/sec, More Is Better): c: 2028.82 | a: 2090.97 | d: 2091.60 | b: 2106.54  [SE +/- 14.60, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 512 (images/sec, More Is Better): d: 168.37 | a: 168.37 | b: 168.64 | c: 168.79  [SE +/- 0.21, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 1 (images/sec, More Is Better): a: 79.28 | b: 79.78 | c: 79.99 | d: 80.22  [SE +/- 0.09, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_fp32_pretrained_model - Batch Size: 960 (images/sec, More Is Better): a: 168.72 | d: 169.34 | b: 169.73 | c: 170.09  [SE +/- 0.30, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 16 (images/sec, More Is Better): b: 111.63 | a: 113.31 | d: 113.36 | c: 113.63  [SE +/- 0.30, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 256 (images/sec, More Is Better): d: 380.02 | b: 380.65 | c: 381.06 | a: 382.09  [SE +/- 0.54, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 64 (images/sec, More Is Better): c: 117.61 | a: 118.48 | b: 118.93 | d: 119.83  [SE +/- 0.54, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 512 (images/sec, More Is Better): a: 383.62 | d: 384.26 | c: 385.50 | b: 385.55  [SE +/- 0.21, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 16 (images/sec, More Is Better): b: 929.58 | d: 931.22 | a: 932.19 | c: 933.76  [SE +/- 0.39, N = 3]

Intel TensorFlow 2.12 - Model: resnet50_int8_pretrained_model - Batch Size: 960 (images/sec, More Is Better): d: 391.38 | a: 391.68 | b: 392.07 | c: 392.38  [SE +/- 0.57, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 64 (images/sec, More Is Better): b: 997.73 | a: 998.43 | d: 999.92 | c: 999.93  [SE +/- 0.55, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 1 (images/sec, More Is Better): d: 31.86 | c: 32.06 | a: 32.23 | b: 33.16  [SE +/- 0.26, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 16 (images/sec, More Is Better): d: 1984.38 | c: 1989.00 | b: 2002.09 | a: 2003.49  [SE +/- 14.27, N = 3]

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.

SQLite 3.41.2 - Threads / Copies: 2 (Seconds, Fewer Is Better): a: 2.150 | c: 2.106 | b: 2.041 | d: 2.039  [SE +/- 0.004, N = 3]
1. (CC) gcc options: -O2 -lz -lm

Intel TensorFlow

Benchmarks of the Intel-optimized version of TensorFlow, running Intel AI models with configurable batch sizes. Learn more via the OpenBenchmarking.org test page.

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 64 (images/sec, More Is Better): c: 2063.78 | d: 2091.66 | a: 2112.33 | b: 2120.77  [SE +/- 10.67, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 1 (images/sec, More Is Better): c: 69.01 | a: 69.04 | d: 69.07 | b: 69.16  [SE +/- 0.07, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 256 (images/sec, More Is Better): a: 51.92 | d: 52.04 | c: 52.06 | b: 52.07  [SE +/- 0.02, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 1 (ms, Fewer Is Better): a: 14.44 | c: 14.43 | b: 14.42 | d: 14.38  [SE +/- 0.03, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 960 (images/sec, More Is Better): c: 51.59 | d: 51.62 | a: 51.76 | b: 51.78  [SE +/- 0.11, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 1 (images/sec, More Is Better): a: 1045.59 | d: 1046.40 | c: 1046.79 | b: 1048.20  [SE +/- 0.98, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_int8_pretrained_model - Batch Size: 512 (images/sec, More Is Better): b: 119.61 | a: 119.90 | c: 119.91 | d: 120.56  [SE +/- 0.14, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 1 (images/sec, More Is Better): b: 1932.47 | a: 1933.37 | c: 1933.58 | d: 1934.32  [SE +/- 0.61, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 256 (images/sec, More Is Better): b: 1000.28 | d: 1001.30 | c: 1001.43 | a: 1001.61  [SE +/- 0.11, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 16 (images/sec, More Is Better): c: 52.99 | a: 53.20 | d: 53.22 | b: 53.47  [SE +/- 0.07, N = 3]

Intel TensorFlow 2.12 - Model: mobilenetv1_fp32_pretrained_model - Batch Size: 960 (images/sec, More Is Better): c: 982.46 | b: 982.71 | a: 983.76 | d: 984.36  [SE +/- 0.13, N = 3]

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.

SQLite 3.41.2 - Threads / Copies: 4 (Seconds, Fewer Is Better): a: 3.178 | b: 2.938 | c: 2.904 | d: 2.712  [SE +/- 0.030, N = 15]
1. (CC) gcc options: -O2 -lz -lm

Intel TensorFlow

Benchmarks of the Intel-optimized version of TensorFlow, running Intel AI models with configurable batch sizes. Learn more via the OpenBenchmarking.org test page.

Intel TensorFlow 2.12 - Model: mobilenetv1_int8_pretrained_model - Batch Size: 512 (images/sec, More Is Better): c: 2161.98 | a: 2170.09 | d: 2171.79 | b: 2179.09  [SE +/- 2.76, N = 3]

Intel TensorFlow 2.12 - Model: inceptionv4_fp32_pretrained_model - Batch Size: 64 (images/sec, More Is Better): c: 51.72 | d: 51.97 | b: 52.40 | a: 52.44  [SE +/- 0.12, N = 3]

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.

SQLite 3.41.2 - Threads / Copies: 8 (Seconds, Fewer Is Better): a: 5.065 | c: 3.966 | b: 3.856 | d: 3.761  [SE +/- 0.038, N = 3]
1. (CC) gcc options: -O2 -lz -lm

SQLite 3.41.2 - Threads / Copies: 16 (Seconds, Fewer Is Better): a: 8.476 | c: 7.198 | b: 6.209 | d: 6.075  [SE +/- 0.163, N = 13]
1. (CC) gcc options: -O2 -lz -lm

SQLite 3.41.2 - Threads / Copies: 32 (Seconds, Fewer Is Better): b: 11.81 | c: 11.74 | d: 11.60 | a: 11.32  [SE +/- 0.05, N = 3]
1. (CC) gcc options: -O2 -lz -lm

68 Results Shown

QuantLib
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
  Preset 4 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
InfluxDB
Intel TensorFlow
InfluxDB
Intel TensorFlow:
  inceptionv4_fp32_pretrained_model - 32
  inceptionv4_int8_pretrained_model - 32
  mobilenetv1_int8_pretrained_model - 960
  inceptionv4_fp32_pretrained_model - 96
  inceptionv4_int8_pretrained_model - 256
  mobilenetv1_fp32_pretrained_model - 512
  mobilenetv1_fp32_pretrained_model - 96
  mobilenetv1_int8_pretrained_model - 96
  resnet50_fp32_pretrained_model - 64
  resnet50_fp32_pretrained_model - 96
  resnet50_fp32_pretrained_model - 16
  resnet50_fp32_pretrained_model - 32
  resnet50_int8_pretrained_model - 1
  resnet50_int8_pretrained_model - 1
  resnet50_fp32_pretrained_model - 1
  inceptionv4_int8_pretrained_model - 96
  resnet50_int8_pretrained_model - 16
  mobilenetv1_fp32_pretrained_model - 32
  resnet50_int8_pretrained_model - 32
  mobilenetv1_int8_pretrained_model - 32
  resnet50_int8_pretrained_model - 64
  inceptionv4_fp32_pretrained_model - 512
  resnet50_int8_pretrained_model - 96
  inceptionv4_int8_pretrained_model - 960
  resnet50_fp32_pretrained_model - 256
  mobilenetv1_int8_pretrained_model - 256
  resnet50_fp32_pretrained_model - 512
  resnet50_fp32_pretrained_model - 1
  resnet50_fp32_pretrained_model - 960
  inceptionv4_int8_pretrained_model - 16
  resnet50_int8_pretrained_model - 256
  inceptionv4_int8_pretrained_model - 64
  resnet50_int8_pretrained_model - 512
  mobilenetv1_fp32_pretrained_model - 16
  resnet50_int8_pretrained_model - 960
  mobilenetv1_fp32_pretrained_model - 64
  inceptionv4_fp32_pretrained_model - 1
  mobilenetv1_int8_pretrained_model - 16
SQLite
Intel TensorFlow:
  mobilenetv1_int8_pretrained_model - 64
  inceptionv4_int8_pretrained_model - 1
  inceptionv4_fp32_pretrained_model - 256
  inceptionv4_int8_pretrained_model - 1
  inceptionv4_fp32_pretrained_model - 960
  mobilenetv1_fp32_pretrained_model - 1
  inceptionv4_int8_pretrained_model - 512
  mobilenetv1_int8_pretrained_model - 1
  mobilenetv1_fp32_pretrained_model - 256
  inceptionv4_fp32_pretrained_model - 16
  mobilenetv1_fp32_pretrained_model - 960
SQLite
Intel TensorFlow:
  mobilenetv1_int8_pretrained_model - 512
  inceptionv4_fp32_pretrained_model - 64
SQLite:
  8
  16
  32