arm8-tensorflow-lite

Docker testing on Ubuntu 22.04.4 LTS via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2402295-NICH-ARM8TEN79
Run Details

Result Identifier: ARMv8 Cortex-A72 - tidssdrmfb - Texas Instruments
Date Run: February 29
Test Duration: 19 Minutes


System Details

Processor: ARMv8 Cortex-A72 (8 Cores)
Motherboard: Texas Instruments AM69 SK
Memory: 28GB
Disk: 32GB G1M15L + 32GB SL32G
Graphics: tidssdrmfb
Monitor: 16PM6Q
OS: Ubuntu 22.04.4 LTS
Kernel: 6.1.46-g5892b80d6b (aarch64)
Compiler: GCC 11.4.0
File-System: overlayfs
Screen Resolution: 1920x1080
System Layer: Docker

System Notes:
- Transparent Huge Pages: always
- Security: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Not affected + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected

Results Overview: arm8-tensorflow-lite
ARMv8 Cortex-A72 - tidssdrmfb - Texas Instruments (all results in microseconds; fewer is better)

tensorflow-lite: SqueezeNet            29038.0
tensorflow-lite: Inception V4          436905
tensorflow-lite: NASNet Mobile         65284.5
tensorflow-lite: Mobilenet Float       24568.5
tensorflow-lite: Mobilenet Quant       11713.0
tensorflow-lite: Inception ResNet V2   395575

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation, focused on TensorFlow machine learning for mobile, IoT, edge, and other use cases. The current Linux support is limited to running on the CPU. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
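As a rough illustration of what "average inference time" means here, the sketch below times repeated CPU inference with the TensorFlow Lite Python interpreter and reports the mean in microseconds. This is not the Phoronix test profile itself; the model path ("mobilenet_v1.tflite") and the number of runs are placeholder assumptions.

    import time
    import numpy as np
    import tensorflow as tf

    # Placeholder model path; any .tflite model file would do.
    interpreter = tf.lite.Interpreter(model_path="mobilenet_v1.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()[0]
    # Random input matching the model's expected input shape and dtype.
    dummy = np.random.random_sample(tuple(input_details["shape"])).astype(input_details["dtype"])

    runs = 50  # illustrative run count, not the test profile's setting
    timings_us = []
    for _ in range(runs):
        interpreter.set_tensor(input_details["index"], dummy)
        start = time.perf_counter()
        interpreter.invoke()
        timings_us.append((time.perf_counter() - start) * 1e6)  # seconds -> microseconds

    print(f"Average inference time: {np.mean(timings_us):.1f} microseconds over {runs} runs")

The Phoronix test profile reports a comparable average (in microseconds, fewer is better) for each model listed below.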

TensorFlow Lite 2022-05-18 results for ARMv8 Cortex-A72 - tidssdrmfb - Texas Instruments (Microseconds, Fewer Is Better):

Model: SqueezeNet             29038.0   (SE +/- 196.01, N = 3)
Model: Inception V4           436905    (SE +/- 677.29, N = 3)
Model: NASNet Mobile          65284.5   (SE +/- 913.07, N = 3)
Model: Mobilenet Float        24568.5   (SE +/- 107.09, N = 3)
Model: Mobilenet Quant        11713.0   (SE +/- 30.42, N = 3)
Model: Inception ResNet V2    395575    (SE +/- 1268.03, N = 3)