ti-arm-2-core-tensorflow-lite-2-29-24

Docker testing on Ubuntu 22.04.4 LTS via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running:

    phoronix-test-suite benchmark 2402290-NICH-TIARM2C35

Result Identifier: 2-29-24 tensorflow-lite
Date: February 29
Test Duration: 20 Minutes


ti-arm-2-core-tensorflow-lite-2-29-24 Benchmarks (OpenBenchmarking.org / Phoronix Test Suite)

Processor: ARMv8 Cortex-A72 (2 Cores)
Motherboard: Texas Instruments J721E SK
Memory: 2560MB
Disk: 32GB SL32G
Graphics: tidssdrmfb
Monitor: 16PM6Q
OS: Ubuntu 22.04.4 LTS
Kernel: 6.1.46-g5892b80d6b (aarch64)
File-System: overlayfs
Screen Resolution: 1920x1080
System Layer: Docker

System Logs:
- Transparent Huge Pages: always
- gather_data_sampling: Not affected
- itlb_multihit: Not affected
- l1tf: Not affected
- mds: Not affected
- meltdown: Not affected
- mmio_stale_data: Not affected
- retbleed: Not affected
- spec_rstack_overflow: Not affected
- spec_store_bypass: Not affected
- spectre_v1: Mitigation of __user pointer sanitization
- spectre_v2: Mitigation of CSV2 BHB
- srbds: Not affected
- tsx_async_abort: Not affected

Results summary for run "2-29-24 tensorflow-lite" (average inference time in microseconds, fewer is better):

tensorflow-lite: SqueezeNet           98026.7
tensorflow-lite: Inception V4         1426867
tensorflow-lite: NASNet Mobile        147745
tensorflow-lite: Mobilenet Float      66831.5
tensorflow-lite: Mobilenet Quant      38015.1
tensorflow-lite: Inception ResNet V2  1231813

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation of TensorFlow machine learning, targeting mobile, IoT, edge, and other embedded use cases. Current Linux support is limited to running on the CPU. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
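Each per-model result below is reported as a mean over N = 3 runs together with a standard error ("SE +/- x, N = 3"). As a minimal sketch of how such figures are produced (this is not the actual Phoronix Test Suite harness, and `run_once` is a hypothetical stand-in for a single model inference), the mean and standard error across repeated timed runs can be computed like this:

```python
import statistics
import time


def time_inference(run_once, n=3, inner=100):
    """Time a callable over n independent runs and report the mean average
    inference time in microseconds plus the standard error of that mean,
    mirroring the 'SE +/- x, N = 3' style of the figures in this report.

    run_once: hypothetical stand-in for one model inference.
    inner:    inferences averaged within each run to smooth out jitter.
    """
    per_run_us = []
    for _ in range(n):
        start = time.perf_counter()
        for _ in range(inner):
            run_once()
        elapsed = time.perf_counter() - start
        # average microseconds per inference for this run
        per_run_us.append(elapsed / inner * 1e6)
    mean_us = statistics.fmean(per_run_us)
    # standard error of the mean = sample stdev / sqrt(n)
    se_us = statistics.stdev(per_run_us) / n ** 0.5
    return mean_us, se_us


if __name__ == "__main__":
    mean_us, se_us = time_inference(lambda: sum(range(1000)))
    print(f"{mean_us:.1f} us (SE +/- {se_us:.2f}, N = 3)")
```

The same scheme applies regardless of workload: only the callable being timed changes, so the reported SE reflects run-to-run variance rather than per-inference noise.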

TensorFlow Lite 2022-05-18, run "2-29-24 tensorflow-lite" (Microseconds, Fewer Is Better):

Model: SqueezeNet            98026.7   (SE +/- 290.74,  N = 3)
Model: Inception V4          1426867   (SE +/- 3423.84, N = 3)
Model: NASNet Mobile         147745    (SE +/- 146.08,  N = 3)
Model: Mobilenet Float       66831.5   (SE +/- 355.83,  N = 3)
Model: Mobilenet Quant       38015.1   (SE +/- 24.30,   N = 3)
Model: Inception ResNet V2   1231813   (SE +/- 6419.16, N = 3)