This is a benchmark of TensorFlow Lite, the TensorFlow implementation for machine learning on mobile, IoT, edge, and similar devices. The current Linux support is limited to running on CPUs. This test profile measures the average inference time.
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark tensorflow-lite.
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021. Data updated weekly as of 26 March 2024.
Revision History
pts/tensorflow-lite-1.1.0 [View Source] Thu, 19 May 2022 09:57:39 GMT Update against latest upstream nightly.
pts/tensorflow-lite-1.0.0 [View Source] Sun, 23 Aug 2020 14:13:10 GMT TensorFlow Lite initial commit.
OpenBenchmarking.org metrics for this test profile configuration based on 1,506 public results since 23 August 2020 with the latest data as of 3 October 2023.
Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind, particularly in the Linux/open-source space, that OS configurations can vary vastly; this overview is intended to offer only general guidance on performance expectations.
Based on OpenBenchmarking.org data, the selected test / test configuration (TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2) has an average run-time of 6 minutes. By default this test profile runs at least 3 times, but the run count may increase if the standard deviation exceeds predefined thresholds or other calculations deem additional runs necessary for greater statistical accuracy of the result.
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.2%.
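The dynamic run-count behavior described above can be sketched as follows. This is a simplified illustration, not the actual Phoronix Test Suite implementation; the 3.5% relative standard deviation threshold and the 15-run cap are assumed values for demonstration only.

```python
import statistics

def run_benchmark(run_once, min_runs=3, max_runs=15, stddev_threshold_pct=3.5):
    """Illustrative sketch of a dynamic run-count policy: execute at least
    min_runs trials, then keep adding trials while the relative standard
    deviation of the results exceeds the threshold (all parameters here
    are assumptions, not the actual PTS defaults)."""
    results = [run_once() for _ in range(min_runs)]
    while len(results) < max_runs:
        mean = statistics.mean(results)
        rel_stddev = 100.0 * statistics.stdev(results) / mean
        if rel_stddev <= stddev_threshold_pct:
            break  # results are stable enough; stop adding runs
        results.append(run_once())
    return results
```

With a perfectly repeatable workload, the relative standard deviation is zero after the minimum three runs, so no extra runs are added.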
Does It Scale Well With Increasing Cores?
Yes, based on automated analysis of the collected public benchmark data, this test / test configuration does generally scale well with increasing CPU core counts. This is based on publicly available results for this test, separated by vendor, with each result divided by the reference CPU clock speed, grouped by matching physical CPU core count, and normalized against the smallest core count tested from each vendor, for each CPU having a sufficient number of test samples and statistically significant data.
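The normalization steps above can be sketched as a short function. This is a hypothetical reconstruction of the described analysis, not OpenBenchmarking.org's actual analytics code; the sample tuple layout is an assumption.

```python
from collections import defaultdict

def core_scaling(samples):
    """Sketch of the scaling analysis: each sample is assumed to be a
    (vendor, core_count, clock_ghz, result) tuple. Results are divided by
    the reference clock speed, averaged per (vendor, core_count) group,
    then normalized against the smallest core count for each vendor."""
    grouped = defaultdict(list)
    for vendor, cores, clock, result in samples:
        grouped[(vendor, cores)].append(result / clock)
    averaged = {k: sum(v) / len(v) for k, v in grouped.items()}
    scaling = {}
    for vendor in {v for v, _ in averaged}:
        counts = sorted(c for v, c in averaged if v == vendor)
        base = averaged[(vendor, counts[0])]  # smallest core count is the baseline
        for c in counts:
            scaling[(vendor, c)] = averaged[(vendor, c)] / base
    return scaling
```

For an inference-time metric like this one, a ratio below 1.0 at higher core counts (i.e., a lower normalized time) indicates good scaling.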
Notable Instruction Set Usage
Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.
FMA (fused multiply-add): used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Bulldozer (2011).
VFMADD132PS VFMADD231PS VFMADD213PS
Last automated analysis: 18 January 2022
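Whether the local CPU exposes the FMA extension that the VFMADD* instructions above belong to can be checked by reading the kernel's CPU flags. This is a Linux-only sketch; on other systems it simply reports False.

```python
def cpu_has_fma():
    """Return True if /proc/cpuinfo advertises the 'fma' CPU flag
    (the FMA3 extension used by the VFMADD* instructions), False if the
    flag is absent or the file is unavailable (non-Linux systems)."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "fma" in line.split()
    except OSError:
        pass
    return False
```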
This test profile binary relies on the shared libraries libm.so.6, libpthread.so.0, libdl.so.2, librt.so.1, libc.so.6.
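Availability of those shared libraries on the local system can be checked with Python's standard-library linker lookup. This is a convenience sketch, not part of the test profile; it maps each short library name to the soname the dynamic linker would resolve.

```python
from ctypes.util import find_library

def resolve_runtime_libs(names=("m", "pthread", "dl", "rt", "c")):
    """Resolve each short library name (libm, libpthread, libdl, librt,
    libc) to its full soname via the system linker; a value of None means
    the library was not found on this system."""
    return {name: find_library(name) for name in names}
```

Note that on recent glibc (2.34+), libpthread, libdl, and librt have been merged into libc, so compatibility shims may be what gets resolved.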
Tested CPU Architectures
This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.