LiteRT

This is a benchmark of LiteRT (formerly known as TensorFlow Lite), Google's runtime for on-device machine learning on mobile, IoT, edge, and other devices. Current Linux support is limited to running on the CPU. This LiteRT benchmark test profile measures the average inference time.

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark litert.
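
For reference, a minimal command-line sketch of the usual workflow; both subcommands are standard Phoronix Test Suite commands rather than anything specific to this profile:

    # Install the test profile and its dependencies without running it
    phoronix-test-suite install litert

    # Install (if needed) and run the benchmark, with the option to save
    # and upload the results to OpenBenchmarking.org
    phoronix-test-suite benchmark litert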

Project Site: ai.google.dev
Source Repository: github.com
Test Created: 15 October 2024
Test Maintainer: Michael Larabel
Test Type: System
Average Install Time: 7 Seconds
Average Run Time: 3 Minutes, 16 Seconds
Accolades: Recently Created Test Profile
Supported Platforms:

Revision History

pts/litert-1.0.0 (Tue, 15 Oct 2024 16:03:11 GMT)
Initial commit of LiteRT following the TensorFlow Lite rebranding; forked from the pts/tensorflow-lite test profile.


Performance Metrics

This test profile is too new; it does not yet have enough data available on OpenBenchmarking.org to provide any detailed metrics.