Tensorflow

This is a benchmark of the TensorFlow deep learning framework using the CIFAR-10 data set.
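For context, the sketch below shows the general shape of a CIFAR-10 training workload in TensorFlow (Keras). It is only an illustration of the kind of work this profile measures; the small CNN, batch size, and optimizer are assumptions here, and the script actually shipped with pts/tensorflow may differ. The 1000-step cap reflects the profile's revision history, which notes that max steps were raised to 1000.

    # Illustrative sketch only: a small Keras CNN trained on CIFAR-10 for a
    # fixed number of steps. The model, batch size, and optimizer are
    # assumptions; the script shipped with pts/tensorflow may differ.
    import tensorflow as tf

    (x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
    x_train = x_train.astype("float32") / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="sgd",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

    # Train for a fixed 1000 optimizer steps rather than full epochs,
    # mirroring the 1000-step cap noted in the profile's revision history.
    dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
               .shuffle(10_000).batch(128).repeat())
    model.fit(dataset, steps_per_epoch=1000, epochs=1)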

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark tensorflow.

Project Site

tensorflow.org

Test Created

6 February 2017

Last Updated

6 March 2020

Test Maintainer

Michael Larabel 

Test Type

System

Average Install Time

1 Minute, 6 Seconds

Average Run Time

2 Minutes, 40 Seconds

Test Dependencies

Python

Accolades

50k+ Downloads

Supported Platforms


[Chart: Tensorflow Popularity Statistics (pts/tensorflow) on OpenBenchmarking.org — monthly events from 2017.02 through 2021.10 for Public Result Uploads*, Reported Installs**, Reported Test Completions**, and Test Profile Page Views***]
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021.
Data current as of 26 November 2021.

Revision History

pts/tensorflow-1.1.0   [View Source]   Fri, 06 Mar 2020 07:23:10 GMT
Rework test, increase max steps to 1000

pts/tensorflow-1.0.0   [View Source]   Mon, 06 Feb 2017 18:08:28 GMT
Initial commit of TensorFlow benchmark

Suites Using This Test

Machine Learning

CPU Massive

HPC - High Performance Computing


Performance Metrics

Analyze Test Configuration:

Tensorflow

Build: Cifar10

OpenBenchmarking.org metrics for this test profile configuration based on 301 public results since 6 March 2020 with the latest data as of 25 October 2021.

Below is an overview of generalized performance for components where there is sufficient, statistically significant data from user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely, so this overview is intended only as general guidance on performance expectations. A short sketch of how a single result maps to these percentile ranks follows the table.

Component    Percentile Rank    # Compatible Public Results    Seconds (Average)
Mid-Tier     75th                                              > 55
             55th               3                              104 +/- 1
Median       50th                                              156
Low-Tier     25th                                              > 261
             1st                3                              546 +/- 1
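To make the percentile ranks above concrete, the snippet below shows one way a single run-time could be placed against a pool of public results for a fewer-is-better metric. The function name and the sample values are hypothetical illustrations, not OpenBenchmarking.org's actual data or code.

    # Hypothetical helper: for a fewer-is-better metric such as seconds,
    # a result's percentile rank is the share of public results it beats.
    def percentile_rank(seconds, public_results):
        slower = sum(1 for s in public_results if s > seconds)
        return 100.0 * slower / len(public_results)

    # Example with made-up results clustered around the ~156 second median above.
    public = [55, 98, 104, 156, 210, 261, 320, 546]
    print(percentile_rank(104, public))  # 62.5: faster than 5 of the 8 sample results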
[Chart: Distribution Of Public Results - Build: Cifar10 — 298 results ranging from 29 to 548 seconds]

Based on OpenBenchmarking.org data, the selected test / test configuration (Tensorflow - Build: Cifar10) has an average run-time of 3 minutes. By default this test profile is set to run at least 3 times but may increase if the standard deviation exceeds pre-defined defaults or other calculations deem additional runs necessary for greater statistical accuracy of the result.
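The run-count behaviour described above can be sketched as a simple loop: run the workload a minimum number of times, then keep adding runs while the deviation between runs stays high. The thresholds, cap, and helper name below are illustrative assumptions, not the Phoronix Test Suite's actual defaults.

    import statistics

    def adaptive_runs(run_benchmark, min_runs=3, max_runs=15, max_rel_dev=0.035):
        """Repeat the workload at least min_runs times, adding runs while the
        relative standard deviation stays above max_rel_dev (illustrative
        values, not the Phoronix Test Suite's actual thresholds)."""
        times = [run_benchmark() for _ in range(min_runs)]
        while len(times) < max_runs:
            rel_dev = statistics.stdev(times) / statistics.mean(times)
            if rel_dev <= max_rel_dev:
                break
            times.append(run_benchmark())
        return times

Here run_benchmark stands in for one timed pass of the Cifar10 workload; with results as stable as the roughly 0.2% deviation reported below, such a loop would normally stop at the three-run minimum.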

[Chart: Time Required To Complete Benchmark (Minutes) - Build: Cifar10 — Min: 1 / Avg: 2.63 / Max: 15]

Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.2%.

[Chart: Average Deviation Between Runs (Percent, fewer is better) - Build: Cifar10 — Min: 0 / Avg: 0.17 / Max: 2]