AMD EPYC 7742 Series

2 x AMD EPYC 7742 64-Core testing with an AMD DAYTONA_X (RDY1006G BIOS) motherboard and ASPEED graphics on Ubuntu 18.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2008248-NE-AMDEPYC7736
Tests in this comparison span the following OpenBenchmarking.org categories: CPU Massive (4 tests), Creator Workloads (4 tests), HPC - High Performance Computing (4 tests), Machine Learning (2 tests), Multi-Core (3 tests), OCR (2 tests), Python Tests (2 tests), and Server CPU Tests (4 tests).

This result file is composed of six test runs:

Result Identifier    Date Run          Test Duration
EPYC 7742            August 23 2020    1 Hour, 47 Minutes
EPYC 7742 v2         August 23 2020    1 Hour, 47 Minutes
EPYC 7742 v3         August 23 2020    1 Hour, 48 Minutes
EPYC 7742 2P         August 23 2020    2 Hours, 12 Minutes
EPYC 7742 2P v2      August 23 2020    1 Hour, 53 Minutes
EPYC 7742 2P v3      August 24 2020    1 Hour, 55 Minutes

Average test duration per run: 1 Hour, 54 Minutes.


AMD EPYC 7742 Series - System Details

Component            EPYC 7742 / EPYC 7742 v2 / EPYC 7742 v3                      EPYC 7742 2P / EPYC 7742 2P v2 / EPYC 7742 2P v3
Processor            AMD EPYC 7742 64-Core @ 2.25GHz (64 Cores / 128 Threads)     2 x AMD EPYC 7742 64-Core @ 2.25GHz (128 Cores / 256 Threads)
Motherboard          AMD DAYTONA_X (RDY1006G BIOS)
Chipset              AMD Device 1480
Memory               252GB                                                         504GB
Disk                 3841GB Micron_9300_MTFDHAL3T8TDP
Graphics             ASPEED
Monitor              VE228
Network              2 x Mellanox MT27710
OS                   Ubuntu 18.04
Kernel               5.3.0-40-generic (x86_64)
Desktop              GNOME Shell 3.28.4
Display Server       X Server 1.20.5
Display Driver       modesetting 1.20.5
Compiler             GCC 7.4.0
File-System          ext4
Screen Resolution    1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x8301034
Python Details: Python 2.7.17 + Python 3.6.9
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + tsx_async_abort: Not affected

Result Overview: relative performance chart (Phoronix Test Suite, scale roughly 100% to 144%) comparing EPYC 7742, EPYC 7742 v2, EPYC 7742 v3, EPYC 7742 2P, EPYC 7742 2P v2, and EPYC 7742 2P v3 across NAMD, AI Benchmark Alpha, Timed Linux Kernel Compilation, Hugin, ASTC Encoder, Rodinia, OCRMyPDF, Tesseract OCR, and TensorFlow Lite.

AMD EPYC 7742 Series - Results Summary

Test (unit)                                          EPYC 7742   EPYC 7742 v2   EPYC 7742 v3   EPYC 7742 2P   EPYC 7742 2P v2   EPYC 7742 2P v3
rodinia: OpenMP LavaMD (sec)                         122.721     122.656        122.771        67.793         68.403            68.600
rodinia: OpenMP HotSpot3D (sec)                      103.639     104.932        104.373        112.984        113.066           112.377
rodinia: OpenMP Leukocyte (sec)                      48.440      48.000         48.290         49.793         49.053            49.789
rodinia: OpenMP CFD Solver (sec)                     7.494       7.505          7.452          9.229          9.434             8.841
rodinia: OpenMP Streamcluster (sec)                  8.262       8.232          8.256          9.654          9.429             9.387
namd: ATPase Simulation - 327,506 Atoms (days/ns)    0.42592     0.42509        0.42494        0.26762        0.26960           0.26761
build-linux-kernel: Time To Compile (sec)            25.813      25.853         25.772         19.500         19.551            19.434
astcenc: Thorough (sec)                              5.61        5.61           5.61           8.98           8.93              8.86
astcenc: Exhaustive (sec)                            40.86       40.88          40.88          21.52          22.02             21.79
hugin: Panorama Photo Assistant + Stitching (sec)    61.722      62.428         61.240         69.413         70.901            70.144
ocrmypdf: Processing 60 Page PDF Document (sec)      22.399      22.127         22.292         22.879         22.881            22.774
ai-benchmark: Device Inference Score                 1853        1850           1853           1426           1436              1436
ai-benchmark: Device Training Score                  1079        1075           1061           781            775               783
ai-benchmark: Device AI Score                        2932        2925           2914           2207           2211              2219
tesseract-ocr: Time To OCR 7 Images (sec)            32.012      32.080         32.232         32.937         32.586            32.328
tensorflow-lite: SqueezeNet (us)                     68114.9     68089.9        67719.0        67656.9        67599.5           65948.1
tensorflow-lite: Inception V4 (us)                   963007      992453         988199         645423         652371            644059
tensorflow-lite: NASNet Mobile (us)                  91502.6     92499.3        92634.2        103132.3       103456            105350
tensorflow-lite: Mobilenet Float (us)                38533.6     38165.2        38103.1        35624.4        -                 34701.6
tensorflow-lite: Mobilenet Quant (us)                39197.5     38508.2        39282.8        33533.3        -                 33795.0
tensorflow-lite: Inception ResNet V2 (us)            836611      813326         818418         498600         -                 499602

(Times in seconds and microseconds: lower is better. Scores: higher is better. "-" indicates no result was recorded for that run.)
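The overview chart above condenses these per-test results into a single relative figure. A minimal sketch of one way to do that kind of roll-up, normalizing each test against the single-socket baseline and combining the ratios with a geometric mean, is below; the values are copied from four of the rows above, and this is an illustration rather than the exact OpenBenchmarking.org calculation.

```python
from math import prod

# Average runtimes in seconds (lower is better), copied from the summary table.
# Each entry maps a test to (EPYC 7742 result, EPYC 7742 2P result).
results = {
    "Rodinia OpenMP LavaMD":      (122.721, 67.793),
    "Timed Linux Kernel Compile": (25.813, 19.500),
    "ASTC Encoder Exhaustive":    (40.86, 21.52),
    "Hugin Stitching":            (61.722, 69.413),
}

# Express each dual-socket result relative to the single-socket baseline
# (ratios above 1.0 mean the 2P system finished sooner), then combine the
# ratios with a geometric mean so no single test dominates the summary.
ratios = [baseline / two_p for baseline, two_p in results.values()]
geo_mean = prod(ratios) ** (1.0 / len(ratios))

for (name, _), ratio in zip(results.items(), ratios):
    print(f"{name:30s} {ratio:5.2f}x")
print(f"{'Geometric mean':30s} {geo_mean:5.2f}x")
```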

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better)
    EPYC 7742:        122.72    SE +/- 0.09, N = 3     (Min: 122.57 / Max: 122.89)
    EPYC 7742 v2:     122.66    SE +/- 0.04, N = 3     (Min: 122.6 / Max: 122.73)
    EPYC 7742 v3:     122.77    SE +/- 0.11, N = 3     (Min: 122.57 / Max: 122.95)
    EPYC 7742 2P:      67.79    SE +/- 0.06, N = 3     (Min: 67.67 / Max: 67.87)
    EPYC 7742 2P v2:   68.40    SE +/- 0.55, N = 3     (Min: 67.7 / Max: 69.49)
    EPYC 7742 2P v3:   68.60    SE +/- 0.63, N = 3     (Min: 67.49 / Max: 69.67)
    1. (CXX) g++ options: -O2 -lOpenCL
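Each value above is the mean of N runs together with its standard error (SE) and the observed minimum/maximum. A small sketch of how those figures are derived from raw per-run times follows; the three sample times are hypothetical, chosen only to fall inside the reported range for the EPYC 7742 run.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical per-run times in seconds for one configuration; in the real
# result file these are the individual run times behind one bar.
runs = [122.57, 122.70, 122.89]

n = len(runs)
avg = mean(runs)
se = stdev(runs) / sqrt(n)  # standard error of the mean

print(f"Avg: {avg:.2f}  SE +/- {se:.2f}  N = {n}  Min: {min(runs)}  Max: {max(runs)}")
```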

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, Fewer Is Better)
    EPYC 7742:        103.64    SE +/- 0.19, N = 3     (Min: 103.27 / Max: 103.88)
    EPYC 7742 v2:     104.93    SE +/- 0.63, N = 3     (Min: 103.67 / Max: 105.6)
    EPYC 7742 v3:     104.37    SE +/- 0.54, N = 3     (Min: 103.49 / Max: 105.34)
    EPYC 7742 2P:     112.98    SE +/- 0.19, N = 3     (Min: 112.65 / Max: 113.31)
    EPYC 7742 2P v2:  113.07    SE +/- 0.94, N = 3     (Min: 111.25 / Max: 114.42)
    EPYC 7742 2P v3:  112.38    SE +/- 1.33, N = 6     (Min: 105.9 / Max: 114.33)
    1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, Fewer Is Better)
    EPYC 7742:        48.44     SE +/- 0.25, N = 3     (Min: 48.14 / Max: 48.93)
    EPYC 7742 v2:     48.00     SE +/- 0.13, N = 3     (Min: 47.73 / Max: 48.15)
    EPYC 7742 v3:     48.29     SE +/- 0.34, N = 3     (Min: 47.85 / Max: 48.96)
    EPYC 7742 2P:     49.79     SE +/- 0.67, N = 3     (Min: 48.45 / Max: 50.48)
    EPYC 7742 2P v2:  49.05     SE +/- 0.44, N = 3     (Min: 48.54 / Max: 49.93)
    EPYC 7742 2P v3:  49.79     SE +/- 0.04, N = 3     (Min: 49.71 / Max: 49.84)
    1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, Fewer Is Better)
    EPYC 7742:        7.494     SE +/- 0.106, N = 3    (Min: 7.37 / Max: 7.71)
    EPYC 7742 v2:     7.505     SE +/- 0.082, N = 3    (Min: 7.35 / Max: 7.63)
    EPYC 7742 v3:     7.452     SE +/- 0.065, N = 3    (Min: 7.33 / Max: 7.54)
    EPYC 7742 2P:     9.229     SE +/- 0.092, N = 12   (Min: 8.58 / Max: 9.8)
    EPYC 7742 2P v2:  9.434     SE +/- 0.093, N = 3    (Min: 9.29 / Max: 9.61)
    EPYC 7742 2P v3:  8.841     SE +/- 0.092, N = 8    (Min: 8.48 / Max: 9.25)
    1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, Fewer Is Better)
    EPYC 7742:        8.262     SE +/- 0.029, N = 3    (Min: 8.21 / Max: 8.3)
    EPYC 7742 v2:     8.232     SE +/- 0.016, N = 3    (Min: 8.2 / Max: 8.26)
    EPYC 7742 v3:     8.256     SE +/- 0.024, N = 3    (Min: 8.22 / Max: 8.3)
    EPYC 7742 2P:     9.654     SE +/- 0.104, N = 15   (Min: 9.22 / Max: 10.49)
    EPYC 7742 2P v2:  9.429     SE +/- 0.154, N = 3    (Min: 9.16 / Max: 9.7)
    EPYC 7742 2P v3:  9.387     SE +/- 0.070, N = 3    (Min: 9.25 / Max: 9.47)
    1. (CXX) g++ options: -O2 -lOpenCL

NAMD

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
    EPYC 7742:        0.42592   SE +/- 0.00036, N = 3  (Min: 0.43 / Max: 0.43)
    EPYC 7742 v2:     0.42509   SE +/- 0.00032, N = 3  (Min: 0.42 / Max: 0.43)
    EPYC 7742 v3:     0.42494   SE +/- 0.00052, N = 3  (Min: 0.42 / Max: 0.43)
    EPYC 7742 2P:     0.26762   SE +/- 0.00234, N = 3  (Min: 0.26 / Max: 0.27)
    EPYC 7742 2P v2:  0.26960   SE +/- 0.00296, N = 3  (Min: 0.26 / Max: 0.27)
    EPYC 7742 2P v3:  0.26761   SE +/- 0.00215, N = 3  (Min: 0.26 / Max: 0.27)
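NAMD reports days of wall-clock time per simulated nanosecond, so lower is better. As a quick reading aid (not something the test profile itself outputs), the reciprocal turns the two averages above into simulated nanoseconds per day:

```python
# Convert the NAMD metric (days per simulated ns, lower is better)
# into throughput (simulated ns per day of wall-clock time).
days_per_ns = {"EPYC 7742": 0.42592, "EPYC 7742 2P": 0.26762}

for system, days in days_per_ns.items():
    print(f"{system}: {days} days/ns  ->  {1.0 / days:.2f} ns/day")
```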

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
    EPYC 7742:        25.81     SE +/- 0.20, N = 13    (Min: 25.46 / Max: 28.19)
    EPYC 7742 v2:     25.85     SE +/- 0.19, N = 14    (Min: 25.6 / Max: 28.3)
    EPYC 7742 v3:     25.77     SE +/- 0.18, N = 15    (Min: 25.31 / Max: 28.19)
    EPYC 7742 2P:     19.50     SE +/- 0.20, N = 13    (Min: 19.2 / Max: 21.94)
    EPYC 7742 2P v2:  19.55     SE +/- 0.22, N = 13    (Min: 19.17 / Max: 22.17)
    EPYC 7742 2P v3:  19.43     SE +/- 0.19, N = 14    (Min: 19.12 / Max: 21.94)
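Doubling the socket count does not halve the build time here. An illustrative back-of-the-envelope calculation of speedup and per-core scaling efficiency from the single- and dual-socket averages above (this is not a graph the result page itself draws):

```python
# Kernel build times in seconds (lower is better), taken from the graph above.
single_socket = 25.81  # EPYC 7742: 64 cores / 128 threads
dual_socket = 19.50    # EPYC 7742 2P: 128 cores / 256 threads

speedup = single_socket / dual_socket  # observed gain from adding the second socket
ideal = 2.0                            # perfect scaling for twice the cores
efficiency = speedup / ideal

print(f"Speedup: {speedup:.2f}x of an ideal {ideal:.0f}x -> {efficiency:.0%} scaling efficiency")
```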

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Thorough (Seconds, Fewer Is Better)
    EPYC 7742:        5.61      SE +/- 0.00, N = 3     (Min: 5.6 / Max: 5.61)
    EPYC 7742 v2:     5.61      SE +/- 0.00, N = 3     (Min: 5.61 / Max: 5.61)
    EPYC 7742 v3:     5.61      SE +/- 0.00, N = 3     (Min: 5.6 / Max: 5.61)
    EPYC 7742 2P:     8.98      SE +/- 0.12, N = 4     (Min: 8.68 / Max: 9.27)
    EPYC 7742 2P v2:  8.93      SE +/- 0.02, N = 3     (Min: 8.89 / Max: 8.97)
    EPYC 7742 2P v3:  8.86      SE +/- 0.03, N = 3     (Min: 8.81 / Max: 8.92)
    1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, Fewer Is Better)
    EPYC 7742:        40.86     SE +/- 0.01, N = 3     (Min: 40.85 / Max: 40.87)
    EPYC 7742 v2:     40.88     SE +/- 0.01, N = 3     (Min: 40.87 / Max: 40.91)
    EPYC 7742 v3:     40.88     SE +/- 0.00, N = 3     (Min: 40.87 / Max: 40.88)
    EPYC 7742 2P:     21.52     SE +/- 0.30, N = 3     (Min: 21.2 / Max: 22.12)
    EPYC 7742 2P v2:  22.02     SE +/- 0.06, N = 3     (Min: 21.94 / Max: 22.13)
    EPYC 7742 2P v3:  21.79     SE +/- 0.07, N = 3     (Min: 21.72 / Max: 21.92)
    1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better)
    EPYC 7742:        61.72     SE +/- 0.27, N = 3     (Min: 61.21 / Max: 62.1)
    EPYC 7742 v2:     62.43     SE +/- 0.55, N = 3     (Min: 61.78 / Max: 63.53)
    EPYC 7742 v3:     61.24     SE +/- 0.36, N = 3     (Min: 60.7 / Max: 61.92)
    EPYC 7742 2P:     69.41     SE +/- 0.56, N = 3     (Min: 68.4 / Max: 70.34)
    EPYC 7742 2P v2:  70.90     SE +/- 0.65, N = 15    (Min: 68.36 / Max: 77.25)
    EPYC 7742 2P v3:  70.14     SE +/- 0.77, N = 3     (Min: 68.6 / Max: 70.92)

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs whose text is selectable, searchable, and copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.
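For context, OCRMyPDF can be driven from the command line or from Python; a minimal sketch of the Python entry point is below. The file names are placeholders, and the exact options used by this test profile are not shown on this page.

```python
import ocrmypdf

# Add a searchable OCR text layer to a scanned PDF. The file names here are
# placeholders, not the 60-page document used by the benchmark.
ocrmypdf.ocr("scanned_input.pdf", "searchable_output.pdf")
```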

OCRMyPDF 6.1.2 - Processing 60 Page PDF Document (Seconds, Fewer Is Better)
    EPYC 7742:        22.40     SE +/- 0.15, N = 3     (Min: 22.17 / Max: 22.67)
    EPYC 7742 v2:     22.13     SE +/- 0.09, N = 3     (Min: 21.96 / Max: 22.26)
    EPYC 7742 v3:     22.29     SE +/- 0.13, N = 3     (Min: 22.03 / Max: 22.47)
    EPYC 7742 2P:     22.88     SE +/- 0.11, N = 3     (Min: 22.67 / Max: 23.02)
    EPYC 7742 2P v2:  22.88     SE +/- 0.05, N = 3     (Min: 22.81 / Max: 22.98)
    EPYC 7742 2P v3:  22.77     SE +/- 0.09, N = 3     (Min: 22.6 / Max: 22.87)

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
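For reference, the ai-benchmark package is typically run along the following lines; this is a rough sketch of the library's documented usage, and the exact attribute names on the results object may differ between versions.

```python
from ai_benchmark import AIBenchmark

# Runs the bundled TensorFlow inference and training workloads and reports
# the device inference, training, and combined AI scores.
benchmark = AIBenchmark()
results = benchmark.run()

# Attribute names are recalled from the library's documentation and may vary.
print(results.inference_score, results.training_score, results.ai_score)
```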

AI Benchmark Alpha 0.1.2 (Score, More Is Better)
    Run                Device Inference Score   Device Training Score   Device AI Score
    EPYC 7742          1850                     1090                    2940
    EPYC 7742 v2       1850                     1075                    2925
    EPYC 7742 v3       1853                     1061                    2914
    EPYC 7742 2P       1426                     781                     2207
    EPYC 7742 2P v2    1436                     775                     2211
    EPYC 7742 2P v3    1436                     783                     2219

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.
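Because the profile uses the system-supplied tesseract binary, the measured work amounts to invoking it once per image. A rough, hypothetical equivalent driven from Python is below; the image names are placeholders rather than the seven images the profile actually processes.

```python
import subprocess
import time

# Placeholder image list; the actual test profile supplies its own 7 images.
images = [f"sample_{i}.png" for i in range(7)]

start = time.time()
for img in images:
    # "tesseract <image> <output base>" writes the recognized text to <output base>.txt
    subprocess.run(["tesseract", img, img.rsplit(".", 1)[0]], check=True)

print(f"Time to OCR {len(images)} images: {time.time() - start:.2f} seconds")
```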

Tesseract OCR 4.0.0-beta.1 - Time To OCR 7 Images (Seconds, Fewer Is Better)
    EPYC 7742:        32.01     SE +/- 0.14, N = 3     (Min: 31.84 / Max: 32.29)
    EPYC 7742 v2:     32.08     SE +/- 0.16, N = 3     (Min: 31.91 / Max: 32.4)
    EPYC 7742 v3:     32.23     SE +/- 0.22, N = 3     (Min: 31.79 / Max: 32.48)
    EPYC 7742 2P:     32.94     SE +/- 0.14, N = 3     (Min: 32.66 / Max: 33.13)
    EPYC 7742 2P v2:  32.59     SE +/- 0.33, N = 3     (Min: 32.05 / Max: 33.18)
    EPYC 7742 2P v3:  32.33     SE +/- 0.17, N = 3     (Min: 32.05 / Max: 32.63)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
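The TensorFlow Lite figures below are average single-inference times in microseconds, so lower is better. Converting a latency into an approximate inferences-per-second rate is a simple unit conversion, sketched here with two values taken from the SqueezeNet graph below.

```python
# Average single-inference latency in microseconds, from the SqueezeNet graph below.
latency_us = {"EPYC 7742": 68114.9, "EPYC 7742 2P v3": 65948.1}

for system, us in latency_us.items():
    print(f"{system}: {us} us/inference  ~= {1_000_000 / us:.1f} inferences/sec")
```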

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better)
    EPYC 7742:        68114.9   SE +/- 68.84, N = 3    (Min: 67985.8 / Max: 68220.9)
    EPYC 7742 v2:     68089.9   SE +/- 299.20, N = 3   (Min: 67614.6 / Max: 68642.4)
    EPYC 7742 v3:     67719.0   SE +/- 116.18, N = 3   (Min: 67590 / Max: 67950.9)
    EPYC 7742 2P:     67656.9   SE +/- 346.75, N = 3   (Min: 67067.3 / Max: 68267.9)
    EPYC 7742 2P v2:  67599.5   SE +/- 906.98, N = 4   (Min: 65356.3 / Max: 69798.2)
    EPYC 7742 2P v3:  65948.1   SE +/- 662.33, N = 3   (Min: 64716 / Max: 66985.4)

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better)
    EPYC 7742:        963007    SE +/- 3631.10, N = 3  (Min: 956046 / Max: 968280)
    EPYC 7742 v2:     992453    SE +/- 3094.21, N = 3  (Min: 986518 / Max: 996938)
    EPYC 7742 v3:     988199    SE +/- 6485.51, N = 3  (Min: 979266 / Max: 1000810)
    EPYC 7742 2P:     645423    SE +/- 8269.96, N = 3  (Min: 628890 / Max: 654104)
    EPYC 7742 2P v2:  652371    SE +/- 1737.41, N = 3  (Min: 649979 / Max: 655750)
    EPYC 7742 2P v3:  644059    SE +/- 6995.72, N = 3  (Min: 630150 / Max: 652329)

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, Fewer Is Better)
    EPYC 7742:        91502.6   SE +/- 241.21, N = 3   (Min: 91121 / Max: 91949)
    EPYC 7742 v2:     92499.3   SE +/- 400.83, N = 3   (Min: 92088.4 / Max: 93300.9)
    EPYC 7742 v3:     92634.2   SE +/- 261.02, N = 3   (Min: 92115.9 / Max: 92947.5)
    EPYC 7742 2P:     103132.3  SE +/- 895.18, N = 11  (Min: 99510.6 / Max: 109718)
    EPYC 7742 2P v2:  103456.0  SE +/- 949.21, N = 3   (Min: 101560 / Max: 104487)
    EPYC 7742 2P v3:  105350.0  SE +/- 1496.22, N = 4  (Min: 101524 / Max: 108690)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
    EPYC 7742:        38533.6   SE +/- 201.87, N = 3   (Min: 38141.4 / Max: 38812.8)
    EPYC 7742 v2:     38165.2   SE +/- 510.96, N = 3   (Min: 37174.3 / Max: 38877.1)
    EPYC 7742 v3:     38103.1   SE +/- 386.96, N = 3   (Min: 37336.6 / Max: 38579.1)
    EPYC 7742 2P:     35624.4   SE +/- 339.07, N = 14  (Min: 34007.3 / Max: 38787)
    EPYC 7742 2P v3:  34701.6   SE +/- 387.91, N = 3   (Min: 33928.4 / Max: 35143.1)
    (No result was recorded for EPYC 7742 2P v2.)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
    EPYC 7742:        39197.5   SE +/- 346.87, N = 3   (Min: 38790.2 / Max: 39887.5)
    EPYC 7742 v2:     38508.2   SE +/- 584.38, N = 3   (Min: 37382.2 / Max: 39342.5)
    EPYC 7742 v3:     39282.8   SE +/- 306.10, N = 3   (Min: 38710.6 / Max: 39757.4)
    EPYC 7742 2P:     33533.3   SE +/- 218.48, N = 3   (Min: 33109.1 / Max: 33836.2)
    EPYC 7742 2P v3:  33795.0   SE +/- 108.68, N = 3   (Min: 33670.6 / Max: 34011.6)
    (No result was recorded for EPYC 7742 2P v2.)

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
    EPYC 7742:        836611    SE +/- 1602.71, N = 3  (Min: 833454 / Max: 838669)
    EPYC 7742 v2:     813326    SE +/- 1103.66, N = 3  (Min: 811130 / Max: 814615)
    EPYC 7742 v3:     818418    SE +/- 9054.74, N = 3  (Min: 803037 / Max: 834387)
    EPYC 7742 2P:     498600    SE +/- 5960.58, N = 5  (Min: 478838 / Max: 516126)
    EPYC 7742 2P v3:  499602    SE +/- 4981.43, N = 3  (Min: 491868 / Max: 508908)
    (No result was recorded for EPYC 7742 2P v2.)