epyc-7f72-eo-september

AMD EPYC 7F72 24-Core testing with an ASRockRack EPYCD8 (P2.10 BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009304-FI-EPYC7F72E54
Test categories represented in this comparison:

Bioinformatics: 2 tests
BLAS (Basic Linear Algebra Sub-Routine): 2 tests
C/C++ Compiler: 3 tests
CPU Massive: 5 tests
Database Test Suite: 2 tests
Fortran: 2 tests
HPC - High Performance Computing: 8 tests
Machine Learning: 4 tests
NVIDIA GPU Compute: 3 tests
Python: 2 tests
Scientific Computing: 4 tests
Server: 2 tests
Single-Threaded: 2 tests

Run Management

Result Identifier   Date                Test Duration
EPYC 7F72           September 28 2020   1 Hour, 55 Minutes
EPYC 7F72 x         September 29 2020   5 Hours, 52 Minutes
2                   September 29 2020   6 Hours, 51 Minutes
3                   September 29 2020   6 Hours, 4 Minutes



epyc-7f72-eo-september - System Details (identical for the EPYC 7F72, EPYC 7F72 x, 2, and 3 runs)

  Processor:          AMD EPYC 7F72 24-Core @ 3.20GHz (24 Cores / 48 Threads)
  Motherboard:        ASRockRack EPYCD8 (P2.10 BIOS)
  Chipset:            AMD Starship/Matisse
  Memory:             126GB
  Disk:               3841GB Micron_9300_MTFDHAL3T8TDP
  Graphics:           ASPEED
  Audio:              AMD Starship/Matisse
  Network:            2 x Intel I350
  OS:                 Ubuntu 20.04
  Kernel:             5.9.0-050900rc6daily20200921-generic (x86_64) 20200920
  Desktop:            GNOME Shell 3.36.4
  Display Server:     X Server 1.20.8
  Display Driver:     modesetting 1.20.8
  Compiler:           GCC 9.3.0
  File-System:        ext4
  Screen Resolution:  1024x768

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x830101c

Python Details: Python 3.8.2

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite, normalized per test): comparison of the EPYC 7F72, EPYC 7F72 x, 2, and 3 runs across LeelaChessZero, Timed MAFFT Alignment, BYTE Unix Benchmark, FFTE, Apache CouchDB, KeyDB, Dolfyn, and Timed HMMer Search; the overview chart axis spans roughly 100% to 102%, i.e. the four runs differ by only about two percent on these tests.
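For reference, the kind of roll-up behind such an overview can be sketched as a normalize-then-geometric-mean calculation. The minimal Python sketch below is illustrative only; it assumes a simple per-test normalization against a baseline run, is not Phoronix Test Suite code, and uses hypothetical ratio values:

    from math import prod

    def geometric_mean(ratios):
        """Geometric mean of per-test performance ratios."""
        return prod(ratios) ** (1.0 / len(ratios))

    # Hypothetical ratios for one run, each test normalized against a chosen
    # baseline run of that test (time-based results inverted so that higher
    # is better throughout):
    ratios = [1.000, 1.012, 1.021, 0.998, 1.015]
    print(f"Overall geometric mean: {geometric_mean(ratios):.3f}")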

epyc-7f72-eo-september condensed result table: per-run values for lczero (BLAS, Eigen, Rand), dolfyn (Computational Fluid Dynamics), ffte (N=256, 3D Complex FFT Routine), hmmer (Pfam Database Search), mafft (Multiple Sequence Alignment - LSU RNA), byte (Dhrystone 2), couchdb (100 - 1000 - 24), keydb, caffe (AlexNet and GoogleNet, CPU, 100/200/1000 iterations), ncnn (CPU: squeezenet, mobilenet, mobilenet-v2, mobilenet-v3, shufflenet-v2, mnasnet, efficientnet-b0, blazeface, googlenet, vgg16, resnet18, alexnet, resnet50, yolov4-tiny), hint (FLOAT), and mlpack (scikit_ica, scikit_qda, scikit_svm, scikit_linearridgeregression) across the EPYC 7F72, EPYC 7F72 x, 2, and 3 runs. The individual values are reproduced in the per-test sections below.

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: BLAS (Nodes Per Second, More Is Better)
  EPYC 7F72 x: 2256  (SE +/- 24.04, N = 7; Min 2171 / Max 2338)
  3:           2228  (SE +/- 9.53, N = 3; Min 2210 / Max 2242)
  EPYC 7F72:   2227  (SE +/- 11.27, N = 3; Min 2207 / Max 2246)
  2:           2213  (SE +/- 28.46, N = 5; Min 2138 / Max 2275)
  1. (CXX) g++ options: -flto -pthread
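Each figure above is the mean of N repeated runs, reported together with its standard error (SE) and the min/max spread of those runs. As a reference for reading these numbers, here is a minimal Python sketch of the conventional way such summary statistics are computed (illustrative only, not Phoronix Test Suite code; the sample list is hypothetical):

    import statistics

    def summarize(samples):
        """Mean, standard error, and spread of repeated benchmark samples."""
        n = len(samples)
        avg = statistics.mean(samples)
        # Standard error of the mean: sample standard deviation / sqrt(N)
        se = statistics.stdev(samples) / n ** 0.5
        return {"Avg": round(avg, 2), "SE": round(se, 2), "N": n,
                "Min": min(samples), "Max": max(samples)}

    # Hypothetical nodes-per-second samples from three runs of one test:
    print(summarize([2210, 2233, 2242]))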

LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second, More Is Better)
  EPYC 7F72 x: 2178  (SE +/- 26.56, N = 3; Min 2132 / Max 2224)
  2:           2165  (SE +/- 11.22, N = 3; Min 2148 / Max 2186)
  3:           2096  (SE +/- 28.45, N = 9; Min 1914 / Max 2187)
  EPYC 7F72:   2073  (SE +/- 23.67, N = 9; Min 1948 / Max 2154)
  1. (CXX) g++ options: -flto -pthread

LeelaChessZero 0.26 - Backend: Random (Nodes Per Second, More Is Better)
  2:           166198  (SE +/- 1148.31, N = 3; Min 163924 / Max 167615)
  3:           165528  (SE +/- 1873.00, N = 3; Min 161788 / Max 167577)
  EPYC 7F72:   164671  (SE +/- 1317.43, N = 3; Min 162634 / Max 167137)
  EPYC 7F72 x: 164311  (SE +/- 1500.11, N = 3; Min 162612 / Max 167302)
  1. (CXX) g++ options: -flto -pthread

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code of modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, Fewer Is Better)
  2:           18.54  (SE +/- 0.03, N = 3; Min 18.5 / Max 18.6)
  EPYC 7F72:   18.63  (SE +/- 0.06, N = 3; Min 18.57 / Max 18.75)
  EPYC 7F72 x: 18.64  (SE +/- 0.08, N = 3; Min 18.51 / Max 18.79)
  3:           18.65  (SE +/- 0.06, N = 3; Min 18.57 / Max 18.76)

FFTE

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, More Is Better)
  EPYC 7F72 x: 122399.73  (SE +/- 285.55, N = 3; Min 121945.75 / Max 122926.78)
  3:           122021.48  (SE +/- 254.91, N = 3; Min 121563.81 / Max 122444.85)
  EPYC 7F72:   121809.43  (SE +/- 630.92, N = 3; Min 120557.37 / Max 122571.19)
  2:           121264.38  (SE +/- 807.96, N = 3; Min 119649.53 / Max 122122.93)
  1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, Fewer Is Better)
  EPYC 7F72:   142.41  (SE +/- 0.22, N = 3; Min 141.98 / Max 142.65)
  3:           142.54  (SE +/- 0.02, N = 3; Min 142.5 / Max 142.58)
  2:           142.60  (SE +/- 0.04, N = 3; Min 142.52 / Max 142.65)
  EPYC 7F72 x: 142.61  (SE +/- 0.05, N = 3; Min 142.52 / Max 142.67)
  1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, Fewer Is Better)
  EPYC 7F72:   9.465  (SE +/- 0.055, N = 3; Min 9.36 / Max 9.54)
  EPYC 7F72 x: 9.559  (SE +/- 0.028, N = 3; Min 9.5 / Max 9.6)
  3:           9.575  (SE +/- 0.057, N = 3; Min 9.47 / Max 9.67)
  2:           9.644  (SE +/- 0.053, N = 3; Min 9.55 / Max 9.73)
  1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark, here running its Dhrystone 2 computational workload. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, More Is Better)
  EPYC 7F72 x: 38374014.4  (SE +/- 421072.96, N = 3; Min 37694443.3 / Max 39144548.9)
  2:           37972490.0  (SE +/- 493714.96, N = 5; Min 36332488 / Max 39284791.6)
  3:           37884312.2  (SE +/- 550044.12, N = 3; Min 36964762.8 / Max 38867027.9)
  EPYC 7F72:   37756888.4  (SE +/- 390690.78, N = 3; Min 37005527.3 / Max 38318338.3)

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.1.1 - Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds, Fewer Is Better)
  EPYC 7F72:   103.69  (SE +/- 0.33, N = 3; Min 103.1 / Max 104.24)
  EPYC 7F72 x: 103.86  (SE +/- 0.06, N = 3; Min 103.78 / Max 103.97)
  2:           104.45  (SE +/- 0.60, N = 3; Min 103.83 / Max 105.65)
  3:           104.65  (SE +/- 0.60, N = 3; Min 103.64 / Max 105.72)
  1. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (Ops/sec, More Is Better)
  EPYC 7F72 x: 421676.28  (SE +/- 2108.70, N = 3; Min 418696.42 / Max 425750.8)
  EPYC 7F72:   420383.50  (SE +/- 250.94, N = 3; Min 419929.56 / Max 420795.86)
  2:           419756.03  (SE +/- 1309.71, N = 3; Min 417242.78 / Max 421651.96)
  3:           419091.19  (SE +/- 1744.78, N = 3; Min 416483.28 / Max 422403.1)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
  2:           75164  (SE +/- 171.98, N = 3; Min 74895 / Max 75484)
  EPYC 7F72 x: 75293  (SE +/- 23.86, N = 3; Min 75259 / Max 75339)
  3:           75300  (SE +/- 215.05, N = 3; Min 74878 / Max 75582)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better)
  2:           150269  (SE +/- 69.27, N = 3; Min 150131 / Max 150345)
  EPYC 7F72 x: 150373  (SE +/- 211.31, N = 3; Min 150024 / Max 150754)
  3:           150626  (SE +/- 75.05, N = 3; Min 150483 / Max 150737)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 1000 (Milli-Seconds, Fewer Is Better)
  EPYC 7F72 x: 751617  (SE +/- 670.58, N = 3; Min 750333 / Max 752594)
  3:           751645  (SE +/- 811.75, N = 3; Min 750365 / Max 753150)
  2:           752070  (SE +/- 654.75, N = 3; Min 751183 / Max 753348)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
  2:           189355  (SE +/- 245.02, N = 3; Min 188916 / Max 189763)
  EPYC 7F72 x: 190013  (SE +/- 464.46, N = 3; Min 189178 / Max 190783)
  3:           190069  (SE +/- 407.34, N = 3; Min 189367 / Max 190778)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better)
  3:           379324  (SE +/- 266.02, N = 3; Min 378792 / Max 379596)
  EPYC 7F72 x: 379880  (SE +/- 429.90, N = 3; Min 379218 / Max 380686)
  2:           380086  (SE +/- 518.16, N = 3; Min 379470 / Max 381116)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 1000 (Milli-Seconds, Fewer Is Better)
  2:           1900487  (SE +/- 1530.16, N = 3; Min 1897860 / Max 1903160)
  3:           1900833  (SE +/- 909.25, N = 3; Min 1899370 / Max 1902500)
  EPYC 7F72 x: 1902543  (SE +/- 2432.39, N = 3; Min 1898950 / Max 1907180)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, Fewer Is Better)
  EPYC 7F72 x: 18.87  (SE +/- 0.10, N = 3; Min 18.72 / Max 19.07)  [MIN: 18.36 / MAX: 20.85]
  2:           19.38  (SE +/- 0.20, N = 7; Min 18.93 / Max 20.44)  [MIN: 18.46 / MAX: 118.13]
  3:           19.62  (SE +/- 0.30, N = 3; Min 19.05 / Max 20.04)  [MIN: 18.65 / MAX: 21.93]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  EPYC 7F72 x: 19.42  (SE +/- 0.01, N = 3; Min 19.41 / Max 19.44)  [MIN: 18.9 / MAX: 21.51]
  2:           19.93  (SE +/- 0.28, N = 7; Min 19.38 / Max 21.16)  [MIN: 18.87 / MAX: 22.76]
  3:           20.42  (SE +/- 0.68, N = 3; Min 19.68 / Max 21.78)  [MIN: 18.91 / MAX: 84.5]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  EPYC 7F72 x: 9.21  (SE +/- 0.05, N = 3; Min 9.15 / Max 9.3)  [MIN: 8.89 / MAX: 10.51]
  2:           9.26  (SE +/- 0.04, N = 7; Min 9.08 / Max 9.37)  [MIN: 8.75 / MAX: 11.07]
  3:           9.28  (SE +/- 0.12, N = 3; Min 9.16 / Max 9.53)  [MIN: 8.89 / MAX: 10.96]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  EPYC 7F72 x: 8.80  (SE +/- 0.03, N = 3; Min 8.76 / Max 8.85)  [MIN: 8.56 / MAX: 10.78]
  2:           8.81  (SE +/- 0.04, N = 7; Min 8.69 / Max 8.95)  [MIN: 8.52 / MAX: 10.79]
  3:           8.88  (SE +/- 0.07, N = 3; Min 8.74 / Max 8.97)  [MIN: 8.49 / MAX: 76.01]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  EPYC 7F72 x: 9.32  (SE +/- 0.02, N = 3; Min 9.28 / Max 9.35)  [MIN: 9.09 / MAX: 14.74]
  2:           9.33  (SE +/- 0.06, N = 6; Min 9.15 / Max 9.55)  [MIN: 9.05 / MAX: 11.33]
  3:           9.34  (SE +/- 0.06, N = 3; Min 9.28 / Max 9.45)  [MIN: 9.14 / MAX: 10.56]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  EPYC 7F72 x: 8.40  (SE +/- 0.01, N = 3; Min 8.38 / Max 8.43)  [MIN: 8.11 / MAX: 10.16]
  3:           8.41  (SE +/- 0.06, N = 3; Min 8.29 / Max 8.51)  [MIN: 8.08 / MAX: 11.46]
  2:           8.45  (SE +/- 0.03, N = 7; Min 8.32 / Max 8.56)  [MIN: 8.05 / MAX: 10.2]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  3:           11.00  (SE +/- 0.08, N = 3; Min 10.85 / Max 11.14)  [MIN: 10.66 / MAX: 13.36]
  2:           11.03  (SE +/- 0.06, N = 7; Min 10.85 / Max 11.24)  [MIN: 10.67 / MAX: 16.51]
  EPYC 7F72 x: 11.09  (SE +/- 0.09, N = 3; Min 10.93 / Max 11.24)  [MIN: 10.74 / MAX: 13.3]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  3:           3.73  (SE +/- 0.03, N = 3; Min 3.68 / Max 3.79)  [MIN: 3.58 / MAX: 4.65]
  EPYC 7F72 x: 3.74  (SE +/- 0.03, N = 3; Min 3.7 / Max 3.79)  [MIN: 3.46 / MAX: 4.73]
  2:           3.77  (SE +/- 0.01, N = 7; Min 3.73 / Max 3.82)  [MIN: 3.58 / MAX: 6]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  EPYC 7F72 x: 19.62  (SE +/- 0.14, N = 3; Min 19.45 / Max 19.9)  [MIN: 19.13 / MAX: 22.12]
  3:           19.68  (SE +/- 0.17, N = 3; Min 19.37 / Max 19.96)  [MIN: 19.15 / MAX: 21.9]
  2:           19.74  (SE +/- 0.09, N = 7; Min 19.35 / Max 20.09)  [MIN: 19.14 / MAX: 25.15]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  3:           32.52  (SE +/- 0.22, N = 3; Min 32.09 / Max 32.84)  [MIN: 31.78 / MAX: 34.87]
  EPYC 7F72 x: 32.83  (SE +/- 0.56, N = 3; Min 31.73 / Max 33.54)  [MIN: 31.34 / MAX: 36.46]
  2:           32.85  (SE +/- 0.30, N = 7; Min 31.5 / Max 34.03)  [MIN: 31.1 / MAX: 98.34]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  3:           13.00  (SE +/- 0.13, N = 3; Min 12.76 / Max 13.22)  [MIN: 12.56 / MAX: 14.64]
  EPYC 7F72 x: 13.04  (SE +/- 0.27, N = 3; Min 12.55 / Max 13.47)  [MIN: 12.34 / MAX: 38.34]
  2:           13.05  (SE +/- 0.13, N = 7; Min 12.54 / Max 13.5)  [MIN: 12.38 / MAX: 54.07]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
  EPYC 7F72 x: 9.16  (SE +/- 0.26, N = 3; Min 8.7 / Max 9.61)  [MIN: 8.54 / MAX: 10.41]
  3:           9.25  (SE +/- 0.23, N = 3; Min 8.82 / Max 9.62)  [MIN: 8.56 / MAX: 12.4]
  2:           9.42  (SE +/- 0.19, N = 6; Min 8.69 / Max 9.88)  [MIN: 8.51 / MAX: 10.8]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  2:           22.71  (SE +/- 0.06, N = 7; Min 22.45 / Max 22.94)  [MIN: 22.12 / MAX: 25.08]
  EPYC 7F72 x: 22.78  (SE +/- 0.16, N = 3; Min 22.51 / Max 23.05)  [MIN: 22.17 / MAX: 85.73]
  3:           22.92  (SE +/- 0.23, N = 3; Min 22.62 / Max 23.38)  [MIN: 22.33 / MAX: 105.21]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  2:           30.14  (SE +/- 0.11, N = 7; Min 29.61 / Max 30.42)  [MIN: 29.27 / MAX: 65.59]
  EPYC 7F72 x: 30.20  (SE +/- 0.14, N = 3; Min 30 / Max 30.47)  [MIN: 29.67 / MAX: 32.8]
  3:           30.27  (SE +/- 0.21, N = 3; Min 29.84 / Max 30.48)  [MIN: 29.38 / MAX: 33.06]
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, More Is Better)
  3:           328901110.01  (SE +/- 377618.26, N = 3; Min 328299368.72 / Max 329597223.88)
  2:           328868294.69  (SE +/- 40216.53, N = 3; Min 328787999.13 / Max 328912513.79)
  EPYC 7F72 x: 328822812.23  (SE +/- 53237.74, N = 3; Min 328734302.38 / Max 328918324.12)
  1. (CC) gcc options: -O3 -march=native -lm

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_ica (Seconds, Fewer Is Better)
  2:           60.77  (SE +/- 0.99, N = 3; Min 58.88 / Max 62.21)
  EPYC 7F72 x: 61.04  (SE +/- 0.88, N = 3; Min 60 / Max 62.79)
  3:           61.39  (SE +/- 0.19, N = 3; Min 61.14 / Max 61.76)

Mlpack Benchmark - Benchmark: scikit_qda (Seconds, Fewer Is Better)
  2:           39.30  (SE +/- 0.03, N = 3; Min 39.27 / Max 39.35)
  EPYC 7F72 x: 39.45  (SE +/- 0.13, N = 3; Min 39.28 / Max 39.71)
  3:           39.61  (SE +/- 0.09, N = 3; Min 39.49 / Max 39.79)

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, Fewer Is Better)
  3:           24.26  (SE +/- 0.05, N = 3; Min 24.21 / Max 24.37)
  EPYC 7F72 x: 24.28  (SE +/- 0.05, N = 3; Min 24.18 / Max 24.36)
  2:           24.28  (SE +/- 0.05, N = 3; Min 24.18 / Max 24.34)

Mlpack Benchmark - Benchmark: scikit_linearridgeregression (Seconds, Fewer Is Better)
  EPYC 7F72 x: 1.65  (SE +/- 0.01, N = 3; Min 1.63 / Max 1.66)
  2:           1.65  (SE +/- 0.01, N = 3; Min 1.63 / Max 1.67)
  3:           1.65  (SE +/- 0.00, N = 3; Min 1.64 / Max 1.65)