epyc-7f72-eo-september

AMD EPYC 7F72 24-Core testing with an ASRockRack EPYCD8 (P2.10 BIOS) motherboard and ASPEED graphics on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009304-FI-EPYC7F72E54
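
As a rough sketch of how that comparison works in practice (assuming the Phoronix Test Suite is already installed), the command below should fetch this result file by its OpenBenchmarking.org identifier, offer to install the same test profiles, and then run them on your hardware so the new numbers appear alongside the results shown here:

    phoronix-test-suite benchmark 2009304-FI-EPYC7F72E54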

Tests in this result file by category:

Bioinformatics: 2 tests
BLAS (Basic Linear Algebra Sub-Routine): 2 tests
C/C++ Compiler: 3 tests
CPU Massive: 5 tests
Database Test Suite: 2 tests
Fortran: 2 tests
HPC - High Performance Computing: 8 tests
Machine Learning: 4 tests
NVIDIA GPU Compute: 3 tests
Python: 2 tests
Scientific Computing: 4 tests
Server: 2 tests
Single-Threaded: 2 tests

Run Management

Result Identifier   Date                Test Duration
EPYC 7F72           September 28 2020   1 Hour, 55 Minutes
EPYC 7F72 x         September 29 2020   5 Hours, 52 Minutes
2                   September 29 2020   6 Hours, 51 Minutes
3                   September 29 2020   6 Hours, 4 Minutes

epyc-7f72-eo-september - System Details (shared by the EPYC 7F72, EPYC 7F72 x, 2, and 3 runs)

Processor:         AMD EPYC 7F72 24-Core @ 3.20GHz (24 Cores / 48 Threads)
Motherboard:       ASRockRack EPYCD8 (P2.10 BIOS)
Chipset:           AMD Starship/Matisse
Memory:            126GB
Disk:              3841GB Micron_9300_MTFDHAL3T8TDP
Graphics:          ASPEED
Audio:             AMD Starship/Matisse
Network:           2 x Intel I350
OS:                Ubuntu 20.04
Kernel:            5.9.0-050900rc6daily20200921-generic (x86_64) 20200920
Desktop:           GNOME Shell 3.36.4
Display Server:    X Server 1.20.8
Display Driver:    modesetting 1.20.8
Compiler:          GCC 9.3.0
File-System:       ext4
Screen Resolution: 1024x768

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x830101c
Python Details: Python 3.8.2
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): across LeelaChessZero, Timed MAFFT Alignment, BYTE Unix Benchmark, FFTE, Apache CouchDB, KeyDB, Dolfyn, and Timed HMMer Search, the relative performance of the four runs (EPYC 7F72, EPYC 7F72 x, 2, 3) spans roughly 100% to 102%.

epyc-7f72-eo-september - Results Summary

Values are listed in the column order EPYC 7F72 / EPYC 7F72 x / 2 / 3; a dash marks a test not run on that result.

lczero: BLAS (Nodes Per Second; more is better)              2227 / 2256 / 2213 / 2228
lczero: Eigen (Nodes Per Second; more is better)             2073 / 2178 / 2165 / 2096
lczero: Rand (Nodes Per Second; more is better)              164671 / 164311 / 166198 / 165528
dolfyn: Computational Fluid Dynamics (Seconds; fewer)        18.627 / 18.637 / 18.540 / 18.646
ffte: N=256, 3D Complex FFT Routine (MFLOPS; more)           121809.43 / 122399.73 / 121264.38 / 122021.48
hmmer: Pfam Database Search (Seconds; fewer)                 142.413 / 142.611 / 142.595 / 142.538
mafft: Multiple Sequence Alignment - LSU RNA (Seconds; fewer) 9.465 / 9.559 / 9.644 / 9.575
byte: Dhrystone 2 (LPS; more)                                37756888.4 / 38374014.4 / 37972490.0 / 37884312.2
couchdb: 100 - 1000 - 24 (Seconds; fewer)                    103.685 / 103.859 / 104.448 / 104.647
keydb (Ops/sec; more)                                        420383.50 / 421676.28 / 419756.03 / 419091.19
caffe: AlexNet - CPU - 100 (Milli-Seconds; fewer)            - / 75293 / 75164 / 75300
caffe: AlexNet - CPU - 200 (Milli-Seconds; fewer)            - / 150373 / 150269 / 150626
caffe: AlexNet - CPU - 1000 (Milli-Seconds; fewer)           - / 751617 / 752070 / 751645
caffe: GoogleNet - CPU - 100 (Milli-Seconds; fewer)          - / 190013 / 189355 / 190069
caffe: GoogleNet - CPU - 200 (Milli-Seconds; fewer)          - / 379880 / 380086 / 379324
caffe: GoogleNet - CPU - 1000 (Milli-Seconds; fewer)         - / 1902543 / 1900487 / 1900833
ncnn: CPU - squeezenet (ms; fewer)                           - / 18.87 / 19.38 / 19.62
ncnn: CPU - mobilenet (ms; fewer)                            - / 19.42 / 19.93 / 20.42
ncnn: CPU-v2-v2 - mobilenet-v2 (ms; fewer)                   - / 9.21 / 9.26 / 9.28
ncnn: CPU-v3-v3 - mobilenet-v3 (ms; fewer)                   - / 8.80 / 8.81 / 8.88
ncnn: CPU - shufflenet-v2 (ms; fewer)                        - / 9.32 / 9.33 / 9.34
ncnn: CPU - mnasnet (ms; fewer)                              - / 8.40 / 8.45 / 8.41
ncnn: CPU - efficientnet-b0 (ms; fewer)                      - / 11.09 / 11.03 / 11.00
ncnn: CPU - blazeface (ms; fewer)                            - / 3.74 / 3.77 / 3.73
ncnn: CPU - googlenet (ms; fewer)                            - / 19.62 / 19.74 / 19.68
ncnn: CPU - vgg16 (ms; fewer)                                - / 32.83 / 32.85 / 32.52
ncnn: CPU - resnet18 (ms; fewer)                             - / 13.04 / 13.05 / 13.00
ncnn: CPU - alexnet (ms; fewer)                              - / 9.16 / 9.42 / 9.25
ncnn: CPU - resnet50 (ms; fewer)                             - / 22.78 / 22.71 / 22.92
ncnn: CPU - yolov4-tiny (ms; fewer)                          - / 30.20 / 30.14 / 30.27
hint: FLOAT (QUIPs; more)                                    - / 328822812.23 / 328868294.69 / 328901110.01
mlpack: scikit_ica (Seconds; fewer)                          - / 61.04 / 60.77 / 61.39
mlpack: scikit_qda (Seconds; fewer)                          - / 39.45 / 39.30 / 39.61
mlpack: scikit_svm (Seconds; fewer)                          - / 24.28 / 24.28 / 24.26
mlpack: scikit_linearridgeregression (Seconds; fewer)        - / 1.65 / 1.65 / 1.65

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
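
To benchmark only this test on your own machine, a minimal sketch (assuming the test profile is published under the identifier lczero, as the result keys in this file suggest) would be:

    phoronix-test-suite benchmark lczero

The Phoronix Test Suite should then prompt for the backend option (BLAS, Eigen, or Random) and handle downloading and building the engine before running it.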

LeelaChessZero 0.26 - Backend: BLAS (Nodes Per Second; more is better):
  2:           2213  (SE +/- 28.46, N = 5; Min: 2138 / Max: 2275)
  EPYC 7F72:   2227  (SE +/- 11.27, N = 3; Min: 2207 / Max: 2246)
  3:           2228  (SE +/- 9.53, N = 3; Min: 2210 / Max: 2242)
  EPYC 7F72 x: 2256  (SE +/- 24.04, N = 7; Min: 2171 / Max: 2338)

LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second; more is better):
  EPYC 7F72:   2073  (SE +/- 23.67, N = 9; Min: 1948 / Max: 2154)
  3:           2096  (SE +/- 28.45, N = 9; Min: 1914 / Max: 2187)
  2:           2165  (SE +/- 11.22, N = 3; Min: 2148 / Max: 2186)
  EPYC 7F72 x: 2178  (SE +/- 26.56, N = 3; Min: 2132 / Max: 2224)

LeelaChessZero 0.26 - Backend: Random (Nodes Per Second; more is better):
  EPYC 7F72 x: 164311  (SE +/- 1500.11, N = 3; Min: 162612 / Max: 167302)
  EPYC 7F72:   164671  (SE +/- 1317.43, N = 3; Min: 162634 / Max: 167137)
  3:           165528  (SE +/- 1873.00, N = 3; Min: 161788 / Max: 167577)
  2:           166198  (SE +/- 1148.31, N = 3; Min: 163924 / Max: 167615)

1. (CXX) g++ options: -flto -pthread

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code built around modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds; fewer is better):
  3:           18.65  (SE +/- 0.06, N = 3; Min: 18.57 / Max: 18.76)
  EPYC 7F72 x: 18.64  (SE +/- 0.08, N = 3; Min: 18.51 / Max: 18.79)
  EPYC 7F72:   18.63  (SE +/- 0.06, N = 3; Min: 18.57 / Max: 18.75)
  2:           18.54  (SE +/- 0.03, N = 3; Min: 18.5 / Max: 18.6)

FFTE

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS; more is better):
  2:           121264.38  (SE +/- 807.96, N = 3; Min: 119649.53 / Max: 122122.93)
  EPYC 7F72:   121809.43  (SE +/- 630.92, N = 3; Min: 120557.37 / Max: 122571.19)
  3:           122021.48  (SE +/- 254.91, N = 3; Min: 121563.81 / Max: 122444.85)
  EPYC 7F72 x: 122399.73  (SE +/- 285.55, N = 3; Min: 121945.75 / Max: 122926.78)
  1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds; fewer is better):
  EPYC 7F72 x: 142.61  (SE +/- 0.05, N = 3; Min: 142.52 / Max: 142.67)
  2:           142.60  (SE +/- 0.04, N = 3; Min: 142.52 / Max: 142.65)
  3:           142.54  (SE +/- 0.02, N = 3; Min: 142.5 / Max: 142.58)
  EPYC 7F72:   142.41  (SE +/- 0.22, N = 3; Min: 141.98 / Max: 142.65)
  1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds; fewer is better):
  2:           9.644  (SE +/- 0.053, N = 3; Min: 9.55 / Max: 9.73)
  3:           9.575  (SE +/- 0.057, N = 3; Min: 9.47 / Max: 9.67)
  EPYC 7F72 x: 9.559  (SE +/- 0.028, N = 3; Min: 9.5 / Max: 9.6)
  EPYC 7F72:   9.465  (SE +/- 0.055, N = 3; Min: 9.36 / Max: 9.54)
  1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS; more is better):
  EPYC 7F72:   37756888.4  (SE +/- 390690.78, N = 3; Min: 37005527.3 / Max: 38318338.3)
  3:           37884312.2  (SE +/- 550044.12, N = 3; Min: 36964762.8 / Max: 38867027.9)
  2:           37972490.0  (SE +/- 493714.96, N = 5; Min: 36332488 / Max: 39284791.6)
  EPYC 7F72 x: 38374014.4  (SE +/- 421072.96, N = 3; Min: 37694443.3 / Max: 39144548.9)

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.1.1 - Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds; fewer is better):
  3:           104.65  (SE +/- 0.60, N = 3; Min: 103.64 / Max: 105.72)
  2:           104.45  (SE +/- 0.60, N = 3; Min: 103.83 / Max: 105.65)
  EPYC 7F72 x: 103.86  (SE +/- 0.06, N = 3; Min: 103.78 / Max: 103.97)
  EPYC 7F72:   103.69  (SE +/- 0.33, N = 3; Min: 103.1 / Max: 104.24)
  1. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (Ops/sec; more is better):
  3:           419091.19  (SE +/- 1744.78, N = 3; Min: 416483.28 / Max: 422403.1)
  2:           419756.03  (SE +/- 1309.71, N = 3; Min: 417242.78 / Max: 421651.96)
  EPYC 7F72:   420383.50  (SE +/- 250.94, N = 3; Min: 419929.56 / Max: 420795.86)
  EPYC 7F72 x: 421676.28  (SE +/- 2108.70, N = 3; Min: 418696.42 / Max: 425750.8)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds; fewer is better):
  3:           75300  (SE +/- 215.05, N = 3; Min: 74878 / Max: 75582)
  EPYC 7F72 x: 75293  (SE +/- 23.86, N = 3; Min: 75259 / Max: 75339)
  2:           75164  (SE +/- 171.98, N = 3; Min: 74895 / Max: 75484)

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds; fewer is better):
  3:           150626  (SE +/- 75.05, N = 3; Min: 150483 / Max: 150737)
  EPYC 7F72 x: 150373  (SE +/- 211.31, N = 3; Min: 150024 / Max: 150754)
  2:           150269  (SE +/- 69.27, N = 3; Min: 150131 / Max: 150345)

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 1000 (Milli-Seconds; fewer is better):
  2:           752070  (SE +/- 654.75, N = 3; Min: 751183 / Max: 753348)
  3:           751645  (SE +/- 811.75, N = 3; Min: 750365 / Max: 753150)
  EPYC 7F72 x: 751617  (SE +/- 670.58, N = 3; Min: 750333 / Max: 752594)

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds; fewer is better):
  3:           190069  (SE +/- 407.34, N = 3; Min: 189367 / Max: 190778)
  EPYC 7F72 x: 190013  (SE +/- 464.46, N = 3; Min: 189178 / Max: 190783)
  2:           189355  (SE +/- 245.02, N = 3; Min: 188916 / Max: 189763)

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds; fewer is better):
  2:           380086  (SE +/- 518.16, N = 3; Min: 379470 / Max: 381116)
  EPYC 7F72 x: 379880  (SE +/- 429.90, N = 3; Min: 379218 / Max: 380686)
  3:           379324  (SE +/- 266.02, N = 3; Min: 378792 / Max: 379596)

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 1000 (Milli-Seconds; fewer is better):
  EPYC 7F72 x: 1902543  (SE +/- 2432.39, N = 3; Min: 1898950 / Max: 1907180)
  3:           1900833  (SE +/- 909.25, N = 3; Min: 1899370 / Max: 1902500)
  2:           1900487  (SE +/- 1530.16, N = 3; Min: 1897860 / Max: 1903160)

1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: squeezenet (ms; fewer is better):
  3:           19.62  (SE +/- 0.30, N = 3; Min: 19.05 / Max: 20.04; MIN: 18.65 / MAX: 21.93)
  2:           19.38  (SE +/- 0.20, N = 7; Min: 18.93 / Max: 20.44; MIN: 18.46 / MAX: 118.13)
  EPYC 7F72 x: 18.87  (SE +/- 0.10, N = 3; Min: 18.72 / Max: 19.07; MIN: 18.36 / MAX: 20.85)

NCNN 20200916 - Target: CPU - Model: mobilenet (ms; fewer is better):
  3:           20.42  (SE +/- 0.68, N = 3; Min: 19.68 / Max: 21.78; MIN: 18.91 / MAX: 84.5)
  2:           19.93  (SE +/- 0.28, N = 7; Min: 19.38 / Max: 21.16; MIN: 18.87 / MAX: 22.76)
  EPYC 7F72 x: 19.42  (SE +/- 0.01, N = 3; Min: 19.41 / Max: 19.44; MIN: 18.9 / MAX: 21.51)

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better):
  3:           9.28  (SE +/- 0.12, N = 3; Min: 9.16 / Max: 9.53; MIN: 8.89 / MAX: 10.96)
  2:           9.26  (SE +/- 0.04, N = 7; Min: 9.08 / Max: 9.37; MIN: 8.75 / MAX: 11.07)
  EPYC 7F72 x: 9.21  (SE +/- 0.05, N = 3; Min: 9.15 / Max: 9.3; MIN: 8.89 / MAX: 10.51)

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better):
  3:           8.88  (SE +/- 0.07, N = 3; Min: 8.74 / Max: 8.97; MIN: 8.49 / MAX: 76.01)
  2:           8.81  (SE +/- 0.04, N = 7; Min: 8.69 / Max: 8.95; MIN: 8.52 / MAX: 10.79)
  EPYC 7F72 x: 8.80  (SE +/- 0.03, N = 3; Min: 8.76 / Max: 8.85; MIN: 8.56 / MAX: 10.78)

NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms; fewer is better):
  3:           9.34  (SE +/- 0.06, N = 3; Min: 9.28 / Max: 9.45; MIN: 9.14 / MAX: 10.56)
  2:           9.33  (SE +/- 0.06, N = 6; Min: 9.15 / Max: 9.55; MIN: 9.05 / MAX: 11.33)
  EPYC 7F72 x: 9.32  (SE +/- 0.02, N = 3; Min: 9.28 / Max: 9.35; MIN: 9.09 / MAX: 14.74)

NCNN 20200916 - Target: CPU - Model: mnasnet (ms; fewer is better):
  2:           8.45  (SE +/- 0.03, N = 7; Min: 8.32 / Max: 8.56; MIN: 8.05 / MAX: 10.2)
  3:           8.41  (SE +/- 0.06, N = 3; Min: 8.29 / Max: 8.51; MIN: 8.08 / MAX: 11.46)
  EPYC 7F72 x: 8.40  (SE +/- 0.01, N = 3; Min: 8.38 / Max: 8.43; MIN: 8.11 / MAX: 10.16)

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms; fewer is better):
  EPYC 7F72 x: 11.09  (SE +/- 0.09, N = 3; Min: 10.93 / Max: 11.24; MIN: 10.74 / MAX: 13.3)
  2:           11.03  (SE +/- 0.06, N = 7; Min: 10.85 / Max: 11.24; MIN: 10.67 / MAX: 16.51)
  3:           11.00  (SE +/- 0.08, N = 3; Min: 10.85 / Max: 11.14; MIN: 10.66 / MAX: 13.36)

NCNN 20200916 - Target: CPU - Model: blazeface (ms; fewer is better):
  2:           3.77  (SE +/- 0.01, N = 7; Min: 3.73 / Max: 3.82; MIN: 3.58 / MAX: 6)
  EPYC 7F72 x: 3.74  (SE +/- 0.03, N = 3; Min: 3.7 / Max: 3.79; MIN: 3.46 / MAX: 4.73)
  3:           3.73  (SE +/- 0.03, N = 3; Min: 3.68 / Max: 3.79; MIN: 3.58 / MAX: 4.65)

NCNN 20200916 - Target: CPU - Model: googlenet (ms; fewer is better):
  2:           19.74  (SE +/- 0.09, N = 7; Min: 19.35 / Max: 20.09; MIN: 19.14 / MAX: 25.15)
  3:           19.68  (SE +/- 0.17, N = 3; Min: 19.37 / Max: 19.96; MIN: 19.15 / MAX: 21.9)
  EPYC 7F72 x: 19.62  (SE +/- 0.14, N = 3; Min: 19.45 / Max: 19.9; MIN: 19.13 / MAX: 22.12)

NCNN 20200916 - Target: CPU - Model: vgg16 (ms; fewer is better):
  2:           32.85  (SE +/- 0.30, N = 7; Min: 31.5 / Max: 34.03; MIN: 31.1 / MAX: 98.34)
  EPYC 7F72 x: 32.83  (SE +/- 0.56, N = 3; Min: 31.73 / Max: 33.54; MIN: 31.34 / MAX: 36.46)
  3:           32.52  (SE +/- 0.22, N = 3; Min: 32.09 / Max: 32.84; MIN: 31.78 / MAX: 34.87)

NCNN 20200916 - Target: CPU - Model: resnet18 (ms; fewer is better):
  2:           13.05  (SE +/- 0.13, N = 7; Min: 12.54 / Max: 13.5; MIN: 12.38 / MAX: 54.07)
  EPYC 7F72 x: 13.04  (SE +/- 0.27, N = 3; Min: 12.55 / Max: 13.47; MIN: 12.34 / MAX: 38.34)
  3:           13.00  (SE +/- 0.13, N = 3; Min: 12.76 / Max: 13.22; MIN: 12.56 / MAX: 14.64)

NCNN 20200916 - Target: CPU - Model: alexnet (ms; fewer is better):
  2:           9.42  (SE +/- 0.19, N = 6; Min: 8.69 / Max: 9.88; MIN: 8.51 / MAX: 10.8)
  3:           9.25  (SE +/- 0.23, N = 3; Min: 8.82 / Max: 9.62; MIN: 8.56 / MAX: 12.4)
  EPYC 7F72 x: 9.16  (SE +/- 0.26, N = 3; Min: 8.7 / Max: 9.61; MIN: 8.54 / MAX: 10.41)

NCNN 20200916 - Target: CPU - Model: resnet50 (ms; fewer is better):
  3:           22.92  (SE +/- 0.23, N = 3; Min: 22.62 / Max: 23.38; MIN: 22.33 / MAX: 105.21)
  EPYC 7F72 x: 22.78  (SE +/- 0.16, N = 3; Min: 22.51 / Max: 23.05; MIN: 22.17 / MAX: 85.73)
  2:           22.71  (SE +/- 0.06, N = 7; Min: 22.45 / Max: 22.94; MIN: 22.12 / MAX: 25.08)

NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms; fewer is better):
  3:           30.27  (SE +/- 0.21, N = 3; Min: 29.84 / Max: 30.48; MIN: 29.38 / MAX: 33.06)
  EPYC 7F72 x: 30.20  (SE +/- 0.14, N = 3; Min: 30 / Max: 30.47; MIN: 29.67 / MAX: 32.8)
  2:           30.14  (SE +/- 0.11, N = 7; Min: 29.61 / Max: 30.42; MIN: 29.27 / MAX: 65.59)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs; more is better):
  EPYC 7F72 x: 328822812.23  (SE +/- 53237.74, N = 3; Min: 328734302.38 / Max: 328918324.12)
  2:           328868294.69  (SE +/- 40216.53, N = 3; Min: 328787999.13 / Max: 328912513.79)
  3:           328901110.01  (SE +/- 377618.26, N = 3; Min: 328299368.72 / Max: 329597223.88)
  1. (CC) gcc options: -O3 -march=native -lm

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_ica (Seconds; fewer is better):
  3:           61.39  (SE +/- 0.19, N = 3; Min: 61.14 / Max: 61.76)
  EPYC 7F72 x: 61.04  (SE +/- 0.88, N = 3; Min: 60 / Max: 62.79)
  2:           60.77  (SE +/- 0.99, N = 3; Min: 58.88 / Max: 62.21)

Mlpack Benchmark - Benchmark: scikit_qda (Seconds; fewer is better):
  3:           39.61  (SE +/- 0.09, N = 3; Min: 39.49 / Max: 39.79)
  EPYC 7F72 x: 39.45  (SE +/- 0.13, N = 3; Min: 39.28 / Max: 39.71)
  2:           39.30  (SE +/- 0.03, N = 3; Min: 39.27 / Max: 39.35)

Mlpack Benchmark - Benchmark: scikit_svm (Seconds; fewer is better):
  2:           24.28  (SE +/- 0.05, N = 3; Min: 24.18 / Max: 24.34)
  EPYC 7F72 x: 24.28  (SE +/- 0.05, N = 3; Min: 24.18 / Max: 24.36)
  3:           24.26  (SE +/- 0.05, N = 3; Min: 24.21 / Max: 24.37)

Mlpack Benchmark - Benchmark: scikit_linearridgeregression (Seconds; fewer is better):
  3:           1.65  (SE +/- 0.00, N = 3; Min: 1.64 / Max: 1.65)
  2:           1.65  (SE +/- 0.01, N = 3; Min: 1.63 / Max: 1.67)
  EPYC 7F72 x: 1.65  (SE +/- 0.01, N = 3; Min: 1.63 / Max: 1.66)