epyc-7f72-eo-september

AMD EPYC 7F72 24-Core testing with an ASRockRack EPYCD8 (P2.10 BIOS) motherboard and ASPEED graphics on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009304-FI-EPYC7F72E54


Run Management

Result Identifier   Date                Test Duration
EPYC 7F72           September 28 2020   1 Hour, 55 Minutes
EPYC 7F72 x         September 29 2020   5 Hours, 52 Minutes
2                   September 29 2020   6 Hours, 51 Minutes
3                   September 29 2020   6 Hours, 4 Minutes


epyc-7f72-eo-september - System Details (as recorded for the EPYC 7F72, EPYC 7F72 x, 2, and 3 runs)

Processor: AMD EPYC 7F72 24-Core @ 3.20GHz (24 Cores / 48 Threads)
Motherboard: ASRockRack EPYCD8 (P2.10 BIOS)
Chipset: AMD Starship/Matisse
Memory: 126GB
Disk: 3841GB Micron_9300_MTFDHAL3T8TDP
Graphics: ASPEED
Audio: AMD Starship/Matisse
Network: 2 x Intel I350
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc6daily20200921-generic (x86_64) 20200920
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1024x768

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x830101c
Python Details: Python 3.8.2
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite 10.2.2): across LeelaChessZero, Timed MAFFT Alignment, BYTE Unix Benchmark, FFTE, Apache CouchDB, KeyDB, Dolfyn, and Timed HMMer Search, the four runs (EPYC 7F72, EPYC 7F72 x, 2, 3) landed within roughly 100% to 102% of one another.
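The overview percentages express each run's result relative to the slowest run. A minimal sketch of that normalization, using the LeelaChessZero BLAS figures from this result file (the helper name relative_percent is my own, not part of the Phoronix Test Suite):

```python
# Normalize a set of "More Is Better" results to the slowest run,
# the way a result-overview percentage chart does.
def relative_percent(results):
    baseline = min(results.values())  # slowest run becomes 100%
    return {run: round(100.0 * value / baseline, 1)
            for run, value in results.items()}

# LeelaChessZero 0.26, Backend: BLAS (Nodes Per Second) from this file.
lczero_blas = {"EPYC 7F72": 2227, "EPYC 7F72 x": 2256, "2": 2213, "3": 2228}
print(relative_percent(lczero_blas))
# The fastest run ("EPYC 7F72 x") comes out under 102% of the slowest,
# consistent with the narrow spread shown in the overview.
```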

Tests in this result file: LeelaChessZero (BLAS, Eigen, and Random backends); Dolfyn (Computational Fluid Dynamics); FFTE (N=256, 3D Complex FFT Routine); Timed HMMer Search (Pfam Database Search); Timed MAFFT Alignment (LSU RNA); BYTE Unix Benchmark (Dhrystone 2); Apache CouchDB (100 - 1000 - 24); KeyDB; Caffe (AlexNet and GoogleNet on CPU at 100/200/1000 iterations); NCNN (CPU: squeezenet, mobilenet, mobilenet-v2, mobilenet-v3, shufflenet-v2, mnasnet, efficientnet-b0, blazeface, googlenet, vgg16, resnet18, alexnet, resnet50, yolov4-tiny); Hierarchical INTegration (FLOAT); Mlpack Benchmark (scikit_ica, scikit_qda, scikit_svm, scikit_linearridgeregression). Detailed per-test results follow below.
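Result viewers like this one can also condense the many individual tests into a single overall geometric mean per run. A hedged sketch of geometric-mean aggregation (the helper name geometric_mean is mine and the ratios are hypothetical, chosen only to illustrate the calculation):

```python
import math

def geometric_mean(values):
    # nth root of the product, computed in log space for numerical stability
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Toy example: per-test speedup ratios of one run versus another.
ratios = [1.02, 0.99, 1.01, 1.00]
print(round(geometric_mean(ratios), 4))
```

The geometric mean is preferred over the arithmetic mean for ratios because it treats a 2x speedup and a 2x slowdown as exactly cancelling.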

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: BLAS (Nodes Per Second, More Is Better)
  EPYC 7F72:   2227 (SE +/- 11.27, N = 3; Min: 2207 / Max: 2246)
  EPYC 7F72 x: 2256 (SE +/- 24.04, N = 7; Min: 2171 / Max: 2338)
  2:           2213 (SE +/- 28.46, N = 5; Min: 2138 / Max: 2275)
  3:           2228 (SE +/- 9.53, N = 3; Min: 2210 / Max: 2242)
  1. (CXX) g++ options: -flto -pthread
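The SE values reported throughout this file are standard errors of the mean over the N runs of each test. As a sketch: for the first BLAS result above (Min 2207, Avg 2227, Max 2246, N = 3), the middle sample can be inferred as 3*2227 - 2207 - 2246 = 2228 (an inference, not reported data), and the reported SE +/- 11.27 then falls out of SE = s / sqrt(N):

```python
import math
import statistics

def standard_error(samples):
    # sample standard deviation divided by sqrt(N)
    return statistics.stdev(samples) / math.sqrt(len(samples))

# EPYC 7F72, LeelaChessZero BLAS: Min 2207, Max 2246, Avg 2227, N = 3.
# Middle sample inferred from the reported average (assumption).
samples = [2207, 2228, 2246]
print(round(standard_error(samples), 2))  # matches the reported SE +/- 11.27
```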

LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second, More Is Better)
  EPYC 7F72:   2073 (SE +/- 23.67, N = 9; Min: 1948 / Max: 2154)
  EPYC 7F72 x: 2178 (SE +/- 26.56, N = 3; Min: 2132 / Max: 2224)
  2:           2165 (SE +/- 11.22, N = 3; Min: 2148 / Max: 2186)
  3:           2096 (SE +/- 28.45, N = 9; Min: 1914 / Max: 2187)
  1. (CXX) g++ options: -flto -pthread

LeelaChessZero 0.26 - Backend: Random (Nodes Per Second, More Is Better)
  EPYC 7F72:   164671 (SE +/- 1317.43, N = 3; Min: 162634 / Max: 167137)
  EPYC 7F72 x: 164311 (SE +/- 1500.11, N = 3; Min: 162612 / Max: 167302)
  2:           166198 (SE +/- 1148.31, N = 3; Min: 163924 / Max: 167615)
  3:           165528 (SE +/- 1873.00, N = 3; Min: 161788 / Max: 167577)
  1. (CXX) g++ options: -flto -pthread

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code using modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, Fewer Is Better)
  EPYC 7F72:   18.63 (SE +/- 0.06, N = 3; Min: 18.57 / Max: 18.75)
  EPYC 7F72 x: 18.64 (SE +/- 0.08, N = 3; Min: 18.51 / Max: 18.79)
  2:           18.54 (SE +/- 0.03, N = 3; Min: 18.5 / Max: 18.6)
  3:           18.65 (SE +/- 0.06, N = 3; Min: 18.57 / Max: 18.76)

FFTE

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, More Is Better)
  EPYC 7F72:   121809.43 (SE +/- 630.92, N = 3; Min: 120557.37 / Max: 122571.19)
  EPYC 7F72 x: 122399.73 (SE +/- 285.55, N = 3; Min: 121945.75 / Max: 122926.78)
  2:           121264.38 (SE +/- 807.96, N = 3; Min: 119649.53 / Max: 122122.93)
  3:           122021.48 (SE +/- 254.91, N = 3; Min: 121563.81 / Max: 122444.85)
  1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

Timed HMMer Search

This test searches through the Pfam database of profile hidden markov models. The search finds the domain structure of Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, Fewer Is Better)
  EPYC 7F72:   142.41 (SE +/- 0.22, N = 3; Min: 141.98 / Max: 142.65)
  EPYC 7F72 x: 142.61 (SE +/- 0.05, N = 3; Min: 142.52 / Max: 142.67)
  2:           142.60 (SE +/- 0.04, N = 3; Min: 142.52 / Max: 142.65)
  3:           142.54 (SE +/- 0.02, N = 3; Min: 142.5 / Max: 142.58)
  1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, Fewer Is Better)
  EPYC 7F72:   9.465 (SE +/- 0.055, N = 3; Min: 9.36 / Max: 9.54)
  EPYC 7F72 x: 9.559 (SE +/- 0.028, N = 3; Min: 9.5 / Max: 9.6)
  2:           9.644 (SE +/- 0.053, N = 3; Min: 9.55 / Max: 9.73)
  3:           9.575 (SE +/- 0.057, N = 3; Min: 9.47 / Max: 9.67)
  1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

BYTE Unix Benchmark

This is a test of BYTE. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, More Is Better)
  EPYC 7F72:   37756888.4 (SE +/- 390690.78, N = 3; Min: 37005527.3 / Max: 38318338.3)
  EPYC 7F72 x: 38374014.4 (SE +/- 421072.96, N = 3; Min: 37694443.3 / Max: 39144548.9)
  2:           37972490.0 (SE +/- 493714.96, N = 5; Min: 36332488 / Max: 39284791.6)
  3:           37884312.2 (SE +/- 550044.12, N = 3; Min: 36964762.8 / Max: 38867027.9)

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.1.1 - Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds, Fewer Is Better)
  EPYC 7F72:   103.69 (SE +/- 0.33, N = 3; Min: 103.1 / Max: 104.24)
  EPYC 7F72 x: 103.86 (SE +/- 0.06, N = 3; Min: 103.78 / Max: 103.97)
  2:           104.45 (SE +/- 0.60, N = 3; Min: 103.83 / Max: 105.65)
  3:           104.65 (SE +/- 0.60, N = 3; Min: 103.64 / Max: 105.72)
  1. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (Ops/sec, More Is Better)
  EPYC 7F72:   420383.50 (SE +/- 250.94, N = 3; Min: 419929.56 / Max: 420795.86)
  EPYC 7F72 x: 421676.28 (SE +/- 2108.70, N = 3; Min: 418696.42 / Max: 425750.8)
  2:           419756.03 (SE +/- 1309.71, N = 3; Min: 417242.78 / Max: 421651.96)
  3:           419091.19 (SE +/- 1744.78, N = 3; Min: 416483.28 / Max: 422403.1)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
  EPYC 7F72 x: 75293 (SE +/- 23.86, N = 3; Min: 75259 / Max: 75339)
  2:           75164 (SE +/- 171.98, N = 3; Min: 74895 / Max: 75484)
  3:           75300 (SE +/- 215.05, N = 3; Min: 74878 / Max: 75582)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better)
  EPYC 7F72 x: 150373 (SE +/- 211.31, N = 3; Min: 150024 / Max: 150754)
  2:           150269 (SE +/- 69.27, N = 3; Min: 150131 / Max: 150345)
  3:           150626 (SE +/- 75.05, N = 3; Min: 150483 / Max: 150737)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 1000 (Milli-Seconds, Fewer Is Better)
  EPYC 7F72 x: 751617 (SE +/- 670.58, N = 3; Min: 750333 / Max: 752594)
  2:           752070 (SE +/- 654.75, N = 3; Min: 751183 / Max: 753348)
  3:           751645 (SE +/- 811.75, N = 3; Min: 750365 / Max: 753150)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
  EPYC 7F72 x: 190013 (SE +/- 464.46, N = 3; Min: 189178 / Max: 190783)
  2:           189355 (SE +/- 245.02, N = 3; Min: 188916 / Max: 189763)
  3:           190069 (SE +/- 407.34, N = 3; Min: 189367 / Max: 190778)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better)
  EPYC 7F72 x: 379880 (SE +/- 429.90, N = 3; Min: 379218 / Max: 380686)
  2:           380086 (SE +/- 518.16, N = 3; Min: 379470 / Max: 381116)
  3:           379324 (SE +/- 266.02, N = 3; Min: 378792 / Max: 379596)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 1000 (Milli-Seconds, Fewer Is Better)
  EPYC 7F72 x: 1902543 (SE +/- 2432.39, N = 3; Min: 1898950 / Max: 1907180)
  2:           1900487 (SE +/- 1530.16, N = 3; Min: 1897860 / Max: 1903160)
  3:           1900833 (SE +/- 909.25, N = 3; Min: 1899370 / Max: 1902500)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, Fewer Is Better)
  EPYC 7F72 x: 18.87 (SE +/- 0.10, N = 3; Min: 18.72 / Max: 19.07; MIN: 18.36 / MAX: 20.85)
  2:           19.38 (SE +/- 0.20, N = 7; Min: 18.93 / Max: 20.44; MIN: 18.46 / MAX: 118.13)
  3:           19.62 (SE +/- 0.30, N = 3; Min: 19.05 / Max: 20.04; MIN: 18.65 / MAX: 21.93)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  EPYC 7F72 x: 19.42 (SE +/- 0.01, N = 3; Min: 19.41 / Max: 19.44; MIN: 18.9 / MAX: 21.51)
  2:           19.93 (SE +/- 0.28, N = 7; Min: 19.38 / Max: 21.16; MIN: 18.87 / MAX: 22.76)
  3:           20.42 (SE +/- 0.68, N = 3; Min: 19.68 / Max: 21.78; MIN: 18.91 / MAX: 84.5)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  EPYC 7F72 x: 9.21 (SE +/- 0.05, N = 3; Min: 9.15 / Max: 9.3; MIN: 8.89 / MAX: 10.51)
  2:           9.26 (SE +/- 0.04, N = 7; Min: 9.08 / Max: 9.37; MIN: 8.75 / MAX: 11.07)
  3:           9.28 (SE +/- 0.12, N = 3; Min: 9.16 / Max: 9.53; MIN: 8.89 / MAX: 10.96)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  EPYC 7F72 x: 8.80 (SE +/- 0.03, N = 3; Min: 8.76 / Max: 8.85; MIN: 8.56 / MAX: 10.78)
  2:           8.81 (SE +/- 0.04, N = 7; Min: 8.69 / Max: 8.95; MIN: 8.52 / MAX: 10.79)
  3:           8.88 (SE +/- 0.07, N = 3; Min: 8.74 / Max: 8.97; MIN: 8.49 / MAX: 76.01)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  EPYC 7F72 x: 9.32 (SE +/- 0.02, N = 3; Min: 9.28 / Max: 9.35; MIN: 9.09 / MAX: 14.74)
  2:           9.33 (SE +/- 0.06, N = 6; Min: 9.15 / Max: 9.55; MIN: 9.05 / MAX: 11.33)
  3:           9.34 (SE +/- 0.06, N = 3; Min: 9.28 / Max: 9.45; MIN: 9.14 / MAX: 10.56)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  EPYC 7F72 x: 8.40 (SE +/- 0.01, N = 3; Min: 8.38 / Max: 8.43; MIN: 8.11 / MAX: 10.16)
  2:           8.45 (SE +/- 0.03, N = 7; Min: 8.32 / Max: 8.56; MIN: 8.05 / MAX: 10.2)
  3:           8.41 (SE +/- 0.06, N = 3; Min: 8.29 / Max: 8.51; MIN: 8.08 / MAX: 11.46)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  EPYC 7F72 x: 11.09 (SE +/- 0.09, N = 3; Min: 10.93 / Max: 11.24; MIN: 10.74 / MAX: 13.3)
  2:           11.03 (SE +/- 0.06, N = 7; Min: 10.85 / Max: 11.24; MIN: 10.67 / MAX: 16.51)
  3:           11.00 (SE +/- 0.08, N = 3; Min: 10.85 / Max: 11.14; MIN: 10.66 / MAX: 13.36)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  EPYC 7F72 x: 3.74 (SE +/- 0.03, N = 3; Min: 3.7 / Max: 3.79; MIN: 3.46 / MAX: 4.73)
  2:           3.77 (SE +/- 0.01, N = 7; Min: 3.73 / Max: 3.82; MIN: 3.58 / MAX: 6)
  3:           3.73 (SE +/- 0.03, N = 3; Min: 3.68 / Max: 3.79; MIN: 3.58 / MAX: 4.65)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  EPYC 7F72 x: 19.62 (SE +/- 0.14, N = 3; Min: 19.45 / Max: 19.9; MIN: 19.13 / MAX: 22.12)
  2:           19.74 (SE +/- 0.09, N = 7; Min: 19.35 / Max: 20.09; MIN: 19.14 / MAX: 25.15)
  3:           19.68 (SE +/- 0.17, N = 3; Min: 19.37 / Max: 19.96; MIN: 19.15 / MAX: 21.9)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  EPYC 7F72 x: 32.83 (SE +/- 0.56, N = 3; Min: 31.73 / Max: 33.54; MIN: 31.34 / MAX: 36.46)
  2:           32.85 (SE +/- 0.30, N = 7; Min: 31.5 / Max: 34.03; MIN: 31.1 / MAX: 98.34)
  3:           32.52 (SE +/- 0.22, N = 3; Min: 32.09 / Max: 32.84; MIN: 31.78 / MAX: 34.87)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  EPYC 7F72 x: 13.04 (SE +/- 0.27, N = 3; Min: 12.55 / Max: 13.47; MIN: 12.34 / MAX: 38.34)
  2:           13.05 (SE +/- 0.13, N = 7; Min: 12.54 / Max: 13.5; MIN: 12.38 / MAX: 54.07)
  3:           13.00 (SE +/- 0.13, N = 3; Min: 12.76 / Max: 13.22; MIN: 12.56 / MAX: 14.64)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
  EPYC 7F72 x: 9.16 (SE +/- 0.26, N = 3; Min: 8.7 / Max: 9.61; MIN: 8.54 / MAX: 10.41)
  2:           9.42 (SE +/- 0.19, N = 6; Min: 8.69 / Max: 9.88; MIN: 8.51 / MAX: 10.8)
  3:           9.25 (SE +/- 0.23, N = 3; Min: 8.82 / Max: 9.62; MIN: 8.56 / MAX: 12.4)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  EPYC 7F72 x: 22.78 (SE +/- 0.16, N = 3; Min: 22.51 / Max: 23.05; MIN: 22.17 / MAX: 85.73)
  2:           22.71 (SE +/- 0.06, N = 7; Min: 22.45 / Max: 22.94; MIN: 22.12 / MAX: 25.08)
  3:           22.92 (SE +/- 0.23, N = 3; Min: 22.62 / Max: 23.38; MIN: 22.33 / MAX: 105.21)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  EPYC 7F72 x: 30.20 (SE +/- 0.14, N = 3; Min: 30 / Max: 30.47; MIN: 29.67 / MAX: 32.8)
  2:           30.14 (SE +/- 0.11, N = 7; Min: 29.61 / Max: 30.42; MIN: 29.27 / MAX: 65.59)
  3:           30.27 (SE +/- 0.21, N = 3; Min: 29.84 / Max: 30.48; MIN: 29.38 / MAX: 33.06)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, More Is Better)
  EPYC 7F72 x: 328822812.23 (SE +/- 53237.74, N = 3; Min: 328734302.38 / Max: 328918324.12)
  2:           328868294.69 (SE +/- 40216.53, N = 3; Min: 328787999.13 / Max: 328912513.79)
  3:           328901110.01 (SE +/- 377618.26, N = 3; Min: 328299368.72 / Max: 329597223.88)
  1. (CC) gcc options: -O3 -march=native -lm

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_ica (Seconds, Fewer Is Better)
  EPYC 7F72 x: 61.04 (SE +/- 0.88, N = 3; Min: 60 / Max: 62.79)
  2:           60.77 (SE +/- 0.99, N = 3; Min: 58.88 / Max: 62.21)
  3:           61.39 (SE +/- 0.19, N = 3; Min: 61.14 / Max: 61.76)

Mlpack Benchmark - Benchmark: scikit_qda (Seconds, Fewer Is Better)
  EPYC 7F72 x: 39.45 (SE +/- 0.13, N = 3; Min: 39.28 / Max: 39.71)
  2:           39.30 (SE +/- 0.03, N = 3; Min: 39.27 / Max: 39.35)
  3:           39.61 (SE +/- 0.09, N = 3; Min: 39.49 / Max: 39.79)

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, Fewer Is Better)
  EPYC 7F72 x: 24.28 (SE +/- 0.05, N = 3; Min: 24.18 / Max: 24.36)
  2:           24.28 (SE +/- 0.05, N = 3; Min: 24.18 / Max: 24.34)
  3:           24.26 (SE +/- 0.05, N = 3; Min: 24.21 / Max: 24.37)

Mlpack Benchmark - Benchmark: scikit_linearridgeregression (Seconds, Fewer Is Better)
  EPYC 7F72 x: 1.65 (SE +/- 0.01, N = 3; Min: 1.63 / Max: 1.66)
  2:           1.65 (SE +/- 0.01, N = 3; Min: 1.63 / Max: 1.67)
  3:           1.65 (SE +/- 0.00, N = 3; Min: 1.64 / Max: 1.65)

