AMD EPYC 7F32 2021 Linux

AMD EPYC 7F32 8-Core testing with a ASRockRack EPYCD8 (P2.40 BIOS) and ASPEED on Debian 11 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2201040-NE-AMDEPYC7F37

Test categories represented in this result file:

BLAS (Basic Linear Algebra Subprograms) Tests: 3 Tests
C++ Boost Tests: 2 Tests
C/C++ Compiler Tests: 4 Tests
Compression Tests: 3 Tests
CPU Massive: 13 Tests
Creator Workloads: 6 Tests
Fortran Tests: 5 Tests
HPC - High Performance Computing: 18 Tests
Java Tests: 2 Tests
Common Kernel Benchmarks: 2 Tests
LAPACK (Linear Algebra PACKage) Tests: 2 Tests
Machine Learning: 5 Tests
Molecular Dynamics: 5 Tests
MPI Benchmarks: 6 Tests
Multi-Core: 13 Tests
NVIDIA GPU Compute: 6 Tests
OpenMPI Tests: 11 Tests
Programmer / Developer System Benchmarks: 3 Tests
Python Tests: 2 Tests
Renderers: 3 Tests
Scientific Computing: 9 Tests
Server: 2 Tests
Server CPU Tests: 7 Tests
Common Workstation Benchmarks: 3 Tests

Run Management

Result Identifier    Date Run            Test Duration
6 Channel            December 31 2021    3 Hours, 12 Minutes
8 Channel            January 03 2022     2 Hours, 59 Minutes
8c                   January 04 2022     3 Hours, 7 Minutes
AMD 8c               January 04 2022     3 Hours, 9 Minutes
AMD                  January 04 2022     2 Hours, 32 Minutes
Average                                  3 Hours


AMD EPYC 7F32 2021 Linux - System Details (runs: 6 Channel, 8 Channel, 8c, AMD 8c, AMD)

Processor: AMD EPYC 7F32 8-Core @ 3.70GHz (8 Cores / 16 Threads)
Motherboard: ASRockRack EPYCD8 (P2.40 BIOS)
Chipset: AMD Starship/Matisse
Memory: 24GB (6 Channel run) / 32GB (8-channel runs)
Disk: Samsung SSD 970 EVO Plus 250GB
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel I350
OS: Debian 11
Kernel: 5.10.0-10-amd64 (x86_64)
Desktop: GNOME Shell 3.38.6
Display Server: X Server
Compiler: GCC 10.2.1 20210110
File-System: ext4
Screen Resolution: 1920x1080 / 1024x768 (varies by run)

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301034
Java Details: OpenJDK Runtime Environment (build 11.0.13+8-post-Debian-1deb11u1)
Python Details: Python 3.9.2
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (relative performance of the five runs, generated by the Phoronix Test Suite) covering: High Performance Conjugate Gradient, Algebraic Multi-Grid Benchmark, ASKAP, Xcompact3d Incompact3d, Stress-NG, NAS Parallel Benchmarks, Mobile Neural Network, LULESH, Rodinia, NCNN, LeelaChessZero, C-Blosc, ONNX Runtime, CloverLeaf, Apache HTTP Server, GROMACS, Quantum ESPRESSO, Apache Cassandra, Renaissance, Natron, 7-Zip Compression, NAMD, AOM AV1, QMCPACK, Blender, and QuantLib.


C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.
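
To give a concrete sense of the operation this test measures, here is a minimal compression round-trip sketch using the python-blosc bindings with the same blosclz codec; the data and package usage are illustrative assumptions, not part of this result file.

    import numpy as np
    import blosc  # python-blosc bindings (assumed available, e.g. via pip install blosc)

    # A block of numeric data, a stand-in for whatever the benchmark streams through the codec.
    data = np.arange(1_000_000, dtype=np.float64).tobytes()

    # Compress with the blosclz codec exercised by this test profile.
    compressed = blosc.compress(data, typesize=8, cname="blosclz")

    # Decompress and verify the round trip.
    restored = blosc.decompress(compressed)
    assert restored == data
    print(f"compression ratio: {len(data) / len(compressed):.2f}x")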

C-Blosc 2.0, Compressor: blosclz (MB/s, more is better): AMD 16490.9; AMD 8c 16659.5; 8c 16428.9; 8 Channel 18911.4; 6 Channel 16263.8. 1. (CC) gcc options: -std=gnu99 -O3 -pthread -lrt -lm

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, more is better): AMD 2290.5; AMD 8c 2293.7; 8c 2295.4; 8 Channel 2299.0; 6 Channel 2286.2. 1. (CXX) g++ options: -O3 -march=native -rdynamic

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better): AMD 5.65688; AMD 8c 5.56129; 8c 5.64727; 8 Channel 13.40180; 6 Channel 5.00302. 1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 (Total Mop/s, more is better):

BT.C: AMD 13014.58; AMD 8c 13072.11; 8c 13061.95; 8 Channel 22503.51; 6 Channel 12549.80
CG.C: AMD 9250.56; AMD 8c 9320.61; 8c 9314.03; 8 Channel 12279.92; 6 Channel 7541.70
FT.C: AMD 24013.78; AMD 8c 24036.48; 8c 24038.13; 8 Channel 27048.70; 6 Channel 19682.31
IS.D: AMD 1243.22; AMD 8c 1230.37; 8c 1230.18; 8 Channel 1361.17 (6 Channel: the test quit with a non-zero exit status)
LU.C: AMD 32632.49; AMD 8c 32695.45; 8c 32624.75; 8 Channel 42479.35; 6 Channel 28662.33
MG.C: AMD 24245.18; AMD 8c 24274.56; 8c 24277.10; 8 Channel 42930.12; 6 Channel 18672.36
SP.B: AMD 11635.03; AMD 8c 11725.22; 8c 11834.97; 8 Channel 14980.76; 6 Channel 10290.68
SP.C: AMD 10093.60; AMD 8c 10014.40; 8c 10093.45; 8 Channel 12193.29; 6 Channel 7627.97

1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.1.0

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine driven by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 (Nodes Per Second, more is better):

Backend: BLAS: AMD 1063; AMD 8c 885; 8c 969; 8 Channel 1092; 6 Channel 918
Backend: Eigen: AMD 938; AMD 8c 1034; 8c 1000; 8 Channel 1035; 6 Channel 897

1. (CXX) g++ options: -flto -pthread

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

CloverLeaf, Lagrangian-Eulerian Hydrodynamics (Seconds, fewer is better): AMD 38.12; AMD 8c 38.54; 8c 38.79; 8 Channel 38.14; 6 Channel 42.79. 1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP Streamcluster (Seconds, fewer is better): AMD 18.95; AMD 8c 17.48; 8c 17.44; 8 Channel 17.91; 6 Channel 22.24. 1. (CXX) g++ options: -O2 -lOpenCL

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better): AMD 2.31863; AMD 8c 2.32392; 8c 2.32165; 8 Channel 2.25050; 6 Channel 2.30670

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, more is better): AMD 578913900; AMD 8c 575117000; 8c 574387100; 8 Channel 723789800; 6 Channel 396377200. 1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.11, Input: simple-H2O (Total Execution Time - Seconds, fewer is better): AMD 25.91; AMD 8c 26.16; 8c 26.19; 8 Channel 26.04; 6 Channel 26.51. 1. (CXX) g++ options: -fopenmp -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -march=native -O3 -fomit-frame-pointer -ffast-math -pthread -lm -ldl

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 (Seconds, fewer is better):

Input: input.i3d 129 Cells Per Direction: AMD 18.77; AMD 8c 17.75; 8c 18.80; 8 Channel 16.06; 6 Channel 22.20
Input: input.i3d 193 Cells Per Direction: AMD 72.46; AMD 8c 72.48; 8c 72.59; 8 Channel 57.37; 6 Channel 82.31

1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 7.0, Input: AUSURF112 (Seconds, fewer is better): AMD 577.60; AMD 8c 578.24; 8c 575.64; 8 Channel 566.26; 6 Channel 602.61. 1. (F9X) gfortran options: -pthread -fopenmp -ldevXlib -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3_omp -lfftw3 -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, more is better): AMD 9326.19; AMD 8c 9330.30; 8c 9414.91; 8 Channel 10006.29; 6 Channel 7552.09. 1. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

Renaissance

Renaissance is a suite of benchmarks designed to exercise the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other workloads. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12, Test: In-Memory Database Shootout (ms, fewer is better): AMD 4980.9 (min 4679.53 / max 5662.54); AMD 8c 5115.9 (min 4666.09 / max 5964.8); 8c 5283.9 (min 4814.03 / max 5626.21); 8 Channel 5112.4 (min 4623.73 / max 5687.4); 6 Channel 5393.8 (min 4921.3 / max 5704.16)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): AMD 37.25; AMD 8c 37.33; 8c 37.41; 8 Channel 37.86; 6 Channel 36.99. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 21.06 (MIPS, more is better):

Test: Compression Rating: AMD 74081; AMD 8c 75534; 8c 74893; 8 Channel 72415; 6 Channel 70649
Test: Decompression Rating: AMD 60113; AMD 8c 60009; 8c 60174; 8 Channel 59688; 6 Channel 59754

1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0:

Test: tConvolve MT - Gridding (Million Grid Points Per Second, more is better): AMD 2861.04; AMD 8c 2864.89; 8c 2864.89; 8 Channel 2866.82; 6 Channel 1400.89
Test: tConvolve MT - Degridding (Million Grid Points Per Second, more is better): AMD 3143.98; AMD 8c 3146.30; 8c 3148.63; 8 Channel 3123.24; 6 Channel 2283.01
Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, more is better): AMD 2535.77; AMD 8c 2560.15; 8c 2560.15; 8 Channel 2585.01; 6 Channel 1358.45
Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, more is better): AMD 3247.02; AMD 8c 3247.02; 8c 3247.02; 8 Channel 3207.90; 6 Channel 2182.43
Test: Hogbom Clean OpenMP (Iterations Per Second, more is better): AMD 313.48; AMD 8c 314.47; 8c 309.60; 8 Channel 313.48; 6 Channel 231.48

1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2021.2, Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better): AMD 1.257; AMD 8c 1.253; 8c 1.252; 8 Channel 1.230; 6 Channel 1.157 (-lm). 1. (CXX) g++ options: -O3 -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.13.02, Test: CPU Cache (Bogo Ops/s, more is better): AMD 49.93; AMD 8c 55.90; 8c 61.70; 8 Channel 68.79; 6 Channel 61.76. 1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lsctp -lz -ldl -pthread -lc -latomic

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 (ms, fewer is better):

Model: mobilenetV3: AMD 5.070 (min 5.03 / max 5.2); AMD 8c 5.104 (min 5.06 / max 8.5); 8c 5.059 (min 5.02 / max 5.35); 8 Channel 5.118 (min 4.68 / max 16.78); 6 Channel 5.292 (min 5.25 / max 7.72)
Model: squeezenetv1.1: AMD 6.811 (min 6.77 / max 10.43); AMD 8c 6.680 (min 6.65 / max 6.87); 8c 6.671 (min 6.63 / max 19.13); 8 Channel 6.784 (min 6.73 / max 17.72); 6 Channel 8.092 (min 8.01 / max 11.62)
Model: resnet-v2-50: AMD 29.49 (min 28.46 / max 41.67); AMD 8c 28.80 (min 28.3 / max 31.52); 8c 28.81 (min 28.35 / max 29.29); 8 Channel 29.07 (min 28.63 / max 29.77); 6 Channel 38.70 (min 36.54 / max 71.95)
Model: MobileNetV2_224: AMD 5.841 (min 5.76 / max 8.12); AMD 8c 5.517 (min 5.44 / max 18.62); 8c 5.474 (min 5.42 / max 9.15); 8 Channel 5.577 (min 5.48 / max 8.68); 6 Channel 8.494 (min 8.44 / max 8.81)
Model: mobilenet-v1-1.0: AMD 4.593 (min 4.55 / max 4.85); AMD 8c 4.470 (min 4.42 / max 4.9); 8c 4.469 (min 4.42 / max 4.83); 8 Channel 4.459 (min 4.41 / max 4.87); 6 Channel 8.185 (min 8.03 / max 9.21)
Model: inception-v3: AMD 42.31 (min 41.61 / max 59.31); AMD 8c 43.22 (min 42.25 / max 55.66); 8c 42.21 (min 41.55 / max 55.62); 8 Channel 42.75 (min 42.17 / max 53.54); 6 Channel 47.72 (min 45.39 / max 88)

1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 (ms, fewer is better):

Target: CPU - Model: mobilenet: AMD 21.36 (min 20.4 / max 22.31); AMD 8c 20.46 (min 20.13 / max 21.14); 8c 20.85 (min 20.42 / max 21.54); 8 Channel 20.47 (min 20.15 / max 21.29); 6 Channel 26.42 (min 26.16 / max 26.69)
Target: CPU-v3-v3 - Model: mobilenet-v3: AMD 6.75 (min 6.48 / max 6.87); AMD 8c 6.72 (min 6.63 / max 6.9); 8c 6.74 (min 6.66 / max 6.89); 8 Channel 6.82 (min 6.74 / max 6.94); 6 Channel 7.21 (min 7.06 / max 7.41)
Target: CPU - Model: shufflenet-v2: AMD 8.13 (min 8.05 / max 8.28); AMD 8c 8.08 (min 7.97 / max 8.54); 8c 8.09 (min 8.03 / max 8.2); 8 Channel 8.17 (min 8.1 / max 8.49); 6 Channel 8.43 (min 8.34 / max 8.67)
Target: CPU - Model: efficientnet-b0: AMD 10.27 (min 10.18 / max 10.37); AMD 8c 10.30 (min 10.22 / max 10.71); 8c 10.31 (min 10.24 / max 10.4); 8 Channel 12.87 (min 10.33 / max 101.47); 6 Channel 11.20 (min 11.1 / max 11.36)
Target: CPU - Model: blazeface: AMD 3.69 (min 3.58 / max 3.76); AMD 8c 3.72 (min 3.62 / max 3.82); 8c 3.71 (min 3.61 / max 3.79); 8 Channel 3.88 (min 3.73 / max 7.29); 6 Channel 3.87 (min 3.69 / max 4.32)
Target: CPU - Model: googlenet: AMD 19.30 (min 19.16 / max 20.03); AMD 8c 18.94 (min 18.74 / max 30.02); 8c 18.90 (min 18.69 / max 22.18); 8 Channel 19.72 (min 19.51 / max 20.08); 6 Channel 22.61 (min 22.34 / max 23.36)
Target: CPU - Model: vgg16: AMD 39.08 (min 38.76 / max 40.42); AMD 8c 39.18 (min 38.75 / max 39.61); 8c 39.20 (min 38.6 / max 50.04); 8 Channel 39.21 (min 38.62 / max 48.96); 6 Channel 67.64 (min 66.99 / max 68.67)
Target: CPU - Model: resnet18: AMD 14.74 (min 14.46 / max 15.26); AMD 8c 14.61 (min 14.39 / max 15.07); 8c 14.63 (min 14.43 / max 15.12); 8 Channel 14.78 (min 14.56 / max 16.14); 6 Channel 22.26 (min 21.79 / max 23.91)
Target: CPU - Model: alexnet: AMD 9.38 (min 9.28 / max 9.81); AMD 8c 9.32 (min 9.22 / max 10.6); 8c 9.34 (min 9.26 / max 9.8); 8 Channel 9.58 (min 9.3 / max 52.6); 6 Channel 14.07 (min 13.9 / max 14.81)
Target: CPU - Model: resnet50: AMD 24.85 (min 24.61 / max 28.04); AMD 8c 24.65 (min 24.38 / max 25.67); 8c 24.64 (min 24.35 / max 25.4); 8 Channel 25.13 (min 24.89 / max 26.13); 6 Channel 35.41 (min 32.26 / max 160.12)
Target: CPU - Model: yolov4-tiny: AMD 34.64 (min 33.75 / max 38.82); AMD 8c 31.31 (min 30.92 / max 34.17); 8c 31.34 (min 30.95 / max 32.51); 8 Channel 32.63 (min 30.92 / max 114.96); 6 Channel 43.56 (min 39.7 / max 118.6)
Target: CPU - Model: squeezenet_ssd: AMD 28.45 (min 27.87 / max 88.18); AMD 8c 28.31 (min 28.07 / max 28.83); 8c 28.23 (min 27.93 / max 28.74); 8 Channel 28.59 (min 27.15 / max 29.17); 6 Channel 33.53 (min 31.54 / max 93.95)
Target: CPU - Model: regnety_400m: AMD 24.35 (min 23.49 / max 24.74); AMD 8c 23.94 (min 23.04 / max 26.46); 8c 24.04 (min 23.06 / max 24.6); 8 Channel 24.69 (min 23.98 / max 25.32); 6 Channel 24.62 (min 23.92 / max 25.19)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported. Learn more via the OpenBenchmarking.org test page.
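
For reference, a CPU-only Cycles render like the ones timed below can be scripted through Blender's bundled Python API; this is a rough sketch that assumes it is run inside Blender (for example via blender --background --python script.py) and the .blend file name is a placeholder, not a file from this result set.

    import bpy  # only available inside Blender's bundled Python interpreter

    # Open one of the Cycles demo scenes (placeholder path, not from this result file).
    bpy.ops.wm.open_mainfile(filepath="bmw27.blend")

    scene = bpy.context.scene
    scene.render.engine = "CYCLES"   # the render engine this benchmark exercises
    scene.cycles.device = "CPU"      # CPU-only, matching the results below
    scene.render.filepath = "//bmw27_render.png"

    # Render a single frame and write it to disk; the benchmark reports how long this takes.
    bpy.ops.render.render(write_still=True)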

Blender 3.0, Compute: CPU-Only (Seconds, fewer is better):

Blend File: BMW27: AMD 150.31; AMD 8c 150.48; 8c 150.26; 8 Channel 150.22; 6 Channel 150.85
Blend File: Classroom: AMD 401.88; AMD 8c 401.93; 8c 400.33; 8 Channel 401.58; 6 Channel 395.46
Blend File: Fishy Cat: AMD 197.45; AMD 8c 197.12; 8c 197.36; 8 Channel 198.59; 6 Channel 196.16
Blend File: Barbershop: AMD 1650.97; AMD 8c 1648.10; 8c 1644.19; 8 Channel 1652.87; 6 Channel 1649.30
Blend File: Pabellon Barcelona: AMD 486.42; AMD 8c 486.37; 8c 485.95; 8 Channel 485.88; 6 Channel 478.75

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.
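
The read and write operations counted below boil down to simple statements like the following sketch with the DataStax Python driver; the contact point, keyspace, and table are hypothetical and only illustrate the kind of work cassandra-stress generates.

    from cassandra.cluster import Cluster  # DataStax Python driver (pip install cassandra-driver)

    # Connect to a local node (placeholder contact point and keyspace).
    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect("demo_keyspace")

    # One write and one read, the two operation types reported as Op/s below.
    session.execute("INSERT INTO kv (id, value) VALUES (%s, %s)", (1, "hello"))
    row = session.execute("SELECT value FROM kv WHERE id = %s", (1,)).one()
    print(row.value)

    cluster.shutdown()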

Apache Cassandra 4.0 (Op/s, more is better):

Test: Reads: AMD 70859; AMD 8c 72809; 8c 75029; 8 Channel 66602; 6 Channel 74577
Test: Writes: AMD 75713; AMD 8c 75982; 8c 75478; 8 Channel 75159; 6 Channel 75347

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4, Input: Spaceship (FPS, more is better): AMD 2.7; AMD 8c 2.6; 8c 2.7; 8 Channel 2.6; 6 Channel 2.6

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
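
A minimal CPU inference sketch with the onnxruntime Python API is shown below; the model path and the random float32 input are placeholders, assuming a model along the lines of the ONNX Model Zoo networks listed in the results.

    import numpy as np
    import onnxruntime as ort

    # Load a model on the CPU execution provider (placeholder file name).
    session = ort.InferenceSession("super-resolution-10.onnx",
                                   providers=["CPUExecutionProvider"])

    # Build a dummy input matching the model's declared shape (symbolic dims become 1).
    inp = session.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    dummy = np.random.rand(*shape).astype(np.float32)

    # One inference; the benchmark reports how many of these complete per minute.
    outputs = session.run(None, {inp.name: dummy})
    print([o.shape for o in outputs])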

ONNX Runtime 1.10, Device: CPU (Inferences Per Minute, more is better):

Model: yolov4: AMD 210; AMD 8c 172; 8c 209; 8 Channel 188; 6 Channel 149
Model: fcn-resnet101-11: AMD 29; AMD 8c 30; 8c 39; 8 Channel 33; 6 Channel 27
Model: super-resolution-10: AMD 2912; AMD 8c 2955; 8c 2931; 8 Channel 2954; 6 Channel 4079

1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -pthread -lpthread

Apache HTTP Server

This is a test of the Apache HTTPD web server. This benchmark test profile uses the Golang "Bombardier" program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
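
The load pattern described above, a fixed time window with a configurable number of concurrent clients hitting one URL, can be sketched in plain Python as follows; this only illustrates the measurement concept, it is not the Bombardier tool the test profile actually uses, and the target URL, client count, and duration are placeholders.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:80/"   # placeholder target
    CLIENTS = 100                  # concurrent clients, analogous to the concurrency levels below
    DURATION = 10                  # seconds

    def worker(deadline):
        # Each client issues requests back-to-back until the time window closes.
        count = 0
        while time.time() < deadline:
            with urllib.request.urlopen(URL) as resp:
                resp.read()
            count += 1
        return count

    deadline = time.time() + DURATION
    with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        totals = list(pool.map(worker, [deadline] * CLIENTS))

    print(f"{sum(totals) / DURATION:.1f} requests/second")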

Apache HTTP Server 2.4.48 (Requests Per Second, more is better):

Concurrent Requests: 1: AMD 7140.81; AMD 8c 7380.07; 8c 7238.12; 8 Channel 7409.13; 6 Channel 7798.12
Concurrent Requests: 20: AMD 8c 48980.05; 8c 51482.51; 8 Channel 49925.66; 6 Channel 50548.49
Concurrent Requests: 100: AMD 8c 70487.23; 8c 70509.72; 8 Channel 69156.06; 6 Channel 65225.25
Concurrent Requests: 200: AMD 8c 79944.07; 8c 79015.83; 8 Channel 78219.98; 6 Channel 70589.02
Concurrent Requests: 500: AMD 8c 73891.14; 8c 73170.81; 8 Channel 72991.58; 6 Channel 68373.75
Concurrent Requests: 1000: AMD 8c 71581.27; 8c 71951.17; 8 Channel 71121.03; 6 Channel 66317.82

1. (CC) gcc options: -shared -fPIC -O2 -pthread

RAR Compression

This test measures the time needed to archive/compress two copies of the Linux 5.14 kernel source tree using RAR/WinRAR compression. Learn more via the OpenBenchmarking.org test page.

RAR Compression 6.0.2, Linux Source Tree Archiving To RAR (Seconds, fewer is better): AMD 8c 91.14; 8c 91.04; 8 Channel 86.94; 6 Channel 92.10

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, more is better): AMD 8c 113397400; 8c 107941800; 8 Channel 110850000; 6 Channel 61658300. 1. (CXX) g++ options: -O3 -fopenmp

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.2 (VGR Performance Metric, more is better): AMD 8c 128103; 8c 126361; 8 Channel 126087; 6 Channel 129031. 1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -pthread -ldl -lm

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5, Mode: CPU (vsamples, more is better): AMD 8c 9850; 8c 10498; 8 Channel 10402; 6 Channel 10394

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
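
The Object Detection figure below comes from OpenCV's own compiled performance tests; as a rough illustration of that class of workload, here is a sketch using OpenCV's built-in HOG person detector from the Python bindings, with a placeholder input image.

    import cv2  # OpenCV Python bindings (pip install opencv-python)

    # Classic HOG + linear SVM person detector that ships with OpenCV.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    img = cv2.imread("street.jpg")  # placeholder input image
    boxes, weights = hog.detectMultiScale(img, winStride=(8, 8))

    # Draw the detections and save the annotated image.
    for (x, y, w, h) in boxes:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", img)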

OpenCV 4.5.4, Test: Object Detection (ms, fewer is better): AMD 8c 92146; 8c 85210; 8 Channel 99200; 6 Channel 141380. 1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 17.3, Time To Compile (Seconds, fewer is better): AMD 8c 506.76; 8c 504.85; 8 Channel 508.13; 6 Channel 511.61

OpenJPEG

OpenJPEG is an open-source JPEG 2000 codec written in the C programming language. The default input for this test profile is the NASA/JPL-Caltech/MSSS Curiosity panorama, a 717MB TIFF image file converted to the JPEG 2000 format. Learn more via the OpenBenchmarking.org test page.
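
The TIFF to JPEG 2000 conversion this test performs can be reproduced in miniature with Pillow, whose JPEG 2000 plugin is backed by the OpenJPEG library; the file names here are placeholders rather than the actual 717MB panorama.

    from PIL import Image  # Pillow; its JPEG 2000 codec is provided by OpenJPEG

    # Open a TIFF input and re-encode it as JPEG 2000 (the .jp2 extension selects the codec).
    with Image.open("panorama.tif") as img:   # placeholder input
        img.save("panorama.jp2")              # JPEG 2000 output via OpenJPEG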

OpenJPEG 2.4, Encode: NASA Curiosity Panorama M34 (ms, fewer is better): AMD 8c 76353; 8c 78291; 8 Channel 78123 (SE +/- 214.11, N = 3; min 77811 / avg 78123 / max 78533). 1. (CXX) g++ options: -rdynamic

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2, Model: SqueezeNetV1.0 (ms, fewer is better): AMD 11.55 (min 11.44 / max 12.11); AMD 8c 11.82 (min 11.75 / max 14.09); 8c 11.64 (min 11.59 / max 12.1). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 (ms, fewer is better):

Target: CPU-v2-v2 - Model: mobilenet-v2: AMD 7.69 (min 7.09 / max 49.13); AMD 8c 7.67 (min 7.13 / max 7.89); 8c 7.87 (min 7.76 / max 8.01)
Target: CPU - Model: mnasnet: AMD 7.21 (min 7.12 / max 7.29); AMD 8c 6.90 (min 6.83 / max 7.12); 8c 6.90 (min 6.81 / max 7.18)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread