AMD EPYC 7F32 2021 Linux

AMD EPYC 7F32 8-Core testing with an ASRockRack EPYCD8 (P2.40 BIOS) and ASPEED graphics on Debian 11 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2201040-NE-AMDEPYC7F37
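As a minimal shell sketch of that comparison workflow (the only assumption is that the Phoronix Test Suite is already installed; it will prompt to install any missing test profiles and ask for an identifier for your own run):

  # Re-run this result file's tests locally and compare against the published numbers
  phoronix-test-suite benchmark 2201040-NE-AMDEPYC7F37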

Result Identifier | Date | Test Duration
6 Channel | December 31 2021 | 3 Hours, 12 Minutes
8 Channel | January 03 2022 | 2 Hours, 59 Minutes
8c | January 04 2022 | 3 Hours, 7 Minutes
AMD 8c | January 04 2022 | 3 Hours, 9 Minutes
AMD | January 04 2022 | 2 Hours, 32 Minutes



System Configuration (common to the 6 Channel, 8 Channel, 8c, AMD 8c, and AMD runs unless noted)
Processor: AMD EPYC 7F32 8-Core @ 3.70GHz (8 Cores / 16 Threads)
Motherboard: ASRockRack EPYCD8 (P2.40 BIOS)
Chipset: AMD Starship/Matisse
Memory: 24GB or 32GB (differs between runs)
Disk: Samsung SSD 970 EVO Plus 250GB
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel I350
OS: Debian 11
Kernel: 5.10.0-10-amd64 (x86_64)
Desktop: GNOME Shell 3.38.6
Display Server: X Server
Compiler: GCC 10.2.1 20210110
File-System: ext4
Screen Resolution: 1920x1080 or 1024x768 (differs between runs)

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301034
Java Details: OpenJDK Runtime Environment (build 11.0.13+8-post-Debian-1deb11u1)
Python Details: Python 3.9.2
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): relative performance of the 6 Channel, 8 Channel, 8c, AMD 8c, and AMD runs across High Performance Conjugate Gradient, Algebraic Multi-Grid Benchmark, ASKAP, Xcompact3d Incompact3d, Stress-NG, NAS Parallel Benchmarks, Mobile Neural Network, LULESH, Rodinia, NCNN, LeelaChessZero, C-Blosc, ONNX Runtime, CloverLeaf, Apache HTTP Server, GROMACS, Quantum ESPRESSO, Apache Cassandra, Renaissance, Natron, 7-Zip Compression, NAMD, AOM AV1, QMCPACK, Blender, and QuantLib.

Per-test summary table of all results for the 6 Channel, 8 Channel, 8c, AMD 8c, and AMD runs; the individual results are detailed in the per-test sections below.

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0 - Compressor: blosclz (MB/s, more is better)
6 Channel: 16263.8 | 8 Channel: 18911.4 | 8c: 16428.9 | AMD 8c: 16659.5 | AMD: 16490.9
1. (CC) gcc options: -std=gnu99 -O3 -pthread -lrt -lm

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, more is better)
6 Channel: 2286.2 | 8 Channel: 2299.0 | 8c: 2295.4 | AMD 8c: 2293.7 | AMD: 2290.5
1. (CXX) g++ options: -O3 -march=native -rdynamic

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better)
6 Channel: 5.00302 | 8 Channel: 13.40180 | 8c: 5.64727 | AMD 8c: 5.56129 | AMD: 5.65688
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, more is better)
6 Channel: 12549.80 | 8 Channel: 22503.51 | 8c: 13061.95 | AMD 8c: 13072.11 | AMD: 13014.58

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, more is better)
6 Channel: 7541.70 | 8 Channel: 12279.92 | 8c: 9314.03 | AMD 8c: 9320.61 | AMD: 9250.56

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, more is better)
6 Channel: 19682.31 | 8 Channel: 27048.70 | 8c: 24038.13 | AMD 8c: 24036.48 | AMD: 24013.78

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D (Total Mop/s, more is better)
8 Channel: 1361.17 | 8c: 1230.18 | AMD 8c: 1230.37 | AMD: 1243.22
6 Channel: The test quit with a non-zero exit status.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, more is better)
6 Channel: 28662.33 | 8 Channel: 42479.35 | 8c: 32624.75 | AMD 8c: 32695.45 | AMD: 32632.49

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, more is better)
6 Channel: 18672.36 | 8 Channel: 42930.12 | 8c: 24277.10 | AMD 8c: 24274.56 | AMD: 24245.18

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s, more is better)
6 Channel: 10290.68 | 8 Channel: 14980.76 | 8c: 11834.97 | AMD 8c: 11725.22 | AMD: 11635.03

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s, more is better)
6 Channel: 7627.97 | 8 Channel: 12193.29 | 8c: 10093.45 | AMD 8c: 10014.40 | AMD: 10093.60

All NPB results: 1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.1.0

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, more is better)
6 Channel: 918 | 8 Channel: 1092 | 8c: 969 | AMD 8c: 885 | AMD: 1063

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, more is better)
6 Channel: 897 | 8 Channel: 1035 | 8c: 1000 | AMD 8c: 1034 | AMD: 938

1. (CXX) g++ options: -flto -pthread

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

CloverLeaf - Lagrangian-Eulerian Hydrodynamics (Seconds, fewer is better)
6 Channel: 42.79 | 8 Channel: 38.14 | 8c: 38.79 | AMD 8c: 38.54 | AMD: 38.12
1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, fewer is better)
6 Channel: 22.24 | 8 Channel: 17.91 | 8c: 17.44 | AMD 8c: 17.48 | AMD: 18.95
1. (CXX) g++ options: -O2 -lOpenCL

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better)
6 Channel: 2.30670 | 8 Channel: 2.25050 | 8c: 2.32165 | AMD 8c: 2.32392 | AMD: 2.31863

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, more is better)
6 Channel: 396377200 | 8 Channel: 723789800 | 8c: 574387100 | AMD 8c: 575117000 | AMD: 578913900
1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H20 example code. QMCPACK is an open-source production level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.11 - Input: simple-H2O (Total Execution Time - Seconds, fewer is better)
6 Channel: 26.51 | 8 Channel: 26.04 | 8c: 26.19 | AMD 8c: 26.16 | AMD: 25.91
1. (CXX) g++ options: -fopenmp -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -march=native -O3 -fomit-frame-pointer -ffast-math -pthread -lm -ldl

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, fewer is better)
6 Channel: 22.20 | 8 Channel: 16.06 | 8c: 18.80 | AMD 8c: 17.75 | AMD: 18.77

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds, fewer is better)
6 Channel: 82.31 | 8 Channel: 57.37 | 8c: 72.59 | AMD 8c: 72.48 | AMD: 72.46

1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 7.0 - Input: AUSURF112 (Seconds, fewer is better)
6 Channel: 602.61 | 8 Channel: 566.26 | 8c: 575.64 | AMD 8c: 578.24 | AMD: 577.60
1. (F9X) gfortran options: -pthread -fopenmp -ldevXlib -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3_omp -lfftw3 -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, more is better)
6 Channel: 7552.09 | 8 Channel: 10006.29 | 8c: 9414.91 | AMD 8c: 9330.30 | AMD: 9326.19
1. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12 - Test: In-Memory Database Shootout (ms, fewer is better)
6 Channel: 5393.8 (min 4921.3 / max 5704.16) | 8 Channel: 5112.4 (min 4623.73 / max 5687.4) | 8c: 5283.9 (min 4814.03 / max 5626.21) | AMD 8c: 5115.9 (min 4666.09 / max 5964.8) | AMD: 4980.9 (min 4679.53 / max 5662.54)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better)
6 Channel: 36.99 | 8 Channel: 37.86 | 8c: 37.41 | AMD 8c: 37.33 | AMD: 37.25
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
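As a rough illustration of the integrated benchmark being exercised (assuming the p7zip 7z binary is installed; the test profile's exact invocation may differ):

  # Run 7-Zip's built-in compression/decompression benchmark
  7z b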

7-Zip Compression 21.06 - Test: Compression Rating (MIPS, more is better)
6 Channel: 70649 | 8 Channel: 72415 | 8c: 74893 | AMD 8c: 75534 | AMD: 74081

7-Zip Compression 21.06 - Test: Decompression Rating (MIPS, more is better)
6 Channel: 59754 | 8 Channel: 59688 | 8c: 60174 | AMD 8c: 60009 | AMD: 60113

1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MT - Gridding (Million Grid Points Per Second, more is better)
6 Channel: 1400.89 | 8 Channel: 2866.82 | 8c: 2864.89 | AMD 8c: 2864.89 | AMD: 2861.04

ASKAP 1.0 - Test: tConvolve MT - Degridding (Million Grid Points Per Second, more is better)
6 Channel: 2283.01 | 8 Channel: 3123.24 | 8c: 3148.63 | AMD 8c: 3146.30 | AMD: 3143.98

ASKAP 1.0 - Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, more is better)
6 Channel: 1358.45 | 8 Channel: 2585.01 | 8c: 2560.15 | AMD 8c: 2560.15 | AMD: 2535.77

ASKAP 1.0 - Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, more is better)
6 Channel: 2182.43 | 8 Channel: 3207.90 | 8c: 3247.02 | AMD 8c: 3247.02 | AMD: 3247.02

ASKAP 1.0 - Test: Hogbom Clean OpenMP (Iterations Per Second, more is better)
6 Channel: 231.48 | 8 Channel: 313.48 | 8c: 309.60 | AMD 8c: 314.47 | AMD: 313.48

All ASKAP results: 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
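A minimal sketch of the kind of CPU-only mdrun invocation such a water benchmark performs, assuming a prepared .tpr input; the filename, step count, and thread count here are illustrative, not the test profile's exact parameters:

  # Short CPU run of a prepared GROMACS system (topol.tpr is a hypothetical input)
  gmx mdrun -s topol.tpr -nsteps 1000 -ntomp 16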

GROMACS 2021.2 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better)
6 Channel: 1.157 | 8 Channel: 1.230 | 8c: 1.252 | AMD 8c: 1.253 | AMD: 1.257
1. (CXX) g++ options: -O3 -pthread -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.13.02 - Test: CPU Cache (Bogo Ops/s, more is better)
6 Channel: 61.76 | 8 Channel: 68.79 | 8c: 61.70 | AMD 8c: 55.90 | AMD: 49.93
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lsctp -lz -ldl -pthread -lc -latomic

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: mobilenetV3 (ms, fewer is better)
6 Channel: 5.292 (min 5.25 / max 7.72) | 8 Channel: 5.118 (min 4.68 / max 16.78) | 8c: 5.059 (min 5.02 / max 5.35) | AMD 8c: 5.104 (min 5.06 / max 8.5) | AMD: 5.070 (min 5.03 / max 5.2)

Mobile Neural Network 1.2 - Model: squeezenetv1.1 (ms, fewer is better)
6 Channel: 8.092 (min 8.01 / max 11.62) | 8 Channel: 6.784 (min 6.73 / max 17.72) | 8c: 6.671 (min 6.63 / max 19.13) | AMD 8c: 6.680 (min 6.65 / max 6.87) | AMD: 6.811 (min 6.77 / max 10.43)

Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, fewer is better)
6 Channel: 38.70 (min 36.54 / max 71.95) | 8 Channel: 29.07 (min 28.63 / max 29.77) | 8c: 28.81 (min 28.35 / max 29.29) | AMD 8c: 28.80 (min 28.3 / max 31.52) | AMD: 29.49 (min 28.46 / max 41.67)

Mobile Neural Network 1.2 - Model: MobileNetV2_224 (ms, fewer is better)
6 Channel: 8.494 (min 8.44 / max 8.81) | 8 Channel: 5.577 (min 5.48 / max 8.68) | 8c: 5.474 (min 5.42 / max 9.15) | AMD 8c: 5.517 (min 5.44 / max 18.62) | AMD: 5.841 (min 5.76 / max 8.12)

Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms, fewer is better)
6 Channel: 8.185 (min 8.03 / max 9.21) | 8 Channel: 4.459 (min 4.41 / max 4.87) | 8c: 4.469 (min 4.42 / max 4.83) | AMD 8c: 4.470 (min 4.42 / max 4.9) | AMD: 4.593 (min 4.55 / max 4.85)

Mobile Neural Network 1.2 - Model: inception-v3 (ms, fewer is better)
6 Channel: 47.72 (min 45.39 / max 88) | 8 Channel: 42.75 (min 42.17 / max 53.54) | 8c: 42.21 (min 41.55 / max 55.62) | AMD 8c: 43.22 (min 42.25 / max 55.66) | AMD: 42.31 (min 41.61 / max 59.31)

All Mobile Neural Network results: 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: mobilenet (ms, fewer is better)
6 Channel: 26.42 (min 26.16 / max 26.69) | 8 Channel: 20.47 (min 20.15 / max 21.29) | 8c: 20.85 (min 20.42 / max 21.54) | AMD 8c: 20.46 (min 20.13 / max 21.14) | AMD: 21.36 (min 20.4 / max 22.31)

NCNN 20210720 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
6 Channel: 7.21 (min 7.06 / max 7.41) | 8 Channel: 6.82 (min 6.74 / max 6.94) | 8c: 6.74 (min 6.66 / max 6.89) | AMD 8c: 6.72 (min 6.63 / max 6.9) | AMD: 6.75 (min 6.48 / max 6.87)

NCNN 20210720 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
6 Channel: 8.43 (min 8.34 / max 8.67) | 8 Channel: 8.17 (min 8.1 / max 8.49) | 8c: 8.09 (min 8.03 / max 8.2) | AMD 8c: 8.08 (min 7.97 / max 8.54) | AMD: 8.13 (min 8.05 / max 8.28)

NCNN 20210720 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
6 Channel: 11.20 (min 11.1 / max 11.36) | 8 Channel: 12.87 (min 10.33 / max 101.47) | 8c: 10.31 (min 10.24 / max 10.4) | AMD 8c: 10.30 (min 10.22 / max 10.71) | AMD: 10.27 (min 10.18 / max 10.37)

NCNN 20210720 - Target: CPU - Model: blazeface (ms, fewer is better)
6 Channel: 3.87 (min 3.69 / max 4.32) | 8 Channel: 3.88 (min 3.73 / max 7.29) | 8c: 3.71 (min 3.61 / max 3.79) | AMD 8c: 3.72 (min 3.62 / max 3.82) | AMD: 3.69 (min 3.58 / max 3.76)

NCNN 20210720 - Target: CPU - Model: googlenet (ms, fewer is better)
6 Channel: 22.61 (min 22.34 / max 23.36) | 8 Channel: 19.72 (min 19.51 / max 20.08) | 8c: 18.90 (min 18.69 / max 22.18) | AMD 8c: 18.94 (min 18.74 / max 30.02) | AMD: 19.30 (min 19.16 / max 20.03)

NCNN 20210720 - Target: CPU - Model: vgg16 (ms, fewer is better)
6 Channel: 67.64 (min 66.99 / max 68.67) | 8 Channel: 39.21 (min 38.62 / max 48.96) | 8c: 39.20 (min 38.6 / max 50.04) | AMD 8c: 39.18 (min 38.75 / max 39.61) | AMD: 39.08 (min 38.76 / max 40.42)

NCNN 20210720 - Target: CPU - Model: resnet18 (ms, fewer is better)
6 Channel: 22.26 (min 21.79 / max 23.91) | 8 Channel: 14.78 (min 14.56 / max 16.14) | 8c: 14.63 (min 14.43 / max 15.12) | AMD 8c: 14.61 (min 14.39 / max 15.07) | AMD: 14.74 (min 14.46 / max 15.26)

NCNN 20210720 - Target: CPU - Model: alexnet (ms, fewer is better)
6 Channel: 14.07 (min 13.9 / max 14.81) | 8 Channel: 9.58 (min 9.3 / max 52.6) | 8c: 9.34 (min 9.26 / max 9.8) | AMD 8c: 9.32 (min 9.22 / max 10.6) | AMD: 9.38 (min 9.28 / max 9.81)

NCNN 20210720 - Target: CPU - Model: resnet50 (ms, fewer is better)
6 Channel: 35.41 (min 32.26 / max 160.12) | 8 Channel: 25.13 (min 24.89 / max 26.13) | 8c: 24.64 (min 24.35 / max 25.4) | AMD 8c: 24.65 (min 24.38 / max 25.67) | AMD: 24.85 (min 24.61 / max 28.04)

NCNN 20210720 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
6 Channel: 43.56 (min 39.7 / max 118.6) | 8 Channel: 32.63 (min 30.92 / max 114.96) | 8c: 31.34 (min 30.95 / max 32.51) | AMD 8c: 31.31 (min 30.92 / max 34.17) | AMD: 34.64 (min 33.75 / max 38.82)

NCNN 20210720 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
6 Channel: 33.53 (min 31.54 / max 93.95) | 8 Channel: 28.59 (min 27.15 / max 29.17) | 8c: 28.23 (min 27.93 / max 28.74) | AMD 8c: 28.31 (min 28.07 / max 28.83) | AMD: 28.45 (min 27.87 / max 88.18)

NCNN 20210720 - Target: CPU - Model: regnety_400m (ms, fewer is better)
6 Channel: 24.62 (min 23.92 / max 25.19) | 8 Channel: 24.69 (min 23.98 / max 25.32) | 8c: 24.04 (min 23.06 / max 24.6) | AMD 8c: 23.94 (min 23.04 / max 26.46) | AMD: 24.35 (min 23.49 / max 24.74)

All NCNN results: 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported. Learn more via the OpenBenchmarking.org test page.

Blender 3.0 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better)
6 Channel: 150.85 | 8 Channel: 150.22 | 8c: 150.26 | AMD 8c: 150.48 | AMD: 150.31

Blender 3.0 - Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better)
6 Channel: 395.46 | 8 Channel: 401.58 | 8c: 400.33 | AMD 8c: 401.93 | AMD: 401.88

Blender 3.0 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better)
6 Channel: 196.16 | 8 Channel: 198.59 | 8c: 197.36 | AMD 8c: 197.12 | AMD: 197.45

Blender 3.0 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better)
6 Channel: 1649.30 | 8 Channel: 1652.87 | 8c: 1644.19 | AMD 8c: 1648.10 | AMD: 1650.97

Blender 3.0 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better)
6 Channel: 478.75 | 8 Channel: 485.88 | 8c: 485.95 | AMD 8c: 486.37 | AMD: 486.42

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0 - Test: Reads (Op/s, more is better)
6 Channel: 74577 | 8 Channel: 66602 | 8c: 75029 | AMD 8c: 72809 | AMD: 70859

Apache Cassandra 4.0 - Test: Writes (Op/s, more is better)
6 Channel: 75347 | 8 Channel: 75159 | 8c: 75478 | AMD 8c: 75982 | AMD: 75713

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4 - Input: Spaceship (FPS, more is better)
6 Channel: 2.6 | 8 Channel: 2.6 | 8c: 2.7 | AMD 8c: 2.6 | AMD: 2.7

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.10 - Model: yolov4 - Device: CPU (Inferences Per Minute, more is better)
6 Channel: 149 | 8 Channel: 188 | 8c: 209 | AMD 8c: 172 | AMD: 210

ONNX Runtime 1.10 - Model: fcn-resnet101-11 - Device: CPU (Inferences Per Minute, more is better)
6 Channel: 27 | 8 Channel: 33 | 8c: 39 | AMD 8c: 30 | AMD: 29

ONNX Runtime 1.10 - Model: super-resolution-10 - Device: CPU (Inferences Per Minute, more is better)
6 Channel: 4079 | 8 Channel: 2954 | 8c: 2931 | AMD 8c: 2955 | AMD: 2912

All ONNX Runtime results: 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -pthread -lpthread

Apache HTTP Server

This is a test of the Apache HTTPD web server. The test profile makes use of the Golang "Bombardier" program to generate HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
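A rough sketch of a Bombardier-style load-generation command against a local Apache instance; the URL, connection count, and duration are illustrative, not the test profile's exact parameters:

  # Drive the server with 100 concurrent connections for 30 seconds (illustrative values)
  bombardier -c 100 -d 30s http://localhost:8080/test.html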

Apache HTTP Server 2.4.48 - Concurrent Requests: 1 (Requests Per Second, more is better)
6 Channel: 7798.12 | 8 Channel: 7409.13 | 8c: 7238.12 | AMD 8c: 7380.07 | AMD: 7140.81

Apache HTTP Server 2.4.48 - Concurrent Requests: 20 (Requests Per Second, more is better)
6 Channel: 50548.49 | 8 Channel: 49925.66 | 8c: 51482.51 | AMD 8c: 48980.05

Apache HTTP Server 2.4.48 - Concurrent Requests: 100 (Requests Per Second, more is better)
6 Channel: 65225.25 | 8 Channel: 69156.06 | 8c: 70509.72 | AMD 8c: 70487.23

Apache HTTP Server 2.4.48 - Concurrent Requests: 200 (Requests Per Second, more is better)
6 Channel: 70589.02 | 8 Channel: 78219.98 | 8c: 79015.83 | AMD 8c: 79944.07

Apache HTTP Server 2.4.48 - Concurrent Requests: 500 (Requests Per Second, more is better)
6 Channel: 68373.75 | 8 Channel: 72991.58 | 8c: 73170.81 | AMD 8c: 73891.14

Apache HTTP Server 2.4.48 - Concurrent Requests: 1000 (Requests Per Second, more is better)
6 Channel: 66317.82 | 8 Channel: 71121.03 | 8c: 71951.17 | AMD 8c: 71581.27

All Apache HTTP Server results: 1. (CC) gcc options: -shared -fPIC -O2 -pthread

RAR Compression

This test measures the time needed to archive/compress two copies of the Linux 5.14 kernel source tree using RAR/WinRAR compression. Learn more via the OpenBenchmarking.org test page.
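A sketch of the kind of archiving command being timed, assuming the rar command-line tool and two checked-out copies of the kernel source tree (the paths are illustrative):

  # Archive two copies of the Linux 5.14 source tree into a RAR file (paths illustrative)
  rar a linux-src.rar linux-5.14/ linux-5.14-copy/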

RAR Compression 6.0.2 - Linux Source Tree Archiving To RAR (Seconds, fewer is better)
6 Channel: 92.10 | 8 Channel: 86.94 | 8c: 91.04 | AMD 8c: 91.14

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, more is better)
6 Channel: 61658300 | 8 Channel: 110850000 | 8c: 107941800 | AMD 8c: 113397400
1. (CXX) g++ options: -O3 -fopenmp

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.2 - VGR Performance Metric (more is better)
6 Channel: 129031 | 8 Channel: 126087 | 8c: 126361 | AMD 8c: 128103
1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -pthread -ldl -lm

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5 - Mode: CPU (vsamples, more is better)
6 Channel: 10394 | 8 Channel: 10402 | 8c: 10498 | AMD 8c: 9850

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.5.4 - Test: Object Detection (ms, fewer is better)
6 Channel: 141380 | 8 Channel: 99200 | 8c: 85210 | AMD 8c: 92146
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.
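A minimal sketch of a from-source Node.js build of the kind being timed here; the version directory is illustrative:

  # Configure and compile Node.js from source using all available cores
  cd node-v17.3.0
  ./configure
  make -j"$(nproc)"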

Timed Node.js Compilation 17.3 - Time To Compile (Seconds, fewer is better)
6 Channel: 511.61 | 8 Channel: 508.13 | 8c: 504.85 | AMD 8c: 506.76

OpenJPEG

OpenJPEG is an open-source JPEG 2000 codec written in the C programming language. The default input for this test profile is the NASA/JPL-Caltech/MSSS Curiosity panorama 717MB TIFF image file converting to JPEG2000 format. Learn more via the OpenBenchmarking.org test page.
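A sketch of the kind of JPEG 2000 encode being timed with OpenJPEG's opj_compress tool; the input filename is illustrative:

  # Convert a large TIFF to JPEG 2000 with OpenJPEG (input filename illustrative)
  opj_compress -i curiosity_panorama.tif -o curiosity_panorama.jp2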

OpenJPEG 2.4 - Encode: NASA Curiosity Panorama M34 (ms, fewer is better)
8 Channel: 78123 (SE +/- 214.11, N = 3; min 77811 / avg 78123 / max 78533) | 8c: 78291 | AMD 8c: 76353
1. (CXX) g++ options: -rdynamic

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, fewer is better)
8c: 11.64 (min 11.59 / max 12.1) | AMD 8c: 11.82 (min 11.75 / max 14.09) | AMD: 11.55 (min 11.44 / max 12.11)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
8c: 7.87 (min 7.76 / max 8.01) | AMD 8c: 7.67 (min 7.13 / max 7.89) | AMD: 7.69 (min 7.09 / max 49.13)

NCNN 20210720 - Target: CPU - Model: mnasnet (ms, fewer is better)
8c: 6.90 (min 6.81 / max 7.18) | AMD 8c: 6.90 (min 6.83 / max 7.12) | AMD: 7.21 (min 7.12 / max 7.29)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread