2700 august

AMD Ryzen 7 2700 Eight-Core testing with a Gigabyte AB350N-Gaming WIFI-CF (F20 BIOS) and HIS AMD Radeon HD 6450/7450/8450 / R5 230 OEM 1GB on Ubuntu 19.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2108316-TJ-2700AUGUS79
Tests in this result file by category:

- Timed Code Compilation: 2 tests
- C/C++ Compiler Tests: 3 tests
- CPU Massive: 5 tests
- Creator Workloads: 6 tests
- HPC - High Performance Computing: 5 tests
- Machine Learning: 4 tests
- Multi-Core: 7 tests
- Programmer / Developer System Benchmarks: 2 tests
- Python Tests: 2 tests
- Raytracing: 2 tests
- Renderers: 3 tests
- Server CPU Tests: 4 tests

Test Runs

Run  Date            Test Duration
1    August 30 2021  2 Hours, 39 Minutes
2    August 30 2021  9 Hours, 51 Minutes
3    August 31 2021  3 Hours, 14 Minutes
4    August 31 2021  2 Hours, 41 Minutes

Average test duration: 4 Hours, 36 Minutes
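The 4 Hours, 36 Minutes figure is the average of the four per-run test durations; a quick check of that arithmetic (run numbers and durations taken from this result file):

```python
# Per-run test durations from this result file, in minutes
durations_min = {
    1: 2 * 60 + 39,   # Run 1, August 30 2021
    2: 9 * 60 + 51,   # Run 2, August 30 2021
    3: 3 * 60 + 14,   # Run 3, August 31 2021
    4: 2 * 60 + 41,   # Run 4, August 31 2021
}
avg_minutes = sum(durations_min.values()) / len(durations_min)  # 276.25
hours, minutes = divmod(int(avg_minutes), 60)
print(f"{hours} Hours, {minutes} Minutes")  # 4 Hours, 36 Minutes
```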



2700 august - System Details (runs 1-4, identical configuration)

  Processor:         AMD Ryzen 7 2700 Eight-Core @ 3.20GHz (8 Cores / 16 Threads)
  Motherboard:       Gigabyte AB350N-Gaming WIFI-CF (F20 BIOS)
  Chipset:           AMD 17h
  Memory:            16GB
  Disk:              120GB ADATA SU700
  Graphics:          HIS AMD Radeon HD 6450/7450/8450 / R5 230 OEM 1GB
  Audio:             AMD Caicos HDMI Audio
  Monitor:           DELL S2409W
  Network:           Realtek RTL8111/8168/8411 + Intel 3165
  OS:                Ubuntu 19.10
  Kernel:            5.9.0-050900rc7daily20201004-generic (x86_64) 20201003
  Desktop:           GNOME Shell 3.34.1
  Display Server:    X Server 1.20.5
  OpenGL:            3.3 Mesa 19.2.8 (LLVM 9.0.0)
  Compiler:          GCC 9.2.1 20191008
  File-System:       ext4
  Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled); CPU Microcode: 0x800820b
Java Details: OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-2ubuntu219.10)
Python Details: Python 2.7.17 + Python 3.7.5
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Result Overview chart (Phoronix Test Suite): relative performance of runs 1-4 across all tests, spanning roughly 100% to 112%, with the Renaissance sub-tests showing the largest run-to-run spread. The individual test results follow below.
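Overview charts like this aggregate tests with different units and magnitudes, which is why the result viewer's overall summaries use a geometric mean rather than an arithmetic one: each test's relative change is weighted equally instead of letting the largest-valued test dominate. A minimal sketch (the scores below are illustrative, not taken from this file):

```python
import math

def geometric_mean(values):
    # nth root of the product, computed in log space for numerical stability
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(geometric_mean([2.0, 8.0]))  # 4.0

# Illustrative relative scores (1.00 = baseline run) for three hypothetical tests
print(round(geometric_mean([1.00, 1.06, 1.12]), 3))
```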


Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.8 - Input: AUSURF112 (Seconds, Fewer Is Better)
  Run 1: 1358.68
  Run 2: 1357.09  (SE +/- 5.66, N = 3; samples 1347.14 to 1366.75)
  Run 3: 1361.04  (SE +/- 5.94, N = 3; samples 1350.77 to 1371.36)
  Run 4: 1355.46
  1. (F9X) gfortran options: -ldevXlib -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
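The "SE +/-" figures attached to multi-sample runs throughout this file are standard errors of the mean: the sample standard deviation divided by the square root of the sample count N. A minimal sketch with illustrative timings (not values from this file):

```python
import math

def standard_error(samples):
    """Standard error of the mean, using the sample (n - 1) standard deviation."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance / n)

print(standard_error([10.0, 12.0, 14.0]))  # 2/sqrt(3), about 1.1547
```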

Renaissance

Renaissance is a suite of benchmarks designed to test the JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12 - Test: Scala Dotty (ms, Fewer Is Better)
  Run 1: 1134.7  (MIN 851.79 / MAX 2126.23)
  Run 2: 1103.9  (SE +/- 15.97, N = 15; samples 1022.54 to 1172.92; MIN 820.44 / MAX 2541.02)
  Run 3: 1024.5  (SE +/- 8.47, N = 3; samples 1015.75 to 1041.48; MIN 826.33 / MAX 2287.3)
  Run 4: 1042.2  (MIN 842.98 / MAX 2237.87)

Renaissance 0.12 - Test: Random Forest (ms, Fewer Is Better)
  Run 1: 973.3  (MIN 889.42 / MAX 1158.35)
  Run 2: 958.1  (SE +/- 7.20, N = 3; samples 944.97 to 969.79; MIN 879.86 / MAX 1189.49)
  Run 3: 957.0  (SE +/- 3.48, N = 3; samples 952.41 to 963.81; MIN 857.47 / MAX 1207.37)
  Run 4: 974.2  (MIN 887.09 / MAX 1190.87)

Renaissance 0.12 - Test: ALS Movie Lens (ms, Fewer Is Better)
  Run 1: 10271.2  (MAX 11292.71)
  Run 2: 10283.2  (SE +/- 9.95, N = 3; samples 10268.99 to 10302.35; MIN 10268.99 / MAX 11345.94)
  Run 3: 10301.3  (SE +/- 41.79, N = 3; samples 10218.5 to 10352.62; MIN 10218.5 / MAX 11569.96)
  Run 4: 10395.1  (MAX 11540.63)

Renaissance 0.12 - Test: Apache Spark ALS (ms, Fewer Is Better)
  Run 1: 2795.6  (MIN 2681.41 / MAX 2996.2)
  Run 2: 2771.3  (SE +/- 16.68, N = 3; samples 2739.65 to 2796.26; MIN 2560.86 / MAX 3077.84)
  Run 3: 2789.3  (SE +/- 33.66, N = 3; samples 2736.53 to 2851.88; MIN 2579.81 / MAX 3193.59)
  Run 4: 2816.9  (MIN 2617.33 / MAX 3046.89)

Renaissance 0.12 - Test: Apache Spark Bayes (ms, Fewer Is Better)
  Run 1: 2108.1  (MIN 1569.46)
  Run 2: 2143.8  (SE +/- 16.65, N = 3; samples 2120.14 to 2175.94; MIN 1562.4 / MAX 3586.17)
  Run 3: 2223.5  (SE +/- 17.69, N = 3; samples 2196.99 to 2257.04; MIN 1610.95 / MAX 2982.19)
  Run 4: 2226.2  (MIN 1538.66 / MAX 10685.31)

Renaissance 0.12 - Test: Savina Reactors.IO (ms, Fewer Is Better)
  Run 1: 9913.6  (MIN 9913.57 / MAX 15583.01)
  Run 2: 9872.7  (SE +/- 146.25, N = 12; samples 9023.31 to 10814.17; MIN 9023.31 / MAX 15624.79)
  Run 3: 9864.5  (SE +/- 123.24, N = 5; samples 9577.17 to 10260.88; MIN 9577.17 / MAX 14858.63)
  Run 4: 10263.1  (MAX 15575.43)

Renaissance 0.12 - Test: Apache Spark PageRank (ms, Fewer Is Better)
  Run 1: 4902.1  (MIN 4490.93 / MAX 5076.83)
  Run 2: 4711.1  (SE +/- 56.31, N = 12; samples 4366.24 to 4956.55; MIN 3934.16 / MAX 5235.62)
  Run 3: 4738.2  (SE +/- 79.83, N = 12; samples 4262.62 to 5179.89; MIN 3868.09 / MAX 5249.23)
  Run 4: 4985.4  (MIN 4509.87 / MAX 5089.15)

Renaissance 0.12 - Test: Finagle HTTP Requests (ms, Fewer Is Better)
  Run 1: 3082.7  (MIN 2849.13 / MAX 3094.87)
  Run 2: 3115.7  (SE +/- 13.89, N = 3; samples 3087.93 to 3130.78; MIN 2849.84 / MAX 3181.27)
  Run 3: 3115.3  (SE +/- 7.79, N = 3; samples 3101.08 to 3127.92; MIN 2842.24 / MAX 3228.02)
  Run 4: 3102.9  (MIN 2873.39 / MAX 3119.64)

Renaissance 0.12 - Test: In-Memory Database Shootout (ms, Fewer Is Better)
  Run 1: 4651.7  (MIN 4050.47 / MAX 5449.92)
  Run 2: 5170.4  (SE +/- 54.68, N = 15; samples 4712.14 to 5460.16; MIN 3875.97 / MAX 7892.95)
  Run 3: 5064.0  (SE +/- 79.58, N = 15; samples 4489.68 to 5616.6; MIN 3897.41 / MAX 9465.37)
  Run 4: 4630.2  (MIN 4002.29 / MAX 5453.22)

Renaissance 0.12 - Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better)
  Run 1: 14253.1  (MIN 11211.62 / MAX 14253.14)
  Run 2: 14241.7  (SE +/- 28.57, N = 3; samples 14195.48 to 14293.9; MIN 11182.38 / MAX 14293.9)
  Run 3: 14386.6  (SE +/- 182.97, N = 5; samples 14115.25 to 15106.91; MIN 11130.48 / MAX 15106.91)
  Run 4: 14527.7  (MIN 11587.58)

Renaissance 0.12 - Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better)
  Run 1: 2961.2  (MIN 2918.73 / MAX 3008.48)
  Run 2: 2922.8  (SE +/- 15.18, N = 3; samples 2892.51 to 2939.05; MIN 2839.55 / MAX 2977.66)
  Run 3: 3007.9  (SE +/- 39.12, N = 3; samples 2936.8 to 3071.73; MIN 2883.99 / MAX 3102.87)
  Run 4: 3045.0  (MIN 3011.23 / MAX 3079.39)

dav1d

dav1d 0.9.1 - Video Input: Chimera 1080p (FPS, More Is Better)
  Run 1: 374.41  (MIN 283.67 / MAX 588.26)
  Run 2: 373.99  (SE +/- 0.60, N = 3; samples 372.8 to 374.7; MIN 279.97 / MAX 590.77)
  Run 4: 372.16  (MIN 283.06 / MAX 561.9)
  1. (CC) gcc options: -pthread -lm

dav1d 0.9.1 - Video Input: Summer Nature 4K (FPS, More Is Better)
  Run 1: 116.70  (MIN 103.05 / MAX 123.08)
  Run 2: 116.56  (SE +/- 0.08, N = 3; samples 116.44 to 116.71; MIN 102.19 / MAX 122.9)
  Run 4: 115.87  (MIN 101.07 / MAX 121.87)
  1. (CC) gcc options: -pthread -lm

dav1d 0.9.1 - Video Input: Summer Nature 1080p (FPS, More Is Better)
  Run 1: 339.46  (MIN 273.75 / MAX 364.24)
  Run 2: 338.14  (SE +/- 0.83, N = 3; samples 337.14 to 339.79; MIN 281.14 / MAX 365.6)
  Run 4: 337.46  (MIN 267.01 / MAX 363.69)
  1. (CC) gcc options: -pthread -lm

dav1d 0.9.1 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
  Run 1: 255.30  (MIN 187.72 / MAX 440.18)
  Run 2: 253.68  (SE +/- 0.21, N = 3; samples 253.25 to 253.93; MIN 187.17 / MAX 442.3)
  Run 4: 251.08  (MIN 186.85 / MAX 416.3)
  1. (CC) gcc options: -pthread -lm
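When rate metrics like these FPS results are averaged across different inputs, the harmonic mean (one of the overall-summary options the result viewer offers) is the appropriate choice: for equal-sized workloads it corresponds to total frames over total time, so a fast clip cannot mask a slow one. A minimal sketch with illustrative rates:

```python
def harmonic_mean(rates):
    # Reciprocal of the mean reciprocal; suited to rates such as FPS
    return len(rates) / sum(1.0 / r for r in rates)

# Illustrative: two equal-length clips decoded at 300 and 100 FPS
print(harmonic_mean([300.0, 100.0]))  # 150.0 (an arithmetic mean would say 200)
```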

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.0 - Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better)
  Run 1: 33  (MIN 3 / MAX 449)
  Run 2: 33  (MIN 3 / MAX 450)
  Run 4: 33  (MIN 3 / MAX 454)

OpenVKL 1.0 - Benchmark: vklBenchmark Scalar (Items / Sec, More Is Better)
  Run 1: 18  (MIN 1 / MAX 435)
  Run 2: 18  (MIN 1 / MAX 435)
  Run 4: 18  (MIN 1 / MAX 437)

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) compiler. Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 11.2.0 - Time To Compile (Seconds, Fewer Is Better)
  Run 1: 1611.27
  Run 2: 1668.16  (SE +/- 1.31, N = 3; samples 1665.54 to 1669.59)
  Run 4: 1656.89

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.14 - Time To Compile (Seconds, Fewer Is Better)
  Run 1: 128.75
  Run 2: 116.80  (SE +/- 1.11, N = 9; samples 114.74 to 125.49)
  Run 4: 134.25

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.5.1 - Total Time For Sample Scene (Seconds, Fewer Is Better)
  Run 1: 193.77
  Run 2: 194.63  (SE +/- 0.44, N = 3; samples 194.12 to 195.49)
  Run 4: 192.90
  1. (CXX) g++ options: -std=c++11 -pthread -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6 - Total Time (Seconds, Fewer Is Better)
  Run 1: 122.31
  Run 2: 122.65  (SE +/- 0.10, N = 3; samples 122.49 to 122.85)
  Run 4: 122.54
  1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109 - Test: VoiceMark_100 (Voices, More Is Better)
  Run 1: 585.34
  Run 2: 587.88  (SE +/- 0.91, N = 3; samples 586.74 to 589.67)
  Run 4: 582.90
  1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

KeyDB

This is a benchmark of KeyDB, a multi-threaded fork of the Redis server, conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.2.0 (Ops/sec, More Is Better)
  Run 1: 444494.04
  Run 2: 444193.61  (SE +/- 1668.14, N = 3; samples 441354.84 to 447130.94)
  Run 4: 449476.09
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: mobilenetV3 (ms, Fewer Is Better)
  Run 1: 3.489  (MIN 3.45 / MAX 3.83)
  Run 2: 3.630  (SE +/- 0.053, N = 4; samples 3.49 to 3.73; MIN 3.47 / MAX 6.22)
  Run 4: 3.803  (MIN 3.72 / MAX 4.94)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2 - Model: squeezenetv1.1 (ms, Fewer Is Better)
  Run 1: 7.000  (MIN 6.95 / MAX 8.07)
  Run 2: 7.295  (SE +/- 0.094, N = 4; samples 7.1 to 7.48; MIN 7.01 / MAX 67.92)
  Run 4: 7.364  (MIN 7.29 / MAX 7.67)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, Fewer Is Better)
  Run 1: 46.02  (MIN 45.61 / MAX 70.22)
  Run 2: 46.48  (SE +/- 0.22, N = 4; samples 46.03 to 47.07; MIN 45.62 / MAX 118.04)
  Run 4: 46.51  (MIN 46.18 / MAX 58.6)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  Run 1: 11.00  (MIN 10.94 / MAX 11.9)
  Run 2: 11.34  (SE +/- 0.11, N = 4; samples 11.16 to 11.6; MIN 11.05 / MAX 25.3)
  Run 4: 11.49  (MIN 11.38 / MAX 14.47)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2 - Model: MobileNetV2_224 (ms, Fewer Is Better)
  Run 1: 5.749  (MIN 5.72 / MAX 6.81)
  Run 2: 5.924  (SE +/- 0.040, N = 4; samples 5.85 to 6.01; MIN 5.8 / MAX 9.65)
  Run 4: 6.055  (MIN 6.01 / MAX 7.07)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
  Run 1: 5.048  (MIN 4.91 / MAX 54.4)
  Run 2: 5.050  (SE +/- 0.018, N = 4; samples 5.02 to 5.1; MIN 4.95 / MAX 27.22)
  Run 4: 5.136  (MIN 5.08 / MAX 19.52)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2 - Model: inception-v3 (ms, Fewer Is Better)
  Run 1: 64.90  (MIN 64.59 / MAX 87.6)
  Run 2: 66.11  (SE +/- 0.89, N = 4; samples 64.76 to 68.64; MIN 64.47 / MAX 179.62)
  Run 4: 66.03  (MIN 65.71 / MAX 134.73)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  Run 1: 28.01  (MIN 26.35 / MAX 106.56)
  Run 2: 27.53  (SE +/- 0.15, N = 3; samples 27.27 to 27.79; MIN 25.73 / MAX 29.94)
  Run 4: 27.68  (MIN 26.13 / MAX 75.15)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  Run 1: 8.92  (MIN 8.02 / MAX 9.24)
  Run 2: 8.80  (SE +/- 0.07, N = 3; samples 8.66 to 8.88; MIN 8.06 / MAX 65.16)
  Run 4: 9.42  (MIN 8.86 / MAX 72.29)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  Run 1: 7.57  (MIN 7.47 / MAX 8.69)
  Run 2: 7.57  (SE +/- 0.13, N = 3; samples 7.31 to 7.75; MIN 7.16 / MAX 62.55)
  Run 4: 7.63  (MIN 7.54 / MAX 9.23)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  Run 1: 6.81  (MIN 6.77 / MAX 6.93)
  Run 2: 6.77  (SE +/- 0.07, N = 3; samples 6.69 to 6.9; MIN 6.65 / MAX 7.82)
  Run 4: 6.77  (MIN 6.71 / MAX 6.97)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  Run 1: 8.01  (MIN 7.8 / MAX 24.25)
  Run 2: 7.73  (SE +/- 0.12, N = 3; samples 7.48 to 7.86; MIN 7.33 / MAX 83.07)
  Run 4: 7.58  (MIN 7.46 / MAX 9.68)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  Run 1: 11.76  (MIN 11.68 / MAX 12.81)
  Run 2: 11.38  (SE +/- 0.14, N = 3; samples 11.17 to 11.66; MIN 10.89 / MAX 61.71)
  Run 4: 11.43  (MIN 11.37 / MAX 11.88)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  Run 1: 3.31  (MIN 2.63 / MAX 133.12)
  Run 2: 2.65  (SE +/- 0.01, N = 3; samples 2.64 to 2.66; MIN 2.59 / MAX 7.17)
  Run 4: 3.28  (MIN 2.74 / MAX 117.12)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  Run 1: 23.23  (MIN 22.12 / MAX 89.59)
  Run 2: 21.52  (SE +/- 0.29, N = 3; samples 21.22 to 22.1; MIN 20.42 / MAX 74.82)
  Run 4: 22.88  (MIN 22.23 / MAX 25.12)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  Run 1: 73.29  (MIN 72.42 / MAX 76.11)
  Run 2: 74.49  (SE +/- 0.28, N = 3; samples 74.12 to 75.03; MIN 72.08 / MAX 125.89)
  Run 4: 73.03  (MIN 72.42 / MAX 78.52)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  Run 1: 19.55  (MIN 18.23 / MAX 86.21)
  Run 2: 18.95  (SE +/- 0.56, N = 3; samples 18.36 to 20.07; MIN 18.26 / MAX 37.54)
  Run 4: 20.14  (MIN 18.41 / MAX 40.62)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
  Run 1: 14.53  (MIN 14.41 / MAX 16.68)
  Run 2: 14.56  (SE +/- 0.01, N = 3; samples 14.54 to 14.58; MIN 14.42 / MAX 25.41)
  Run 4: 14.55  (MIN 14.48 / MAX 15.15)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  Run 1: 38.28  (MIN 38.03 / MAX 50.12)
  Run 2: 39.56  (SE +/- 0.50, N = 3; samples 38.57 to 40.07; MIN 37.96 / MAX 74.13)
  Run 4: 38.57  (MIN 38.16 / MAX 41.89)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  Run 1: 41.36  (MIN 37.18 / MAX 105.21)
  Run 2: 41.26  (SE +/- 0.24, N = 3; samples 40.86 to 41.7; MIN 35.15 / MAX 50.66)
  Run 4: 40.58  (MIN 36.73 / MAX 46.52)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
  Run 1: 31.14  (MIN 29.83 / MAX 38.85)
  Run 2: 31.12  (SE +/- 0.07, N = 3; samples 31.01 to 31.26; MIN 29.61 / MAX 90.13)
  Run 4: 31.17  (MIN 29.79 / MAX 47.18)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
  Run 1: 17.70  (MIN 17.04 / MAX 82.03)
  Run 2: 17.39  (SE +/- 0.05, N = 3; samples 17.33 to 17.49; MIN 16.94 / MAX 111.71)
  Run 4: 17.57  (MIN 17.28 / MAX 72.79)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, Fewer Is Better)
  Run 1: 3657.06  (MIN 3556.01 / MAX 3721.82)
  Run 2: 3651.62  (SE +/- 1.74, N = 3; samples 3648.63 to 3654.66; MIN 3534.09 / MAX 3733.22)
  Run 4: 3655.99  (MIN 3540.19 / MAX 4008.21)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  Run 1: 326.19  (MIN 305.03 / MAX 347.92)
  Run 2: 324.56  (SE +/- 0.81, N = 3; samples 323.4 to 326.12; MIN 301.98 / MAX 368.9)
  Run 4: 324.39  (MIN 310.19 / MAX 340)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
  Run 1: 74.02  (MIN 72.81 / MAX 76.67)
  Run 2: 73.62  (SE +/- 0.16, N = 3; samples 73.35 to 73.9; MIN 72.83 / MAX 74.91)
  Run 4: 73.30  (MIN 72.84 / MAX 74.36)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v1.142160120180240300SE +/- 0.13, N = 3270.88269.46269.12MIN: 270.32 / MAX: 274.21MIN: 268.7 / MAX: 273.37MIN: 268.69 / MAX: 269.541. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v1.142150100150200250Min: 269.29 / Avg: 269.46 / Max: 269.721. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
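For "fewer is better" timings like the TNN results, runs are compared by the ratio of a baseline time to each run's time. A small illustrative helper, using the DenseNet values above (run 1 is picked as the baseline here arbitrarily; the result file itself designates no baseline):

```python
def relative_perf(baseline_ms, result_ms):
    """Lower-is-better timings: values > 1.0 mean faster than the baseline."""
    return baseline_ms / result_ms

# DenseNet times (ms) for runs 4, 2 and 1, taken from the results above
times = {"4": 3655.99, "2": 3651.62, "1": 3657.06}
rel = {run: relative_perf(times["1"], t) for run, t in times.items()}
# All three ratios sit within about 0.15% of 1.0: the runs are effectively tied.
```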

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.8.2 - Device: OpenMP CPU (Inferences Per Minute, more is better)

  Model: yolov4
    Run 4: 205
    Run 2: 205 (SE +/- 0.17, N = 3; trials Min 204.5 / Avg 204.67 / Max 205)
    Run 1: 205

  Model: bertsquad-10
    Run 4: 311
    Run 2: 310 (SE +/- 0.58, N = 3; trials Min 308.5 / Avg 309.5 / Max 310.5)
    Run 1: 310

  Model: fcn-resnet101-11
    Run 4: 39
    Run 2: 39 (SE +/- 0.00, N = 3; trials Min 38.5 / Avg 38.5 / Max 38.5)
    Run 1: 39

  Model: shufflenet-v2-10
    Run 4: 13755
    Run 2: 13784 (SE +/- 48.97, N = 3; trials Min 13689 / Avg 13783.5 / Max 13853)
    Run 1: 13615

  Model: super-resolution-10
    Run 4: 2324
    Run 2: 2347 (SE +/- 3.98, N = 3; trials Min 2339.5 / Avg 2347.33 / Max 2352.5)
    Run 1: 2314

  Compiler: (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
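The "Inferences Per Minute" unit is throughput derived from per-inference wall-clock time. A minimal sketch of that conversion, not the actual test profile: the helper name, warm-up count, and the sleep stand-in for a real model call are illustrative assumptions.

```python
import time

def inferences_per_minute(run_inference, warmup=2, iterations=10):
    """Time repeated calls and report throughput as inferences per minute,
    the unit the ONNX Runtime results above are expressed in."""
    for _ in range(warmup):              # discard warm-up iterations
        run_inference()
    start = time.perf_counter()
    for _ in range(iterations):
        run_inference()
    elapsed = time.perf_counter() - start
    return 60.0 * iterations / elapsed

# Stand-in for a real model call (e.g. an onnxruntime session run);
# sleeping ~0.29 s per call should land near yolov4's ~205 IPM above.
ipm = inferences_per_minute(lambda: time.sleep(60.0 / 205))
```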

Natron

Natron is an open-source, cross-platform compositing application for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4 - Input: Spaceship (FPS, more is better)
  Run 4: 2.1
  Run 2: 2.1 (SE +/- 0.03, N = 3; trials Min 2.1 / Avg 2.13 / Max 2.2)
  Run 1: 2.1