2970WX July

AMD Ryzen Threadripper 2970WX 24-Core testing with a Gigabyte X399 AORUS Gaming 7 (F12h BIOS) and Sapphire AMD Radeon RX 550 640SP / 560/560X 4GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command:

    phoronix-test-suite benchmark 2108019-IB-2970WXJUL61
The tests in this comparison fall under the following OpenBenchmarking.org categories: Creator Workloads (2 tests), Database Test Suite (2 tests), HPC - High Performance Computing (3 tests), Java Tests (2 tests), Machine Learning (3 tests), Multi-Core (3 tests), Renderers (2 tests), Server (2 tests).

Run Management

Run Identifier | Date           | Test Duration
1              | July 31 2021   | 2 Hours, 22 Minutes
2              | July 31 2021   | 2 Hours, 19 Minutes
3              | July 31 2021   | 2 Hours, 19 Minutes
4              | August 01 2021 | 2 Hours, 18 Minutes
5              | August 01 2021 | 2 Hours, 20 Minutes



System Details (identical for runs 1-5 unless noted)

Processor: AMD Ryzen Threadripper 2970WX 24-Core @ 3.00GHz (24 Cores / 48 Threads)
Motherboard: Gigabyte X399 AORUS Gaming 7 (F12h BIOS)
Chipset: AMD 17h
Memory: 16GB
Disk: 120GB Corsair Force MP500
Graphics: Sapphire AMD Radeon RX 550 640SP / 560/560X 4GB (1300/1750MHz)
Audio: Realtek ALC1220
Monitor: VA2431
Network: Qualcomm Atheros Killer E2500 + 2 x QLogic cLOM8214 1/10GbE + Intel 8265 / 8275
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc6daily20200926-generic (x86_64) 20200925
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.9
OpenGL: 4.6 Mesa 20.2.6 (LLVM 11.0.0)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled); CPU Microcode: 0x800820d
Graphics Details: GLAMOR
Java Details: OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.20.04)
Python Details: Run 1: Python 3.8.5; Runs 2-5: Python 3.8.10
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; performance of runs 1-5 normalized to a 100%-116% scale): Natron, Apache Cassandra, Facebook RocksDB, Renaissance, Mobile Neural Network, NCNN, YafaRay, C-Blosc, TNN, GravityMark.

[Condensed side-by-side table of all test results for runs 1-5; the same data is broken out per test below.]

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
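The per-model figures below are per-inference latencies in milliseconds, each reported with a MIN/MAX range. As a rough illustration of how such numbers are typically produced (warm-up passes, a timed loop, then min/mean/max), here is a stdlib-only Python sketch; the matmul workload is a stand-in for a real forward pass, not NCNN's actual harness:

```python
# Illustrative latency harness (not NCNN's actual benchmark code):
# warm up, then time repeated calls and report min/mean/max in ms.
import time

def bench(run_once, warmup=3, iters=20):
    for _ in range(warmup):          # prime caches and lazy allocations
        run_once()
    samples_ms = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run_once()
        samples_ms.append((time.perf_counter() - t0) * 1000.0)
    return min(samples_ms), sum(samples_ms) / len(samples_ms), max(samples_ms)

def fake_forward_pass(n=64):
    # Stand-in workload: a small dense matrix multiply.
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

lo, avg, hi = bench(fake_forward_pass)
print(f"MIN: {lo:.2f} / AVG: {avg:.2f} / MAX: {hi:.2f} ms")
```

Warm-up iterations matter on this kind of hardware because frequency scaling and cold caches inflate the first few samples, which is also why the MIN/MAX spreads below can be wide.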

NCNN 20210720 - Target: CPU - Model: resnet18 (ms, fewer is better)
Run 1: 44.83 | Run 2: 56.06 | Run 3: 47.96 | Run 4: 51.53 | Run 5: 31.22

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1 - Test: Random Fill Sync (Op/s, more is better)
Run 1: 16365 | Run 2: 17701 | Run 3: 12380 | Run 4: 20282 | Run 5: 19465

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0 - Test: Reads (Op/s, more is better)
Run 1: 91943 | Run 2: 105525 | Run 3: 138409 | Run 4: 102666 | Run 5: 106201

NCNN

NCNN 20210720 - Target: CPU - Model: alexnet (ms, fewer is better)
Run 1: 24.56 | Run 2: 26.93 | Run 3: 22.99 | Run 4: 27.83 | Run 5: 32.93

NCNN 20210720 - Target: CPU - Model: resnet50 (ms, fewer is better)
Run 1: 68.54 | Run 2: 53.95 | Run 3: 50.10 | Run 4: 54.60 | Run 5: 67.92

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: squeezenetv1.1 (ms, fewer is better)
Run 1: 9.868 | Run 2: 8.954 | Run 3: 8.024 | Run 4: 10.299 | Run 5: 9.974

NCNN

NCNN 20210720 - Target: CPU - Model: vgg16 (ms, fewer is better)
Run 1: 97.15 | Run 2: 87.61 | Run 3: 80.57 | Run 4: 98.93 | Run 5: 83.42

Mobile Neural Network

Mobile Neural Network 1.2 - Model: mobilenetV3 (ms, fewer is better)
Run 1: 5.165 | Run 2: 4.324 | Run 3: 5.225 | Run 4: 4.359 | Run 5: 4.266

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4 - Input: Spaceship (FPS, more is better)
Run 1: 3.4 | Run 2: 3.2 | Run 3: 3.3 | Run 4: 3.2 | Run 5: 2.8

Renaissance

Renaissance is a suite of benchmarks designed to stress the JVM with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12 - Test: Apache Spark PageRank (ms, fewer is better)
Run 1: 5088.7 | Run 2: 5443.7 | Run 3: 5041.3 | Run 4: 4485.8 | Run 5: 5102.0

Facebook RocksDB

Facebook RocksDB 6.22.1 - Test: Random Fill (Op/s, more is better)
Run 1: 559506 | Run 2: 629001 | Run 3: 558560 | Run 4: 563861 | Run 5: 524031

NCNN

NCNN 20210720 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
Run 1: 42.91 | Run 2: 36.43 | Run 3: 38.83 | Run 4: 41.15 | Run 5: 41.54

Renaissance

Renaissance 0.12 - Test: Scala Dotty (ms, fewer is better)
Run 1: 1132.9 | Run 2: 1111.5 | Run 3: 1038.3 | Run 4: 968.0 | Run 5: 965.4

Apache Cassandra

Apache Cassandra 4.0 - Test: Mixed 1:3 (Op/s, more is better)
Run 1: 97181 | Run 2: 105086 | Run 3: 112837 | Run 4: 96271 | Run 5: 111614

Mobile Neural Network

Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, fewer is better)
Run 1: 12.28 | Run 2: 11.02 | Run 3: 11.36 | Run 4: 12.68 | Run 5: 12.81

NCNN

NCNN 20210720 - Target: CPU - Model: googlenet (ms, fewer is better)
Run 1: 35.87 | Run 2: 30.90 | Run 3: 31.94 | Run 4: 33.67 | Run 5: 31.59

NCNN 20210720 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
Run 1: 3.11 | Run 2: 3.10 | Run 3: 2.97 | Run 4: 3.03 | Run 5: 3.44

Mobile Neural Network

Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, fewer is better)
Run 1: 44.87 | Run 2: 50.67 | Run 3: 46.24 | Run 4: 46.48 | Run 5: 46.22

Facebook RocksDB

Facebook RocksDB 6.22.1 - Test: Sequential Fill (Op/s, more is better)
Run 1: 629253 | Run 2: 593316 | Run 3: 654095 | Run 4: 586679 | Run 5: 662215

NCNN

NCNN 20210720 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
Run 1: 52.00 | Run 2: 47.08 | Run 3: 49.51 | Run 4: 49.12 | Run 5: 48.67

Mobile Neural Network

Mobile Neural Network 1.2 - Model: MobileNetV2_224 (ms, fewer is better)
Run 1: 7.707 | Run 2: 8.046 | Run 3: 7.615 | Run 4: 7.590 | Run 5: 7.292

Apache Cassandra

Apache Cassandra 4.0 - Test: Mixed 1:1 (Op/s, more is better)
Run 1: 107795 | Run 2: 98990 | Run 3: 105033 | Run 4: 109065 | Run 5: 105682

NCNN

NCNN 20210720 - Target: CPU - Model: mnasnet (ms, fewer is better)
Run 1: 12.19 | Run 2: 12.86 | Run 3: 11.84 | Run 4: 12.29 | Run 5: 11.71

NCNN 20210720 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
Run 1: 14.00 | Run 2: 12.78 | Run 3: 13.56 | Run 4: 13.81 | Run 5: 13.53

NCNN 20210720 - Target: CPU - Model: blazeface (ms, fewer is better)
Run 1: 5.79 | Run 2: 5.33 | Run 3: 5.58 | Run 4: 5.61 | Run 5: 5.53

NCNN 20210720 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
Run 1: 13.01 | Run 2: 13.65 | Run 3: 12.74 | Run 4: 12.59 | Run 5: 13.56

Renaissance

Renaissance 0.12 - Test: Savina Reactors.IO (ms, fewer is better)
Run 1: 12292.6 | Run 2: 12326.5 | Run 3: 11693.4 | Run 4: 12667.5 | Run 5: 11704.2

NCNN

NCNN 20210720 - Target: CPU - Model: mobilenet (ms, fewer is better)
Run 1: 29.59 | Run 2: 31.08 | Run 3: 28.71 | Run 4: 28.82 | Run 5: 28.75

Mobile Neural Network

Mobile Neural Network 1.2 - Model: inception-v3 (ms, fewer is better)
Run 1: 59.53 | Run 2: 61.55 | Run 3: 59.60 | Run 4: 59.25 | Run 5: 57.16

NCNN

NCNN 20210720 - Target: CPU - Model: regnety_400m (ms, fewer is better)
Run 1: 38.70 | Run 2: 36.84 | Run 3: 36.10 | Run 4: 36.25 | Run 5: 36.55

Renaissance

Renaissance 0.12 - Test: Apache Spark ALS (ms, fewer is better)
Run 1: 2359.6 | Run 2: 2279.6 | Run 3: 2305.2 | Run 4: 2202.1 | Run 5: 2220.6

NCNN

NCNN 20210720 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better)
Run 1: 13.47 | Run 2: 13.18 | Run 3: 14.09 | Run 4: 13.19 | Run 5: 13.25

NCNN 20210720 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
Run 1: 17.03 | Run 2: 16.06 | Run 3: 17.12 | Run 4: 16.68 | Run 5: 16.52

Renaissance

Renaissance 0.12 - Test: Apache Spark Bayes (ms, fewer is better)
Run 1: 1209.1 | Run 2: 1184.2 | Run 3: 1180.9 | Run 4: 1162.1 | Run 5: 1231.9

Facebook RocksDB

Facebook RocksDB 6.22.1 - Test: Read While Writing (Op/s, more is better)
Run 1: 3684442 | Run 2: 3795875 | Run 3: 3593614 | Run 4: 3656007 | Run 5: 3634278

Apache Cassandra

Apache Cassandra 4.0 - Test: Writes (Op/s, more is better)
Run 1: 137190 | Run 2: 130153 | Run 3: 131113 | Run 4: 131846 | Run 5: 130920

Renaissance

Renaissance 0.12 - Test: In-Memory Database Shootout (ms, fewer is better)
Run 1: 6178.5 | Run 2: 6439.5 | Run 3: 6178.3 | Run 4: 6142.0 | Run 5: 6457.2

Renaissance 0.12 - Test: Finagle HTTP Requests (ms, fewer is better)
Run 1: 4099.4 | Run 2: 3906.7 | Run 3: 4055.7 | Run 4: 4040.6 | Run 5: 4006.9

Facebook RocksDB

Facebook RocksDB 6.22.1 - Test: Random Read (Op/s, more is better)
Run 1: 105202520 | Run 2: 106197835 | Run 3: 107926092 | Run 4: 110355654 | Run 5: 108371044

Facebook RocksDB 6.22.1 - Test: Update Random (Op/s, more is better)
Run 1: 516006 | Run 2: 518649 | Run 3: 497709 | Run 4: 498715 | Run 5: 515847

Facebook RocksDB 6.22.1 - Test: Read Random Write Random (Op/s, more is better)
Run 1: 1929291 | Run 2: 2007692 | Run 3: 1985772 | Run 4: 1986790 | Run 5: 1986640

NCNN

NCNN 20210720 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
Run 1: 13.38 | Run 2: 13.53 | Run 3: 13.02 | Run 4: 13.28 | Run 5: 13.50

Mobile Neural Network

Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms, fewer is better)
Run 1: 5.114 | Run 2: 4.948 | Run 3: 5.109 | Run 4: 5.011 | Run 5: 4.996

NCNN

NCNN 20210720 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
Run 1: 7.35 | Run 2: 7.43 | Run 3: 7.59 | Run 4: 7.39 | Run 5: 7.35

Renaissance

Renaissance 0.12 - Test: Random Forest (ms, fewer is better)
Run 1: 920.0 | Run 2: 892.7 | Run 3: 902.9 | Run 4: 911.1 | Run 5: 891.3

Renaissance 0.12 - Test: Genetic Algorithm Using Jenetics + Futures (ms, fewer is better)
Run 1: 2665.9 | Run 2: 2700.4 | Run 3: 2679.7 | Run 4: 2620.0 | Run 5: 2660.0

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.5.1 - Total Time For Sample Scene (Seconds, fewer is better)
Run 1: 96.09 | Run 2: 98.40 | Run 3: 98.83 | Run 4: 98.45 | Run 5: 96.47

Renaissance

Renaissance 0.12 - Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better)
Run 1: 21703.7 | Run 2: 21813.4 | Run 3: 21476.3 | Run 4: 22068.2 | Run 5: 21886.5

NCNN

NCNN 20210720 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
Run 1: 5.56 | Run 2: 5.60 | Run 3: 5.61 | Run 4: 5.50 | Run 5: 5.46

NCNN 20210720 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
Run 1: 7.05 | Run 2: 7.11 | Run 3: 6.99 | Run 4: 6.92 | Run 5: 7.06

NCNN 20210720 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
Run 1: 11.31 | Run 2: 11.34 | Run 3: 11.58 | Run 4: 11.38 | Run 5: 11.28

NCNN 20210720 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
Run 1: 10.19 | Run 2: 10.37 | Run 3: 10.12 | Run 4: 10.34 | Run 5: 10.15

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, fewer is better)
Run 1: 72.12 | Run 2: 72.51 | Run 3: 71.38 | Run 4: 71.22 | Run 5: 72.72

NCNN

NCNN 20210720 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better)
Run 1: 13.12 | Run 2: 12.92 | Run 3: 13.17 | Run 4: 12.98 | Run 5: 12.97

Renaissance

Renaissance 0.12 - Test: ALS Movie Lens (ms, fewer is better)
Run 1: 9836.8 | Run 2: 9878.2 | Run 3: 10001.3 | Run 4: 9814.8 | Run 5: 9857.0

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0 - Compressor: blosclz (MB/s, more is better)
Run 1: 15950.7 | Run 2: 15733.0 | Run 3: 15895.6 | Run 4: 16025.3 | Run 5: 15930.8

NCNN

NCNN 20210720 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
Run 1: 15.48 | Run 2: 15.27 | Run 3: 15.26 | Run 4: 15.21 | Run 5: 15.30

TNN

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
Run 1: 256.36 | Run 2: 256.49 | Run 3: 256.32 | Run 4: 256.30 | Run 5: 260.29

NCNN

NCNN 20210720 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better)
Run 1: 5.85 | Run 2: 5.83 | Run 3: 5.91 | Run 4: 5.88 | Run 5: 5.86

NCNN 20210720 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better)
Run 1: 10.29 | Run 2: 10.43 | Run 3: 10.39 | Run 4: 10.33 | Run 5: 10.38

NCNN 20210720 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better)
Run 1: 20.00 | Run 2: 20.24 | Run 3: 20.00 | Run 4: 19.98 | Run 5: 20.01

NCNN 20210720 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
Run 1: 4.97 | Run 2: 4.97 | Run 3: 4.97 | Run 4: 4.93 | Run 5: 4.99

NCNN 20210720 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better)
Run 1: 15.32 | Run 2: 15.16 | Run 3: 15.32 | Run 4: 15.20 | Run 5: 15.23

NCNN 20210720 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better)
Run 1: 28.93 | Run 2: 29.10 | Run 3: 28.80 | Run 4: 29.06 | Run 5: 28.90

TNN

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better)
Run 1: 306.27 | Run 2: 305.28 | Run 3: 306.19 | Run 4: 305.57 | Run 5: 304.27

TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better)
Run 1: 3034.70 | Run 2: 3023.79 | Run 3: 3026.06 | Run 4: 3035.01 | Run 5: 3035.69

GravityMark

GravityMark is a cross-API, cross-platform GPU-accelerated benchmark developed by Tellusim. GravityMark aims to exploit the performance of modern GPUs by rendering hundreds of thousands of objects in real time, all using GPU acceleration. Learn more via the OpenBenchmarking.org test page.

GravityMark 1.2 - Resolution: 1920 x 1080 - Renderer: Vulkan (Frames Per Second, more is better)
Run 1: 33.2 | Run 2: 33.2 | Run 3: 33.2 | Run 4: 33.2 | Run 5: 33.2