2970WX July

AMD Ryzen Threadripper 2970WX 24-Core testing with a Gigabyte X399 AORUS Gaming 7 (F12h BIOS) and Sapphire AMD Radeon RX 550 640SP / 560/560X 4GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2108019-IB-2970WXJUL61
This result file includes tests belonging to the following suites: Creator Workloads (2 tests), Database Test Suite (2 tests), HPC - High Performance Computing (3 tests), Java Tests (2 tests), Machine Learning (3 tests), Multi-Core (3 tests), Renderers (2 tests), Server (2 tests).


Run Management

Run  Date            Test Duration
1    July 31 2021    2 Hours, 22 Minutes
2    July 31 2021    2 Hours, 19 Minutes
3    July 31 2021    2 Hours, 19 Minutes
4    August 01 2021  2 Hours, 18 Minutes
5    August 01 2021  2 Hours, 20 Minutes


System configuration (identical for runs 1 through 5):

Processor: AMD Ryzen Threadripper 2970WX 24-Core @ 3.00GHz (24 Cores / 48 Threads)
Motherboard: Gigabyte X399 AORUS Gaming 7 (F12h BIOS)
Chipset: AMD 17h
Memory: 16GB
Disk: 120GB Corsair Force MP500
Graphics: Sapphire AMD Radeon RX 550 640SP / 560/560X 4GB (1300/1750MHz)
Audio: Realtek ALC1220
Monitor: VA2431
Network: Qualcomm Atheros Killer E2500 + 2 x QLogic cLOM8214 1/10GbE + Intel 8265 / 8275
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc6daily20200926-generic (x86_64) 20200925
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.9
OpenGL: 4.6 Mesa 20.2.6 (LLVM 11.0.0)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x800820d
Graphics Details: GLAMOR
Java Details: OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.20.04)
Python Details: Run 1: Python 3.8.5; Runs 2-5: Python 3.8.10
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; relative performance of the five runs, spanning roughly 100% to 116% across tests): Natron, Apache Cassandra, Facebook RocksDB, Renaissance, Mobile Neural Network, NCNN, YafaRay, C-Blosc, TNN, GravityMark.
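The overview percentages above combine many unitless per-test ratios into one figure per run. A minimal sketch of how such an overview can be computed (the Phoronix Test Suite's exact method may differ): normalize each test against a baseline run so tests of different magnitudes weigh equally, then take the geometric mean of the ratios. The timings below are three of the NCNN CPU results from runs 1 and 5 in this file.

```python
from math import prod

def geomean(xs):
    """Geometric mean: the usual way to average unitless performance ratios."""
    return prod(xs) ** (1.0 / len(xs))

# NCNN CPU timings (ms, lower is better) from run 1 (baseline) and run 5.
baseline = {"resnet18": 44.83, "alexnet": 24.56, "vgg16": 97.15}
candidate = {"resnet18": 31.22, "alexnet": 32.93, "vgg16": 83.42}

# For lower-is-better metrics, relative performance is baseline / candidate,
# so values above 1.0 mean the candidate run was faster.
ratios = [baseline[t] / candidate[t] for t in baseline]
overview = geomean(ratios)
print(f"relative performance: {overview:.2%} of baseline")
```

Note that the geometric mean, unlike the arithmetic mean, gives the same overview regardless of which run is chosen as the baseline (the result simply inverts).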


NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: resnet18 (ms, fewer is better)
Run 1: 44.83 | Run 2: 56.06 | Run 3: 47.96 | Run 4: 51.53 | Run 5: 31.22
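The NCNN, MNN, and TNN figures in this file are average per-inference latencies, with MIN/MAX extremes recorded across iterations. A minimal sketch of that measurement shape (a generic timing harness with a stand-in workload, not NCNN's own benchmark code):

```python
import statistics
import time

def bench(fn, iters=20):
    """Time fn() repeatedly; return (mean, min, max) latency in ms."""
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        times.append((time.perf_counter() - t0) * 1e3)
    return statistics.mean(times), min(times), max(times)

# A cheap CPU-bound workload stands in for one model inference.
avg, lo, hi = bench(lambda: sum(range(10_000)))
print(f"avg {avg:.3f} ms  MIN {lo:.3f}  MAX {hi:.3f}")
```

Large MIN/MAX spreads relative to the average (visible in several NCNN results here) usually indicate scheduler or frequency-scaling noise rather than a property of the model itself.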

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1 - Test: Random Fill Sync (Op/s, more is better)
Run 1: 16365 | Run 2: 17701 | Run 3: 12380 | Run 4: 20282 | Run 5: 19465
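The "Random Fill Sync" workload writes key/value pairs at random keys and syncs each write to stable storage, which is why its throughput sits orders of magnitude below the non-sync fill tests later in this file. A rough sketch of the pattern, with the standard-library sqlite3 module standing in for RocksDB (db_bench's real implementation and tuning differ):

```python
import os
import random
import sqlite3
import tempfile
import time

def random_fill_sync(n_ops=200, value_size=100):
    """Write n_ops random keys, forcing each write to durable storage."""
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    con = sqlite3.connect(path, isolation_level=None)  # autocommit per statement
    con.execute("PRAGMA synchronous=FULL")             # fsync on every commit
    con.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v BLOB)")
    value = os.urandom(value_size)
    rng = random.Random(0)
    start = time.perf_counter()
    for _ in range(n_ops):
        con.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)",
                    (rng.randrange(1 << 30), value))
    elapsed = time.perf_counter() - start
    con.close()
    return n_ops / elapsed                             # ops per second

print(f"{random_fill_sync():.0f} ops/s")
```

Because every operation pays an fsync, this metric is dominated by the disk's sync latency, which helps explain the large run-to-run spread (12K-20K Op/s) on the Corsair Force MP500 used here.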

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0 - Test: Reads (Op/s, more is better)
Run 1: 91943 | Run 2: 105525 | Run 3: 138409 | Run 4: 102666 | Run 5: 106201

NCNN

NCNN 20210720 - Target: CPU - Model: alexnet (ms, fewer is better)
Run 1: 24.56 | Run 2: 26.93 | Run 3: 22.99 | Run 4: 27.83 | Run 5: 32.93

NCNN 20210720 - Target: CPU - Model: resnet50 (ms, fewer is better)
Run 1: 68.54 | Run 2: 53.95 | Run 3: 50.10 | Run 4: 54.60 | Run 5: 67.92

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: squeezenetv1.1 (ms, fewer is better)
Run 1: 9.868 | Run 2: 8.954 | Run 3: 8.024 | Run 4: 10.299 | Run 5: 9.974

NCNN

NCNN 20210720 - Target: CPU - Model: vgg16 (ms, fewer is better)
Run 1: 97.15 | Run 2: 87.61 | Run 3: 80.57 | Run 4: 98.93 | Run 5: 83.42

Mobile Neural Network

Mobile Neural Network 1.2 - Model: mobilenetV3 (ms, fewer is better)
Run 1: 5.165 | Run 2: 4.324 | Run 3: 5.225 | Run 4: 4.359 | Run 5: 4.266

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4 - Input: Spaceship (FPS, more is better)
Run 1: 3.4 | Run 2: 3.2 | Run 3: 3.3 | Run 4: 3.2 | Run 5: 2.8

Renaissance

Renaissance is a suite of benchmarks designed to test the Java virtual machine, covering workloads from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12 - Test: Apache Spark PageRank (ms, fewer is better)
Run 1: 5088.7 | Run 2: 5443.7 | Run 3: 5041.3 | Run 4: 4485.8 | Run 5: 5102.0

Facebook RocksDB

Facebook RocksDB 6.22.1 - Test: Random Fill (Op/s, more is better)
Run 1: 559506 | Run 2: 629001 | Run 3: 558560 | Run 4: 563861 | Run 5: 524031

NCNN

NCNN 20210720 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
Run 1: 42.91 | Run 2: 36.43 | Run 3: 38.83 | Run 4: 41.15 | Run 5: 41.54

Renaissance

Renaissance 0.12 - Test: Scala Dotty (ms, fewer is better)
Run 1: 1132.9 | Run 2: 1111.5 | Run 3: 1038.3 | Run 4: 968.0 | Run 5: 965.4

Apache Cassandra

Apache Cassandra 4.0 - Test: Mixed 1:3 (Op/s, more is better)
Run 1: 97181 | Run 2: 105086 | Run 3: 112837 | Run 4: 96271 | Run 5: 111614

Mobile Neural Network

Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, fewer is better)
Run 1: 12.28 | Run 2: 11.02 | Run 3: 11.36 | Run 4: 12.68 | Run 5: 12.81

NCNN

NCNN 20210720 - Target: CPU - Model: googlenet (ms, fewer is better)
Run 1: 35.87 | Run 2: 30.90 | Run 3: 31.94 | Run 4: 33.67 | Run 5: 31.59

NCNN 20210720 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
Run 1: 3.11 | Run 2: 3.10 | Run 3: 2.97 | Run 4: 3.03 | Run 5: 3.44

Mobile Neural Network

Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, fewer is better)
Run 1: 44.87 | Run 2: 50.67 | Run 3: 46.24 | Run 4: 46.48 | Run 5: 46.22

Facebook RocksDB

Facebook RocksDB 6.22.1 - Test: Sequential Fill (Op/s, more is better)
Run 1: 629253 | Run 2: 593316 | Run 3: 654095 | Run 4: 586679 | Run 5: 662215

NCNN

NCNN 20210720 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
Run 1: 52.00 | Run 2: 47.08 | Run 3: 49.51 | Run 4: 49.12 | Run 5: 48.67

Mobile Neural Network

Mobile Neural Network 1.2 - Model: MobileNetV2_224 (ms, fewer is better)
Run 1: 7.707 | Run 2: 8.046 | Run 3: 7.615 | Run 4: 7.590 | Run 5: 7.292

Apache Cassandra

Apache Cassandra 4.0 - Test: Mixed 1:1 (Op/s, more is better)
Run 1: 107795 | Run 2: 98990 | Run 3: 105033 | Run 4: 109065 | Run 5: 105682

NCNN

NCNN 20210720 - Target: CPU - Model: mnasnet (ms, fewer is better)
Run 1: 12.19 | Run 2: 12.86 | Run 3: 11.84 | Run 4: 12.29 | Run 5: 11.71

NCNN 20210720 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
Run 1: 14.00 | Run 2: 12.78 | Run 3: 13.56 | Run 4: 13.81 | Run 5: 13.53

NCNN 20210720 - Target: CPU - Model: blazeface (ms, fewer is better)
Run 1: 5.79 | Run 2: 5.33 | Run 3: 5.58 | Run 4: 5.61 | Run 5: 5.53

NCNN 20210720 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
Run 1: 13.01 | Run 2: 13.65 | Run 3: 12.74 | Run 4: 12.59 | Run 5: 13.56

Renaissance

Renaissance 0.12 - Test: Savina Reactors.IO (ms, fewer is better)
Run 1: 12292.6 | Run 2: 12326.5 | Run 3: 11693.4 | Run 4: 12667.5 | Run 5: 11704.2

NCNN

NCNN 20210720 - Target: CPU - Model: mobilenet (ms, fewer is better)
Run 1: 29.59 | Run 2: 31.08 | Run 3: 28.71 | Run 4: 28.82 | Run 5: 28.75

Mobile Neural Network

Mobile Neural Network 1.2 - Model: inception-v3 (ms, fewer is better)
Run 1: 59.53 | Run 2: 61.55 | Run 3: 59.60 | Run 4: 59.25 | Run 5: 57.16

NCNN

NCNN 20210720 - Target: CPU - Model: regnety_400m (ms, fewer is better)
Run 1: 38.70 | Run 2: 36.84 | Run 3: 36.10 | Run 4: 36.25 | Run 5: 36.55

Renaissance

Renaissance 0.12 - Test: Apache Spark ALS (ms, fewer is better)
Run 1: 2359.6 | Run 2: 2279.6 | Run 3: 2305.2 | Run 4: 2202.1 | Run 5: 2220.6

NCNN

NCNN 20210720 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better)
Run 1: 13.47 | Run 2: 13.18 | Run 3: 14.09 | Run 4: 13.19 | Run 5: 13.25

NCNN 20210720 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
Run 1: 17.03 | Run 2: 16.06 | Run 3: 17.12 | Run 4: 16.68 | Run 5: 16.52

Renaissance

Renaissance 0.12 - Test: Apache Spark Bayes (ms, fewer is better)
Run 1: 1209.1 | Run 2: 1184.2 | Run 3: 1180.9 | Run 4: 1162.1 | Run 5: 1231.9

Facebook RocksDB

Facebook RocksDB 6.22.1 - Test: Read While Writing (Op/s, more is better)
Run 1: 3684442 | Run 2: 3795875 | Run 3: 3593614 | Run 4: 3656007 | Run 5: 3634278

Apache Cassandra

Apache Cassandra 4.0 - Test: Writes (Op/s, more is better)
Run 1: 137190 | Run 2: 130153 | Run 3: 131113 | Run 4: 131846 | Run 5: 130920

Renaissance

Renaissance 0.12 - Test: In-Memory Database Shootout (ms, fewer is better)
Run 1: 6178.5 | Run 2: 6439.5 | Run 3: 6178.3 | Run 4: 6142.0 | Run 5: 6457.2

Renaissance 0.12 - Test: Finagle HTTP Requests (ms, fewer is better)
Run 1: 4099.4 | Run 2: 3906.7 | Run 3: 4055.7 | Run 4: 4040.6 | Run 5: 4006.9

Facebook RocksDB

Facebook RocksDB 6.22.1 - Test: Random Read (Op/s, more is better)
Run 1: 105202520 | Run 2: 106197835 | Run 3: 107926092 | Run 4: 110355654 | Run 5: 108371044

Facebook RocksDB 6.22.1 - Test: Update Random (Op/s, more is better)
Run 1: 516006 | Run 2: 518649 | Run 3: 497709 | Run 4: 498715 | Run 5: 515847

Facebook RocksDB 6.22.1 - Test: Read Random Write Random (Op/s, more is better)
Run 1: 1929291 | Run 2: 2007692 | Run 3: 1985772 | Run 4: 1986790 | Run 5: 1986640

NCNN

NCNN 20210720 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
Run 1: 13.38 | Run 2: 13.53 | Run 3: 13.02 | Run 4: 13.28 | Run 5: 13.50

Mobile Neural Network

Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms, fewer is better)
Run 1: 5.114 | Run 2: 4.948 | Run 3: 5.109 | Run 4: 5.011 | Run 5: 4.996

NCNN

NCNN 20210720 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
Run 1: 7.35 | Run 2: 7.43 | Run 3: 7.59 | Run 4: 7.39 | Run 5: 7.35

Renaissance

Renaissance 0.12 - Test: Random Forest (ms, fewer is better)
Run 1: 920.0 | Run 2: 892.7 | Run 3: 902.9 | Run 4: 911.1 | Run 5: 891.3

Renaissance 0.12 - Test: Genetic Algorithm Using Jenetics + Futures (ms, fewer is better)
Run 1: 2665.9 | Run 2: 2700.4 | Run 3: 2679.7 | Run 4: 2620.0 | Run 5: 2660.0

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.5.1 - Total Time For Sample Scene (Seconds, fewer is better)
Run 1: 96.09 | Run 2: 98.40 | Run 3: 98.83 | Run 4: 98.45 | Run 5: 96.47

Renaissance

Renaissance 0.12 - Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better)
Run 1: 21703.7 | Run 2: 21813.4 | Run 3: 21476.3 | Run 4: 22068.2 | Run 5: 21886.5

NCNN

NCNN 20210720 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
Run 1: 5.56 | Run 2: 5.60 | Run 3: 5.61 | Run 4: 5.50 | Run 5: 5.46

NCNN 20210720 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
Run 1: 7.05 | Run 2: 7.11 | Run 3: 6.99 | Run 4: 6.92 | Run 5: 7.06

NCNN 20210720 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
Run 1: 11.31 | Run 2: 11.34 | Run 3: 11.58 | Run 4: 11.38 | Run 5: 11.28

NCNN 20210720 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
Run 1: 10.19 | Run 2: 10.37 | Run 3: 10.12 | Run 4: 10.34 | Run 5: 10.15

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, fewer is better)
Run 1: 72.12 | Run 2: 72.51 | Run 3: 71.38 | Run 4: 71.22 | Run 5: 72.72

NCNN

NCNN 20210720 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better)
Run 1: 13.12 | Run 2: 12.92 | Run 3: 13.17 | Run 4: 12.98 | Run 5: 12.97

Renaissance

Renaissance 0.12 - Test: ALS Movie Lens (ms, fewer is better)
Run 1: 9836.8 | Run 2: 9878.2 | Run 3: 10001.3 | Run 4: 9814.8 | Run 5: 9857.0

C-Blosc

C-Blosc is a simple, fast, compressed, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0 - Compressor: blosclz (MB/s, more is better)
Run 1: 15950.7 | Run 2: 15733.0 | Run 3: 15895.6 | Run 4: 16025.3 | Run 5: 15930.8
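C-Blosc throughput is reported as megabytes of input processed per second. The same style of measurement can be sketched with the standard-library zlib codec standing in for blosclz (illustrative only; Blosc's blocking and shuffling pipeline, and its multi-threading, are quite different):

```python
import time
import zlib

def compress_throughput(data: bytes, repeats: int = 5) -> float:
    """Compress data repeatedly; return throughput in MB/s of input."""
    start = time.perf_counter()
    for _ in range(repeats):
        compressed = zlib.compress(data, level=1)   # fastest zlib level
    elapsed = time.perf_counter() - start
    assert len(compressed) < len(data)              # sanity: input compressed
    return repeats * len(data) / elapsed / 1e6

payload = bytes(range(256)) * 4096                  # 1 MiB of regular data
print(f"{compress_throughput(payload):.1f} MB/s")
```

Throughput-style metrics like this are very sensitive to the compressibility of the input buffer, so benchmark comparisons are only meaningful on a fixed test corpus.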

NCNN

NCNN 20210720 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
Run 1: 15.48 | Run 2: 15.27 | Run 3: 15.26 | Run 4: 15.21 | Run 5: 15.30

TNN

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
Run 1: 256.36 | Run 2: 256.49 | Run 3: 256.32 | Run 4: 256.30 | Run 5: 260.29

NCNN

NCNN 20210720 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better)
Run 1: 5.85 | Run 2: 5.83 | Run 3: 5.91 | Run 4: 5.88 | Run 5: 5.86

NCNN 20210720 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better)
Run 1: 10.29 | Run 2: 10.43 | Run 3: 10.39 | Run 4: 10.33 | Run 5: 10.38

NCNN 20210720 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better)
Run 1: 20.00 | Run 2: 20.24 | Run 3: 20.00 | Run 4: 19.98 | Run 5: 20.01

NCNN 20210720 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
Run 1: 4.97 | Run 2: 4.97 | Run 3: 4.97 | Run 4: 4.93 | Run 5: 4.99

NCNN 20210720 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better)
Run 1: 15.32 | Run 2: 15.16 | Run 3: 15.32 | Run 4: 15.20 | Run 5: 15.23

NCNN 20210720 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better)
Run 1: 28.93 | Run 2: 29.10 | Run 3: 28.80 | Run 4: 29.06 | Run 5: 28.90

TNN

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better)
Run 1: 306.27 | Run 2: 305.28 | Run 3: 306.19 | Run 4: 305.57 | Run 5: 304.27

TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better)
Run 1: 3034.70 | Run 2: 3023.79 | Run 3: 3026.06 | Run 4: 3035.01 | Run 5: 3035.69

GravityMark

GravityMark is a cross-API, cross-platform, GPU-accelerated benchmark developed by Tellusim. GravityMark aims to exploit the performance of modern GPUs by rendering hundreds of thousands of objects in real time, all using GPU acceleration. Learn more via the OpenBenchmarking.org test page.

GravityMark 1.2 - Resolution: 1920 x 1080 - Renderer: Vulkan (Frames Per Second, more is better)
Run 1: 33.2 | Run 2: 33.2 | Run 3: 33.2 | Run 4: 33.2 | Run 5: 33.2