tr 3970x july

AMD Ryzen Threadripper 3960X 24-Core testing with an MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS) motherboard and a Sapphire AMD Radeon RX 5500/5500M / Pro 5500M 4GB graphics card on Ubuntu 20.04, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2107314-IB-TR3970XJU60
Test categories represented in this result file: CPU Massive (2 tests), Creator Workloads (2 tests), Database Test Suite (3 tests), HPC - High Performance Computing (4 tests), Java Tests (2 tests), Common Kernel Benchmarks (2 tests), Machine Learning (3 tests), Multi-Core (4 tests), Renderers (2 tests), Server (3 tests).

Run Management

Result Identifier    Date Run         Test Duration
1                    July 30 2021     5 Hours, 27 Minutes
2                    July 30 2021     16 Hours, 33 Minutes
3                    July 31 2021     12 Hours, 11 Minutes
4                    July 31 2021     29 Minutes
Average Test Duration: 8 Hours, 40 Minutes


tr 3970x july - System configuration (common to result identifiers 1-4):

Processor: AMD Ryzen Threadripper 3960X 24-Core @ 3.80GHz (24 Cores / 48 Threads)
Motherboard: MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 1000GB Sabrent Rocket 4.0 1TB
Graphics: Sapphire AMD Radeon RX 5500/5500M / Pro 5500M 4GB (1900/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: VA2431
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 20.04
Kernel: 5.12.0-051200rc2daily20210307-generic (x86_64) 20210306
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
OpenGL: 4.6 Mesa 20.0.8 (LLVM 10.0.0)
Vulkan: 1.2.128
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x8301025
Java Details: OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.20.04)
Python Details: Runs 1-3: Python 3.8.5; Run 4: Python 3.8.10
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite, runs 1-4): overview chart covering C-Blosc, Quantum ESPRESSO, and GravityMark (relative performance axis spanning roughly 100% to 105%).

[The full side-by-side results table for runs 1-4 is not reproduced here. It covers PostgreSQL pgbench (various scaling factor / client count / mode combinations), NCNN (CPU and Vulkan GPU targets across many models), Mobile Neural Network, Facebook RocksDB, C-Blosc, Renaissance (including Scala Dotty), Apache Cassandra, Natron, YafaRay, TNN, GravityMark, Quantum ESPRESSO, and Unvanquished. The individual results follow below.]

PostgreSQL pgbench

This is a benchmark of PostgreSQL, using pgbench to drive the database workloads. Learn more via the OpenBenchmarking.org test page.
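For reference, a minimal hand-run sketch of these configurations (assuming a local PostgreSQL server and a scratch database named pgbench; the exact parameters the Phoronix Test Suite profile passes may differ) looks like:

    # initialize the test database at scaling factor 100
    pgbench -i -s 100 pgbench
    # read/write (TPC-B-like) run with 100 clients for 60 seconds
    pgbench -c 100 -j 24 -T 60 pgbench
    # read-only (SELECT-only) variant of the same run
    pgbench -c 100 -j 24 -T 60 -S pgbench

pgbench reports transactions per second (TPS) and average latency, which are the two metrics shown in the results below.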

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms; fewer is better): 1: 2.429; 2: 8.128 (SE +/- 0.049, N = 3); 3: 8.718 (SE +/- 0.103, N = 15). Compiled with (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS; more is better): 1: 41170; 2: 12306 (SE +/- 74.54, N = 3); 3: 11495 (SE +/- 132.49, N = 15)

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 1 - Mode: Read Write (TPS; more is better): 1: 1593; 2: 1589 (SE +/- 3.23, N = 3); 3: 1437 (SE +/- 16.63, N = 3)

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency (ms; fewer is better): 1: 0.628; 2: 0.629 (SE +/- 0.001, N = 3); 3: 0.696 (SE +/- 0.008, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
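The NCNN numbers below come from its bundled benchncnn tool, which times repeated inference runs of each bundled model. A rough manual invocation is sketched here, assuming an NCNN build with Vulkan support; the positional arguments (loop count, thread count, powersave mode, GPU device) follow the benchncnn usage text, and the test profile's exact settings may differ:

    # CPU target: 8 loops, 24 threads, powersave off, no GPU (-1)
    ./benchncnn 8 24 0 -1
    # Vulkan GPU target: same parameters, but run on GPU device 0
    ./benchncnn 8 24 0 0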

NCNN 20210720, Target: Vulkan GPU - Model: mobilenet (ms; fewer is better): 1: 14.08; 2: 13.11 (SE +/- 0.11, N = 15); 3: 12.89 (SE +/- 0.12, N = 3). Compiled with (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
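These MNN results were produced through the Phoronix Test Suite. To reproduce just this suite on your own hardware, something along the following lines should work; the test profile short name is assumed from the identifiers used in this result file:

    # install and run only the Mobile Neural Network test profile
    phoronix-test-suite benchmark mnn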

Mobile Neural Network 1.2, Model: inception-v3 (ms; fewer is better): 1: 26.53; 2: 28.84 (SE +/- 0.17, N = 15); 3: 27.94 (SE +/- 0.47, N = 3). Compiled with (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
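The RocksDB figures come from its db_bench utility. A hand-run sketch of the workloads shown here follows, assuming a db_bench binary built from the RocksDB source tree; the key counts and tuning options used by the test profile are not reproduced:

    # fill and read workloads
    ./db_bench --benchmarks=fillrandom --num=1000000 --threads=24
    ./db_bench --benchmarks=fillseq --num=1000000 --threads=24
    ./db_bench --benchmarks=readrandom --num=1000000 --threads=24
    # mixed read/write workloads
    ./db_bench --benchmarks=readwhilewriting --num=1000000 --threads=24
    ./db_bench --benchmarks=readrandomwriterandom --num=1000000 --threads=24
    ./db_bench --benchmarks=updaterandom --num=1000000 --threads=24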

Facebook RocksDB 6.22.1, Test: Random Fill (Op/s; more is better): 1: 945053; 2: 880206 (SE +/- 4989.19, N = 3); 3: 875695 (SE +/- 5726.69, N = 3). Compiled with (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

NCNN


NCNN 20210720, Target: CPU - Model: resnet18 (ms; fewer is better): 1: 13.42; 2: 14.48 (SE +/- 0.16, N = 3); 3: 14.03 (SE +/- 0.16, N = 3)

NCNN 20210720, Target: CPU - Model: googlenet (ms; fewer is better): 1: 16.14; 2: 17.12 (SE +/- 0.21, N = 3); 3: 17.04 (SE +/- 0.25, N = 3)

C-Blosc

C-Blosc is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0, Compressor: blosclz (MB/s; more is better): 1: 27338.8; 2: 26319.6 (SE +/- 222.52, N = 3); 3: 25800.6 (SE +/- 138.53, N = 3); 4: 25802.0 (SE +/- 268.15, N = 3). Compiled with (CC) gcc options: -std=gnu99 -O3 -pthread -lrt -lm

Mobile Neural Network


Mobile Neural Network 1.2, Model: SqueezeNetV1.0 (ms; fewer is better): 1: 6.974; 2: 7.387 (SE +/- 0.070, N = 15); 3: 7.314 (SE +/- 0.084, N = 3)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
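Renaissance ships as a single JAR, and each workload can be launched directly on a stock JVM. A minimal sketch follows; the JAR file name and repetition count are illustrative, the benchmark names (such as page-rank and movie-lens) follow the suite's harness naming, and the test profile may pass different options:

    # run the Apache Spark PageRank workload for 20 repetitions
    java -jar renaissance-gpl-0.12.0.jar -r 20 page-rank
    # run the ALS Movie Lens workload
    java -jar renaissance-gpl-0.12.0.jar -r 20 movie-lens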

Renaissance 0.12, Test: In-Memory Database Shootout (ms; fewer is better): 1: 4313.4; 2: 4089.3 (SE +/- 60.18, N = 3); 3: 4161.2 (SE +/- 43.29, N = 3)

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.
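A rough equivalent of the Cassandra workloads shown here, driven by the cassandra-stress tool that ships with Cassandra (assuming a single local node; thread and operation counts are illustrative and the test profile's settings may differ, as may the write:read direction of the 1:1 and 1:3 mixes):

    # pure-write, then mixed read/write workloads
    cassandra-stress write n=1000000 -rate threads=48
    cassandra-stress mixed ratio\(write=1,read=1\) n=1000000 -rate threads=48
    cassandra-stress mixed ratio\(write=1,read=3\) n=1000000 -rate threads=48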

Apache Cassandra 4.0, Test: Mixed 1:1 (Op/s; more is better): 1: 190556; 2: 186092 (SE +/- 1560.41, N = 12); 3: 195980 (SE +/- 2034.97, N = 12)

NCNN


NCNN 20210720, Target: CPU - Model: resnet50 (ms; fewer is better): 1: 22.63; 2: 23.80 (SE +/- 0.15, N = 3); 3: 23.45 (SE +/- 0.19, N = 3)

NCNN 20210720, Target: CPU - Model: alexnet (ms; fewer is better): 1: 9.41; 2: 9.89 (SE +/- 0.22, N = 3); 3: 9.46 (SE +/- 0.04, N = 3)

Renaissance


Renaissance 0.12, Test: Savina Reactors.IO (ms; fewer is better): 1: 8864.5; 2: 9171.6 (SE +/- 120.92, N = 12); 3: 9307.6 (SE +/- 36.95, N = 3)

NCNN


NCNN 20210720, Target: CPU - Model: yolov4-tiny (ms; fewer is better): 1: 25.77; 2: 26.94 (SE +/- 0.09, N = 3); 3: 26.58 (SE +/- 0.16, N = 3)

Renaissance


Renaissance 0.12, Test: Apache Spark PageRank (ms; fewer is better): 1: 3379.8; 2: 3530.4 (SE +/- 40.90, N = 3); 3: 3418.0 (SE +/- 24.47, N = 3)

PostgreSQL pgbench


PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency (ms; fewer is better): 1: 0.028; 2: 0.027 (SE +/- 0.000, N = 3); 3: 0.027 (SE +/- 0.000, N = 3)

NCNN


NCNN 20210720, Target: Vulkan GPU - Model: yolov4-tiny (ms; fewer is better): 1: 23.43; 2: 22.60 (SE +/- 0.13, N = 15); 3: 22.85 (SE +/- 0.08, N = 3)

Mobile Neural Network


Mobile Neural Network 1.2, Model: MobileNetV2_224 (ms; fewer is better): 1: 4.854; 2: 4.908 (SE +/- 0.027, N = 15); 3: 5.021 (SE +/- 0.050, N = 3)

Apache Cassandra


Apache Cassandra 4.0, Test: Writes (Op/s; more is better): 1: 234283; 2: 229336 (SE +/- 2314.84, N = 3); 3: 226641 (SE +/- 2210.58, N = 3)

Renaissance


Renaissance 0.12, Test: Apache Spark ALS (ms; fewer is better): 1: 1580.7; 2: 1555.5 (SE +/- 13.20, N = 3); 3: 1533.3 (SE +/- 19.28, N = 4)

NCNN


NCNN 20210720, Target: CPU - Model: regnety_400m (ms; fewer is better): 1: 18.76; 2: 19.30 (SE +/- 0.12, N = 3); 3: 19.06 (SE +/- 0.25, N = 3)

Renaissance


Renaissance 0.12, Test: Genetic Algorithm Using Jenetics + Futures (ms; fewer is better): 1: 1681.7; 2: 1726.3 (SE +/- 26.29, N = 3); 3: 1685.5 (SE +/- 5.20, N = 3)

NCNN


NCNN 20210720, Target: CPU - Model: squeezenet_ssd (ms; fewer is better): 1: 19.42; 2: 19.90 (SE +/- 0.08, N = 3); 3: 19.76 (SE +/- 0.07, N = 3)

PostgreSQL pgbench


PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms; fewer is better): 1: 0.122; 2: 0.125 (SE +/- 0.000, N = 3); 3: 0.123 (SE +/- 0.001, N = 3)

NCNN


NCNN 20210720, Target: CPU - Model: mobilenet (ms; fewer is better): 1: 16.79; 2: 17.19 (SE +/- 0.11, N = 3); 3: 17.16 (SE +/- 0.04, N = 3)

NCNN 20210720, Target: Vulkan GPU - Model: efficientnet-b0 (ms; fewer is better): 1: 11.64; 2: 11.43 (SE +/- 0.04, N = 15); 3: 11.40 (SE +/- 0.04, N = 3)

NCNN 20210720, Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better): 1: 8.08; 2: 8.17 (SE +/- 0.02, N = 15); 3: 8.25 (SE +/- 0.09, N = 3)

Natron

Natron is an open-source, cross-platform compositing application for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4, Input: Spaceship (FPS; more is better): 1: 5.1; 2: 5.0; 3: 5.1 (SE +/- 0.06, N = 3)

Renaissance


Renaissance 0.12, Test: Apache Spark Bayes (ms; fewer is better): 1: 1036.3; 2: 1029.7 (SE +/- 3.06, N = 3); 3: 1016.0 (SE +/- 2.71, N = 3)

PostgreSQL pgbench


PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms; fewer is better): 1: 0.053; 2: 0.054 (SE +/- 0.000, N = 3); 3: 0.053 (SE +/- 0.000, N = 3)

Facebook RocksDB


Facebook RocksDB 6.22.1, Test: Random Read (Op/s; more is better): 1: 138658542; 2: 137227263 (SE +/- 1671441.20, N = 3); 3: 139808155 (SE +/- 454135.98, N = 3)

PostgreSQL pgbench


PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 1 - Mode: Read Only (TPS; more is better): 1: 30530; 2: 31035 (SE +/- 317.30, N = 3); 3: 30473 (SE +/- 74.95, N = 3)

Mobile Neural Network


Mobile Neural Network 1.2, Model: resnet-v2-50 (ms; fewer is better): 1: 23.93; 2: 24.22 (SE +/- 0.23, N = 15); 3: 24.34 (SE +/- 0.25, N = 3)

Renaissance


Renaissance 0.12, Test: ALS Movie Lens (ms; fewer is better): 1: 6679.9; 2: 6785.2 (SE +/- 43.81, N = 3); 3: 6794.8 (SE +/- 94.62, N = 3)

PostgreSQL pgbench


PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS; more is better): 1: 817585; 2: 803899 (SE +/- 2009.26, N = 3); 3: 813084 (SE +/- 3408.71, N = 3)

Mobile Neural Network


Mobile Neural Network 1.2, Model: mobilenet-v1-1.0 (ms; fewer is better): 1: 3.422; 2: 3.460 (SE +/- 0.012, N = 15); 3: 3.480 (SE +/- 0.044, N = 3)

PostgreSQL pgbench


PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS; more is better): 1: 943798; 2: 928435 (SE +/- 8132.24, N = 3); 3: 940290 (SE +/- 2718.51, N = 3)

Facebook RocksDB


Facebook RocksDB 6.22.1, Test: Read While Writing (Op/s; more is better): 1: 4720957; 2: 4766424 (SE +/- 49320.63, N = 8); 3: 4798497 (SE +/- 70915.08, N = 3)

NCNN


NCNN 20210720, Target: Vulkan GPU - Model: blazeface (ms; fewer is better): 1: 2.49; 2: 2.51 (SE +/- 0.02, N = 15); 3: 2.47 (SE +/- 0.01, N = 3)

Facebook RocksDB


Facebook RocksDB 6.22.1, Test: Sequential Fill (Op/s; more is better): 1: 991868; 2: 984415 (SE +/- 11133.01, N = 3); 3: 998792 (SE +/- 7278.03, N = 3)

NCNN


NCNN 20210720, Target: Vulkan GPU - Model: resnet50 (ms; fewer is better): 1: 11.22; 2: 11.38 (SE +/- 0.03, N = 15); 3: 11.22 (SE +/- 0.02, N = 3)

Apache Cassandra


Apache Cassandra 4.0, Test: Mixed 1:3 (Op/s; more is better): 1: 184918; 2: 182818 (SE +/- 2616.16, N = 4); 3: 185410 (SE +/- 1816.24, N = 9)

PostgreSQL pgbench


PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency (ms; fewer is better): 1: 0.072; 2: 0.073 (SE +/- 0.000, N = 3); 3: 0.073 (SE +/- 0.000, N = 3)

Renaissance


Renaissance 0.12, Test: Akka Unbalanced Cobwebbed Tree (ms; fewer is better): 1: 13979.1; 2: 14148.0 (SE +/- 118.05, N = 3); 3: 14154.5 (SE +/- 75.72, N = 3)

NCNN


NCNN 20210720, Target: CPU - Model: vgg16 (ms; fewer is better): 1: 38.61; 2: 38.14 (SE +/- 0.25, N = 3); 3: 38.49 (SE +/- 0.24, N = 3)

NCNN 20210720, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better): 1: 6.67; 2: 6.74 (SE +/- 0.02, N = 3); 3: 6.75 (SE +/- 0.02, N = 3)

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.5.1, Total Time For Sample Scene (Seconds; fewer is better): 1: 66.19; 2: 66.22 (SE +/- 0.88, N = 5); 3: 65.44 (SE +/- 0.95, N = 4). Compiled with (CXX) g++ options: -std=c++11 -pthread -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: SqueezeNet v2 (ms; fewer is better): 1: 66.05; 2: 65.29 (SE +/- 0.70, N = 3); 3: 65.92 (SE +/- 0.54, N = 3). Compiled with (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

NCNN


NCNN 20210720, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better): 1: 7.34; 2: 7.30 (SE +/- 0.03, N = 3); 3: 7.38 (SE +/- 0.03, N = 3)

Facebook RocksDB


Facebook RocksDB 6.22.1, Test: Read Random Write Random (Op/s; more is better): 1: 2815361; 2: 2784955 (SE +/- 5858.77, N = 3); 3: 2802208 (SE +/- 7766.59, N = 3)

NCNN


NCNN 20210720, Target: Vulkan GPU - Model: shufflenet-v2 (ms; fewer is better): 1: 3.94; 2: 3.95 (SE +/- 0.01, N = 15); 3: 3.98 (SE +/- 0.03, N = 3)

NCNN 20210720, Target: Vulkan GPU - Model: regnety_400m (ms; fewer is better): 1: 5.05; 2: 5.09 (SE +/- 0.01, N = 12); 3: 5.10 (SE +/- 0.03, N = 3)

PostgreSQL pgbench


PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 50 - Mode: Read Only (TPS; more is better): 1: 690761; 2: 684138 (SE +/- 1352.75, N = 3); 3: 688634 (SE +/- 3202.49, N = 3)

NCNN


NCNN 20210720, Target: Vulkan GPU - Model: alexnet (ms; fewer is better): 1: 28.48; 2: 28.47 (SE +/- 0.07, N = 15); 3: 28.73 (SE +/- 0.03, N = 3)

Renaissance


Renaissance 0.12, Test: Finagle HTTP Requests (ms; fewer is better): 1: 2434.4; 2: 2455.4 (SE +/- 9.69, N = 3); 3: 2451.6 (SE +/- 12.15, N = 3)

PostgreSQL pgbench


PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 1 - Mode: Read Only (TPS; more is better): 1: 36322; 2: 36633 (SE +/- 310.79, N = 3); 3: 36510 (SE +/- 321.06, N = 3)

Facebook RocksDB


Facebook RocksDB 6.22.1, Test: Update Random (Op/s; more is better): 1: 764095; 2: 757754 (SE +/- 5284.99, N = 3); 3: 758180 (SE +/- 2429.28, N = 3)

NCNN


NCNN 20210720, Target: Vulkan GPU - Model: mnasnet (ms; fewer is better): 1: 5.17; 2: 5.17 (SE +/- 0.00, N = 14); 3: 5.21 (SE +/- 0.03, N = 3)

PostgreSQL pgbench


PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS; more is better): 1: 621853; 2: 617143 (SE +/- 1719.24, N = 3); 3: 617424 (SE +/- 640.10, N = 3)

GravityMark

GravityMark is a cross-API, cross-platform GPU-accelerated benchmark developed by Tellusim. GravityMark aims to exploit the performance of modern GPUs by rendering hundreds of thousands of objects in real-time, all using GPU acceleration. Learn more via the OpenBenchmarking.org test page.

GravityMark 1.2, Resolution: 1920 x 1080 - Renderer: Vulkan (Frames Per Second; more is better): 1: 71.1; 2: 71.6 (SE +/- 0.03, N = 3); 3: 71.5 (SE +/- 0.09, N = 3); 4: 71.6 (SE +/- 0.07, N = 3)

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.
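The AUSURF112 case (a 112-atom gold-surface input) is run with Quantum ESPRESSO's pw.x plane-wave code under MPI. A minimal sketch follows; the rank count simply matches this machine's 24 cores, the input file name is assumed, and the test profile may add further options:

    # run the AUSURF112 input on 24 MPI ranks
    mpirun -np 24 pw.x -input ausurf.in > ausurf.out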

Quantum ESPRESSO 6.8, Input: AUSURF112 (Seconds; fewer is better): 1: 362.99; 2: 361.68 (SE +/- 0.38, N = 3); 3: 360.53 (SE +/- 0.51, N = 3); 4: 361.01 (SE +/- 1.10, N = 3). Compiled with (F9X) gfortran options: -ldevXlib -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

NCNN


NCNN 20210720, Target: Vulkan GPU - Model: resnet18 (ms; fewer is better): 1: 4.60; 2: 4.58 (SE +/- 0.01, N = 15); 3: 4.57 (SE +/- 0.01, N = 3)

NCNN 20210720, Target: CPU - Model: efficientnet-b0 (ms; fewer is better): 1: 9.32; 2: 9.38 (SE +/- 0.04, N = 3); 3: 9.38 (SE +/- 0.03, N = 3)

PostgreSQL pgbench


PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms; fewer is better): 1: 0.161; 2: 0.162 (SE +/- 0.000, N = 3); 3: 0.162 (SE +/- 0.000, N = 3)

NCNN


NCNN 20210720, Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better): 1: 5.08; 2: 5.10 (SE +/- 0.01, N = 15); 3: 5.11 (SE +/- 0.03, N = 3)

TNN


TNN 0.3, Target: CPU - Model: MobileNet v2 (ms; fewer is better): 1: 260.84; 2: 259.32 (SE +/- 0.95, N = 3); 3: 259.92 (SE +/- 0.36, N = 3)

Renaissance


Renaissance 0.12, Test: Random Forest (ms; fewer is better): 1: 712.9; 2: 709.1 (SE +/- 3.23, N = 3); 3: 711.6 (SE +/- 2.33, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  1: 3.23 (MIN: 3.15 / MAX: 3.86)
  2: 3.24 (SE +/- 0.02, N = 3; MIN: 3.12 / MAX: 4.56; runs: Min: 3.21 / Avg: 3.24 / Max: 3.27)
  3: 3.24 (SE +/- 0.02, N = 3; MIN: 3.1 / MAX: 3.9; runs: Min: 3.2 / Avg: 3.24 / Max: 3.28)
  Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  2: 6.60 (SE +/- 0.03, N = 3; MIN: 6.43 / MAX: 7.35; runs: Min: 6.56 / Avg: 6.6 / Max: 6.66)
  3: 6.62 (SE +/- 0.03, N = 3; MIN: 6.31 / MAX: 7.34; runs: Min: 6.57 / Avg: 6.62 / Max: 6.65)
  Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
  1: 235.45 (MIN: 234.47 / MAX: 238.08)
  2: 236.09 (SE +/- 2.38, N = 3; MIN: 229.68 / MAX: 275.56; runs: Min: 231.42 / Avg: 236.09 / Max: 239.24)
  3: 235.59 (SE +/- 0.56, N = 3; MIN: 233.3 / MAX: 238.08; runs: Min: 235.01 / Avg: 235.59 / Max: 236.7)
  Compiler notes: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better)
  1: 7.90 (MIN: 7.66 / MAX: 12.7)
  2: 7.91 (SE +/- 0.03, N = 15; MIN: 7.63 / MAX: 28.47; runs: Min: 7.8 / Avg: 7.91 / Max: 8.18)
  3: 7.92 (SE +/- 0.05, N = 3; MIN: 7.66 / MAX: 22.47; runs: Min: 7.83 / Avg: 7.92 / Max: 8.01)
  Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, Fewer Is Better)
  1: 2550.47 (MIN: 2517.62 / MAX: 2586.38)
  2: 2550.55 (SE +/- 4.36, N = 3; MIN: 2473.98 / MAX: 2596.7; runs: Min: 2542.02 / Avg: 2550.55 / Max: 2556.42)
  3: 2544.61 (SE +/- 0.45, N = 3; MIN: 2504.79 / MAX: 2585.11; runs: Min: 2543.9 / Avg: 2544.61 / Max: 2545.46)
  Compiler notes: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: Vulkan GPU - Model: vgg16 (ms, Fewer Is Better)
  1: 75.19 (MIN: 70.75 / MAX: 99.18)
  2: 75.21 (SE +/- 0.07, N = 15; MIN: 70.37 / MAX: 104.78; runs: Min: 74.76 / Avg: 75.21 / Max: 75.69)
  3: 75.36 (SE +/- 0.13, N = 3; MIN: 70.85 / MAX: 99.79; runs: Min: 75.17 / Avg: 75.36 / Max: 75.6)
  Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  1: 7.61 (MIN: 7.37 / MAX: 8.66)
  2: 7.61 (SE +/- 0.04, N = 3; MIN: 7.3 / MAX: 8.35; runs: Min: 7.55 / Avg: 7.61 / Max: 7.69)
  3: 7.61 (SE +/- 0.03, N = 3; MIN: 7.29 / MAX: 8.67; runs: Min: 7.56 / Avg: 7.61 / Max: 7.66)
  Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL that uses pgbench to drive the database workload. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  1: 0.033
  2: 0.033 (SE +/- 0.000, N = 3; runs: Min: 0.03 / Avg: 0.03 / Max: 0.03)
  3: 0.033 (SE +/- 0.000, N = 3; runs: Min: 0.03 / Avg: 0.03 / Max: 0.03)
  Compiler notes: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Unvanquished

Unvanquished is a modern fork of the Tremulous first-person shooter. It is powered by the Daemon engine, a combination of the ioquake3 engine and the graphically rich XreaL engine, and supports a modern OpenGL 3 renderer along with other advanced graphics features for this open-source game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.52.1 - Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, More Is Better)
  1: 295.6

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
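
Numbers like the "Random Fill Sync" result below typically come from RocksDB's own db_bench utility; the sketch that follows shows one plausible manual invocation, where the mapping onto the fillsync workload, the key count, and the thread count are assumptions rather than the exact Phoronix configuration.

    # Hypothetical sketch of a manual db_bench run; the fillsync mapping and the
    # numeric parameters are assumptions, not the exact test-profile settings.
    import subprocess

    DB_BENCH = "./db_bench"    # built from the RocksDB source tree (placeholder path)

    # fillsync writes keys in random order with a sync on each write, which is the
    # kind of workload behind a "Random Fill Sync" figure (Op/s, more is better).
    subprocess.run(
        [DB_BENCH, "--benchmarks=fillsync", "--num=1000000", "--threads=16"],
        check=True,
    )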

Facebook RocksDB 6.22.1 - Test: Random Fill Sync (Op/s, More Is Better)
  1: 25557
  2: 25568 (SE +/- 32.08, N = 3; runs: Min: 25514 / Avg: 25568 / Max: 25625)
  3: 21106 (SE +/- 1696.77, N = 12; runs: Min: 4976 / Avg: 21105.75 / Max: 25476)
  Compiler notes: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system, driven by the cassandra-stress tool. Learn more via the OpenBenchmarking.org test page.
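
As a rough sketch of what a cassandra-stress "Reads" run involves, the snippet below issues a write pass to populate the stress keyspace and then a read pass against a local node; the operation count and thread count are placeholder assumptions rather than the test profile's exact settings.

    # Hypothetical sketch of a manual cassandra-stress run against a local node;
    # the operation count and client thread count are placeholders.
    import subprocess

    OPS = "n=1000000"                 # placeholder operation count
    RATE = ["-rate", "threads=64"]    # placeholder client thread count

    # Populate the stress keyspace first, then measure read throughput (Op/s).
    subprocess.run(["cassandra-stress", "write", OPS, *RATE], check=True)
    subprocess.run(["cassandra-stress", "read", OPS, *RATE], check=True)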

Apache Cassandra 4.0 - Test: Reads (Op/s, More Is Better)
  1: 224288
  2: 207855 (SE +/- 6743.65, N = 12; runs: Min: 185439 / Avg: 207854.92 / Max: 253965)
  3: 230883 (SE +/- 6536.49, N = 12; runs: Min: 189185 / Avg: 230883.33 / Max: 256569)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, Fewer Is Better)
  1: 9.24 (MIN: 7.56 / MAX: 19.45)
  2: 8.77 (SE +/- 0.26, N = 15; MIN: 7.36 / MAX: 20.4; runs: Min: 7.97 / Avg: 8.77 / Max: 12.12)
  3: 8.25 (SE +/- 0.11, N = 3; MIN: 7.54 / MAX: 19.93; runs: Min: 8.04 / Avg: 8.25 / Max: 8.41)
  Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: squeezenetv1.1 (ms, Fewer Is Better)
  1: 5.689 (MIN: 5.59 / MAX: 5.76)
  2: 5.440 (SE +/- 0.087, N = 15; MIN: 4.86 / MAX: 6.04; runs: Min: 4.96 / Avg: 5.44 / Max: 5.95)
  3: 5.629 (SE +/- 0.191, N = 3; MIN: 5.25 / MAX: 6.07; runs: Min: 5.33 / Avg: 5.63 / Max: 5.98)
  Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2 - Model: mobilenetV3 (ms, Fewer Is Better)
  1: 3.538 (MIN: 3.51 / MAX: 3.69)
  2: 3.082 (SE +/- 0.067, N = 15; MIN: 2.85 / MAX: 3.69; runs: Min: 2.88 / Avg: 3.08 / Max: 3.64)
  3: 2.958 (SE +/- 0.050, N = 3; MIN: 2.83 / MAX: 3.07; runs: Min: 2.86 / Avg: 2.96 / Max: 3.02)
  Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

PostgreSQL pgbench

This is a benchmark of PostgreSQL that uses pgbench to drive the database workload. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  1: 1.606
  2: 4.816 (SE +/- 0.333, N = 15; runs: Min: 1.59 / Avg: 4.82 / Max: 5.54)
  3: 5.504 (SE +/- 0.054, N = 15; runs: Min: 5.1 / Avg: 5.5 / Max: 6.07)
  Compiler notes: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, More Is Better)
  1: 31142
  2: 12120 (SE +/- 1817.49, N = 15; runs: Min: 9028.82 / Avg: 12120.2 / Max: 31372.75)
  3: 9097 (SE +/- 87.29, N = 15; runs: Min: 8235.83 / Avg: 9097.37 / Max: 9808.29)
  Compiler notes: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  1: 0.639
  2: 0.943 (SE +/- 0.081, N = 12; runs: Min: 0.65 / Avg: 0.94 / Max: 1.35)
  3: 1.073 (SE +/- 0.045, N = 15; runs: Min: 0.89 / Avg: 1.07 / Max: 1.43)
  Compiler notes: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write (TPS, More Is Better)
  1: 1566
  2: 1146 (SE +/- 91.72, N = 12; runs: Min: 738.4 / Avg: 1146.48 / Max: 1542.07)
  3: 953 (SE +/- 36.13, N = 15; runs: Min: 700.22 / Avg: 952.91 / Max: 1129.7)
  Compiler notes: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  1: 69.43
  2: 69.28 (SE +/- 0.12, N = 3; runs: Min: 69.03 / Avg: 69.28 / Max: 69.42)
  3: 85.17 (SE +/- 2.57, N = 15; runs: Min: 69.82 / Avg: 85.17 / Max: 100.94)
  Compiler notes: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, More Is Better)
  1: 1441
  2: 1444 (SE +/- 2.60, N = 3; runs: Min: 1440.65 / Avg: 1443.8 / Max: 1448.96)
  3: 1189 (SE +/- 34.48, N = 15; runs: Min: 990.93 / Avg: 1188.8 / Max: 1432.64)
  Compiler notes: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  1: 31.41
  2: 31.39 (SE +/- 0.04, N = 3; runs: Min: 31.34 / Avg: 31.39 / Max: 31.47)
  3: 36.68 (SE +/- 0.76, N = 15; runs: Min: 32.87 / Avg: 36.68 / Max: 44.93)
  Compiler notes: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, More Is Better)
  1: 1592
  2: 1593 (SE +/- 2.20, N = 3; runs: Min: 1588.69 / Avg: 1593.09 / Max: 1595.53)
  3: 1371 (SE +/- 26.27, N = 15; runs: Min: 1112.93 / Avg: 1370.88 / Max: 1521.24)
  Compiler notes: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Renaissance

Renaissance is a suite of benchmarks designed to exercise the JVM with workloads ranging from Apache Spark and a Twitter-like service to Scala compilation and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12 - Test: Scala Dotty (ms, Fewer Is Better)
  1: 841.9 (MIN: 658.36 / MAX: 1132.87)
  2: 791.9 (SE +/- 12.03, N = 15; MIN: 633.97 / MAX: 1178.55; runs: Min: 738.08 / Avg: 791.85 / Max: 854.94)
  3: 797.4 (SE +/- 12.75, N = 15; MIN: 637.18 / MAX: 1163.81; runs: Min: 739.3 / Avg: 797.37 / Max: 851.92)

93 Results Shown

PostgreSQL pgbench:
  100 - 100 - Read Write - Average Latency
  100 - 100 - Read Write
  1 - 1 - Read Write
  1 - 1 - Read Write - Average Latency
NCNN
Mobile Neural Network
Facebook RocksDB
NCNN:
  CPU - resnet18
  CPU - googlenet
C-Blosc
Mobile Neural Network
Renaissance
Apache Cassandra
NCNN:
  CPU - resnet50
  CPU - alexnet
Renaissance
NCNN
Renaissance
PostgreSQL pgbench
NCNN
Mobile Neural Network
Apache Cassandra
Renaissance
NCNN
Renaissance
NCNN
PostgreSQL pgbench
NCNN:
  CPU - mobilenet
  Vulkan GPU - efficientnet-b0
  Vulkan GPU-v3-v3 - mobilenet-v3
Natron
Renaissance
PostgreSQL pgbench
Facebook RocksDB
PostgreSQL pgbench
Mobile Neural Network
Renaissance
PostgreSQL pgbench
Mobile Neural Network
PostgreSQL pgbench
Facebook RocksDB
NCNN
Facebook RocksDB
NCNN
Apache Cassandra
PostgreSQL pgbench
Renaissance
NCNN:
  CPU - vgg16
  CPU-v3-v3 - mobilenet-v3
YafaRay
TNN
NCNN
Facebook RocksDB
NCNN:
  Vulkan GPU - shufflenet-v2
  Vulkan GPU - regnety_400m
PostgreSQL pgbench
NCNN
Renaissance
PostgreSQL pgbench
Facebook RocksDB
NCNN
PostgreSQL pgbench
GravityMark
Quantum ESPRESSO
NCNN:
  Vulkan GPU - resnet18
  CPU - efficientnet-b0
PostgreSQL pgbench
NCNN
TNN
Renaissance
NCNN:
  CPU - blazeface
  CPU - mnasnet
TNN
NCNN
TNN
NCNN:
  Vulkan GPU - vgg16
  CPU - shufflenet-v2
PostgreSQL pgbench
Unvanquished
Facebook RocksDB
Apache Cassandra
NCNN
Mobile Neural Network:
  squeezenetv1.1
  mobilenetV3
PostgreSQL pgbench:
  100 - 50 - Read Write - Average Latency
  100 - 50 - Read Write
  100 - 1 - Read Write - Average Latency
  100 - 1 - Read Write
  1 - 100 - Read Write - Average Latency
  1 - 100 - Read Write
  1 - 50 - Read Write - Average Latency
  1 - 50 - Read Write
Renaissance