1275 july

Intel Xeon E3-1275 v6 testing with an ASUS P10S-M WS (4401 BIOS) and Intel HD P630 on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2108010-IB-1275JULY424

Test categories covered by this comparison: CPU Massive (2 tests), Creator Workloads (2 tests), Database Test Suite (3 tests), HPC - High Performance Computing (4 tests), Java Tests (2 tests), Common Kernel Benchmarks (2 tests), Machine Learning (3 tests), Multi-Core (4 tests), Renderers (2 tests), Server (3 tests).

Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
1
August 01 2021
  3 Hours, 36 Minutes
2
August 01 2021
  3 Hours, 25 Minutes
3
August 01 2021
  3 Hours, 30 Minutes
Invert Hiding All Results Option
  3 Hours, 30 Minutes

Only show results where is faster than
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


1275 july: System Details (runs 1, 2 and 3 used the same configuration)

Processor: Intel Xeon E3-1275 v6 @ 4.20GHz (4 Cores / 8 Threads)
Motherboard: ASUS P10S-M WS (4401 BIOS)
Chipset: Intel Xeon E3-1200 v6/7th
Memory: 16GB
Disk: Samsung SSD 970 EVO Plus 500GB
Graphics: Intel HD P630 (1150MHz)
Audio: Realtek ALC1150
Monitor: VA2431
Network: 2 x Intel I210
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc8daily20201007-generic (x86_64) 20201006
Display Server: X Server 1.20.8
OpenCL: OpenCL 2.1
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xde - Thermald 1.9.1
Java Details: OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.20.04)
Python Details: Python 3.8.5
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

Result Overview (Phoronix Test Suite): relative performance of runs 1, 2 and 3 across Renaissance, Apache Cassandra, Quantum ESPRESSO, Mobile Neural Network, Facebook RocksDB, C-Blosc, PostgreSQL pgbench, NCNN, TNN and YafaRay spans roughly 100% to 103%.

1275 july: detailed per-test results for runs 1, 2 and 3 are listed individually in the sections that follow.

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
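
For readers unfamiliar with the framework, the sketch below shows roughly how an NCNN model is loaded and run from C++. It is illustrative only and not taken from the benchmark harness; the model file names and blob names ("model.param", "data", "output") are placeholders, and the Vulkan toggle simply mirrors the "Vulkan GPU" targets reported here.

// Minimal NCNN inference sketch (illustrative; file and blob names are placeholders).
#include "net.h"          // ncnn::Net, ncnn::Extractor, ncnn::Mat

int main() {
    ncnn::Net net;
    net.opt.use_vulkan_compute = true;   // GPU path, as in the "Vulkan GPU" targets below

    // Hypothetical model files; the benchmark bundles its own models (blazeface, vgg16, ...).
    if (net.load_param("model.param") != 0 || net.load_model("model.bin") != 0)
        return 1;

    ncnn::Mat in(224, 224, 3);           // dummy 224x224, 3-channel input
    in.fill(0.5f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);                // input blob name is model-specific
    ncnn::Mat out;
    ex.extract("output", out);           // output blob name is model-specific
    return 0;
}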

NCNN 20210720, Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
Run 1: 3.01 | Run 2: 3.08 | Run 3: 2.69

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12, Test: Apache Spark PageRank (ms, fewer is better)
Run 1: 3841.6 | Run 2: 4178.2 | Run 3: 4389.3

Renaissance 0.12, Test: Scala Dotty (ms, fewer is better)
Run 1: 1125.9 | Run 2: 1115.0 | Run 3: 1018.9

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
Run 1: 13.34 | Run 2: 13.68 | Run 3: 14.70

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12, Test: Apache Spark Bayes (ms, fewer is better)
Run 1: 2958.9 | Run 2: 2950.3 | Run 3: 2709.7

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
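
As a rough illustration of the kind of key-value operations these RocksDB workloads exercise, here is a minimal embedded-usage sketch in C++; the database path and keys are made up, and the benchmark itself drives the library through its own bundled workloads rather than code like this.

// Minimal RocksDB round trip (illustrative; path and keys are placeholders).
#include <cassert>
#include <string>
#include "rocksdb/db.h"

int main() {
    rocksdb::Options options;
    options.create_if_missing = true;

    rocksdb::DB* db = nullptr;
    rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/rocksdb-example", &db);
    assert(s.ok());

    // Fill-style tests boil down to Put(); read tests to Get().
    s = db->Put(rocksdb::WriteOptions(), "key1", "value1");
    assert(s.ok());

    std::string value;
    s = db->Get(rocksdb::ReadOptions(), "key1", &value);
    assert(s.ok() && value == "value1");

    delete db;
    return 0;
}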

Facebook RocksDB 6.22.1, Test: Read While Writing (Op/s, more is better)
Run 1: 813914 | Run 3: 757639 (no result for run 2)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12, Test: Genetic Algorithm Using Jenetics + Futures (ms, fewer is better)
Run 1: 1471.3 | Run 2: 1549.9 | Run 3: 1520.6

Renaissance 0.12, Test: Savina Reactors.IO (ms, fewer is better)
Run 1: 11334.0 | Run 2: 11449.4 | Run 3: 10882.7

Renaissance 0.12, Test: In-Memory Database Shootout (ms, fewer is better)
Run 1: 3913.3 | Run 2: 3805.7 | Run 3: 3730.5

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0, Test: Mixed 1:3 (Op/s, more is better)
Run 1: 36248 | Run 2: 36891 | Run 3: 35431

Apache Cassandra 4.0, Test: Mixed 1:1 (Op/s, more is better)
Run 1: 35609 | Run 2: 36918 | Run 3: 35512

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
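
For context on what the read-only pgbench modes measure, the sketch below uses libpq from C++ to issue the same style of single-row SELECT that pgbench's built-in select-only script runs; the connection string and aid value are placeholders, not settings used in this result file.

// Minimal libpq sketch of a pgbench-style read-only query (illustrative only).
#include <cstdio>
#include <libpq-fe.h>

int main() {
    // Placeholder connection string; pgbench manages its own connections.
    PGconn* conn = PQconnectdb("dbname=pgbench_test");
    if (PQstatus(conn) != CONNECTION_OK) {
        std::fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    // The built-in select-only transaction reads one account row by aid.
    PGresult* res = PQexec(conn,
        "SELECT abalance FROM pgbench_accounts WHERE aid = 1;");
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
        std::printf("abalance = %s\n", PQgetvalue(res, 0, 0));

    PQclear(res);
    PQfinish(conn);
    return 0;
}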

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency (ms, fewer is better)
Run 1: 1.944 | Run 2: 1.955 | Run 3: 1.890

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 1 - Mode: Read Write (TPS, more is better)
Run 1: 514 | Run 2: 512 | Run 3: 529

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, fewer is better)
Run 1: 0.405 | Run 2: 0.404 | Run 3: 0.392

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1, Test: Random Read (Op/s, more is better)
Run 1: 20984927 | Run 2: 21645059 | Run 3: 21668676

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12, Test: Random Forest (ms, fewer is better)
Run 1: 777.7 | Run 2: 788.6 | Run 3: 764.3

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0, Test: Reads (Op/s, more is better)
Run 1: 37041 | Run 2: 38211 | Run 3: 37687

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, more is better)
Run 1: 123526 | Run 2: 123706 | Run 3: 127396

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12, Test: Apache Spark ALS (ms, fewer is better)
Run 1: 3588.8 | Run 2: 3641.2 | Run 3: 3557.6

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
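
A rough sketch of typical MNN inference from C++ follows, assuming a hypothetical model file and default CPU execution; it is illustrative only and not the code path the benchmark uses, and cleanup is omitted for brevity.

// Minimal MNN inference sketch (illustrative; model path is a placeholder).
#include <MNN/Interpreter.hpp>
#include <MNN/Tensor.hpp>

int main() {
    MNN::Interpreter* net = MNN::Interpreter::createFromFile("mobilenet_v2.mnn");
    if (!net) return 1;

    MNN::ScheduleConfig config;
    config.numThread = 4;                        // e.g. one thread per core on this 4-core CPU
    MNN::Session* session = net->createSession(config);

    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    (void)input;                                 // fill input->host<float>() with preprocessed data

    net->runSession(session);

    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    (void)output;                                // results would be read from output->host<float>()
    return 0;
}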

Mobile Neural Network 1.2, Model: resnet-v2-50 (ms, fewer is better)
Run 1: 39.39 | Run 2: 38.64 | Run 3: 38.50

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better)
Run 1: 0.836 | Run 2: 0.820 | Run 3: 0.838

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, more is better)
Run 1: 119562 | Run 2: 121967 | Run 3: 119371

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2, Model: SqueezeNetV1.0 (ms, fewer is better)
Run 1: 6.070 | Run 2: 6.161 | Run 3: 6.033

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
Run 1: 11.55 | Run 2: 11.47 | Run 3: 11.71

NCNN 20210720, Target: Vulkan GPU - Model: alexnet (ms, fewer is better)
Run 1: 47.03 | Run 2: 46.19 | Run 3: 47.00

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2, Model: squeezenetv1.1 (ms, fewer is better)
Run 1: 4.154 | Run 2: 4.132 | Run 3: 4.084

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12, Test: ALS Movie Lens (ms, fewer is better)
Run 1: 8239.1 | Run 2: 8150.3 | Run 3: 8101.2

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1, Test: Sequential Fill (Op/s, more is better)
Run 1: 861800 | Run 2: 848489 | Run 3: 862597

Facebook RocksDB 6.22.1, Test: Random Fill (Op/s, more is better)
Run 1: 621729 | Run 2: 630671 | Run 3: 620651

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency (ms, fewer is better)
Run 1: 1.756 | Run 2: 1.784 | Run 3: 1.782

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 1 - Mode: Read Write (TPS, more is better)
Run 1: 569 | Run 2: 561 | Run 3: 561

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
Run 1: 12.16 | Run 2: 12.32 | Run 3: 12.20

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.8, Input: AUSURF112 (Seconds, fewer is better)
Run 1: 1151.00 | Run 2: 1164.49 | Run 3: 1165.43

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 1 - Mode: Read Only (TPS, more is better)
Run 1: 26525 | Run 2: 26197 | Run 3: 26365

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1, Test: Update Random (Op/s, more is better)
Run 1: 285007 | Run 2: 282212 | Run 3: 282752

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, fewer is better)
Run 1: 7.632 | Run 2: 7.707 | Run 3: 7.665

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, more is better)
Run 1: 13103 | Run 2: 12977 | Run 3: 13047

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2, Model: MobileNetV2_224 (ms, fewer is better)
Run 1: 4.104 | Run 2: 4.069 | Run 3: 4.067

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, more is better)
Run 1: 721 | Run 2: 717 | Run 3: 715

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0, Test: Writes (Op/s, more is better)
Run 1: 44115 | Run 2: 44466 | Run 3: 44099

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, fewer is better)
Run 1: 138.71 | Run 2: 139.40 | Run 3: 139.84

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
Run 1: 16.66 | Run 2: 16.77 | Run 3: 16.79

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1, Test: Random Fill Sync (Op/s, more is better)
Run 1: 1200 | Run 3: 1209 (no result for run 2)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: Vulkan GPU - Model: mnasnet (ms, fewer is better)
Run 1: 12.66 | Run 2: 12.71 | Run 3: 12.75

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 1 - Mode: Read Only (TPS, more is better)
Run 1: 23279 | Run 2: 23117 | Run 3: 23140

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, more is better)
Run 1: 107936 | Run 2: 107239 | Run 3: 107199

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: CPU - Model: yolov4-tiny (ms, fewer is better)
Run 1: 32.87 | Run 2: 33.09 | Run 3: 32.96

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better)
Run 1: 0.927 | Run 2: 0.933 | Run 3: 0.933

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2, Model: mobilenet-v1-1.0 (ms, fewer is better)
Run 1: 4.045 | Run 2: 4.023 | Run 3: 4.020

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: CPU - Model: blazeface (ms, fewer is better)
Run 1: 1.72 | Run 2: 1.71 | Run 3: 1.72

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2, Model: inception-v3 (ms, fewer is better)
Run 1: 45.98 | Run 2: 45.88 | Run 3: 45.72

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 50 - Mode: Read Only (TPS, more is better)
Run 1: 109001 | Run 2: 108425 | Run 3: 108487

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.
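
To give a sense of the API being timed, below is a small compression round trip written against the classic C-Blosc C interface with the default blosclz codec; exact symbol names in the 2.0 series may differ, so treat this as an assumption rather than a drop-in example.

// Minimal C-Blosc compression/decompression sketch (illustrative; blosc1-style API assumed).
#include <vector>
#include "blosc.h"

int main() {
    blosc_init();
    blosc_set_compressor("blosclz");       // matches the compressor used in this benchmark

    const size_t n = 1000 * 1000;
    std::vector<float> src(n, 1.0f);
    const size_t nbytes = n * sizeof(float);
    std::vector<char> compressed(nbytes + BLOSC_MAX_OVERHEAD);
    std::vector<float> restored(n);

    // clevel 5, byte shuffle on, typesize = sizeof(float).
    int csize = blosc_compress(5, BLOSC_SHUFFLE, sizeof(float), nbytes,
                               src.data(), compressed.data(), compressed.size());
    if (csize > 0)
        blosc_decompress(compressed.data(), restored.data(), nbytes);

    blosc_destroy();
    return 0;
}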

C-Blosc 2.0, Compressor: blosclz (MB/s, more is better)
Run 1: 9718.9 | Run 2: 9770.1 | Run 3: 9728.2

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2, Model: mobilenetV3 (ms, fewer is better)
Run 1: 2.340 | Run 2: 2.336 | Run 3: 2.328

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better)
Run 1: 68.14 | Run 2: 67.80 | Run 3: 67.89

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12, Test: Finagle HTTP Requests (ms, fewer is better)
Run 1: 2288.7 | Run 2: 2277.4 | Run 3: 2286.3

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency (ms, fewer is better)
Run 1: 0.459 | Run 2: 0.461 | Run 3: 0.461

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: CPU - Model: mnasnet (ms, fewer is better)
Run 1: 5.07 | Run 2: 5.05 | Run 3: 5.07

NCNN 20210720, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
Run 1: 5.19 | Run 2: 5.17 | Run 3: 5.19

NCNN 20210720, Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
Run 1: 24.87 | Run 2: 24.87 | Run 3: 24.96

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: MobileNet v2 (ms, fewer is better)
Run 1: 367.14 | Run 2: 368.45 | Run 3: 367.86

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: CPU - Model: alexnet (ms, fewer is better)
Run 1: 17.78 | Run 2: 17.72 | Run 3: 17.73

NCNN 20210720, Target: CPU - Model: vgg16 (ms, fewer is better)
Run 1: 80.51 | Run 2: 80.57 | Run 3: 80.30

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12, Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better)
Run 1: 12080.8 | Run 2: 12119.5 | Run 3: 12106.3

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: Vulkan GPU - Model: vgg16 (ms, fewer is better)
Run 1: 196.16 | Run 2: 196.22 | Run 3: 196.74

NCNN 20210720, Target: Vulkan GPU - Model: mobilenet (ms, fewer is better)
Run 1: 37.44 | Run 2: 37.49 | Run 3: 37.38

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, more is better)
Run 1: 12815 | Run 2: 12786 | Run 3: 12780

YafaRay

YafaRay is an open-source physically based montecarlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.5.1, Total Time For Sample Scene (Seconds, fewer is better)
Run 1: 334.92 | Run 2: 334.05 | Run 3: 334.61

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, fewer is better)
Run 1: 3.902 | Run 2: 3.911 | Run 3: 3.912

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: CPU - Model: resnet18 (ms, fewer is better)
Run 1: 19.67 | Run 2: 19.68 | Run 3: 19.72

NCNN 20210720, Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
Run 1: 8.47 | Run 2: 8.46 | Run 3: 8.45

NCNN 20210720, Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
Run 1: 29.92 | Run 2: 29.98 | Run 3: 29.99

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: DenseNet (ms, fewer is better)
Run 1: 4252.26 | Run 2: 4261.73 | Run 3: 4258.15

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1, Test: Read Random Write Random (Op/s, more is better)
Run 1: 807681 | Run 3: 809382 (no result for run 2)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
Run 1: 33.87 | Run 2: 33.94 | Run 3: 33.89

NCNN 20210720, Target: Vulkan GPU - Model: resnet50 (ms, fewer is better)
Run 1: 72.08 | Run 2: 72.17 | Run 3: 72.03

NCNN 20210720, Target: CPU - Model: resnet50 (ms, fewer is better)
Run 1: 36.33 | Run 2: 36.40 | Run 3: 36.40

NCNN 20210720, Target: CPU - Model: regnety_400m (ms, fewer is better)
Run 1: 10.78 | Run 2: 10.76 | Run 3: 10.76

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: SqueezeNet v2 (ms, fewer is better)
Run 1: 76.86 | Run 2: 76.97 | Run 3: 76.83

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: CPU - Model: mobilenet (ms, fewer is better)
Run 1: 22.80 | Run 2: 22.84 | Run 3: 22.82

NCNN 20210720, Target: CPU - Model: googlenet (ms, fewer is better)
Run 1: 18.08 | Run 2: 18.10 | Run 3: 18.11

NCNN 20210720, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
Run 1: 6.08 | Run 2: 6.07 | Run 3: 6.07

NCNN 20210720, Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
Run 1: 27.49 | Run 2: 27.45 | Run 3: 27.45

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, fewer is better)
Run 1: 63.43 | Run 2: 63.40 | Run 3: 63.49

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, more is better)
Run 1: 788 | Run 2: 789 | Run 3: 788

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
Run 1: 324.61 | Run 2: 324.80 | Run 3: 324.61

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better)
Run 1: 42.27 | Run 2: 42.27 | Run 3: 42.25

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4, Input: Spaceship (FPS, more is better)
Run 1: 1.2 | Run 3: 1.2 (no result for run 2)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
Run 1: 4.36 | Run 2: 4.36 | Run 3: 4.36

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0, Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency (ms, fewer is better)
Run 1: 0.043 | Run 2: 0.043 | Run 3: 0.043

PostgreSQL pgbench 13.0, Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency (ms, fewer is better)
Run 1: 0.038 | Run 2: 0.038 | Run 3: 0.038

91 Results Shown

NCNN
Renaissance:
  Apache Spark PageRank
  Scala Dotty
NCNN
Renaissance
Facebook RocksDB
Renaissance:
  Genetic Algorithm Using Jenetics + Futures
  Savina Reactors.IO
  In-Memory Database Shootout
Apache Cassandra:
  Mixed 1:3
  Mixed 1:1
PostgreSQL pgbench:
  100 - 1 - Read Write - Average Latency
  100 - 1 - Read Write
  1 - 50 - Read Only - Average Latency
Facebook RocksDB
Renaissance
Apache Cassandra
PostgreSQL pgbench
Renaissance
Mobile Neural Network
PostgreSQL pgbench:
  1 - 100 - Read Only - Average Latency
  1 - 100 - Read Only
Mobile Neural Network
NCNN:
  Vulkan GPU - shufflenet-v2
  Vulkan GPU - alexnet
Mobile Neural Network
Renaissance
Facebook RocksDB:
  Seq Fill
  Rand Fill
PostgreSQL pgbench:
  1 - 1 - Read Write - Average Latency
  1 - 1 - Read Write
NCNN
Quantum ESPRESSO
PostgreSQL pgbench
Facebook RocksDB
PostgreSQL pgbench:
  100 - 100 - Read Write - Average Latency
  100 - 100 - Read Write
Mobile Neural Network
PostgreSQL pgbench
Apache Cassandra
PostgreSQL pgbench
NCNN
Facebook RocksDB
NCNN
PostgreSQL pgbench:
  100 - 1 - Read Only
  100 - 100 - Read Only
NCNN
PostgreSQL pgbench
Mobile Neural Network
NCNN
Mobile Neural Network
PostgreSQL pgbench
C-Blosc
Mobile Neural Network
NCNN
Renaissance
PostgreSQL pgbench
NCNN:
  CPU - mnasnet
  CPU-v3-v3 - mobilenet-v3
  Vulkan GPU - efficientnet-b0
TNN
NCNN:
  CPU - alexnet
  CPU - vgg16
Renaissance
NCNN:
  Vulkan GPU - vgg16
  Vulkan GPU - mobilenet
PostgreSQL pgbench
YafaRay
PostgreSQL pgbench
NCNN:
  CPU - resnet18
  CPU - efficientnet-b0
  Vulkan GPU - resnet18
TNN
Facebook RocksDB
NCNN:
  Vulkan GPU - googlenet
  Vulkan GPU - resnet50
  CPU - resnet50
  CPU - regnety_400m
TNN
NCNN:
  CPU - mobilenet
  CPU - googlenet
  CPU-v2-v2 - mobilenet-v2
  CPU - squeezenet_ssd
PostgreSQL pgbench:
  1 - 50 - Read Write - Average Latency
  1 - 50 - Read Write
TNN
NCNN
Natron
NCNN
PostgreSQL pgbench:
  100 - 1 - Read Only - Average Latency
  1 - 1 - Read Only - Average Latency