3970x sep 2023

Tests for a future article. AMD Ryzen Threadripper 3970X 32-Core testing with an ASUS ROG ZENITH II EXTREME (1603 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2309184-NE-3970XSEP248
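As a minimal sketch of how a comparison session might look (the pts/pgbench profile is just one example of a test used in this file; prompts and test selections will vary with your installation):

    # Install and run a single test profile used in this result file
    phoronix-test-suite install pts/pgbench
    phoronix-test-suite benchmark pts/pgbench

    # Or run the full comparison against this result file, as noted above
    phoronix-test-suite benchmark 2309184-NE-3970XSEP248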
Result Identifier: a - Date: September 17 2023 - Run Test Duration: 23 Hours, 4 Minutes


3970x sep 2023 - OpenBenchmarking.org - Phoronix Test Suite

Processor: AMD Ryzen Threadripper 3970X 32-Core @ 3.70GHz (32 Cores / 64 Threads)
Motherboard: ASUS ROG ZENITH II EXTREME (1603 BIOS)
Chipset: AMD Starship/Matisse
Memory: 64GB
Disk: Samsung SSD 980 PRO 500GB
Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: ASUS VP28U
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc7-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47)
Vulkan: 1.2.204
Compiler: GCC 11.4.0
File-System: ext4
Screen Resolution: 3840x2160

3970x Sep 2023 Benchmarks - System Logs
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0x830104d
- OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)
- Python 3.10.12
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Condensed summary of all 3970x sep 2023 results for identifier "a" (one value per test configuration); the individual results are presented in the per-test listings below.

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.
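For context, a rough sketch of the kind of invocation behind the Rename/Delete/Open/Create/File Status operations below; exact arguments depend on the Hadoop 3.3.6 install and the Phoronix test profile, so treat these as illustrative:

    # Run the name-node throughput benchmark directly via the hadoop launcher
    hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -op rename -threads 20 -files 10000000
    hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -op fileStatus -threads 100 -files 10000000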

Apache Hadoop 3.3.6, Ops per sec (more is better):
Operation: Rename - Threads: 20 - Files: 10000000: a = 3789
Operation: Delete - Threads: 20 - Files: 10000000: a = 3926
Operation: Open - Threads: 20 - Files: 10000000: a = 89635
Operation: File Status - Threads: 20 - Files: 10000000: a = 489237
Operation: Delete - Threads: 50 - Files: 10000000: a = 9635
Operation: Rename - Threads: 50 - Files: 10000000: a = 8889
Operation: Create - Threads: 20 - Files: 10000000: a = 3752
Operation: Open - Threads: 50 - Files: 10000000: a = 158995
Operation: File Status - Threads: 50 - Files: 10000000: a = 307475
Operation: Rename - Threads: 100 - Files: 10000000: a = 15992
Operation: Delete - Threads: 100 - Files: 10000000: a = 18086

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
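The bulk insertion pattern being measured maps onto CouchDB's _bulk_docs endpoint; a hand-rolled equivalent of a single insert round might look like the following (database name, credentials, and document payload are placeholders; the actual harness controls bulk size, insert count, and rounds):

    # Create a database and POST a batch of documents in one request
    curl -X PUT http://admin:password@127.0.0.1:5984/bench
    curl -X POST http://admin:password@127.0.0.1:5984/bench/_bulk_docs \
         -H 'Content-Type: application/json' \
         -d '{"docs":[{"value":1},{"value":2},{"value":3}]}'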

Apache CouchDB 3.3.2 - Bulk Size: 500 - Inserts: 3000 - Rounds: 30: a = 1252.19 seconds (fewer is better)
Build flags: (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6, Ops per sec (more is better):
Operation: File Status - Threads: 100 - Files: 10000000: a = 1490535
Operation: Open - Threads: 100 - Files: 10000000: a = 153128
Operation: Create - Threads: 50 - Files: 10000000: a = 8861

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) open-source compiler. Learn more via the OpenBenchmarking.org test page.
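The timed build is essentially a stock out-of-tree compile of GCC; a simplified sketch of such a build (the version, prerequisites step, and configure options here are illustrative, not the exact ones used by the test profile) is:

    # Out-of-tree build of GCC, timed end to end
    tar xf gcc-13.2.0.tar.xz && cd gcc-13.2.0
    ./contrib/download_prerequisites
    mkdir build && cd build
    ../configure --disable-multilib --enable-languages=c,c++
    time make -j$(nproc)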

Timed GCC Compilation 13.2 - Time To Compile: a = 984.35 seconds (fewer is better)

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.3.2 - Bulk Size: 300 - Inserts: 3000 - Rounds: 30: a = 872.94 seconds (fewer is better)
Build flags: (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 20 - Files: 1000000: a = 3712 Ops per sec (more is better)

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.36 - VGR Performance Metric: a = 537121 (more is better)
Build flags: (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6, Ops per sec (more is better):
Operation: Rename - Threads: 20 - Files: 1000000: a = 3733
Operation: Create - Threads: 100 - Files: 10000000: a = 16006

OpenRadioss

OpenRadioss 2023.09.15 - Model: Chrysler Neon 1M: a = 547.81 seconds (fewer is better)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6, Ops per sec (more is better):
Operation: Open - Threads: 20 - Files: 1000000: a = 1226994
Operation: File Status - Threads: 20 - Files: 1000000: a = 1960784

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.3.2, seconds (fewer is better):
Bulk Size: 100 - Inserts: 3000 - Rounds: 30: a = 485.93
Bulk Size: 500 - Inserts: 1000 - Rounds: 30: a = 405.37
Build flags: (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6, Ops per sec (more is better):
Operation: Rename - Threads: 50 - Files: 1000000: a = 8910
Operation: Delete - Threads: 50 - Files: 1000000: a = 11088
Operation: Create - Threads: 20 - Files: 1000000: a = 3685

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.3.2 - Bulk Size: 300 - Inserts: 1000 - Rounds: 30: a = 279.30 seconds (fewer is better)
Build flags: (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6, Ops per sec (more is better):
Operation: Open - Threads: 50 - Files: 1000000: a = 113714
Operation: File Status - Threads: 50 - Files: 1000000: a = 2288330

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
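As a rough guide to what the Scaling Factor / Clients / Mode options below mean in pgbench terms, an equivalent manual run looks like the following (database name, thread count, and duration are arbitrary here; the test profile manages its own settings):

    # Initialize a pgbench database at scaling factor 1000, then run a read-only test with 100 clients
    createdb pgbench_test
    pgbench -i -s 1000 pgbench_test
    pgbench -c 100 -j 32 -S -T 60 pgbench_test   # -S = SELECT-only ("Read Only"); omit it for the read/write test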

PostgreSQL 16 (average latency in ms, fewer is better; throughput in TPS, more is better):
Scaling Factor: 1000 - Clients: 100 - Mode: Read Only - Average Latency: a = 0.097 ms
Scaling Factor: 1000 - Clients: 100 - Mode: Read Only: a = 1026569 TPS
Scaling Factor: 1000 - Clients: 800 - Mode: Read Only - Average Latency: a = 0.783 ms
Scaling Factor: 1000 - Clients: 800 - Mode: Read Only: a = 1022077 TPS
Scaling Factor: 1000 - Clients: 500 - Mode: Read Only - Average Latency: a = 0.473 ms
Scaling Factor: 1000 - Clients: 500 - Mode: Read Only: a = 1056453 TPS
Scaling Factor: 1000 - Clients: 250 - Mode: Read Only - Average Latency: a = 0.226 ms
Scaling Factor: 1000 - Clients: 250 - Mode: Read Only: a = 1107736 TPS
Scaling Factor: 1000 - Clients: 800 - Mode: Read Write - Average Latency: a = 69.70 ms
Scaling Factor: 1000 - Clients: 800 - Mode: Read Write: a = 11478 TPS
Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write - Average Latency: a = 82.73 ms
Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write: a = 12087 TPS
Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only - Average Latency: a = 0.986 ms
Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only: a = 1014537 TPS
Scaling Factor: 1000 - Clients: 500 - Mode: Read Write - Average Latency: a = 47.69 ms
Scaling Factor: 1000 - Clients: 500 - Mode: Read Write: a = 10485 TPS
Scaling Factor: 1000 - Clients: 100 - Mode: Read Write - Average Latency: a = 11.07 ms
Scaling Factor: 1000 - Clients: 100 - Mode: Read Write: a = 9037 TPS
Scaling Factor: 1000 - Clients: 250 - Mode: Read Write - Average Latency: a = 22.53 ms
Scaling Factor: 1000 - Clients: 250 - Mode: Read Write: a = 11099 TPS
Build flags: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenRadioss

OpenRadioss 2023.09.15 - Model: INIVOL and Fluid Structure Interaction Drop Container: a = 216.49 seconds (fewer is better)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6, Ops per sec (more is better):
Operation: Rename - Threads: 100 - Files: 1000000: a = 16064
Operation: Delete - Threads: 100 - Files: 1000000: a = 18208

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the iot-benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.
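A minimal sketch of an iot-benchmark configuration matching one of the runs below; the property names are assumptions based on the option labels shown here (Device Count, Sensor Count, Client Number, Batch Size Per Write) and may differ slightly between iot-benchmark releases:

    # Write an illustrative config.properties excerpt for the 800/100/800/400 run
    cat > config.properties <<'EOF'
    DEVICE_NUMBER=800
    SENSOR_NUMBER=800
    CLIENT_NUMBER=400
    BATCH_SIZE_PER_WRITE=100
    EOF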

Apache IoTDB 1.2 (average latency, fewer is better; point/sec, more is better):
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400: a = 355.25 average latency (MAX: 36343.06)
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400: a = 81231176 point/sec
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100: a = 95.15 average latency (MAX: 24703.91)
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100: a = 79844854 point/sec

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 (ms/batch, fewer is better; items/sec, more is better):
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream: a = 8.026 ms/batch
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream: a = 1987.82 items/sec
Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream: a = 469.02 ms/batch
Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream: a = 34.11 items/sec

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6, Ops per sec (more is better):
Operation: Open - Threads: 100 - Files: 1000000: a = 284333
Operation: File Status - Threads: 100 - Files: 1000000: a = 2074689
Operation: Create - Threads: 50 - Files: 1000000: a = 8839

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.3.2 - Bulk Size: 100 - Inserts: 1000 - Rounds: 30: a = 147.60 seconds (fewer is better)
Build flags: (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 (average latency in ms, fewer is better; throughput in TPS, more is better):
Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency: a = 0.068 ms
Scaling Factor: 100 - Clients: 100 - Mode: Read Only: a = 1466898 TPS
Build flags: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the iot-benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 (average latency, fewer is better; point/sec, more is better):
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100: a = 60.12 average latency (MAX: 25378.76)
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100: a = 77801658 point/sec
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400: a = 339.42 average latency (MAX: 30587.68)
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400: a = 77131211 point/sec

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 (average latency in ms, fewer is better; throughput in TPS, more is better):
Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency: a = 0.698 ms
Scaling Factor: 100 - Clients: 1000 - Mode: Read Only: a = 1433045 TPS
Build flags: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the iot-benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 (average latency, fewer is better; point/sec, more is better):
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400: a = 233.73 average latency (MAX: 29440.62)
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400: a = 78451272 point/sec

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 (average latency in ms, fewer is better; throughput in TPS, more is better):
Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency: a = 0.554 ms
Scaling Factor: 100 - Clients: 800 - Mode: Read Only: a = 1443932 TPS
Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency: a = 72.3 ms
Scaling Factor: 100 - Clients: 800 - Mode: Read Write: a = 11065 TPS
Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency: a = 89.77 ms
Scaling Factor: 100 - Clients: 1000 - Mode: Read Write: a = 11140 TPS
Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency: a = 0.335 ms
Scaling Factor: 100 - Clients: 500 - Mode: Read Only: a = 1493244 TPS
Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency: a = 45.59 ms
Scaling Factor: 100 - Clients: 500 - Mode: Read Write: a = 10968 TPS
Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency: a = 24.07 ms
Scaling Factor: 100 - Clients: 250 - Mode: Read Write: a = 10386 TPS
Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency: a = 12.07 ms
Scaling Factor: 100 - Clients: 100 - Mode: Read Write: a = 8284 TPS
Build flags: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenRadioss

OpenRadioss 2023.09.15 - Model: Bird Strike on Windshield: a = 139.13 seconds (fewer is better)

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 (average latency in ms, fewer is better; throughput in TPS, more is better):
Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency: a = 0.169 ms
Scaling Factor: 100 - Clients: 250 - Mode: Read Only: a = 1481168 TPS
Build flags: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the iot-benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 (average latency, fewer is better; point/sec, more is better):
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100: a = 93.94 average latency (MAX: 10678)
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100: a = 79688165 point/sec

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 (average latency in ms, fewer is better; throughput in TPS, more is better):
Scaling Factor: 1 - Clients: 1000 - Mode: Read Write - Average Latency: a = 3762.87 ms
Scaling Factor: 1 - Clients: 1000 - Mode: Read Write: a = 266 TPS
Scaling Factor: 1 - Clients: 800 - Mode: Read Write - Average Latency: a = 2523.27 ms
Scaling Factor: 1 - Clients: 800 - Mode: Read Write: a = 317 TPS
Scaling Factor: 1 - Clients: 500 - Mode: Read Write - Average Latency: a = 1517.75 ms
Scaling Factor: 1 - Clients: 500 - Mode: Read Write: a = 329 TPS
Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average Latency: a = 630.71 ms
Scaling Factor: 1 - Clients: 250 - Mode: Read Write: a = 396 TPS
Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency: a = 175.94 ms
Scaling Factor: 1 - Clients: 100 - Mode: Read Write: a = 568 TPS
Scaling Factor: 1 - Clients: 800 - Mode: Read Only - Average Latency: a = 0.522 ms
Scaling Factor: 1 - Clients: 800 - Mode: Read Only: a = 1533021 TPS
Scaling Factor: 1 - Clients: 1000 - Mode: Read Only - Average Latency: a = 0.656 ms
Scaling Factor: 1 - Clients: 1000 - Mode: Read Only: a = 1524497 TPS
Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency: a = 0.065 ms
Scaling Factor: 1 - Clients: 100 - Mode: Read Only: a = 1540137 TPS
Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average Latency: a = 0.159 ms
Scaling Factor: 1 - Clients: 250 - Mode: Read Only: a = 1576869 TPS
Scaling Factor: 1 - Clients: 500 - Mode: Read Only - Average Latency: a = 0.319 ms
Scaling Factor: 1 - Clients: 500 - Mode: Read Only: a = 1567522 TPS
Build flags: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.
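A minimal sketch of the kind of cassandra-stress write workload this covers; the operation count, thread count, and node address are illustrative and not the exact values used by the test profile:

    # cassandra-stress write workload against a local node
    cassandra-stress write n=1000000 -rate threads=64 -node 127.0.0.1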

Apache Cassandra 4.1.3 - Test: Writes: a = 263638 Op/s (more is better)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6, Ops per sec (more is better):
Operation: Rename - Threads: 20 - Files: 100000: a = 3571
Operation: Delete - Threads: 20 - Files: 100000: a = 3735

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the iot-benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 (average latency, fewer is better; point/sec, more is better):
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400: a = 248.51 average latency (MAX: 29910.26)
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400: a = 66302152 point/sec

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
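A rough sketch of an equivalent standalone encode with the vvencapp frontend; the input clip name and output path are placeholders, and the test profile's exact command line may differ:

    # Encode a 4K clip with the "fast" preset used in the result below
    vvencapp --preset fast -i Bosphorus_3840x2160.y4m -o bosphorus_fast.266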

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast: a = 5.468 Frames Per Second (more is better)
Build flags: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the iot-benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 (average latency, fewer is better; point/sec, more is better):
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100: a = 65.93 average latency (MAX: 11163.91)
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100: a = 69052926 point/sec

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 (ms/batch, fewer is better; items/sec, more is better):
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream: a = 45.99 ms/batch
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream: a = 347.30 items/sec

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the iot-benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 (average latency, fewer is better; point/sec, more is better):
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400: a = 130.54 average latency (MAX: 30310.14)
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400: a = 53828947 point/sec

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Create - Threads: 100 - Files: 1000000: a = 16260 Ops per sec (more is better)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the iot-benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 (average latency, fewer is better; point/sec, more is better):
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100: a = 34.76 average latency (MAX: 26220.14)
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100: a = 53756801 point/sec

OpenRadioss

OpenRadioss 2023.09.15 - Model: Bumper Beam: a = 94.37 seconds (fewer is better)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6, Ops per sec (more is better):
Operation: File Status - Threads: 20 - Files: 100000: a = 149925
Operation: Open - Threads: 20 - Files: 100000: a = 436681

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the iot-benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 (average latency, fewer is better; point/sec, more is better):
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100: a = 112.37 average latency (MAX: 24952.79)
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100: a = 60748076 point/sec

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
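The numbers below come from ncnn's bundled benchmark tool; a sketch of how it is typically invoked (thread and loop counts here are illustrative, not the profile's exact values) is:

    # benchncnn usage: [loop count] [num threads] [powersave] [gpu device, -1 = CPU] [cooling down]
    ./benchncnn 10 32 0 -1 0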

NCNN 20230517, ms (fewer is better):
Target: CPU - Model: FastestDet: a = 7.61 (MIN: 7.49 / MAX: 8.34)
Target: CPU - Model: vision_transformer: a = 52.81 (MIN: 52.34 / MAX: 56.65)
Target: CPU - Model: regnety_400m: a = 19.01 (MIN: 18.5 / MAX: 19.75)
Target: CPU - Model: squeezenet_ssd: a = 12.43 (MIN: 12.27 / MAX: 14.55)
Target: CPU - Model: yolov4-tiny: a = 23.28 (MIN: 21.99 / MAX: 102.15)
Target: CPU - Model: resnet50: a = 17.13 (MIN: 16.86 / MAX: 17.91)
Target: CPU - Model: alexnet: a = 7.74 (MIN: 7.22 / MAX: 8.71)
Target: CPU - Model: resnet18: a = 9.69 (MIN: 9.48 / MAX: 11.5)
Target: CPU - Model: vgg16: a = 29.24 (MIN: 28.45 / MAX: 31.57)
Target: CPU - Model: googlenet: a = 14.2 (MIN: 14.03 / MAX: 16.73)
Target: CPU - Model: blazeface: a = 2.86 (MIN: 2.78 / MAX: 3.46)
Target: CPU - Model: efficientnet-b0: a = 8.07 (MIN: 7.98 / MAX: 10.53)
Target: CPU - Model: mnasnet: a = 4.97 (MIN: 4.88 / MAX: 6.04)
Target: CPU - Model: shufflenet-v2: a = 7.05 (MIN: 6.87 / MAX: 7.65)
Target: CPU-v3-v3 - Model: mobilenet-v3: a = 5.21 (MIN: 5.15 / MAX: 5.79)
Target: CPU-v2-v2 - Model: mobilenet-v2: a = 5.52 (MIN: 5.41 / MAX: 8.12)
Target: CPU - Model: mobilenet: a = 14.02 (MIN: 13.89 / MAX: 14.62)
Build flags: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the iot-benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 (average latency, fewer is better; point/sec, more is better):
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400: a = 160.29 average latency (MAX: 30002.47)
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400: a = 41758329 point/sec
Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400: a = 87.3 average latency (MAX: 29090.52)
Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400: a = 3333342 point/sec
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100: a = 98.48 average latency (MAX: 24817.64)
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100: a = 44916090 point/sec
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100: a = 42.19 average latency (MAX: 13129.29)
Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100: a = 43282315 point/sec

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
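For reference, the encoder speed settings covered below map onto avifenc's --speed option (0 = slowest/highest effort, 10 = fastest); the input and output filenames here are placeholders:

    # JPEG -> AVIF at two of the tested speed settings
    avifenc --speed 0 input.jpg output_s0.avif
    avifenc --speed 6 input.jpg output_s6.avif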

libavif avifenc 1.0 - Encoder Speed: 0: a = 79.61 seconds (fewer is better)
Build flags: (CXX) g++ options: -O3 -fPIC -lm

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the iot-benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 (average latency, fewer is better; point/sec, more is better):
Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100: a = 23.05 average latency (MAX: 24688.91)
Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100: a = 3256331 point/sec
Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400: a = 120.74 average latency (MAX: 27670.84)
Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400: a = 2307967 point/sec
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100: a = 157.62 average latency (MAX: 27019.65)
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100: a = 39801968 point/sec
Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400: a = 78.33 average latency (MAX: 28777.24)
Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400: a = 2292358 point/sec
Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100: a = 20.78 average latency (MAX: 24672.59)
Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100: a = 2250548 point/sec
Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400: a = 109.8 average latency (MAX: 27389.67)
Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400: a = 631664 point/sec
Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400: a = 73 average latency (MAX: 28730.9)
Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400: a = 953899 point/sec

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 50 - Files: 100000: a = 8203 Ops per sec (more is better)

OpenRadioss

OpenRadioss 2023.09.15 - Model: Rubber O-Ring Seal Installation - Seconds, Fewer Is Better: a = 66.03

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - Average Latency, Fewer Is Better: a = 19.61 (MAX: 24718.57)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - point/sec, More Is Better: a = 952128
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400 - Average Latency, Fewer Is Better: a = 115.25 (MAX: 27893.69)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400 - point/sec, More Is Better: a = 1520487

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
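
The Clients and Set To Get Ratio options in these results correspond to memtier_benchmark command-line flags. A minimal sketch against a local Redis instance, assuming the default host/port and an arbitrary run time (neither is taken from this result file), would look roughly like:

    memtier_benchmark --protocol=redis --server=127.0.0.1 --port=6379 --clients=100 --ratio=1:5 --test-time=60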

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5

a: The test run did not produce a result.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10

a: The test run did not produce a result.

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 50 - Files: 100000 - Ops per sec, More Is Better: a = 9132

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1

a: The test run did not produce a result.

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 - Average Latency, Fewer Is Better: a = 147.22 (MAX: 26885.03)
Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 - point/sec, More Is Better: a = 27615777
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - Average Latency, Fewer Is Better: a = 77.25 (MAX: 25157.99)
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - point/sec, More Is Better: a = 22807204
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - Average Latency, Fewer Is Better: a = 32.26 (MAX: 13249.76)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - point/sec, More Is Better: a = 2294842

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 - Ops/sec, More Is Better: a = 2147272.63 | 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - Average Latency, Fewer Is Better: a = 30.83 (MAX: 11938.33)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - point/sec, More Is Better: a = 1477700

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 - Ops/sec, More Is Better: a = 2191452.33 | 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - Average Latency, Fewer Is Better: a = 28.32 (MAX: 13253.97)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - point/sec, More Is Better: a = 650057

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1 - Ops/sec, More Is Better: a = 1943330.63
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 - Ops/sec, More Is Better: a = 1962532.28
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 - Ops/sec, More Is Better: a = 2127101.53
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 - Ops/sec, More Is Better: a = 2085593.65
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - Average Latency, Fewer Is Better: a = 128.15 (MAX: 26464.6)
Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - point/sec, More Is Better: a = 12720025
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - Average Latency, Fewer Is Better: a = 65.05 (MAX: 24428.16)
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - point/sec, More Is Better: a = 266795

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Create - Threads: 20 - Files: 100000 - Ops per sec, More Is Better: a = 3567

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - Average Latency, Fewer Is Better: a = 65.67 (MAX: 24490.15)
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - point/sec, More Is Better: a = 675295
Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - Average Latency, Fewer Is Better: a = 115.63 (MAX: 26551.59)
Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - point/sec, More Is Better: a = 344892
Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - Average Latency, Fewer Is Better: a = 122.28 (MAX: 26411.88)
Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - point/sec, More Is Better: a = 546770
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - Average Latency, Fewer Is Better: a = 69.25 (MAX: 24498)
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - point/sec, More Is Better: a = 1044528

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a = 189.99
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a = 84.20

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - Average Latency, Fewer Is Better: a = 117.16 (MAX: 26308.47)
Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - point/sec, More Is Better: a = 139562

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a = 490.66
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a = 32.35

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Open - Threads: 50 - Files: 100000 - Ops per sec, More Is Better: a = 420168

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a = 62.68
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a = 255.12

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: File Status - Threads: 50 - Files: 100000 - Ops per sec, More Is Better: a = 724638

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a = 24.68
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a = 647.55
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a = 129.90
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a = 123.00
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a = 550.25
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a = 28.94
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a = 553.19
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a = 28.81

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
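
The Video Preset names in the result titles map to vvencapp's --preset option. As a hedged sketch (the input file name, resolution, and frame rate below are placeholders, not the values used by the test profile), the Bosphorus 4K - Faster case corresponds roughly to:

    vvencapp -i bosphorus_4k.yuv -s 3840x2160 -r 60 --preset faster -o bosphorus.266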

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster - Frames Per Second, More Is Better: a = 10.99 | 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Apache Hadoop

This is a benchmark of the Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 100 - Files: 100000 - Ops per sec, More Is Better: a = 17599
Apache Hadoop 3.3.6 - Operation: Rename - Threads: 100 - Files: 100000 - Ops per sec, More Is Better: a = 26157
Apache Hadoop 3.3.6 - Operation: File Status - Threads: 100 - Files: 100000 - Ops per sec, More Is Better: a = 396825
Apache Hadoop 3.3.6 - Operation: Open - Threads: 100 - Files: 100000 - Ops per sec, More Is Better: a = 502513
Apache Hadoop 3.3.6 - Operation: Create - Threads: 50 - Files: 100000 - Ops per sec, More Is Better: a = 8593

OpenRadioss

OpenRadioss 2023.09.15 - Model: Cell Phone Drop Test - Seconds, Fewer Is Better: a = 42.11

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 4K - Frames Per Second, More Is Better: a = 3.81 | 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Fast - Frames Per Second, More Is Better: a = 13.92 | 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Create - Threads: 100 - Files: 100000 - Ops per sec, More Is Better: a = 15373

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0 - Encoder Speed: 2 - Seconds, Fewer Is Better: a = 42.35 | 1. (CXX) g++ options: -O3 -fPIC -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a = 68.15
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a = 234.69
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a = 101.88
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a = 156.91
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a = 104.69
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a = 152.61
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a = 49.85
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a = 320.85
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a = 49.58
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a = 322.54

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
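
Each result below exercises a single stress-ng stressor and is reported in bogo operations per second. As an illustration (the instance count and run time here are arbitrary, not the values used by the test profile), an individual stressor such as CPU Stress or Matrix Math can be run on its own with:

    stress-ng --cpu 64 --timeout 60s --metrics-brief
    stress-ng --matrix 64 --timeout 60s --metrics-brief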

Stress-NG 0.16.04 - Test: IO_uring - Bogo Ops/s, More Is Better: a = 442330.43
Stress-NG 0.16.04 - Test: Malloc - Bogo Ops/s, More Is Better: a = 98198159.73
Stress-NG 0.16.04 - Test: MEMFD - Bogo Ops/s, More Is Better: a = 393.05
Stress-NG 0.16.04 - Test: Vector Math - Bogo Ops/s, More Is Better: a = 224058.27
Stress-NG 0.16.04 - Test: Cloning - Bogo Ops/s, More Is Better: a = 3222.12
Stress-NG 0.16.04 - Test: MMAP - Bogo Ops/s, More Is Better: a = 446.68
Stress-NG 0.16.04 - Test: x86_64 RdRand - Bogo Ops/s, More Is Better: a = 4453.62
Stress-NG 0.16.04 - Test: Atomic - Bogo Ops/s, More Is Better: a = 479.35
Stress-NG 0.16.04 - Test: Zlib - Bogo Ops/s, More Is Better: a = 3502.5
Stress-NG 0.16.04 - Test: CPU Cache - Bogo Ops/s, More Is Better: a = 1617463.87
Stress-NG 0.16.04 - Test: AVL Tree - Bogo Ops/s, More Is Better: a = 373.47
Stress-NG 0.16.04 - Test: Pthread - Bogo Ops/s, More Is Better: a = 128780.28
Stress-NG 0.16.04 - Test: Pipe - Bogo Ops/s, More Is Better: a = 13851265.33
Stress-NG 0.16.04 - Test: Context Switching - Bogo Ops/s, More Is Better: a = 10942678.54
Stress-NG 0.16.04 - Test: SENDFILE - Bogo Ops/s, More Is Better: a = 530872.63
Stress-NG 0.16.04 - Test: NUMA - Bogo Ops/s, More Is Better: a = 754.61
Stress-NG 0.16.04 - Test: Matrix 3D Math - Bogo Ops/s, More Is Better: a = 2795.68
Stress-NG 0.16.04 - Test: Vector Floating Point - Bogo Ops/s, More Is Better: a = 94009.97
Stress-NG 0.16.04 - Test: Socket Activity - Bogo Ops/s, More Is Better: a = 9538.61
Stress-NG 0.16.04 - Test: Futex - Bogo Ops/s, More Is Better: a = 4441035
Stress-NG 0.16.04 - Test: Mixed Scheduler - Bogo Ops/s, More Is Better: a = 34353.13
Stress-NG 0.16.04 - Test: Vector Shuffle - Bogo Ops/s, More Is Better: a = 22868.27
Stress-NG 0.16.04 - Test: Floating Point - Bogo Ops/s, More Is Better: a = 11278.99
Stress-NG 0.16.04 - Test: Function Call - Bogo Ops/s, More Is Better: a = 24195.96
Stress-NG 0.16.04 - Test: System V Message Passing - Bogo Ops/s, More Is Better: a = 10638598.45
Stress-NG 0.16.04 - Test: Glibc Qsort Data Sorting - Bogo Ops/s, More Is Better: a = 946.62
Stress-NG 0.16.04 - Test: Memory Copying - Bogo Ops/s, More Is Better: a = 12458.08
Stress-NG 0.16.04 - Test: AVX-512 VNNI - Bogo Ops/s, More Is Better: a = 1396743.12
Stress-NG 0.16.04 - Test: Matrix Math - Bogo Ops/s, More Is Better: a = 199130.4
Stress-NG 0.16.04 - Test: Semaphores - Bogo Ops/s, More Is Better: a = 107860946.05
Stress-NG 0.16.04 - Test: CPU Stress - Bogo Ops/s, More Is Better: a = 82381.09
Stress-NG 0.16.04 - Test: Forking - Bogo Ops/s, More Is Better: a = 51559.3
Stress-NG 0.16.04 - Test: Crypto - Bogo Ops/s, More Is Better: a = 78148.34
Stress-NG 0.16.04 - Test: Mutex - Bogo Ops/s, More Is Better: a = 18884754.77
Stress-NG 0.16.04 - Test: Poll - Bogo Ops/s, More Is Better: a = 4086609.46
Stress-NG 0.16.04 - Test: Glibc C String Functions - Bogo Ops/s, More Is Better: a = 31979929.5
Stress-NG 0.16.04 - Test: Wide Vector Math - Bogo Ops/s, More Is Better: a = 1498606.48
Stress-NG 0.16.04 - Test: Hash - Bogo Ops/s, More Is Better: a = 7624024.19
Stress-NG 0.16.04 - Test: Fused Multiply-Add - Bogo Ops/s, More Is Better: a = 33465006.28
1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz (applies to all Stress-NG results above)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Faster - Frames Per Second, More Is Better: a = 25.04 | 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p - Frames Per Second, More Is Better: a = 8.942
SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 4K - Frames Per Second, More Is Better: a = 64.25
SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p - Frames Per Second, More Is Better: a = 81.06
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0 - Encoder Speed: 6, Lossless - Seconds, Fewer Is Better: a = 6.746 | 1. (CXX) g++ options: -O3 -fPIC -lm

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 4K - Frames Per Second, More Is Better: a = 126.78
SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 4K - Frames Per Second, More Is Better: a = 129.99
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0 - Encoder Speed: 10, Lossless - Seconds, Fewer Is Better: a = 4.882
libavif avifenc 1.0 - Encoder Speed: 6 - Seconds, Fewer Is Better: a = 3.333
1. (CXX) g++ options: -O3 -fPIC -lm

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p - Frames Per Second, More Is Better: a = 316.79
SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p - Frames Per Second, More Is Better: a = 368.88
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

301 Results Shown

Apache Hadoop:
  Rename - 20 - 10000000
  Delete - 20 - 10000000
  Open - 20 - 10000000
  File Status - 20 - 10000000
  Delete - 50 - 10000000
  Rename - 50 - 10000000
  Create - 20 - 10000000
  Open - 50 - 10000000
  File Status - 50 - 10000000
  Rename - 100 - 10000000
  Delete - 100 - 10000000
Apache CouchDB
Apache Hadoop:
  File Status - 100 - 10000000
  Open - 100 - 10000000
  Create - 50 - 10000000
Timed GCC Compilation
Apache CouchDB
Apache Hadoop
BRL-CAD
Apache Hadoop:
  Rename - 20 - 1000000
  Create - 100 - 10000000
OpenRadioss
Apache Hadoop:
  Open - 20 - 1000000
  File Status - 20 - 1000000
Apache CouchDB:
  100 - 3000 - 30
  500 - 1000 - 30
Apache Hadoop:
  Rename - 50 - 1000000
  Delete - 50 - 1000000
  Create - 20 - 1000000
Apache CouchDB
Apache Hadoop:
  Open - 50 - 1000000
  File Status - 50 - 1000000
PostgreSQL:
  1000 - 100 - Read Only - Average Latency
  1000 - 100 - Read Only
  1000 - 800 - Read Only - Average Latency
  1000 - 800 - Read Only
  1000 - 500 - Read Only - Average Latency
  1000 - 500 - Read Only
  1000 - 250 - Read Only - Average Latency
  1000 - 250 - Read Only
  1000 - 800 - Read Write - Average Latency
  1000 - 800 - Read Write
  1000 - 1000 - Read Write - Average Latency
  1000 - 1000 - Read Write
  1000 - 1000 - Read Only - Average Latency
  1000 - 1000 - Read Only
  1000 - 500 - Read Write - Average Latency
  1000 - 500 - Read Write
  1000 - 100 - Read Write - Average Latency
  1000 - 100 - Read Write
  1000 - 250 - Read Write - Average Latency
  1000 - 250 - Read Write
OpenRadioss
Apache Hadoop:
  Rename - 100 - 1000000
  Delete - 100 - 1000000
Apache IoTDB:
  800 - 100 - 800 - 400:
    Average Latency
    point/sec
  800 - 100 - 800 - 100:
    Average Latency
    point/sec
Neural Magic DeepSparse:
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache Hadoop:
  Open - 100 - 1000000
  File Status - 100 - 1000000
  Create - 50 - 1000000
Apache CouchDB
PostgreSQL:
  100 - 100 - Read Only - Average Latency
  100 - 100 - Read Only
Apache IoTDB:
  800 - 100 - 500 - 100:
    Average Latency
    point/sec
  500 - 100 - 800 - 400:
    Average Latency
    point/sec
PostgreSQL:
  100 - 1000 - Read Only - Average Latency
  100 - 1000 - Read Only
Apache IoTDB:
  800 - 100 - 500 - 400:
    Average Latency
    point/sec
PostgreSQL:
  100 - 800 - Read Only - Average Latency
  100 - 800 - Read Only
  100 - 800 - Read Write - Average Latency
  100 - 800 - Read Write
  100 - 1000 - Read Write - Average Latency
  100 - 1000 - Read Write
  100 - 500 - Read Only - Average Latency
  100 - 500 - Read Only
  100 - 500 - Read Write - Average Latency
  100 - 500 - Read Write
  100 - 250 - Read Write - Average Latency
  100 - 250 - Read Write
  100 - 100 - Read Write - Average Latency
  100 - 100 - Read Write
OpenRadioss
PostgreSQL:
  100 - 250 - Read Only - Average Latency
  100 - 250 - Read Only
Apache IoTDB:
  500 - 100 - 800 - 100:
    Average Latency
    point/sec
PostgreSQL:
  1 - 1000 - Read Write - Average Latency
  1 - 1000 - Read Write
  1 - 800 - Read Write - Average Latency
  1 - 800 - Read Write
  1 - 500 - Read Write - Average Latency
  1 - 500 - Read Write
  1 - 250 - Read Write - Average Latency
  1 - 250 - Read Write
  1 - 100 - Read Write - Average Latency
  1 - 100 - Read Write
  1 - 800 - Read Only - Average Latency
  1 - 800 - Read Only
  1 - 1000 - Read Only - Average Latency
  1 - 1000 - Read Only
  1 - 100 - Read Only - Average Latency
  1 - 100 - Read Only
  1 - 250 - Read Only - Average Latency
  1 - 250 - Read Only
  1 - 500 - Read Only - Average Latency
  1 - 500 - Read Only
Apache Cassandra
Apache Hadoop:
  Rename - 20 - 100000
  Delete - 20 - 100000
Apache IoTDB:
  500 - 100 - 500 - 400:
    Average Latency
    point/sec
VVenC
Apache IoTDB:
  500 - 100 - 500 - 100:
    Average Latency
    point/sec
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache IoTDB:
  800 - 100 - 200 - 400:
    Average Latency
    point/sec
Apache Hadoop
Apache IoTDB:
  800 - 100 - 200 - 100:
    Average Latency
    point/sec
OpenRadioss
Apache Hadoop:
  File Status - 20 - 100000
  Open - 20 - 100000
Apache IoTDB:
  200 - 100 - 800 - 100:
    Average Latency
    point/sec
NCNN:
  CPU - FastestDet
  CPU - vision_transformer
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
Apache IoTDB:
  500 - 100 - 200 - 400:
    Average Latency
    point/sec
  800 - 1 - 800 - 400:
    Average Latency
    point/sec
  200 - 100 - 500 - 100:
    Average Latency
    point/sec
  500 - 100 - 200 - 100:
    Average Latency
    point/sec
libavif avifenc
Apache IoTDB:
  800 - 1 - 800 - 100:
    Average Latency
    point/sec
  500 - 1 - 800 - 400:
    Average Latency
    point/sec
  100 - 100 - 800 - 100:
    Average Latency
    point/sec
  800 - 1 - 500 - 400:
    Average Latency
    point/sec
  800 - 1 - 500 - 100:
    Average Latency
    point/sec
  500 - 1 - 200 - 400:
    Average Latency
    point/sec
  800 - 1 - 200 - 400:
    Average Latency
    point/sec
Apache Hadoop
OpenRadioss
Apache IoTDB:
  800 - 1 - 200 - 100:
    Average Latency
    point/sec
  500 - 1 - 500 - 400:
    Average Latency
    point/sec
Apache Hadoop
Apache IoTDB:
  100 - 100 - 500 - 100:
    Average Latency
    point/sec
  200 - 100 - 200 - 100:
    Average Latency
    point/sec
  500 - 1 - 800 - 100:
    Average Latency
    point/sec
Redis 7.0.12 + memtier_benchmark
Apache IoTDB:
  500 - 1 - 500 - 100:
    Average Latency
    point/sec
Redis 7.0.12 + memtier_benchmark
Apache IoTDB:
  500 - 1 - 200 - 100:
    Average Latency
    point/sec
Redis 7.0.12 + memtier_benchmark:
  Redis - 100 - 1:1
  Redis - 50 - 1:1
  Redis - 50 - 1:10
  Redis - 50 - 1:5
Apache IoTDB:
  100 - 100 - 200 - 100:
    Average Latency
    point/sec
  200 - 1 - 200 - 100:
    Average Latency
    point/sec
Apache Hadoop
Apache IoTDB:
  200 - 1 - 500 - 100:
    Average Latency
    point/sec
  100 - 1 - 500 - 100:
    Average Latency
    point/sec
  100 - 1 - 800 - 100:
    Average Latency
    point/sec
  200 - 1 - 800 - 100:
    Average Latency
    point/sec
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache IoTDB:
  100 - 1 - 200 - 100:
    Average Latency
    point/sec
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache Hadoop
Neural Magic DeepSparse:
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache Hadoop
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
VVenC
Apache Hadoop:
  Delete - 100 - 100000
  Rename - 100 - 100000
  File Status - 100 - 100000
  Open - 100 - 100000
  Create - 50 - 100000
OpenRadioss
SVT-AV1
VVenC
Apache Hadoop
libavif avifenc
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Stress-NG:
  IO_uring
  Malloc
  MEMFD
  Vector Math
  Cloning
  MMAP
  x86_64 RdRand
  Atomic
  Zlib
  CPU Cache
  AVL Tree
  Pthread
  Pipe
  Context Switching
  SENDFILE
  NUMA
  Matrix 3D Math
  Vector Floating Point
  Socket Activity
  Futex
  Mixed Scheduler
  Vector Shuffle
  Floating Point
  Function Call
  System V Message Passing
  Glibc Qsort Data Sorting
  Memory Copying
  AVX-512 VNNI
  Matrix Math
  Semaphores
  CPU Stress
  Forking
  Crypto
  Mutex
  Poll
  Glibc C String Functions
  Wide Vector Math
  Hash
  Fused Multiply-Add
VVenC
SVT-AV1:
  Preset 4 - Bosphorus 1080p
  Preset 8 - Bosphorus 4K
  Preset 8 - Bosphorus 1080p
libavif avifenc
SVT-AV1:
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
libavif avifenc:
  10, Lossless
  6
SVT-AV1:
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p