3970x sep 2023 - Tests for a future article. AMD Ryzen Threadripper 3970X 32-Core testing with an ASUS ROG ZENITH II EXTREME (1603 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2309184-NE-3970XSEP248 .
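A result ID like the one above can be replayed locally; the `phoronix-test-suite benchmark <result-ID>` invocation is the standard Phoronix Test Suite mechanism for running the same tests against a public comparison (assuming PTS is installed). A minimal sketch building the public URL from the ID:

```python
# OpenBenchmarking.org result ID cited above; the public result page
# lives at https://openbenchmarking.org/result/<ID>. To re-run the same
# comparison locally (assumes the Phoronix Test Suite is installed):
#   phoronix-test-suite benchmark 2309184-NE-3970XSEP248
RESULT_ID = "2309184-NE-3970XSEP248"
RESULT_URL = f"https://openbenchmarking.org/result/{RESULT_ID}"
print(RESULT_URL)
```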
System "a":
  Processor: AMD Ryzen Threadripper 3970X 32-Core @ 3.70GHz (32 Cores / 64 Threads)
  Motherboard: ASUS ROG ZENITH II EXTREME (1603 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 64GB
  Disk: Samsung SSD 980 PRO 500GB
  Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
  Audio: AMD Navi 10 HDMI Audio
  Monitor: ASUS VP28U
  Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
  OS: Ubuntu 22.04
  Kernel: 5.19.0-051900rc7-generic (x86_64)
  Desktop: GNOME Shell 42.2
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47)
  Vulkan: 1.2.204
  Compiler: GCC 11.4.0
  File-System: ext4
  Screen Resolution: 3840x2160

Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x830104d
Java Notes: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Benchmarks run on system "a": OpenRadioss 2023.09.15, SVT-AV1 1.7, VVenC 1.9, libavif avifenc 1.0, Timed GCC Compilation 13.2, Apache CouchDB 3.3.2, Apache IoTDB 1.2, PostgreSQL pgbench, Neural Magic DeepSparse, Redis memtier-benchmark, stress-ng, NCNN, Apache Cassandra, Apache Hadoop, and BRL-CAD. Per-test results follow.
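The pgbench runs in this result set report both transactions per second and an average latency, and the two are consistent with Little's law (latency ≈ clients / TPS). A quick sanity check against the scaling-factor-1, 100-client read-only run (1,540,137 TPS at a reported 0.065 ms):

```python
# Sanity-check pgbench's reported average latency via Little's law:
# latency_ms ~= clients / transactions_per_second * 1000
clients = 100
tps = 1540137              # 100-client read-only result from this run
reported_latency_ms = 0.065

estimated_latency_ms = clients / tps * 1000
print(round(estimated_latency_ms, 3))  # prints 0.065, matching the report
assert abs(estimated_latency_ms - reported_latency_ms) < 0.005
```

The same relation holds for the other client counts, which is expected since pgbench derives its latency figure from throughput and concurrency.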
OpenRadioss 2023.09.15 (Seconds, Fewer Is Better) - system a:
  Model: Bumper Beam - 94.37
  Model: Chrysler Neon 1M - 547.81
  Model: Cell Phone Drop Test - 42.11
  Model: Bird Strike on Windshield - 139.13
  Model: Rubber O-Ring Seal Installation - 66.03
  Model: INIVOL and Fluid Structure Interaction Drop Container - 216.49
SVT-AV1 1.7 (Frames Per Second, More Is Better) - system a:
  Encoder Mode: Preset 4 - Input: Bosphorus 4K - 3.81
  Encoder Mode: Preset 8 - Input: Bosphorus 4K - 64.25
  Encoder Mode: Preset 12 - Input: Bosphorus 4K - 126.78
  Encoder Mode: Preset 13 - Input: Bosphorus 4K - 129.99
  Encoder Mode: Preset 4 - Input: Bosphorus 1080p - 8.942
  Encoder Mode: Preset 8 - Input: Bosphorus 1080p - 81.06
  Encoder Mode: Preset 12 - Input: Bosphorus 1080p - 316.79
  Encoder Mode: Preset 13 - Input: Bosphorus 1080p - 368.88
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
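Assuming the commonly used 600-frame Bosphorus test clip (an assumption; the result file does not state the frame count), the 4K frame rates translate into approximate wall-clock encode times as follows:

```python
# Convert reported SVT-AV1 4K encode rates (fps) into approximate
# wall-clock times, assuming a 600-frame Bosphorus clip (assumption,
# not stated in the result file).
FRAMES = 600
fps_4k = {"Preset 4": 3.81, "Preset 8": 64.25,
          "Preset 12": 126.78, "Preset 13": 129.99}

for preset, fps in fps_4k.items():
    print(f"{preset}: ~{FRAMES / fps:.1f} s")
```

Under that assumption, Preset 4 is a roughly 2.5-minute encode while Presets 12 and 13 finish in under five seconds, which illustrates the quality/speed trade-off the presets are designed around.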
VVenC 1.9 (Frames Per Second, More Is Better) - system a:
  Video Input: Bosphorus 4K - Video Preset: Fast - 5.468
  Video Input: Bosphorus 4K - Video Preset: Faster - 10.99
  Video Input: Bosphorus 1080p - Video Preset: Fast - 13.92
  Video Input: Bosphorus 1080p - Video Preset: Faster - 25.04
  1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
libavif avifenc 1.0 (Seconds, Fewer Is Better) - system a:
  Encoder Speed: 0 - 79.61
  Encoder Speed: 2 - 42.35
  Encoder Speed: 6 - 3.333
  Encoder Speed: 6, Lossless - 6.746
  Encoder Speed: 10, Lossless - 4.882
  1. (CXX) g++ options: -O3 -fPIC -lm
Timed GCC Compilation 13.2 (Seconds, Fewer Is Better) - system a: Time To Compile - 984.35
Apache CouchDB 3.3.2 (Seconds, Fewer Is Better) - system a:
  Bulk Size: 100 - Inserts: 1000 - Rounds: 30 - 147.60
  Bulk Size: 100 - Inserts: 3000 - Rounds: 30 - 485.93
  Bulk Size: 300 - Inserts: 1000 - Rounds: 30 - 279.30
  Bulk Size: 300 - Inserts: 3000 - Rounds: 30 - 872.94
  Bulk Size: 500 - Inserts: 1000 - Rounds: 30 - 405.37
  Bulk Size: 500 - Inserts: 3000 - Rounds: 30 - 1252.19
  1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD
Apache IoTDB 1.2 (point/sec, More Is Better; Average Latency, Fewer Is Better) - system a:
  Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - 139562 point/sec; Average Latency: 117.16 (MAX: 26308.47)
  Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - 344892 point/sec; Average Latency: 115.63 (MAX: 26551.59)
  Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - 546770 point/sec; Average Latency: 122.28 (MAX: 26411.88)
  Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - 266795 point/sec; Average Latency: 65.05 (MAX: 24428.16)
  Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - 675295 point/sec; Average Latency: 65.67 (MAX: 24490.15)
  Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - 1044528 point/sec; Average Latency: 69.25 (MAX: 24498)
  Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - 650057 point/sec; Average Latency: 28.32 (MAX: 13253.97)
  Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400 - 631664 point/sec; Average Latency: 109.8 (MAX: 27389.67)
  Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - 1477700 point/sec; Average Latency: 30.83 (MAX: 11938.33)
  Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400 - 1520487 point/sec; Average Latency: 115.25 (MAX: 27893.69)
  Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - 2294842 point/sec; Average Latency: 32.26 (MAX: 13249.76)
  Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400 - 2307967 point/sec; Average Latency: 120.74 (MAX: 27670.84)
  Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - 952128 point/sec; Average Latency: 19.61 (MAX: 24718.57)
  Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400 - 953899 point/sec; Average Latency: 73 (MAX: 28730.9)
  Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - 2250548 point/sec; Average Latency: 20.78 (MAX: 24672.59)
  Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400 - 2292358 point/sec; Average Latency: 78.33 (MAX: 28777.24)
  Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - 3256331 point/sec; Average Latency: 23.05 (MAX: 24688.91)
  Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400 - 3333342 point/sec; Average Latency: 87.3 (MAX: 29090.52)
  Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - 12720025 point/sec; Average Latency: 128.15 (MAX: 26464.6)
Apache IoTDB Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 a 6M 12M 18M 24M 30M 27615777
Apache IoTDB Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 a 30 60 90 120 150 147.22 MAX: 26885.03
Apache IoTDB Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 a 9M 18M 27M 36M 45M 39801968
Apache IoTDB Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 a 30 60 90 120 150 157.62 MAX: 27019.65
Apache IoTDB Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 a 5M 10M 15M 20M 25M 22807204
Apache IoTDB Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 a 20 40 60 80 100 77.25 MAX: 25157.99
Apache IoTDB Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 a 10M 20M 30M 40M 50M 44916090
Apache IoTDB Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 a 20 40 60 80 100 98.48 MAX: 24817.64
Apache IoTDB Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 a 13M 26M 39M 52M 65M 60748076
Apache IoTDB Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 a 30 60 90 120 150 112.37 MAX: 24952.79
Apache IoTDB Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 a 9M 18M 27M 36M 45M 43282315
Apache IoTDB Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 a 10 20 30 40 50 42.19 MAX: 13129.29
Apache IoTDB Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 a 9M 18M 27M 36M 45M 41758329
Apache IoTDB Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 a 40 80 120 160 200 160.29 MAX: 30002.47
Apache IoTDB Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 a 15M 30M 45M 60M 75M 69052926
Apache IoTDB Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 a 15 30 45 60 75 65.93 MAX: 11163.91
Apache IoTDB Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 a 14M 28M 42M 56M 70M 66302152
Apache IoTDB Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 a 50 100 150 200 250 248.51 MAX: 29910.26
Apache IoTDB Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 a 20M 40M 60M 80M 100M 79688165
Apache IoTDB Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 a 20 40 60 80 100 93.94 MAX: 10678
Apache IoTDB Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 a 17M 34M 51M 68M 85M 77131211
Apache IoTDB Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 a 70 140 210 280 350 339.42 MAX: 30587.68
Apache IoTDB Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 a 12M 24M 36M 48M 60M 53756801
Apache IoTDB Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 a 8 16 24 32 40 34.76 MAX: 26220.14
Apache IoTDB Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 a 12M 24M 36M 48M 60M 53828947
Apache IoTDB Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 a 30 60 90 120 150 130.54 MAX: 30310.14
Apache IoTDB Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 a 17M 34M 51M 68M 85M 77801658
Apache IoTDB Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 a 13 26 39 52 65 60.12 MAX: 25378.76
Apache IoTDB Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 a 20M 40M 60M 80M 100M 78451272
Apache IoTDB Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 a 50 100 150 200 250 233.73 MAX: 29440.62
Apache IoTDB Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 a 20M 40M 60M 80M 100M 79844854
Apache IoTDB Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 a 20 40 60 80 100 95.15 MAX: 24703.91
Apache IoTDB Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 a 20M 40M 60M 80M 100M 81231176
Apache IoTDB Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 a 80 160 240 320 400 355.25 MAX: 36343.06
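The throughput and latency columns above are mutually consistent under Little's law (in-flight requests ≈ request rate × average latency), once point/sec is converted to writes/sec: each write carries batch size × sensor count points. A minimal sanity check on the Devices 800 / Batch 1 / Sensors 200 / Clients 100 row, assuming the latency figures are in milliseconds (the unit is not stated in the result export):

```python
# Little's law sanity check on the IoTDB table:
# in-flight writes ≈ writes/sec * average latency (s).
# Each write carries batch_size * sensor_count points, so
# writes/sec = point rate / points per write.

def implied_concurrency(points_per_sec, batch_size, sensor_count, avg_latency_ms):
    points_per_write = batch_size * sensor_count
    writes_per_sec = points_per_sec / points_per_write
    return writes_per_sec * (avg_latency_ms / 1000.0)

# Devices 800, Batch 1, Sensors 200, Clients 100: 952128 point/s at 19.61 ms
c = implied_concurrency(952128, 1, 200, 19.61)
print(round(c, 1))  # ~93, close to the 100 configured clients
```

The gap between ~93 and 100 is expected: clients spend some time outside the timed write path, so the measured in-flight count sits slightly below the client count.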
PostgreSQL 16 pgbench (OpenBenchmarking.org; system "a")
Transactions per second (more is better) and average latency in ms (fewer is better).

Scaling Factor | Clients | Mode       | TPS     | Avg Latency (ms)
---------------|---------|------------|---------|-----------------
1              | 100     | Read Only  | 1540137 | 0.065
1              | 250     | Read Only  | 1576869 | 0.159
1              | 500     | Read Only  | 1567522 | 0.319
1              | 800     | Read Only  | 1533021 | 0.522
1              | 1000    | Read Only  | 1524497 | 0.656
1              | 100     | Read Write | 568     | 175.94
1              | 250     | Read Write | 396     | 630.71
1              | 500     | Read Write | 329     | 1517.75
1              | 800     | Read Write | 317     | 2523.27
1              | 1000    | Read Write | 266     | 3762.87
100            | 100     | Read Only  | 1466898 | 0.068
100            | 250     | Read Only  | 1481168 | 0.169
100            | 500     | Read Only  | 1493244 | 0.335
100            | 800     | Read Only  | 1443932 | 0.554
100            | 1000    | Read Only  | 1433045 | 0.698
100            | 100     | Read Write | 8284    | 12.07
100            | 250     | Read Write | 10386   | 24.07
100            | 500     | Read Write | 10968   | 45.59
100            | 800     | Read Write | 11065   | 72.30
100            | 1000    | Read Write | 11140   | 89.77
1000           | 100     | Read Only  | 1026569 | 0.097
1000           | 250     | Read Only  | 1107736 | 0.226
1000           | 500     | Read Only  | 1056453 | 0.473
1000           | 800     | Read Only  | 1022077 | 0.783
1000           | 1000    | Read Only  | 1014537 | 0.986
1000           | 100     | Read Write | 9037    | 11.07
1000           | 250     | Read Write | 11099   | 22.53
1000           | 500     | Read Write | 10485   | 47.69
1000           | 800     | Read Write | 11478   | 69.70
1000           | 1000    | Read Write | 12087   | 82.73

1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
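pgbench is a closed-loop benchmark: each client issues one transaction at a time and waits for it to complete before sending the next, so the reported average latency is effectively clients / TPS. A quick check against two rows from the table:

```python
# pgbench runs a closed loop per client, so for a saturated run
# average latency (ms) ~= clients / TPS * 1000.

def expected_latency_ms(clients, tps):
    return clients / tps * 1000.0

# Scaling Factor 1, 100 clients, Read Only: 1540137 TPS, reported 0.065 ms
print(round(expected_latency_ms(100, 1540137), 3))  # 0.065

# Scaling Factor 1000, 1000 clients, Read Write: 12087 TPS, reported 82.73 ms
print(round(expected_latency_ms(1000, 12087), 2))   # 82.73
```

This identity is why the Read Write latencies climb roughly linearly with the client count while TPS stays flat: past saturation, extra clients only queue.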
Neural Magic DeepSparse 1.5 (OpenBenchmarking.org; system "a"; Scenario: Asynchronous Multi-Stream)
Throughput in items/sec (more is better) and latency in ms/batch (fewer is better).

Model                                                            | items/sec | ms/batch
-----------------------------------------------------------------|-----------|---------
NLP Document Classification, oBERT base uncased on IMDB          | 28.94     | 550.25
NLP Text Classification, BERT base uncased SST2, Sparse INT8     | 647.55    | 24.68
NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased   | 255.12    | 62.68
NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 | 84.20     | 189.99
ResNet-50, Baseline                                              | 320.85    | 49.85
ResNet-50, Sparse INT8                                           | 1987.82   | 8.026
CV Detection, YOLOv5s COCO                                       | 152.61    | 104.69
Neural Magic DeepSparse Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream a 8 16 24 32 40 34.11
Neural Magic DeepSparse Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream a 100 200 300 400 500 469.02
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream a 70 140 210 280 350 322.54
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream a 11 22 33 44 55 49.58
Neural Magic DeepSparse Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream a 30 60 90 120 150 156.91
Neural Magic DeepSparse Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream a 20 40 60 80 100 101.88
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream a 50 100 150 200 250 234.69
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream a 15 30 45 60 75 68.15
Neural Magic DeepSparse Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream a 8 16 24 32 40 32.35
Neural Magic DeepSparse Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream a 110 220 330 440 550 490.66
Neural Magic DeepSparse Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream a 80 160 240 320 400 347.30
Neural Magic DeepSparse Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream a 10 20 30 40 50 45.99
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream a 30 60 90 120 150 123.00
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream a 30 60 90 120 150 129.90
Neural Magic DeepSparse Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream a 7 14 21 28 35 28.81
Neural Magic DeepSparse Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream a 120 240 360 480 600 553.19
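The DeepSparse async multi-stream numbers pair a throughput figure with a per-batch latency figure, and the two are linked by Little's law: work in flight ≈ throughput × latency. As a sanity check of the results above, the product is close to 16 for every model, suggesting the runs used roughly 16 concurrent streams (the stream count is not reported in this export, so that is an inference, not a stated setting):

```python
# Sanity-check the DeepSparse async multi-stream results with Little's law:
# in-flight items (concurrency) ~= throughput (items/sec) * latency (sec/batch).
# The actual stream count is not reported in this export; ~16 is inferred.
results = {
    # model: (items/sec, ms/batch), taken from the table above
    "oBERT IMDB":                (28.94, 550.25),
    "ResNet-50 Baseline":        (320.85, 49.85),
    "ResNet-50 Sparse INT8":     (1987.82, 8.026),
    "BERT-Large QA Sparse INT8": (347.30, 45.99),
}
for model, (ips, ms) in results.items():
    concurrency = ips * ms / 1000.0  # convert ms/batch to sec/batch
    print(f"{model:<26} ~{concurrency:.1f} items in flight")
```

The same arithmetic also gives the headline speedup of sparsity plus quantization: ResNet-50 Sparse INT8 delivers 1987.82 / 320.85 ≈ 6.2x the baseline throughput at the same inferred concurrency.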
Redis 7.0.12 + memtier_benchmark 2.0 (OpenBenchmarking.org) - Protocol: Redis
Ops/sec: more is better

  Clients   Set:Get Ratio   Ops/sec
   50        1:1            1962532.28
   50        1:5            2085593.65
   50        1:10           2127101.53
  100        1:1            1943330.63
  100        1:5            2147272.63
  100        1:10           2191452.33

1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
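The Redis/memtier runs show throughput climbing as the workload shifts from a balanced set:get mix toward read-heavy mixes, since GETs are cheaper than SETs. A small sketch quantifying that shift for the 50-client runs above:

```python
# Relative throughput change as the SET:GET mix becomes read-heavier.
# Values are the 50-client ops/sec figures from the table above.
baseline_1_1 = 1962532.28                       # 50 clients, 1:1 mix
read_heavy = {"1:5": 2085593.65, "1:10": 2127101.53}
for ratio, ops in read_heavy.items():
    delta = 100.0 * (ops / baseline_1_1 - 1.0)  # percent change vs 1:1
    print(f"{ratio} mix: {delta:+.1f}% vs 1:1")
```

The gain is modest (roughly +6% at 1:5 and +8% at 1:10), consistent with the server being CPU-bound on command dispatch rather than on the write path.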
Stress-NG 0.16.04 (OpenBenchmarking.org)
Bogo Ops/s: more is better

  Test                          Bogo Ops/s
  Hash                           7624024.19
  MMAP                               446.68
  NUMA                               754.61
  Pipe                          13851265.33
  Poll                           4086609.46
  Zlib                              3502.5
  Futex                          4441035
  MEMFD                              393.05
  Mutex                         18884754.77
  Atomic                             479.35
  Crypto                           78148.34
  Malloc                        98198159.73
  Cloning                           3222.12
  Forking                          51559.3
  Pthread                         128780.28
  AVL Tree                           373.47
  IO_uring                        442330.43
  SENDFILE                        530872.63
  CPU Cache                      1617463.87
  CPU Stress                       82381.09
  Semaphores                   107860946.05
  Matrix Math                     199130.4
  Vector Math                     224058.27
  AVX-512 VNNI                   1396743.12
  Function Call                    24195.96
  x86_64 RdRand                     4453.62
  Floating Point                   11278.99
  Matrix 3D Math                    2795.68
  Memory Copying                   12458.08
  Vector Shuffle                   22868.27
  Mixed Scheduler                  34353.13
  Socket Activity                   9538.61
  Wide Vector Math               1498606.48
  Context Switching             10942678.54
  Fused Multiply-Add            33465006.28
  Vector Floating Point            94009.97
  Glibc C String Functions      31979929.5
  Glibc Qsort Data Sorting           946.62
  System V Message Passing      10638598.45

1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
NCNN 20230517 (OpenBenchmarking.org) - Target: CPU
(CPU-v2-v2 for mobilenet-v2, CPU-v3-v3 for mobilenet-v3)
ms: fewer is better

  Model                   ms      MIN / MAX
  mobilenet               14.02   13.89 / 14.62
  mobilenet-v2             5.52    5.41 / 8.12
  mobilenet-v3             5.21    5.15 / 5.79
  shufflenet-v2            7.05    6.87 / 7.65
  mnasnet                  4.97    4.88 / 6.04
  efficientnet-b0          8.07    7.98 / 10.53
  blazeface                2.86    2.78 / 3.46
  googlenet               14.2    14.03 / 16.73
  vgg16                   29.24   28.45 / 31.57
  resnet18                 9.69    9.48 / 11.5
  alexnet                  7.74    7.22 / 8.71
  resnet50                17.13   16.86 / 17.91
  yolov4-tiny             23.28   21.99 / 102.15
  squeezenet_ssd          12.43   12.27 / 14.55
  regnety_400m            19.01   18.5 / 19.75
  vision_transformer      52.81   52.34 / 56.65
  FastestDet               7.61    7.49 / 8.34

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
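NCNN reports per-inference latency in milliseconds, which is easy to misread when comparing against throughput-oriented results elsewhere in this article. A quick conversion to approximate single-stream inference rates, using values from the table above:

```python
# Convert NCNN per-inference latency (ms) to an approximate single-stream
# inference rate. This ignores any pipelining, so it is a lower bound.
latencies_ms = {
    "mobilenet": 14.02,
    "vgg16": 29.24,
    "vision_transformer": 52.81,
}
for model, ms in latencies_ms.items():
    print(f"{model}: {1000.0 / ms:.1f} inferences/sec")
```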
Apache Cassandra 4.1.3 (OpenBenchmarking.org) - Test: Writes - Op/s, more is better: 263638
Apache Hadoop 3.3.6 (OpenBenchmarking.org)
Ops per sec: more is better

  Operation     Threads   Files: 100000   1000000   10000000
  Open               20          436681   1226994      89635
  Open               50          420168    113714     158995
  Open              100          502513    284333     153128
  Create             20            3567      3685       3752
  Create             50            8593      8839       8861
  Create            100           15373     16260      16006
  Delete             20            3735      3712       3926
  Delete             50            9132     11088       9635
  Delete            100           17599     18208      18086
  Rename             20            3571      3733       3789
  Rename             50            8203      8910       8889
  Rename            100           26157     16064      15992
  File Status        20          149925   1960784     489237
  File Status        50          724638   2288330     307475
  File Status       100          396825   2074689    1490535
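The Hadoop metadata operations scale nearly linearly with client threads in these runs, while per-thread efficiency slowly erodes as NameNode contention grows. A sketch of that calculation for the Create results at 100000 files, using the 20-thread run as the baseline:

```python
# Thread-scaling efficiency for Hadoop Create ops (Files: 100000),
# normalized per thread against the 20-thread run from the table above.
create_100k = {20: 3567, 50: 8593, 100: 15373}  # threads -> ops/sec
base_per_thread = create_100k[20] / 20
for threads, ops in sorted(create_100k.items()):
    eff = 100.0 * (ops / threads) / base_per_thread
    print(f"{threads:>3} threads: {ops} ops/sec, {eff:.0f}% per-thread efficiency")
```

Per-thread efficiency drops only to the mid-90s at 50 threads and the mid-80s at 100, so for these metadata workloads the 32-core Threadripper is far from saturated.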
BRL-CAD 7.36 (OpenBenchmarking.org) - VGR Performance Metric, more is better: 537121
1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
Phoronix Test Suite v10.8.5