xe sep

Intel Core i9-10980XE testing with an ASRock X299 Steel Legend (P1.30 BIOS) and llvmpipe on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2309209-PTS-XESEP76303
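
If the Phoronix Test Suite is not already installed, the comparison above can typically be reproduced on Ubuntu with the distribution package and the result ID shown on this page (a minimal sketch; the package name is assumed from the Ubuntu archive):

  # Install the Phoronix Test Suite and re-run this result file's tests for a side-by-side comparison
  sudo apt install phoronix-test-suite
  phoronix-test-suite benchmark 2309209-PTS-XESEP76303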

Result Identifier: a
Run Date: September 19 2023
Test Run Duration: 19 Hours, 39 Minutes


xe sep - OpenBenchmarking.org - Phoronix Test Suite

Processor: Intel Core i9-10980XE @ 4.80GHz (18 Cores / 36 Threads)
Motherboard: ASRock X299 Steel Legend (P1.30 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 32GB
Disk: Samsung SSD 970 PRO 512GB
Graphics: llvmpipe
Audio: Realtek ALC1220
Network: Intel I219-V + Intel I211
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc7-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3
OpenGL: 4.5 Mesa 22.0.1 (LLVM 13.0.1 256 bits)
Vulkan: 1.2.204
Compiler: GCC 11.4.0
File-System: ext4
Screen Resolution: 1024x768

Xe Sep Benchmarks - System Logs
- Transparent Huge Pages: madvise
- Compiler Configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_cpufreq schedutil
- CPU Microcode: 0x5003303
- Java: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)
- Python: 3.10.12
- Security: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

xe sep - condensed results table (all benchmark results for identifier "a"): the individual values appear with the per-test result graphs that follow. (OpenBenchmarking.org)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.
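
For reference, a hedged sketch of invoking NNThroughputBenchmark directly, outside the Phoronix wrapper (the class name is from the Hadoop HDFS tree; exact option spelling may vary between Hadoop releases):

  # Run the name-node throughput benchmark for one operation/thread/file combination
  hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark \
      -op rename -threads 20 -files 10000000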

Apache Hadoop 3.3.6 - Ops per sec (More Is Better):
  Operation: Rename - Threads: 20 - Files: 10000000 - a: 7415
  Operation: Delete - Threads: 20 - Files: 10000000 - a: 8165
  Operation: Open - Threads: 20 - Files: 10000000 - a: 139544
  Operation: File Status - Threads: 20 - Files: 10000000 - a: 281627
  Operation: Rename - Threads: 50 - Files: 10000000 - a: 14378
  Operation: Rename - Threads: 1000 - Files: 10000000 - a: 13861
  Operation: Rename - Threads: 500 - Files: 10000000 - a: 14031
  Operation: Delete - Threads: 50 - Files: 10000000 - a: 17707
  Operation: Rename - Threads: 100 - Files: 10000000 - a: 16255
  Operation: Delete - Threads: 1000 - Files: 10000000 - a: 28523
  Operation: Delete - Threads: 500 - Files: 10000000 - a: 29759
  Operation: Delete - Threads: 100 - Files: 10000000 - a: 27533
  Operation: Open - Threads: 50 - Files: 10000000 - a: 148896
  Operation: Create - Threads: 20 - Files: 10000000 - a: 7584
  Operation: Open - Threads: 1000 - Files: 10000000 - a: 85608
  Operation: File Status - Threads: 50 - Files: 10000000 - a: 369563
  Operation: File Status - Threads: 1000 - Files: 10000000 - a: 212938
  Operation: Open - Threads: 500 - Files: 10000000 - a: 92728
  Operation: File Status - Threads: 500 - Files: 10000000 - a: 238362
  Operation: Open - Threads: 100 - Files: 10000000 - a: 131063

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) open-source compiler. Learn more via the OpenBenchmarking.org test page.
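
As a rough sketch of what the timed build amounts to (the configure flags here are minimal assumptions, not the test profile's exact options):

  # Out-of-tree GCC build timed end-to-end, using all 36 hardware threads of this CPU
  mkdir build && cd build
  ../gcc-13.2.0/configure --disable-multilib --enable-languages=c,c++
  time make -j36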

Timed GCC Compilation 13.2 - Time To Compile - Seconds (Fewer Is Better) - a: 1099.63

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Ops per sec (More Is Better):
  Operation: File Status - Threads: 100 - Files: 10000000 - a: 388999
  Operation: Create - Threads: 1000 - Files: 10000000 - a: 11290

OpenRadioss

OpenRadioss 2023.09.15 - Model: Chrysler Neon 1M - Seconds (Fewer Is Better) - a: 883.38

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Ops per sec (More Is Better):
  Operation: Create - Threads: 500 - Files: 10000000 - a: 12673
  Operation: Create - Threads: 50 - Files: 10000000 - a: 13780
  Operation: Create - Threads: 100 - Files: 10000000 - a: 13893

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.36 - VGR Performance Metric (More Is Better) - a: 214594 ((CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6)

OpenRadioss

OpenRadioss 2023.09.15 - Model: INIVOL and Fluid Structure Interaction Drop Container - Seconds (Fewer Is Better) - a: 419.55

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Ops per sec (More Is Better):
  Operation: Rename - Threads: 20 - Files: 1000000 - a: 7647
  Operation: Delete - Threads: 20 - Files: 1000000 - a: 8260

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.
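
The four numbers in each result title (device count, batch size per write, sensor count, client number) correspond to iot-benchmark configuration entries; a hedged sketch for the 800 - 100 - 800 - 400 case is below (key names are assumptions based on the tool's config.properties and may differ by version):

  # config.properties excerpt (hypothetical) matching Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400
  DEVICE_NUMBER=800
  BATCH_SIZE_PER_WRITE=100
  SENSOR_NUMBER=800
  CLIENT_NUMBER=400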

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 - Average Latency (Fewer Is Better) - a: 681.88 (MAX: 30986.83)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 - point/sec (More Is Better) - a: 43462222
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 167.24 (MAX: 24477.08)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 - point/sec (More Is Better) - a: 46040704

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Ops per sec (More Is Better):
  Operation: Open - Threads: 20 - Files: 1000000 - a: 911577
  Operation: File Status - Threads: 20 - Files: 1000000 - a: 1848429

OpenRadioss

OpenRadioss 2023.09.15 - Model: Bird Strike on Windshield - Seconds (Fewer Is Better) - a: 240.46

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Ops per sec (More Is Better):
  Operation: Rename - Threads: 50 - Files: 1000000 - a: 14703
  Operation: Rename - Threads: 1000 - Files: 1000000 - a: 14964
  Operation: Delete - Threads: 50 - Files: 1000000 - a: 18327
  Operation: Rename - Threads: 500 - Files: 1000000 - a: 15852

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 - Average Latency (Fewer Is Better) - a: 408.49 (MAX: 28665.5)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 - point/sec (More Is Better) - a: 46287475
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 - Average Latency (Fewer Is Better) - a: 538.32 (MAX: 30078.01)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 - point/sec (More Is Better) - a: 46213031
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 97.78 (MAX: 24428.74)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 - point/sec (More Is Better) - a: 48785357

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 100 - Files: 1000000 - Ops per sec (More Is Better) - a: 16595

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 155.14 (MAX: 12955.01)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 - point/sec (More Is Better) - a: 48953954

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Ops per sec (More Is Better):
  Operation: Delete - Threads: 1000 - Files: 1000000 - a: 31087
  Operation: Delete - Threads: 500 - Files: 1000000 - a: 32818
  Operation: Create - Threads: 20 - Files: 1000000 - a: 7729
  Operation: Delete - Threads: 100 - Files: 1000000 - a: 28776
  Operation: Open - Threads: 1000 - Files: 1000000 - a: 118793
  Operation: File Status - Threads: 1000 - Files: 1000000 - a: 1984127

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream - ms/batch (Fewer Is Better) - a: 352.04
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream - items/sec (More Is Better) - a: 25.56

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Ops per sec (More Is Better):
  Operation: File Status - Threads: 50 - Files: 1000000 - a: 1953125
  Operation: Open - Threads: 50 - Files: 1000000 - a: 714796

OpenRadioss

OpenRadioss 2023.09.15 - Model: Rubber O-Ring Seal Installation - Seconds (Fewer Is Better) - a: 153.63

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: File Status - Threads: 500 - Files: 1000000 - Ops per sec (More Is Better) - a: 1821494

OpenRadioss

OpenRadioss 2023.09.15 - Model: Bumper Beam - Seconds (Fewer Is Better) - a: 145.33

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.
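
A hedged sketch of the kind of two-pass libaom invocation these "Speed N Two-Pass" results represent (input and output names are placeholders; the flag set is assumed from aomenc's help output rather than taken from the test profile):

  # Two-pass AV1 encode of the Bosphorus 4K clip at speed level 4
  aomenc --passes=2 --cpu-used=4 -o bosphorus_4k_av1.webm Bosphorus_3840x2160.y4m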

AOM AV1 3.7 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K - Frames Per Second (More Is Better) - a: 5.09 ((CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Open - Threads: 500 - Files: 1000000 - Ops per sec (More Is Better) - a: 718907

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
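
A minimal sketch of a comparable vvencapp command line for the "Fast"/"Faster" presets (option names are assumed from the vvencapp help; file names are placeholders):

  # H.266/VVC encode of the Bosphorus 4K clip with the faster preset
  vvencapp --preset faster -i Bosphorus_3840x2160.y4m -o bosphorus_4k.266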

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast - Frames Per Second (More Is Better) - a: 4.26 ((CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto)

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
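
The pgbench scenarios below roughly correspond to invocations like the following (a sketch, assuming a database named pgbench_db, a 60-second duration, and a job count matching the CPU; the test profile's exact timing and job settings may differ):

  # Initialize at scaling factor 100, then run read-only (-S, select-only) and read/write (default) workloads
  pgbench -i -s 100 pgbench_db
  pgbench -c 250 -j 36 -S -T 60 pgbench_db   # Clients: 250, Mode: Read Only
  pgbench -c 500 -j 36 -T 60 pgbench_db      # Clients: 500, Mode: Read Write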

PostgreSQL 16 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency, ms (Fewer Is Better) - a: 0.244
PostgreSQL 16 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - TPS (More Is Better) - a: 1025037
PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency, ms (Fewer Is Better) - a: 1.123
PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - TPS (More Is Better) - a: 890220
PostgreSQL 16 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency, ms (Fewer Is Better) - a: 0.504
PostgreSQL 16 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only - TPS (More Is Better) - a: 991251
PostgreSQL 16 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency, ms (Fewer Is Better) - a: 38.72
PostgreSQL 16 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - TPS (More Is Better) - a: 12914
PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency, ms (Fewer Is Better) - a: 74.79
PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - TPS (More Is Better) - a: 13371
PostgreSQL 16 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency, ms (Fewer Is Better) - a: 20.68
PostgreSQL 16 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - TPS (More Is Better) - a: 12092
PostgreSQL 16 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency, ms (Fewer Is Better) - a: 0.828
PostgreSQL 16 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only - TPS (More Is Better) - a: 966601
PostgreSQL 16 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency, ms (Fewer Is Better) - a: 59.66
PostgreSQL 16 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write - TPS (More Is Better) - a: 13409
(All PostgreSQL 16 results built with (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Open - Threads: 100 - Files: 1000000 - Ops per sec (More Is Better) - a: 508130

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 - Scaling Factor: 1 - Clients: 1000 - Mode: Read Write - Average Latency, ms (Fewer Is Better) - a: 3212.98
PostgreSQL 16 - Scaling Factor: 1 - Clients: 1000 - Mode: Read Write - TPS (More Is Better) - a: 311
PostgreSQL 16 - Scaling Factor: 1 - Clients: 800 - Mode: Read Write - Average Latency, ms (Fewer Is Better) - a: 2437.06
PostgreSQL 16 - Scaling Factor: 1 - Clients: 800 - Mode: Read Write - TPS (More Is Better) - a: 328
(All PostgreSQL 16 results built with (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 93.02 (MAX: 13075.01)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 - point/sec (More Is Better) - a: 50068226
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 - Average Latency (Fewer Is Better) - a: 321.51 (MAX: 29847.27)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 - point/sec (More Is Better) - a: 48869962

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 - Scaling Factor: 1 - Clients: 500 - Mode: Read Write - Average Latency, ms (Fewer Is Better) - a: 1361.86
PostgreSQL 16 - Scaling Factor: 1 - Clients: 500 - Mode: Read Write - TPS (More Is Better) - a: 367
PostgreSQL 16 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average Latency, ms (Fewer Is Better) - a: 536.19
PostgreSQL 16 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write - TPS (More Is Better) - a: 466
PostgreSQL 16 - Scaling Factor: 1 - Clients: 500 - Mode: Read Only - Average Latency, ms (Fewer Is Better) - a: 0.463
PostgreSQL 16 - Scaling Factor: 1 - Clients: 500 - Mode: Read Only - TPS (More Is Better) - a: 1079978
PostgreSQL 16 - Scaling Factor: 1 - Clients: 1000 - Mode: Read Only - Average Latency, ms (Fewer Is Better) - a: 0.982
PostgreSQL 16 - Scaling Factor: 1 - Clients: 1000 - Mode: Read Only - TPS (More Is Better) - a: 1018102
PostgreSQL 16 - Scaling Factor: 1 - Clients: 800 - Mode: Read Only - Average Latency, ms (Fewer Is Better) - a: 0.768
PostgreSQL 16 - Scaling Factor: 1 - Clients: 800 - Mode: Read Only - TPS (More Is Better) - a: 1042313
PostgreSQL 16 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average Latency, ms (Fewer Is Better) - a: 0.221
PostgreSQL 16 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only - TPS (More Is Better) - a: 1130515
(All PostgreSQL 16 results built with (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
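
A brief sketch of the corresponding avifenc usage (file names are placeholders; -s selects the encoder speed, 0 being the slowest and most thorough):

  # JPEG to AVIF at encoder speed 0
  avifenc -s 0 input.jpg output.avif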

libavif avifenc 1.0 - Encoder Speed: 0 - Seconds (Fewer Is Better) - a: 130.41 ((CXX) g++ options: -O3 -fPIC -lm)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: File Status - Threads: 100 - Files: 1000000 - Ops per sec (More Is Better) - a: 1886792

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.
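
A hedged example of the kind of cassandra-stress command behind a "Writes" run (the operation count and thread count here are illustrative, not the test profile's exact values):

  # One million write operations against a local node with 36 client threads
  cassandra-stress write n=1000000 -rate threads=36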

Apache Cassandra 4.1.3 - Test: Writes - Op/s (More Is Better) - a: 157879

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Ops per sec (More Is Better):
  Operation: Create - Threads: 1000 - Files: 1000000 - a: 11949
  Operation: Create - Threads: 500 - Files: 1000000 - a: 13075

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 - Average Latency (Fewer Is Better) - a: 165.03 (MAX: 30192.43)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 - point/sec (More Is Better) - a: 43848942

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Ops per sec (More Is Better):
  Operation: Create - Threads: 100 - Files: 1000000 - a: 14345
  Operation: Create - Threads: 50 - Files: 1000000 - a: 14680

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 42.22 (MAX: 24463.02)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - point/sec (More Is Better) - a: 44147738
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 145.08 (MAX: 24581.94)
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 - point/sec (More Is Better) - a: 47975695
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 - Average Latency (Fewer Is Better) - a: 192.61 (MAX: 27819.63)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 - point/sec (More Is Better) - a: 34598820

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.7 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K - Frames Per Second (More Is Better) - a: 0.23 ((CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm)

OpenRadioss

OpenRadioss 2023.09.15 - Model: Cell Phone Drop Test - Seconds (Fewer Is Better) - a: 82.46

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.7 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p - Frames Per Second (More Is Better) - a: 8.1 ((CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 50.24 (MAX: 12917.64)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - point/sec (More Is Better) - a: 36356156
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400 - Average Latency (Fewer Is Better) - a: 111.79 (MAX: 29008.15)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400 - point/sec (More Is Better) - a: 2661544
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 111.49 (MAX: 24491.59)
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 - point/sec (More Is Better) - a: 38631090
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 27.99 (MAX: 24397.48)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - point/sec (More Is Better) - a: 2704379
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400 - Average Latency (Fewer Is Better) - a: 132.9 (MAX: 27266.76)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400 - point/sec (More Is Better) - a: 2051296
Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 183.29 (MAX: 27784.07)
Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 - point/sec (More Is Better) - a: 34330554

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 20 - Files: 100000 - Ops per sec (More Is Better) - a: 7708

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.7 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K - Frames Per Second (More Is Better) - a: 10.9 ((CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster - Frames Per Second (More Is Better) - a: 8.111 ((CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 20 - Files: 100000 - Ops per sec (More Is Better) - a: 8280

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
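
A sketch of a comparable memtier_benchmark invocation for the Redis results that follow (host, thread count, and duration are assumptions; --ratio is the SET:GET mix, and --clients is connections per memtier thread, so the exact mapping to the "Clients" label here is itself an assumption):

  # Drive a local Redis at a 1:10 set-to-get ratio
  memtier_benchmark --server=127.0.0.1 --port=6379 --protocol=redis \
      --clients=500 --threads=6 --ratio=1:10 --test-time=60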

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 - Ops/sec (More Is Better) - a: 2397457.03 ((CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 23.84 (MAX: 24430.73)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - point/sec (More Is Better) - a: 1976707
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400 - Average Latency (Fewer Is Better) - a: 88.45 (MAX: 28432.55)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400 - point/sec (More Is Better) - a: 2038765

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 - Ops/sec (More Is Better) - a: 2269215.26 ((CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 37.21 (MAX: 13063.27)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - point/sec (More Is Better) - a: 2009740

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
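
The NCNN numbers below come from its bundled benchmark tool; a minimal sketch of running it on the CPU (positional arguments assumed to be loop count, thread count, power-save mode, and GPU device, with -1 meaning CPU only):

  # 10 benchmark loops on 36 threads, CPU only
  ./benchncnn 10 36 0 -1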

NCNN 20230517 - ms (Fewer Is Better):
  Target: CPU - Model: FastestDet - a: 4.89 (MIN: 4.81 / MAX: 5.48)
  Target: CPU - Model: vision_transformer - a: 57.07 (MIN: 56.47 / MAX: 77.16)
  Target: CPU - Model: regnety_400m - a: 13.77 (MIN: 13.58 / MAX: 14.29)
  Target: CPU - Model: squeezenet_ssd - a: 9.96 (MIN: 9.8 / MAX: 10.42)
  Target: CPU - Model: yolov4-tiny - a: 21.72 (MIN: 20.57 / MAX: 32.73)
  Target: CPU - Model: resnet50 - a: 11.74 (MIN: 11.59 / MAX: 12.43)
  Target: CPU - Model: alexnet - a: 5.8 (MIN: 5.67 / MAX: 6.27)
  Target: CPU - Model: resnet18 - a: 6.39 (MIN: 6.32 / MAX: 7.36)
  Target: CPU - Model: vgg16 - a: 34.17 (MIN: 31.56 / MAX: 43.63)
  Target: CPU - Model: googlenet - a: 10.16 (MIN: 10.08 / MAX: 10.71)
  Target: CPU - Model: blazeface - a: 1.62 (MIN: 1.58 / MAX: 1.76)
  Target: CPU - Model: efficientnet-b0 - a: 5.28 (MIN: 5.18 / MAX: 7.07)
  Target: CPU - Model: mnasnet - a: 3.56 (MIN: 3.48 / MAX: 3.93)
  Target: CPU - Model: shufflenet-v2 - a: 4.36 (MIN: 4.3 / MAX: 4.76)
  Target: CPU-v3-v3 - Model: mobilenet-v3 - a: 4.02 (MIN: 3.96 / MAX: 4.63)
  Target: CPU-v2-v2 - Model: mobilenet-v2 - a: 4.01 (MIN: 3.85 / MAX: 4.59)
  Target: CPU - Model: mobilenet - a: 11.84 (MIN: 11.76 / MAX: 12.72)
  (All NCNN results built with (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400 - Average Latency (Fewer Is Better) - a: 76.44 (MAX: 27772.42)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400 - point/sec (More Is Better) - a: 937338
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400 - Average Latency (Fewer Is Better) - a: 120.2 (MAX: 27499.1)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400 - point/sec (More Is Better) - a: 1427558
Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 153.43 (MAX: 26693.21)
Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 - point/sec (More Is Better) - a: 25618265
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 20.06 (MAX: 24479.41)
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - point/sec (More Is Better) - a: 928645
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 82.81 (MAX: 24358.67)
Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - point/sec (More Is Better) - a: 21221162
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - Average Latency (Fewer Is Better) - a: 32.98 (MAX: 14361.53)
Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - point/sec (More Is Better) - a: 1405614

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 - Ops/sec (More Is Better) - a: 2378150.45
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 - Ops/sec (More Is Better) - a: 2338023.63
(Built with (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre)

Dragonflydb

Dragonfly is an open-source database server that is a "modern Redis replacement" that aims to be the fastest memory store while being compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, Memtier_benchmark is used as a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
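
A hedged sketch of pairing a local Dragonfly instance with memtier_benchmark, as these results do (flags are assumptions; Dragonfly listens on the Redis port 6379 by default):

  # Start Dragonfly, then drive it over the Redis protocol at a 1:5 set-to-get ratio
  dragonfly --logtostderr &
  memtier_benchmark --server=127.0.0.1 --port=6379 --protocol=redis \
      --clients=50 --threads=6 --ratio=1:5 --test-time=60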

Dragonflydb 1.6.2 - Clients Per Thread: 50 - Set To Get Ratio: 1:5: 6124151.75 Ops/sec (more is better) [g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre]

Dragonflydb 1.6.2 - Clients Per Thread: 50 - Set To Get Ratio: 1:100: 6143001.91 Ops/sec (more is better)

Dragonflydb 1.6.2 - Clients Per Thread: 50 - Set To Get Ratio: 1:10: 6098422.19 Ops/sec (more is better)

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5: 2431342.48 Ops/sec (more is better)

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement", aiming to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:5: 5179817.56 Ops/sec (more is better)

Dragonflydb 1.6.2 - Clients Per Thread: 20 - Set To Get Ratio: 1:10: 6163567.35 Ops/sec (more is better)

Dragonflydb 1.6.2 - Clients Per Thread: 20 - Set To Get Ratio: 1:5: 6221259.54 Ops/sec (more is better)

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10: 2386384.64 Ops/sec (more is better)

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement", aiming to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:100: 5167381.39 Ops/sec (more is better)

Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:10: 5105989.97 Ops/sec (more is better)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is carried out using the IoT Benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100: Average Latency 29.75 (fewer is better; MAX: 13096.95); 617799 point/sec (more is better)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream: 59.97 ms/batch (fewer is better); 150.02 items/sec (more is better)
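The DeepSparse results in this file all use the engine's asynchronous multi-stream scheduling. As a rough sketch of that scenario via the deepsparse.benchmark CLI (the SparseZoo stub below is a placeholder rather than the exact model behind the result above, and flag names should be checked against your installed deepsparse version):

  deepsparse.benchmark \
      "zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none" \
      --scenario async --batch_size 1 --time 60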

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement", aiming to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 20 - Set To Get Ratio: 1:100: 6215410.58 Ops/sec (more is better)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is carried out using the IoT Benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400: Average Latency 112.43 (fewer is better; MAX: 27482.88); 625936 point/sec (more is better)

libavif avifenc

This is a test of the AOMedia libavif library, timing the encoding of a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.
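For orientation, a hand-run equivalent at the speed and lossless settings that appear in this result file would look roughly like the following (input filename hypothetical):

  avifenc --speed 2 input.jpg output-s2.avif
  avifenc --speed 6 --lossless input.jpg output-s6-lossless.avif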

libavif avifenc 1.0 - Encoder Speed: 2: 64.90 Seconds (fewer is better) [g++ options: -O3 -fPIC -lm]

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 37.47 ms/batch (fewer is better); 240.03 items/sec (more is better)

Apache Hadoop

This is a benchmark of Apache Hadoop using its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.
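NNThroughputBenchmark exercises the NameNode with synthetic metadata operations; the Threads/Files pairs in the result titles correspond to its -threads and -files arguments. A hedged, hand-run sketch of one cell of that matrix (not the exact Phoronix invocation):

  hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark \
      -op rename -threads 50 -files 100000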

Apache Hadoop 3.3.6 - Operation: Open - Threads: 20 - Files: 100000: 641026 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: File Status - Threads: 20 - Files: 100000: 970874 Ops per sec (more is better)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is carried out using the IoT Benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100: Average Latency 71.7 (fewer is better; MAX: 24301.95); 1000001 point/sec (more is better)

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 4K: 2.581 Frames Per Second (more is better) [g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]
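The SVT-AV1 results in this file are standalone SvtAv1EncApp runs at the listed presets; a minimal hand-run sketch (source filename hypothetical) would be:

  SvtAv1EncApp --preset 4 -i Bosphorus_3840x2160.y4m -b bosphorus_p4.ivf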

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is carried out using the IoT Benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100: Average Latency 134.95 (fewer is better; MAX: 26222.68); 12144605 point/sec (more is better)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 16.59 ms/batch (fewer is better); 541.74 items/sec (more is better)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is carried out using the IoT Benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100: Average Latency 64.07 (fewer is better; MAX: 24266.3); 276746 point/sec (more is better)

Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100: Average Latency 115.49 (fewer is better; MAX: 26277.78); 343992 point/sec (more is better)

Apache IoTDB 1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100: Average Latency 68.56 (fewer is better; MAX: 24298.94); 653641 point/sec (more is better)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream: 121.89 ms/batch (fewer is better); 73.83 items/sec (more is better)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is carried out using the IoT Benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100: Average Latency 124.08 (fewer is better; MAX: 26271.13); 515961 point/sec (more is better)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: 520.89 ms/batch (fewer is better); 17.27 items/sec (more is better)

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: 494.78 ms/batch (fewer is better); 18.19 items/sec (more is better)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is carried out using the IoT Benchmark tool [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100: Average Latency 116.23 (fewer is better; MAX: 26071.68); 139990 point/sec (more is better)

Apache Hadoop

This is a benchmark of Apache Hadoop using its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 50 - Files: 100000: 15736 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 50 - Files: 100000: 18815 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 1000 - Files: 100000: 14586 Ops per sec (more is better)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe) and is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
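A hedged example of a comparable hand-run vvencapp invocation, assuming a Y4M source (filename hypothetical) and using the "fast" preset that appears in this result file:

  vvencapp --preset fast -i Bosphorus_1920x1080.y4m -o bosphorus_fast.266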

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Fast: 11.04 Frames Per Second (more is better) [g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto]

Apache Hadoop

This is a benchmark of Apache Hadoop using its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 500 - Files: 100000: 17944 Ops per sec (more is better)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream: 394.76 ms/batch (fewer is better); 22.80 items/sec (more is better)

Apache Hadoop

This is a benchmark of Apache Hadoop using its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 100 - Files: 100000: 20521 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 1000 - Files: 100000: 29682 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: Create - Threads: 20 - Files: 100000: 7554 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: Open - Threads: 50 - Files: 100000: 591716 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 100 - Files: 100000: 29197 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: File Status - Threads: 50 - Files: 100000: 694444 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 500 - Files: 100000: 47596 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: File Status - Threads: 1000 - Files: 100000: 769231 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: File Status - Threads: 500 - Files: 100000: 1098901 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: Open - Threads: 500 - Files: 100000: 529101 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: Open - Threads: 1000 - Files: 100000: 543478 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: File Status - Threads: 100 - Files: 100000: 787402 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: Open - Threads: 100 - Files: 100000: 578035 Ops per sec (more is better)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream: 37.92 ms/batch (fewer is better); 237.19 items/sec (more is better)

Apache Hadoop

This is a benchmark of Apache Hadoop using its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Create - Threads: 500 - Files: 100000: 13759 Ops per sec (more is better)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream: 168.07 ms/batch (fewer is better); 53.54 items/sec (more is better)

Apache Hadoop

This is a benchmark of Apache Hadoop using its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Create - Threads: 1000 - Files: 100000: 13780 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: Create - Threads: 50 - Files: 100000: 14349 Ops per sec (more is better)

Apache Hadoop 3.3.6 - Operation: Create - Threads: 100 - Files: 100000: 15401 Ops per sec (more is better)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 85.65 ms/batch (fewer is better); 105.05 items/sec (more is better)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream: 87.52 ms/batch (fewer is better); 102.80 items/sec (more is better)

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream: 34.65 ms/batch (fewer is better); 259.54 items/sec (more is better)

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream: 34.58 ms/batch (fewer is better); 260.10 items/sec (more is better)

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 5.8049 ms/batch (fewer is better); 1545.19 items/sec (more is better)

AOM AV1

This is a test of libaom, the AV1 Codec Library encoder developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
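The graph titles map libaom's --cpu-used speed levels and pass modes onto the result names. A rough hand-run sketch of a two-pass encode comparable to the "Speed 6 Two-Pass" entries (source filename hypothetical, thread count an assumption):

  aomenc --passes=2 --cpu-used=6 --threads=36 \
      -o bosphorus_cpu6.ivf Bosphorus_1920x1080.y4m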

AOM AV1 3.7 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p: 23.38 Frames Per Second (more is better) [g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
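Each Stress-NG result below isolates one stressor and reports its bogo-ops/s rate. A hedged hand-run equivalent (timeout chosen arbitrarily; 0 instances means one per CPU) looks like:

  stress-ng --io-uring 0 --timeout 60s --metrics-brief
  stress-ng --malloc 0 --timeout 60s --metrics-brief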

Stress-NG 0.16.04 - Test: IO_uring: 245079.88 Bogo Ops/s (more is better) [g++ options: -lm -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmd -lmpfr -lpthread -lrt -lsctp -lz]

Stress-NG 0.16.04 - Test: MMAP: 425.44 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Cloning: 1695.94 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Malloc: 27713750.25 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: MEMFD: 461.77 Bogo Ops/s (more is better)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe) and is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Faster: 20.18 Frames Per Second (more is better)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04 - Test: Atomic: 273.71 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Zlib: 1947.28 Bogo Ops/s (more is better)

AOM AV1

This is a test of libaom, the AV1 Codec Library encoder developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.7 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p: 0.67 Frames Per Second (more is better)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04 - Test: CPU Cache: 2599516.29 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Matrix Math: 101403.4 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: SENDFILE: 329594.28 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: NUMA: 378.23 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Function Call: 10886.67 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Pthread: 139787.4 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Vector Floating Point: 38877.14 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Memory Copying: 5542.4 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Matrix 3D Math: 1430.9 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Vector Shuffle: 12252.73 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Floating Point: 3943.71 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: AVL Tree: 112.78 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Forking: 55491.95 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Hash: 3140132.98 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Glibc C String Functions: 6760452.84 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Fused Multiply-Add: 17443689.74 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Wide Vector Math: 739258.6 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Socket Activity: 11705.93 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Mixed Scheduler: 18322.87 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: x86_64 RdRand: 182257.55 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: AVX-512 VNNI: 1242160.05 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Semaphores: 38227021.82 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: CPU Stress: 41250.42 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Crypto: 31071.15 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Mutex: 11003661.44 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Poll: 2364530.18 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: System V Message Passing: 7317181.49 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Glibc Qsort Data Sorting: 429.61 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Context Switching: 2736448.18 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Vector Math: 70075.88 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Futex: 2636992.17 Bogo Ops/s (more is better)

Stress-NG 0.16.04 - Test: Pipe: 8793037.87 Bogo Ops/s (more is better)

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p: 6.488 Frames Per Second (more is better)

AOM AV1

This is a test of libaom, the AV1 Codec Library encoder developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.7 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K: 28.59 Frames Per Second (more is better)

AOM AV1 3.7 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K: 30.73 Frames Per Second (more is better)

AOM AV1 3.7 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K: 35.7 Frames Per Second (more is better)

AOM AV1 3.7 - Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K: 35.82 Frames Per Second (more is better)

AOM AV1 3.7 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K: 36.02 Frames Per Second (more is better)

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 4K: 40.15 Frames Per Second (more is better)

SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p: 44.47 Frames Per Second (more is better)

libavif avifenc

This is a test of the AOMedia libavif library, timing the encoding of a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0 - Encoder Speed: 6, Lossless: 9.601 Seconds (fewer is better)

AOM AV1

This is a test of libaom, the AV1 Codec Library encoder developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.7 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p: 72.51 Frames Per Second (more is better)

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 4K: 87.09 Frames Per Second (more is better)

AOM AV1

This is a test of libaom, the AV1 Codec Library encoder developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.7 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p: 80.75 Frames Per Second (more is better)

AOM AV1 3.7 - Encoder Mode: Speed 11 Realtime - Input: Bosphorus 1080p: 83.78 Frames Per Second (more is better)

AOM AV1 3.7 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p: 85.46 Frames Per Second (more is better)

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 4K: 94.88 Frames Per Second (more is better)

AOM AV1

This is a test of libaom, the AV1 Codec Library encoder developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.7 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p: 87.22 Frames Per Second (more is better)

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement", aiming to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Clients Per Thread: 100 - Set To Get Ratio: 1:10

a: The test run did not produce a result. E: Connection error: Connection reset by peer

Clients Per Thread: 100 - Set To Get Ratio: 1:5

a: The test run did not produce a result. E: Connection error: Connection reset by peer

Clients Per Thread: 100 - Set To Get Ratio: 1:100

a: The test run did not produce a result. E: Connection error: Connection reset by peer

Clients Per Thread: 60 - Set To Get Ratio: 1:100

a: The test run did not produce a result. E: Connection error: Connection reset by peer

Clients Per Thread: 60 - Set To Get Ratio: 1:5

a: The test run did not produce a result. E: Connection error: Connection reset by peer

Clients Per Thread: 60 - Set To Get Ratio: 1:10

a: The test run did not produce a result. E: Connection error: Connection reset by peer

libavif avifenc

This is a test of the AOMedia libavif library, timing the encoding of a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0 - Encoder Speed: 10, Lossless: 5.904 Seconds (fewer is better)

libavif avifenc 1.0 - Encoder Speed: 6: 5.813 Seconds (fewer is better)

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p: 168.49 Frames Per Second (more is better)

SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p: 183.13 Frames Per Second (more is better)

322 Results Shown

Apache Hadoop:
  Rename - 20 - 10000000
  Delete - 20 - 10000000
  Open - 20 - 10000000
  File Status - 20 - 10000000
  Rename - 50 - 10000000
  Rename - 1000 - 10000000
  Rename - 500 - 10000000
  Delete - 50 - 10000000
  Rename - 100 - 10000000
  Delete - 1000 - 10000000
  Delete - 500 - 10000000
  Delete - 100 - 10000000
  Open - 50 - 10000000
  Create - 20 - 10000000
  Open - 1000 - 10000000
  File Status - 50 - 10000000
  File Status - 1000 - 10000000
  Open - 500 - 10000000
  File Status - 500 - 10000000
  Open - 100 - 10000000
Timed GCC Compilation
Apache Hadoop:
  File Status - 100 - 10000000
  Create - 1000 - 10000000
OpenRadioss
Apache Hadoop:
  Create - 500 - 10000000
  Create - 50 - 10000000
  Create - 100 - 10000000
BRL-CAD
OpenRadioss
Apache Hadoop:
  Rename - 20 - 1000000
  Delete - 20 - 1000000
Apache IoTDB:
  800 - 100 - 800 - 400:
    Average Latency
    point/sec
  800 - 100 - 800 - 100:
    Average Latency
    point/sec
Apache Hadoop:
  Open - 20 - 1000000
  File Status - 20 - 1000000
OpenRadioss
Apache Hadoop:
  Rename - 50 - 1000000
  Rename - 1000 - 1000000
  Delete - 50 - 1000000
  Rename - 500 - 1000000
Apache IoTDB:
  800 - 100 - 500 - 400:
    Average Latency
    point/sec
  500 - 100 - 800 - 400:
    Average Latency
    point/sec
  800 - 100 - 500 - 100:
    Average Latency
    point/sec
Apache Hadoop
Apache IoTDB:
  500 - 100 - 800 - 100:
    Average Latency
    point/sec
Apache Hadoop:
  Delete - 1000 - 1000000
  Delete - 500 - 1000000
  Create - 20 - 1000000
  Delete - 100 - 1000000
  Open - 1000 - 1000000
  File Status - 1000 - 1000000
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache Hadoop:
  File Status - 50 - 1000000
  Open - 50 - 1000000
OpenRadioss
Apache Hadoop
OpenRadioss
AOM AV1
Apache Hadoop
VVenC
PostgreSQL:
  100 - 250 - Read Only - Average Latency
  100 - 250 - Read Only
  100 - 1000 - Read Only - Average Latency
  100 - 1000 - Read Only
  100 - 500 - Read Only - Average Latency
  100 - 500 - Read Only
  100 - 500 - Read Write - Average Latency
  100 - 500 - Read Write
  100 - 1000 - Read Write - Average Latency
  100 - 1000 - Read Write
  100 - 250 - Read Write - Average Latency
  100 - 250 - Read Write
  100 - 800 - Read Only - Average Latency
  100 - 800 - Read Only
  100 - 800 - Read Write - Average Latency
  100 - 800 - Read Write
Apache Hadoop
PostgreSQL:
  1 - 1000 - Read Write - Average Latency
  1 - 1000 - Read Write
  1 - 800 - Read Write - Average Latency
  1 - 800 - Read Write
Apache IoTDB:
  500 - 100 - 500 - 100:
    Average Latency
    point/sec
  500 - 100 - 500 - 400:
    Average Latency
    point/sec
PostgreSQL:
  1 - 500 - Read Write - Average Latency
  1 - 500 - Read Write
  1 - 250 - Read Write - Average Latency
  1 - 250 - Read Write
  1 - 500 - Read Only - Average Latency
  1 - 500 - Read Only
  1 - 1000 - Read Only - Average Latency
  1 - 1000 - Read Only
  1 - 800 - Read Only - Average Latency
  1 - 800 - Read Only
  1 - 250 - Read Only - Average Latency
  1 - 250 - Read Only
libavif avifenc
Apache Hadoop
Apache Cassandra
Apache Hadoop:
  Create - 1000 - 1000000
  Create - 500 - 1000000
Apache IoTDB:
  800 - 100 - 200 - 400:
    Average Latency
    point/sec
Apache Hadoop:
  Create - 100 - 1000000
  Create - 50 - 1000000
Apache IoTDB:
  800 - 100 - 200 - 100:
    Average Latency
    point/sec
  200 - 100 - 800 - 100:
    Average Latency
    point/sec
  500 - 100 - 200 - 400:
    Average Latency
    point/sec
AOM AV1
OpenRadioss
AOM AV1
Apache IoTDB:
  500 - 100 - 200 - 100:
    Average Latency
    point/sec
  800 - 1 - 800 - 400:
    Average Latency
    point/sec
  200 - 100 - 500 - 100:
    Average Latency
    point/sec
  800 - 1 - 800 - 100:
    Average Latency
    point/sec
  500 - 1 - 800 - 400:
    Average Latency
    point/sec
  100 - 100 - 800 - 100:
    Average Latency
    point/sec
Apache Hadoop
AOM AV1
VVenC
Apache Hadoop
Redis 7.0.12 + memtier_benchmark
Apache IoTDB:
  800 - 1 - 500 - 100:
    Average Latency
    point/sec
  800 - 1 - 500 - 400:
    Average Latency
    point/sec
Redis 7.0.12 + memtier_benchmark
Apache IoTDB:
  500 - 1 - 800 - 100:
    Average Latency
    point/sec
NCNN:
  CPU - FastestDet
  CPU - vision_transformer
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
Apache IoTDB:
  800 - 1 - 200 - 400:
    Average Latency
    point/sec
  500 - 1 - 500 - 400:
    Average Latency
    point/sec
  100 - 100 - 500 - 100:
    Average Latency
    point/sec
  800 - 1 - 200 - 100:
    Average Latency
    point/sec
  200 - 100 - 200 - 100:
    Average Latency
    point/sec
  500 - 1 - 500 - 100:
    Average Latency
    point/sec
Redis 7.0.12 + memtier_benchmark:
  Redis - 100 - 1:10
  Redis - 100 - 1:5
Dragonflydb:
  50 - 1:5
  50 - 1:100
  50 - 1:10
Redis 7.0.12 + memtier_benchmark
Dragonflydb:
  10 - 1:5
  20 - 1:10
  20 - 1:5
Redis 7.0.12 + memtier_benchmark
Dragonflydb:
  10 - 1:100
  10 - 1:10
Apache IoTDB:
  500 - 1 - 200 - 100:
    Average Latency
    point/sec
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Dragonflydb
Apache IoTDB:
  500 - 1 - 200 - 400:
    Average Latency
    point/sec
libavif avifenc
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache Hadoop:
  Open - 20 - 100000
  File Status - 20 - 100000
Apache IoTDB:
  200 - 1 - 800 - 100:
    Average Latency
    point/sec
SVT-AV1
Apache IoTDB:
  100 - 100 - 200 - 100:
    Average Latency
    point/sec
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache IoTDB:
  200 - 1 - 200 - 100:
    Average Latency
    point/sec
  100 - 1 - 500 - 100:
    Average Latency
    point/sec
  200 - 1 - 500 - 100:
    Average Latency
    point/sec
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache IoTDB:
  100 - 1 - 800 - 100:
    Average Latency
    point/sec
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache IoTDB:
  100 - 1 - 200 - 100:
    Average Latency
    point/sec
Apache Hadoop:
  Rename - 50 - 100000
  Delete - 50 - 100000
  Rename - 1000 - 100000
VVenC
Apache Hadoop
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache Hadoop:
  Rename - 100 - 100000
  Delete - 1000 - 100000
  Create - 20 - 100000
  Open - 50 - 100000
  Delete - 100 - 100000
  File Status - 50 - 100000
  Delete - 500 - 100000
  File Status - 1000 - 100000
  File Status - 500 - 100000
  Open - 500 - 100000
  Open - 1000 - 100000
  File Status - 100 - 100000
  Open - 100 - 100000
Neural Magic DeepSparse:
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache Hadoop
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache Hadoop:
  Create - 1000 - 100000
  Create - 50 - 100000
  Create - 100 - 100000
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
AOM AV1
Stress-NG:
  IO_uring
  MMAP
  Cloning
  Malloc
  MEMFD
VVenC
Stress-NG:
  Atomic
  Zlib
AOM AV1
Stress-NG:
  CPU Cache
  Matrix Math
  SENDFILE
  NUMA
  Function Call
  Pthread
  Vector Floating Point
  Memory Copying
  Matrix 3D Math
  Vector Shuffle
  Floating Point
  AVL Tree
  Forking
  Hash
  Glibc C String Functions
  Fused Multiply-Add
  Wide Vector Math
  Socket Activity
  Mixed Scheduler
  x86_64 RdRand
  AVX-512 VNNI
  Semaphores
  CPU Stress
  Crypto
  Mutex
  Poll
  System V Message Passing
  Glibc Qsort Data Sorting
  Context Switching
  Vector Math
  Futex
  Pipe
SVT-AV1
AOM AV1:
  Speed 8 Realtime - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 4K
  Speed 11 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
SVT-AV1:
  Preset 8 - Bosphorus 4K
  Preset 8 - Bosphorus 1080p
libavif avifenc
AOM AV1
SVT-AV1
AOM AV1:
  Speed 8 Realtime - Bosphorus 1080p
  Speed 11 Realtime - Bosphorus 1080p
  Speed 9 Realtime - Bosphorus 1080p
SVT-AV1
AOM AV1
libavif avifenc:
  10, Lossless
  6
SVT-AV1:
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p