extra tests

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310231-NE-EXTRATEST13
Test categories covered by this comparison:

AV1: 2 tests
C/C++ Compiler Tests: 2 tests
CPU Massive: 5 tests
Creator Workloads: 8 tests
Database Test Suite: 3 tests
Encoding: 3 tests
Game Development: 2 tests
HPC - High Performance Computing: 4 tests
Java Tests: 2 tests
Machine Learning: 2 tests
Multi-Core: 9 tests
NVIDIA GPU Compute: 2 tests
Intel oneAPI: 3 tests
OpenMPI Tests: 4 tests
Renderers: 2 tests
Server: 3 tests
Server CPU Tests: 4 tests
Video Encoding: 3 tests
Common Workstation Benchmarks: 2 tests

Result Identifier    Date Run           Test Duration
RESET                October 23 2023    6 Hours, 8 Minutes
RESET2               October 23 2023    1 Hour, 18 Minutes
RESET3               October 23 2023    5 Hours, 18 Minutes



System Details

RESET and RESET2:
  Processor: AMD EPYC 9124 16-Core @ 3.00GHz (16 Cores / 32 Threads)
  Screen Resolution: 1024x768

RESET3:
  Processor: AMD EPYC 9334 32-Core @ 2.70GHz (32 Cores / 64 Threads)
  Monitor: DELL E207WFP
  Screen Resolution: 1680x1050

All runs:
  Motherboard: Supermicro H13SSW (1.1 BIOS)
  Memory: 12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123N
  Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07
  Graphics: astdrmfb
  OS: AlmaLinux 9.2
  Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64)
  Compiler: GCC 11.3.1 20221121
  File-System: ext4

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111
Java Details: OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
Python Details: Python 3.9.16
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

[Result Overview chart: relative performance of RESET, RESET2 and RESET3 across Embree, Dragonflydb, Intel Open Image Denoise, OSPRay, SPECFEM3D, Timed Linux Kernel Compilation, Remhos, Neural Magic DeepSparse, SVT-AV1, Liquid-DSP, nekRS and VVenC.]
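An overview percentage of this kind is typically produced by normalizing each result against a chosen baseline run and then combining the normalized scores, commonly with a geometric mean so that no single test dominates. A minimal sketch of that calculation in Python, assuming higher-is-better results and hypothetical test names, is:

    from math import prod

    def geometric_mean(values):
        """Geometric mean of a list of positive numbers."""
        return prod(values) ** (1.0 / len(values))

    # Hypothetical per-test results (higher is better) for a baseline and a comparison run.
    baseline = {"test_a": 100.0, "test_b": 250.0, "test_c": 40.0}
    candidate = {"test_a": 210.0, "test_b": 240.0, "test_c": 85.0}

    # Normalize each test against the baseline, then combine with a geometric mean.
    ratios = [candidate[t] / baseline[t] for t in baseline]
    print(f"relative performance: {geometric_mean(ratios) * 100:.0f}%")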

[Condensed side-by-side results table for RESET, RESET2 and RESET3 (covering Apache Hadoop, Stress-NG, Dragonflydb, Neural Magic DeepSparse, Embree, OSPRay, Liquid-DSP, SPECFEM3D, TiDB, SVT-AV1, Blender, ncnn, nekRS, VVenC and more) omitted; selected individual results follow below.]

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.
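NNThroughputBenchmark is driven from the Hadoop command line, stressing the HDFS name-node with the operation, thread count, and file count reflected in the test names above. A hedged sketch of invoking one configuration from Python follows; the -op/-threads/-files flag names mirror those parameters but are assumptions here, so check them against your Hadoop version.

    import subprocess

    # Assumed invocation of the built-in HDFS name-node throughput benchmark.
    cmd = [
        "hadoop", "org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark",
        "-op", "open",        # name-node operation under test (open, create, rename, delete, ...)
        "-threads", "50",     # concurrent client threads
        "-files", "1000000",  # number of files the operation is applied to
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)  # the benchmark reports operations per second for the chosen operation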

Apache Hadoop 3.3.6, Operation: Open - Threads: 50 - Files: 1000000 (Ops per sec; more is better): RESET: 264831, RESET3: 1256281

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
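The "Bogo Ops/s" unit simply counts how many iterations of a given stressor complete per second. A rough illustration of that style of measurement (an independent sketch, not Stress-NG's implementation) is:

    import time

    def bogo_ops_per_sec(duration=2.0):
        """Count iterations of a trivial allocation workload completed per second."""
        ops = 0
        end = time.perf_counter() + duration
        while time.perf_counter() < end:
            buf = bytearray(4096)  # allocate and touch a small buffer each iteration
            buf[0] = 1
            ops += 1
        return ops / duration

    print(f"{bogo_ops_per_sec():.0f} bogo ops/s")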

Stress-NG 0.16.04, Test: Malloc (Bogo Ops/s; more is better): RESET: 31210490.91, RESET3: 137709065.86

Apache Hadoop

Apache Hadoop 3.3.6, Operation: File Status - Threads: 20 - Files: 1000000 (Ops per sec; more is better): RESET: 1879699, RESET3: 428816

Apache Hadoop 3.3.6, Operation: File Status - Threads: 50 - Files: 1000000 (Ops per sec; more is better): RESET: 593120, RESET3: 1890359

Apache Hadoop 3.3.6, Operation: File Status - Threads: 500 - Files: 100000 (Ops per sec; more is better): RESET: 215517, RESET3: 598802

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used: a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
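Because Dragonfly is wire-compatible with Redis, the "Set To Get Ratio" workloads above can be approximated with any Redis client. A minimal sketch using the redis-py package is shown below; the host, port, key space, and value size are assumptions, and memtier_benchmark itself drives far more concurrent connections.

    import random
    import redis  # pip install redis

    r = redis.Redis(host="localhost", port=6379)  # Dragonfly defaults to the standard Redis port

    def run_ops(total=10000, set_to_get_ratio=(1, 100)):
        """Issue SET and GET commands in roughly the requested ratio."""
        sets, gets = set_to_get_ratio
        for _ in range(total):
            key = f"key:{random.randint(0, 9999)}"
            if random.random() < sets / (sets + gets):
                r.set(key, "x" * 32)
            else:
                r.get(key)

    run_ops()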

Dragonflydb 1.6.2, Clients Per Thread: 20 - Set To Get Ratio: 1:100 (Ops/sec; more is better): RESET: 9932666.12, RESET2: 9367141.20, RESET3: 21025616.30

Stress-NG

Stress-NG 0.16.04, Test: Semaphores (Bogo Ops/s; more is better): RESET: 42481788.68, RESET3: 92985304.75

Apache Hadoop

Apache Hadoop 3.3.6, Operation: File Status - Threads: 500 - Files: 1000000 (Ops per sec; more is better): RESET: 1779359, RESET3: 816327

Stress-NG

Stress-NG 0.16.04, Test: x86_64 RdRand (Bogo Ops/s; more is better): RESET: 6022050.43, RESET3: 12951786.69

Stress-NG 0.16.04, Test: Hash (Bogo Ops/s; more is better): RESET: 3622490.53, RESET3: 7756366.80

Stress-NG 0.16.04, Test: Vector Shuffle (Bogo Ops/s; more is better): RESET: 12554.84, RESET3: 26692.66

Stress-NG 0.16.04, Test: CPU Stress (Bogo Ops/s; more is better): RESET: 41635.43, RESET3: 87949.71

Stress-NG 0.16.04, Test: Floating Point (Bogo Ops/s; more is better): RESET: 5845.74, RESET3: 12241.13

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 1591.17, RESET2: 1588.20, RESET3: 3312.51

Stress-NG

Stress-NG 0.16.04, Test: Poll (Bogo Ops/s; more is better): RESET: 2214330.78, RESET3: 4603498.17

Stress-NG 0.16.04, Test: Function Call (Bogo Ops/s; more is better): RESET: 13421.19, RESET3: 27709.80

Dragonflydb

Dragonflydb 1.6.2, Clients Per Thread: 20 - Set To Get Ratio: 1:10 (Ops/sec; more is better): RESET: 10162330.86, RESET2: 9527420.80, RESET3: 19505101.23

Dragonflydb 1.6.2, Clients Per Thread: 10 - Set To Get Ratio: 1:10 (Ops/sec; more is better): RESET: 8302009.42, RESET2: 8584414.13, RESET3: 16927170.48

Stress-NG

Stress-NG 0.16.04, Test: Zlib (Bogo Ops/s; more is better): RESET: 1466.40, RESET3: 2988.69

Dragonflydb

Dragonflydb 1.6.2, Clients Per Thread: 50 - Set To Get Ratio: 1:10 (Ops/sec; more is better): RESET: 12469755.60, RESET2: 11249348.91, RESET3: 22835296.84

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
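At its core, a ray-tracing kernel answers intersection queries between rays and scene geometry as fast as possible. The tiny ray/sphere intersection routine below is purely illustrative (it is not Embree code) but shows the kind of arithmetic these kernels vectorize across SSE/AVX/AVX-512 lanes.

    import math

    def ray_sphere_hit(origin, direction, center, radius):
        """Return the nearest positive hit distance of a normalized ray with a sphere, or None."""
        ox, oy, oz = (origin[i] - center[i] for i in range(3))
        b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
        c = ox * ox + oy * oy + oz * oz - radius * radius
        disc = b * b - 4.0 * c  # the direction is assumed normalized, so the quadratic's 'a' term is 1
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0.0 else None

    print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # prints 4.0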

Embree 4.1, Binary: Pathtracer - Model: Crown (Frames Per Second; more is better): RESET: 21.58 (min 21.45 / max 21.93), RESET2: 21.43 (min 21.27 / max 21.76), RESET3: 43.40 (min 42.87 / max 44.59)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 240.38, RESET3: 486.73

Embree

Embree 4.1, Binary: Pathtracer ISPC - Model: Crown (Frames Per Second; more is better): RESET: 22.68 (min 22.49 / max 22.98), RESET2: 22.67 (min 22.51 / max 23.04), RESET3: 45.68 (min 45.1 / max 46.91)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 163.18, RESET2: 162.95, RESET3: 327.97

Neural Magic DeepSparse 1.5, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 163.08, RESET3: 327.79

Dragonflydb

Dragonflydb 1.6.2, Clients Per Thread: 50 - Set To Get Ratio: 1:100 (Ops/sec; more is better): RESET: 11746361.87, RESET2: 12138788.95, RESET3: 23607335.01

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 71.84, RESET2: 72.16, RESET3: 143.77

Stress-NG

Stress-NG 0.16.04, Test: Memory Copying (Bogo Ops/s; more is better): RESET: 7269.20, RESET3: 14537.11

Stress-NG 0.16.04, Test: Context Switching (Bogo Ops/s; more is better): RESET: 10072239.66, RESET3: 20135718.92

Stress-NG 0.16.04, Test: Crypto (Bogo Ops/s; more is better): RESET: 39959.41, RESET3: 79692.92

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 72.73, RESET3: 145.01

Stress-NG

Stress-NG 0.16.04, Test: Glibc C String Functions (Bogo Ops/s; more is better): RESET: 19957308.60, RESET3: 39779508.19

Stress-NG 0.16.04, Test: AVL Tree (Bogo Ops/s; more is better): RESET: 205.07, RESET3: 407.92

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 16.16, RESET3: 32.07

Dragonflydb

Dragonflydb 1.6.2, Clients Per Thread: 10 - Set To Get Ratio: 1:100 (Ops/sec; more is better): RESET: 8637294.18, RESET2: 8951592.94, RESET3: 17116522.17

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 109.08, RESET3: 215.92

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 505.28, RESET2: 505.41, RESET3: 999.06

Neural Magic DeepSparse 1.5, Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 257.57, RESET2: 257.40, RESET3: 508.34

Neural Magic DeepSparse 1.5, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 13.14, RESET2: 13.03, RESET3: 25.70

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 55.54, RESET3: 109.28

Neural Magic DeepSparse 1.5, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 13.10, RESET3: 25.72

Embree

Embree 4.1, Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second; more is better): RESET: 22.25 (min 22.17 / max 22.47), RESET2: 22.19 (min 22.12 / max 22.35), RESET3: 43.51 (min 43.25 / max 44.05)

Embree 4.1, Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second; more is better): RESET: 23.83 (min 23.74 / max 24.04), RESET2: 23.92 (min 23.83 / max 24.22), RESET3: 46.46 (min 46.18 / max 47.04)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12, Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second; more is better): RESET: 5.61038, RESET2: 5.60814, RESET3: 10.91530

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
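In these results the "Buffer Length" is the block of samples processed per call and the "Filter Length" is the number of FIR filter taps. The NumPy sketch below measures throughput in the same samples-per-second terms; it is an illustration of the workload shape, not the liquid-dsp API.

    import time
    import numpy as np

    def filtered_samples_per_sec(buffer_len=256, filter_len=57, iters=20000):
        """Push random sample blocks through an FIR filter and report samples filtered per second."""
        taps = np.random.randn(filter_len).astype(np.float32)
        block = np.random.randn(buffer_len).astype(np.float32)
        start = time.perf_counter()
        for _ in range(iters):
            np.convolve(block, taps, mode="same")
        elapsed = time.perf_counter() - start
        return buffer_len * iters / elapsed

    print(f"{filtered_samples_per_sec():.3e} samples/s")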

Liquid-DSP 1.6, Threads: 64 - Buffer Length: 256 - Filter Length: 32 (samples/s; more is better): RESET: 1056400000, RESET2: 1056700000, RESET3: 2053800000

OSPRay

OSPRay 2.12, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second; more is better): RESET: 5.46527, RESET2: 5.44882, RESET3: 10.59080

Stress-NG

Stress-NG 0.16.04, Test: MMAP (Bogo Ops/s; more is better): RESET: 297.29, RESET3: 577.78

Embree

Embree 4.1, Binary: Pathtracer - Model: Asian Dragon (Frames Per Second; more is better): RESET: 24.81 (min 24.73 / max 25.07), RESET2: 24.82 (min 24.74 / max 24.99), RESET3: 48.15 (min 47.9 / max 48.87)

OSPRay

OSPRay 2.12, Benchmark: particle_volume/ao/real_time (Items Per Second; more is better): RESET: 5.56770, RESET2: 5.57113, RESET3: 10.79350

OSPRay 2.12, Benchmark: particle_volume/scivis/real_time (Items Per Second; more is better): RESET: 5.56275, RESET2: 5.55955, RESET3: 10.75420

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 70.92, RESET2: 71.14, RESET3: 136.89

Stress-NG

Stress-NG 0.16.04, Test: Socket Activity (Bogo Ops/s; more is better): RESET: 10472.15, RESET3: 20183.09

Embree

Embree 4.1, Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second; more is better): RESET: 28.29 (min 28.19 / max 28.56), RESET2: 28.42 (min 28.3 / max 28.73), RESET3: 54.50 (min 54.24 / max 54.99)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec; more is better): RESET: 24.49, RESET3: 47.15

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.36, VGR Performance Metric (more is better): RESET: 295626, RESET3: 569090

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 2.0, Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec; more is better): RESET: 0.72, RESET2: 0.72, RESET3: 1.38

Stress-NG

Stress-NG 0.16.04, Test: NUMA (Bogo Ops/s; more is better): RESET: 664.87, RESET3: 1272.74

Intel Open Image Denoise

Intel Open Image Denoise 2.0, Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec; more is better): RESET: 0.34, RESET2: 0.34, RESET3: 0.65

Blender

Blender 3.6, Blend File: Barbershop - Compute: CPU-Only (Seconds; fewer is better): RESET: 670.71, RESET3: 351.79

Intel Open Image Denoise

Intel Open Image Denoise 2.0, Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec; more is better): RESET: 0.72, RESET2: 0.72, RESET3: 1.37

Stress-NG

Stress-NG 0.16.04, Test: Vector Floating Point (Bogo Ops/s; more is better): RESET: 53844.48, RESET3: 102433.04

OSPRay

OSPRay 2.12, Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second; more is better): RESET: 6.60071, RESET2: 6.58666, RESET3: 12.50750

Blender

Blender 3.6, Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds; fewer is better): RESET: 223.60, RESET3: 119.19

Stress-NG

Stress-NG 0.16.04, Test: Vector Math (Bogo Ops/s; more is better): RESET: 117532.19, RESET3: 219207.68

Blender

Blender 3.6, Blend File: Fishy Cat - Compute: CPU-Only (Seconds; fewer is better): RESET: 90.46, RESET3: 48.68

Stress-NG

Stress-NG 0.16.04, Test: Glibc Qsort Data Sorting (Bogo Ops/s; more is better): RESET: 463.11, RESET3: 858.22

Blender

Blender 3.6, Blend File: BMW27 - Compute: CPU-Only (Seconds; fewer is better): RESET: 71.89, RESET3: 38.91

Stress-NG

Stress-NG 0.16.04, Test: Fused Multiply-Add (Bogo Ops/s; more is better): RESET: 16158855.40, RESET3: 29786647.25

Stress-NG 0.16.04, Test: Wide Vector Math (Bogo Ops/s; more is better): RESET: 772551.49, RESET3: 1424072.34

Blender

Blender 3.6, Blend File: Classroom - Compute: CPU-Only (Seconds; fewer is better): RESET: 183.11, RESET3: 99.35

Stress-NG

Stress-NG 0.16.04, Test: Matrix Math (Bogo Ops/s; more is better): RESET: 87537.46, RESET3: 160597.83

Stress-NG 0.16.04, Test: Pipe (Bogo Ops/s; more is better): RESET: 13860878.09, RESET3: 25368697.90

Stress-NG 0.16.04, Test: AVX-512 VNNI (Bogo Ops/s; more is better): RESET: 1851108.77, RESET3: 3384972.44

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution of SPECFEM3D, using a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.
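For context, the elastic case amounts to solving the seismic wave equation over the hexahedral mesh. In its standard strong form (a textbook statement, not quoted from the SPECFEM3D documentation) it reads:

    \rho \, \partial_t^2 \mathbf{u} = \nabla \cdot \boldsymbol{\sigma} + \mathbf{f}, \qquad \boldsymbol{\sigma} = \mathbf{C} : \boldsymbol{\varepsilon}(\mathbf{u})

where rho is the density, u the displacement field, sigma the stress tensor, C the elastic tensor, epsilon the strain, and f the source term.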

SPECFEM3D 4.0, Model: Mount St. Helens (Seconds; fewer is better): RESET: 27.11, RESET2: 26.34, RESET3: 14.86

Liquid-DSP

Liquid-DSP 1.6, Threads: 64 - Buffer Length: 256 - Filter Length: 512 (samples/s; more is better): RESET: 282490000, RESET2: 282060000, RESET3: 510160000

SPECFEM3D

SPECFEM3D 4.0, Model: Tomographic Model (Seconds; fewer is better): RESET: 27.09, RESET2: 27.37, RESET3: 15.28

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 69.63, RESET3: 124.37

Neural Magic DeepSparse 1.5, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 14.3550, RESET3: 8.0375

SPECFEM3D

SPECFEM3D 4.0, Model: Homogeneous Halfspace (Seconds; fewer is better): RESET: 35.68, RESET2: 35.32, RESET3: 20.01

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 14.4253, RESET3: 8.1111

Neural Magic DeepSparse 1.5, Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 69.27, RESET3: 123.20

SPECFEM3D

SPECFEM3D 4.0, Model: Layered Halfspace (Seconds; fewer is better): RESET: 69.74, RESET2: 69.18, RESET3: 39.28

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 20.51, RESET2: 20.47, RESET3: 11.57

Neural Magic DeepSparse 1.5, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 48.75, RESET2: 48.82, RESET3: 86.41

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.
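sysbench's oltp_read_write workload issues a mix of point selects, range reads, and updates inside a single transaction per iteration against its standard sbtest tables. The DB-API sketch below captures that shape in Python; the connection parameters and the exact statement mix are assumptions rather than sysbench's script (TiDB listens on port 4000 and speaks the MySQL protocol).

    import random
    import pymysql  # pip install pymysql

    conn = pymysql.connect(host="127.0.0.1", port=4000, user="root", database="sbtest")

    def oltp_read_write_iteration(cur, table="sbtest1", rows=100000):
        """One transaction mixing reads and writes, in the spirit of oltp_read_write."""
        for _ in range(10):  # point selects
            cur.execute(f"SELECT c FROM {table} WHERE id = %s", (random.randint(1, rows),))
        cur.execute(f"UPDATE {table} SET k = k + 1 WHERE id = %s", (random.randint(1, rows),))

    with conn.cursor() as cur:
        oltp_read_write_iteration(cur)
    conn.commit()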

TiDB Community Server 7.3, Test: oltp_read_write - Threads: 256 (Queries Per Second; more is better): RESET: 62965, RESET3: 111354

Liquid-DSP

Liquid-DSP 1.6, Threads: 64 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better): RESET: 1100900000, RESET2: 1098600000, RESET3: 1911400000

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 68.03, RESET2: 68.38, RESET3: 39.46

Neural Magic DeepSparse 1.5, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 14.70, RESET2: 14.62, RESET3: 25.34

Neural Magic DeepSparse 1.5, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 68.08, RESET3: 39.31

Neural Magic DeepSparse 1.5, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 14.69, RESET3: 25.44

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 56.29, RESET3: 32.83

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 17.76, RESET3: 30.46

SPECFEM3D

SPECFEM3D 4.0, Model: Water-layered Halfspace (Seconds; fewer is better): RESET: 64.32, RESET2: 63.20, RESET3: 37.55

Apache Hadoop

Apache Hadoop 3.3.6, Operation: Open - Threads: 100 - Files: 100000 (Ops per sec; more is better): RESET: 305810, RESET3: 518135

Stress-NG

Stress-NG 0.16.04, Test: Forking (Bogo Ops/s; more is better): RESET: 32071.19, RESET3: 54137.34

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.1.3, Test: Writes (Op/s; more is better): RESET: 194642, RESET3: 318052

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig that builds all possible kernel modules. Learn more via the OpenBenchmarking.org test page.
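The reported numbers are wall-clock seconds for configuring and building the kernel with all CPU threads. A minimal timing wrapper along those lines (the source directory and job count are assumptions) could look like this:

    import os
    import subprocess
    import time

    def time_kernel_build(src_dir="linux"):
        """Configure a default kernel and time a parallel build, similar to the defconfig test."""
        jobs = str(os.cpu_count())
        subprocess.run(["make", "defconfig"], cwd=src_dir, check=True)
        start = time.perf_counter()
        subprocess.run(["make", f"-j{jobs}"], cwd=src_dir, check=True)
        return time.perf_counter() - start

    print(f"build took {time_kernel_build():.1f} seconds")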

Timed Linux Kernel Compilation 6.1, Build: defconfig (Seconds; fewer is better): RESET: 55.16, RESET2: 55.21, RESET3: 33.90

Apache Hadoop

Apache Hadoop 3.3.6, Operation: File Status - Threads: 50 - Files: 100000 (Ops per sec; more is better): RESET: 476190, RESET3: 769231

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 38.57, RESET3: 23.98

Neural Magic DeepSparse 1.5, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 25.92, RESET3: 41.67

Neural Magic DeepSparse 1.5, Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 143.26, RESET2: 143.33, RESET3: 229.69

Neural Magic DeepSparse 1.5, Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 6.9730, RESET2: 6.9693, RESET3: 4.3494

Neural Magic DeepSparse 1.5, Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 143.69, RESET3: 229.15

Neural Magic DeepSparse 1.5, Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 6.9520, RESET3: 4.3595

TiDB Community Server

TiDB Community Server 7.3, Test: oltp_read_write - Threads: 32 (Queries Per Second; more is better): RESET: 46972, RESET3: 74741

Stress-NG

Stress-NG 0.16.04, Test: SENDFILE (Bogo Ops/s; more is better): RESET: 419265.75, RESET3: 665723.58

TiDB Community Server

TiDB Community Server 7.3, Test: oltp_update_non_index - Threads: 128 (Queries Per Second; more is better): RESET: 41635, RESET3: 65674

Apache Hadoop

Apache Hadoop 3.3.6, Operation: Open - Threads: 100 - Files: 1000000 (Ops per sec; more is better): RESET: 1256281, RESET3: 805802

Stress-NG

Stress-NG 0.16.04, Test: Matrix 3D Math (Bogo Ops/s; more is better): RESET: 5817.58, RESET3: 9028.11

TiDB Community Server

TiDB Community Server 7.3, Test: oltp_update_non_index - Threads: 64 (Queries Per Second; more is better): RESET: 34234, RESET3: 51897

TiDB Community Server 7.3, Test: oltp_point_select - Threads: 128 (Queries Per Second; more is better): RESET: 131165, RESET3: 197386

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 10.7278, RESET3: 7.1439

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 93.12, RESET3: 139.83

Remhos

Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. Learn more via the OpenBenchmarking.org test page.
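The "pure advection" problem Remhos remaps against is, in its simplest scalar form (a textbook statement rather than Remhos' exact discrete formulation):

    \frac{\partial u}{\partial t} + \mathbf{v} \cdot \nabla u = 0

where u is the advected field and v the prescribed velocity; Remhos advances this with high-order finite elements while keeping the remap monotonic and conservative.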

Remhos 1.0, Test: Sample Remap Example (Seconds; fewer is better): RESET: 31.02, RESET2: 30.94, RESET3: 20.69

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 7.0815, RESET2: 6.9594, RESET3: 4.7514

Neural Magic DeepSparse 1.5, Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 141.09, RESET2: 143.56, RESET3: 210.21

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.6 (Throughput FoM; more is better): RESET: 237641100, RESET3: 351062800

TiDB Community Server

TiDB Community Server 7.3, Test: oltp_point_select - Threads: 256 (Queries Per Second; more is better): RESET: 143244, RESET3: 210553

SVT-AV1

SVT-AV1 1.7, Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second; more is better): RESET: 67.60, RESET3: 97.76

Liquid-DSP

Liquid-DSP 1.6, Threads: 32 - Buffer Length: 256 - Filter Length: 512 (samples/s; more is better): RESET: 273720000, RESET2: 274570000, RESET3: 393800000

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 202.75, RESET2: 203.22, RESET3: 291.47

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 4.9273, RESET2: 4.9157, RESET3: 3.4287

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch; fewer is better): RESET: 9.8064, RESET3: 6.9453

Neural Magic DeepSparse 1.5, Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec; more is better): RESET: 101.91, RESET3: 143.87

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT) effort. Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
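The preset number trades encode speed for compression efficiency, with lower presets being slower and more thorough. A hedged sketch of driving an SVT-AV1 encode from Python through FFmpeg's libsvtav1 wrapper is shown below; the input file name is a placeholder and the exact FFmpeg options should be checked against your build.

    import subprocess

    def encode_svt_av1(src="Bosphorus_3840x2160.y4m", preset=8, out="out.ivf"):
        """Encode a source clip with SVT-AV1 via FFmpeg; FFmpeg prints the encode fps on stderr."""
        cmd = [
            "ffmpeg", "-y", "-i", src,
            "-c:v", "libsvtav1", "-preset", str(preset),
            out,
        ]
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return proc.stderr

    print(encode_svt_av1())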

SVT-AV1 1.6 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better): RESET: 165.65, RESET2: 164.10, RESET3: 231.17 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: File Status - Threads: 100 - Files: 100000 (Ops per sec, more is better): RESET: 531915, RESET3: 740741

SVT-AV1

SVT-AV1 1.6 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, more is better): RESET: 165.95, RESET2: 164.68, RESET3: 229.12 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): RESET: 18.24, RESET3: 13.17

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better): RESET: 54.81, RESET3: 75.92

Liquid-DSP

Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): RESET: 1031700000, RESET2: 1013800000, RESET3: 1397500000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

SVT-AV1

SVT-AV1 1.6 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better): RESET: 59.64, RESET2: 59.06, RESET3: 81.06 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

TiDB Community Server

TiDB Community Server 7.3 - Test: oltp_update_non_index - Threads: 32 (Queries Per Second, more is better): RESET: 26459, RESET3: 36066

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): RESET: 1.0875, RESET2: 1.0807, RESET3: 0.8023

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better): RESET: 916.58, RESET2: 922.36, RESET3: 1241.84

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
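The "Bogo Ops/s" figures reported for these tests are simply iterations of each stressor's work loop per second of wall-clock time. A toy C++ illustration of that metric (not Stress-NG's actual code; the arithmetic stand-in for a stressor is purely hypothetical) looks like this:

// Toy "bogo ops per second" counter: run a stressor-like loop on every
// hardware thread for a fixed window and report iterations/second.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::atomic<unsigned long long> ops{0};
    std::atomic<bool> stop{false};
    const unsigned nthreads = std::thread::hardware_concurrency();

    std::vector<std::thread> workers;
    for (unsigned i = 0; i < nthreads; i++)
        workers.emplace_back([&] {
            volatile double x = 1.0;
            while (!stop.load(std::memory_order_relaxed)) {
                x = x * 1.000001 + 0.5;   // stand-in for one "bogo op" of work
                ops.fetch_add(1, std::memory_order_relaxed);
            }
        });

    std::this_thread::sleep_for(std::chrono::seconds(5));
    stop = true;
    for (auto &t : workers) t.join();

    std::printf("%.2f bogo ops/s\n", ops.load() / 5.0);
    return 0;
}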

Stress-NG 0.16.04 - Test: Mixed Scheduler (Bogo Ops/s, more is better): RESET: 39604.33, RESET3: 53630.31 [(CXX) g++ options: -O2 -std=gnu99 -lc]

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better): RESET: 162.29, RESET3: 216.63 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

TiDB Community Server

TiDB Community Server 7.3 - Test: oltp_read_write - Threads: 16 (Queries Per Second, more is better): RESET: 36124, RESET3: 47703

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
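As a rough sketch of how these models are exercised, loading and running a network with NCNN's C++ API looks approximately like the following; the header path, file names, thread count, and the "data"/"prob" blob names are placeholders that depend on the build and the exported model rather than anything specified by this test profile.

// Minimal NCNN inference sketch (illustrative; names are placeholders).
#include "net.h"   // ncnn's main header; adjust include path to your install

int main() {
    ncnn::Net net;
    net.opt.num_threads = 8;            // CPU threads used for inference

    // Load the exported param/bin pair for the model under test.
    net.load_param("model.param");
    net.load_model("model.bin");

    // A dummy 224x224x3 input; real use would fill this from image pixels.
    ncnn::Mat in(224, 224, 3);
    in.fill(0.5f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);               // input blob name is model-specific

    ncnn::Mat out;
    ex.extract("prob", out);            // output blob name is model-specific
    return 0;
}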

NCNN 20230517 - Target: CPU - Model: regnety_400m (ms, fewer is better): RESET: 17.13 (MIN: 16.89 / MAX: 19.07), RESET3: 22.32 (MIN: 22.1 / MAX: 22.95) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

SVT-AV1

SVT-AV1 1.6 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better): RESET: 466.98, RESET2: 469.49, RESET3: 599.14 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

TiDB Community Server

TiDB Community Server 7.3 - Test: oltp_update_non_index - Threads: 16 (Queries Per Second, more is better): RESET: 18663, RESET3: 23758

SVT-AV1

SVT-AV1 1.6 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better): RESET: 4.502, RESET2: 4.528, RESET3: 5.719 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better): RESET: 151.87, RESET2: 151.79, RESET3: 192.56

NCNN

NCNN 20230517 - Target: CPU - Model: vision_transformer (ms, fewer is better): RESET: 55.73 (MIN: 54.17 / MAX: 116.29), RESET3: 44.06 (MIN: 43.6 / MAX: 47.6) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better): RESET: 119.22, RESET3: 150.63 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

Stress-NG

Stress-NG 0.16.04 - Test: System V Message Passing (Bogo Ops/s, more is better): RESET: 7982733.22, RESET3: 10016615.70 [(CXX) g++ options: -O2 -std=gnu99 -lc]

SVT-AV1

SVT-AV1 1.6 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better): RESET: 128.58, RESET2: 128.47, RESET3: 160.67 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: File Status - Threads: 100 - Files: 1000000 (Ops per sec, more is better): RESET: 1828154, RESET3: 2252252

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better): RESET: 4.152, RESET3: 5.089 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Open - Threads: 50 - Files: 100000 (Ops per sec, more is better): RESET: 537634, RESET3: 653595

Apache Hadoop 3.3.6 - Operation: Open - Threads: 20 - Files: 1000000 (Ops per sec, more is better): RESET: 1191895, RESET3: 981354

SVT-AV1

SVT-AV1 1.6 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, more is better): RESET: 600.20, RESET2: 595.14, RESET3: 721.70 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
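Since the server is exercised over the Redis wire protocol, it may help to see what one memtier_benchmark-style request looks like on the wire. The sketch below only encodes RESP (Redis Serialization Protocol) frames for a SET and a GET, the two operation types behind the Set:Get ratios; it is an illustration of the protocol framing, not code from either tool, and the key/value strings are made up.

// Encode a command as a RESP array of bulk strings:
//   *<argc>\r\n followed by $<len>\r\n<arg>\r\n for each argument.
#include <cstdio>
#include <initializer_list>
#include <string>

static std::string resp(std::initializer_list<std::string> args) {
    std::string out = "*" + std::to_string(args.size()) + "\r\n";
    for (const std::string &a : args)
        out += "$" + std::to_string(a.size()) + "\r\n" + a + "\r\n";
    return out;
}

int main() {
    const std::string set_cmd = resp({"SET", "memtier-key:1", "value"});
    const std::string get_cmd = resp({"GET", "memtier-key:1"});
    std::printf("SET frame: %zu bytes, GET frame: %zu bytes\n",
                set_cmd.size(), get_cmd.size());
    return 0;
}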

Dragonflydb 1.6.2 - Clients Per Thread: 60 - Set To Get Ratio: 1:100 (Ops/sec, more is better): RESET: 11419293.83, RESET2: 13659195.83 [(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre]
RESET3: The test run did not produce a result. E: Connection error: Connection reset by peer

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, more is better): RESET: 6.126, RESET2: 6.147, RESET3: 7.293 [(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto]

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000, developed by the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.
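For reference, the system such a solver advances is the incompressible Navier-Stokes equations, written here in their generic form (independent of nekRS' particular spectral-element discretization), with velocity u, pressure p, density rho, kinematic viscosity nu and body force f:

\[
  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u} + \mathbf{f},
  \qquad \nabla\cdot\mathbf{u} = 0
\]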

nekRS 23.0 - Input: TurboPipe Periodic (flops/rank, more is better): RESET: 7950340000, RESET2: 7961240000, RESET3: 6776830000 [(CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi]

SVT-AV1

SVT-AV1 1.6 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better): RESET: 14.21, RESET2: 14.07, RESET3: 16.40 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

NCNN

NCNN 20230517 - Target: CPU - Model: blazeface (ms, fewer is better): RESET: 3.35 (MIN: 3.27 / MAX: 3.52), RESET3: 3.86 (MIN: 3.79 / MAX: 4.34) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

VVenC

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, more is better): RESET: 12.13, RESET2: 12.23, RESET3: 13.94 [(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto]

NCNN

NCNN 20230517 - Target: CPU - Model: alexnet (ms, fewer is better): RESET: 6.01 (MIN: 5.86 / MAX: 7.74), RESET3: 5.23 (MIN: 5.01 / MAX: 5.51) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

VVenC

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, more is better): RESET: 18.01, RESET2: 18.09, RESET3: 20.68 [(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto]

Stress-NG

Stress-NG 0.16.04 - Test: MEMFD (Bogo Ops/s, more is better): RESET: 431.59, RESET3: 495.03 [(CXX) g++ options: -O2 -std=gnu99 -lc]

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better): RESET: 10.73, RESET3: 12.30 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

Liquid-DSP

Liquid-DSP 1.6 - Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): RESET: 52897000, RESET2: 48803000, RESET3: 55677000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Stress-NG

Stress-NG 0.16.04 - Test: Cloning (Bogo Ops/s, more is better): RESET: 3317.10, RESET3: 3751.82 [(CXX) g++ options: -O2 -std=gnu99 -lc]

nekRS

nekRS 23.0 - Input: Kershaw (flops/rank, more is better): RESET: 10295200000, RESET2: 10068200000, RESET3: 9111820000 [(CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi]

NCNN

NCNN 20230517 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better): RESET: 9.35 (MIN: 9.25 / MAX: 10.78), RESET3: 10.56 (MIN: 9.78 / MAX: 160.53) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Liquid-DSP

Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): RESET: 337660000, RESET2: 343950000, RESET3: 380280000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better): RESET: 1047500000, RESET2: 1046500000, RESET3: 1169700000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Stress-NG

Stress-NG 0.16.04 - Test: Futex (Bogo Ops/s, more is better): RESET: 4273978.26, RESET3: 4747454.51 [(CXX) g++ options: -O2 -std=gnu99 -lc]

VVenC

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, more is better): RESET: 34.70, RESET2: 34.69, RESET3: 38.36 [(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 500 - Files: 100000 (Ops per sec, more is better): RESET: 93721, RESET3: 103306

Dragonflydb

Dragonflydb 1.6.2 - Clients Per Thread: 60 - Set To Get Ratio: 1:10 (Ops/sec, more is better): RESET: 12065388.34, RESET2: 10983543.66 [(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre]
RESET3: The test run did not produce a result. E: Connection error: Connection reset by peer

NCNN

NCNN 20230517 - Target: CPU - Model: vgg16 (ms, fewer is better): RESET: 22.59 (MIN: 20.48 / MAX: 23.4), RESET3: 20.62 (MIN: 20.22 / MAX: 23.98) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better): RESET: 528.29, RESET3: 577.73 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

TiDB Community Server

TiDB Community Server 7.3 - Test: oltp_update_non_index - Threads: 1 (Queries Per Second, more is better): RESET: 1687, RESET3: 1843

Liquid-DSP

Liquid-DSP 1.6 - Threads: 1 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better): RESET: 12685000, RESET2: 12264000, RESET3: 13359000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better): RESET: 97011000, RESET2: 99404000, RESET3: 105480000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

NCNN

NCNN 20230517 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better): RESET: 8.67 (MIN: 8.53 / MAX: 10.83), RESET3: 9.38 (MIN: 9.25 / MAX: 12.88) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Create - Threads: 20 - Files: 1000000 (Ops per sec, more is better): RESET: 72317, RESET3: 78223

Apache Hadoop 3.3.6 - Operation: Open - Threads: 500 - Files: 1000000 (Ops per sec, more is better): RESET: 1164144, RESET3: 1257862

Stress-NG

Stress-NG 0.16.04 - Test: Atomic (Bogo Ops/s, more is better): RESET: 246.62, RESET3: 265.54 [(CXX) g++ options: -O2 -std=gnu99 -lc]

Liquid-DSP

Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better): RESET: 48886000, RESET2: 49614000, RESET3: 52587000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 500 - Files: 1000000 (Ops per sec, more is better): RESET: 76722, RESET3: 82529

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 100 - Files: 100000 (Ops per sec, more is better): RESET: 99701, RESET3: 107181

NCNN

NCNN 20230517 - Target: CPU - Model: googlenet (ms, fewer is better): RESET: 17.98 (MIN: 17.82 / MAX: 18.6), RESET3: 19.29 (MIN: 18.99 / MAX: 22.75) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): RESET: 7.55 (MIN: 7.41 / MAX: 11.7), RESET3: 8.08 (MIN: 7.94 / MAX: 8.54) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Open - Threads: 20 - Files: 100000 (Ops per sec, more is better): RESET: 636943, RESET3: 595238

Liquid-DSP

Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better): RESET: 545570000, RESET2: 544800000, RESET3: 582930000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 20 - Files: 1000000 (Ops per sec, more is better): RESET: 89095, RESET3: 95293

Liquid-DSP

Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better): RESET: 138520000, RESET2: 136520000, RESET3: 145900000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 20 - Files: 1000000 (Ops per sec, more is better): RESET: 116333, RESET3: 124054

Apache Hadoop 3.3.6 - Operation: Create - Threads: 20 - Files: 100000 (Ops per sec, more is better): RESET: 59630, RESET3: 63492

Liquid-DSP

Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better): RESET: 196300000, RESET2: 196070000, RESET3: 208610000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Create - Threads: 500 - Files: 100000 (Ops per sec, more is better): RESET: 52383, RESET3: 55525

Apache Hadoop 3.3.6 - Operation: Open - Threads: 500 - Files: 100000 (Ops per sec, more is better): RESET: 205761, RESET3: 217865

Apache Hadoop 3.3.6 - Operation: Create - Threads: 100 - Files: 1000000 (Ops per sec, more is better): RESET: 69638, RESET3: 73692

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 50 - Files: 100000 (Ops per sec, more is better): RESET: 83333, RESET3: 88183

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, more is better): RESET: 590.49, RESET3: 558.19 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

Liquid-DSP

Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better): RESET: 277560000, RESET2: 277780000, RESET3: 292720000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Liquid-DSP 1.6 - Threads: 1 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better): RESET: 35186000, RESET2: 35179000, RESET3: 37083000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Liquid-DSP 1.6 - Threads: 2 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better): RESET: 68854000, RESET2: 68905000, RESET3: 72570000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 500 - Files: 1000000 (Ops per sec, more is better): RESET: 109673, RESET3: 104362

NCNN

NCNN 20230517 - Target: CPU - Model: mnasnet (ms, fewer is better): RESET: 7.47 (MIN: 6.42 / MAX: 190.35), RESET3: 7.11 (MIN: 7.04 / MAX: 7.35) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Stress-NG

Stress-NG 0.16.04 - Test: Mutex (Bogo Ops/s, more is better): RESET: 789472.50, RESET3: 828974.29 [(CXX) g++ options: -O2 -std=gnu99 -lc]

Stress-NG 0.16.04 - Test: CPU Cache (Bogo Ops/s, more is better): RESET: 1163806.08, RESET3: 1221906.48 [(CXX) g++ options: -O2 -std=gnu99 -lc]

Liquid-DSP

Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): RESET: 190540000, RESET2: 188640000, RESET3: 197890000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

NCNN

NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): RESET: 7.36 (MIN: 6.92 / MAX: 8.19), RESET3: 7.72 (MIN: 7.29 / MAX: 8.21) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 500 - Files: 100000 (Ops per sec, more is better): RESET: 78555, RESET3: 82305

NCNN

NCNN 20230517 - Target: CPU - Model: resnet18 (ms, fewer is better): RESET: 9.13 (MIN: 9 / MAX: 11.27), RESET3: 8.72 (MIN: 8.48 / MAX: 9.08) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Create - Threads: 50 - Files: 1000000 (Ops per sec, more is better): RESET: 71403, RESET3: 74722

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 50 - Files: 100000 (Ops per sec, more is better): RESET: 108696, RESET3: 103950

NCNN

NCNN 20230517 - Target: CPU - Model: FastestDet (ms, fewer is better): RESET: 11.30 (MIN: 11.03 / MAX: 14.81), RESET3: 11.81 (MIN: 11.65 / MAX: 12.07) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Liquid-DSP

Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): RESET: 695920000, RESET2: 686540000, RESET3: 716650000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

NCNN

NCNN 20230517 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better): RESET: 15.52 (MIN: 15.22 / MAX: 20.06), RESET3: 16.20 (MIN: 15.81 / MAX: 54.08) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 5.0159, RESET2: 5.0233, RESET3: 4.8202

Stress-NG

Stress-NG 0.16.04 - Test: Pthread (Bogo Ops/s, more is better): RESET: 119839.68, RESET3: 124853.18 [(CXX) g++ options: -O2 -std=gnu99 -lc]

Liquid-DSP

Liquid-DSP 1.6 - Threads: 2 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better): RESET: 25206000, RESET2: 25050000, RESET3: 26065000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 20 - Files: 100000 (Ops per sec, more is better): RESET: 107875, RESET3: 112233

Liquid-DSP

Liquid-DSP 1.6 - Threads: 2 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): RESET: 105200000, RESET2: 104020000, RESET3: 108010000 [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 112.60, RESET2: 112.25, RESET3: 116.56

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 20 - Files: 100000 (Ops per sec, more is better): RESET: 87032, RESET3: 90253

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 50 - Files: 1000000 (Ops per sec, more is better): RESET: 110963, RESET3: 115022

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 325.02, RESET3: 336.76

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Create - Threads: 50 - Files: 100000 (Ops per sec, more is better): RESET: 60314, RESET3: 62189

NCNN

NCNN 20230517 - Target: CPU - Model: mobilenet (ms, fewer is better): RESET: 16.73 (MIN: 16.57 / MAX: 18.96), RESET3: 16.29 (MIN: 16.05 / MAX: 16.77) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Create - Threads: 100 - Files: 100000 (Ops per sec, more is better): RESET: 59242, RESET3: 60790

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 50 - Files: 1000000 (Ops per sec, more is better): RESET: 87912, RESET3: 89985

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 100 - Files: 100000 (Ops per sec, more is better): RESET: 85616, RESET3: 83752

NCNN

NCNN 20230517 - Target: CPU - Model: resnet50 (ms, fewer is better): RESET: 16.83 (MIN: 16.57 / MAX: 24.47), RESET3: 17.19 (MIN: 16.86 / MAX: 35.55) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

NCNN 20230517 - Target: CPU - Model: yolov4-tiny (ms, fewer is better): RESET: 25.42 (MIN: 24.66 / MAX: 60.7), RESET3: 25.96 (MIN: 25.03 / MAX: 29.61) [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

TiDB Community Server

TiDB Community Server 7.3 - Test: oltp_point_select - Threads: 1 (Queries Per Second, more is better): RESET: 5956, RESET3: 6080

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 100 - Files: 1000000 (Ops per sec, more is better): RESET: 112587, RESET3: 114732

Apache Hadoop 3.3.6 - Operation: Create - Threads: 500 - Files: 1000000 (Ops per sec, more is better): RESET: 66037, RESET3: 67087

Apache Hadoop 3.3.6 - Operation: Rename - Threads: 100 - Files: 1000000 (Ops per sec, more is better): RESET: 85800, RESET3: 84474

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 143.84, RESET3: 146.00

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 31.02, RESET2: 31.05, RESET3: 31.44

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 33.25, RESET3: 32.83

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 15.81, RESET2: 15.80, RESET3: 16.00

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 73.20, RESET3: 73.98

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 607.03, RESET2: 607.47, RESET3: 613.03

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 608.16, RESET3: 614.02

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 49.00, RESET2: 49.06, RESET3: 48.74

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 49.00, RESET3: 48.77

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 111.10, RESET2: 110.74, RESET3: 111.08

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 109.89, RESET3: 110.13

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): RESET: 493.61, RESET3: 494.30

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, more is better): RESET: 161.60, RESET3: 161.43 [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

TiDB Community Server

TiDB Community Server 7.3 - Test: oltp_point_select - Threads: 64 (Queries Per Second, more is better): RESET: 118850
RESET3: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

TiDB Community Server 7.3 - Test: oltp_point_select - Threads: 32 (Queries Per Second, more is better): RESET: 97200
RESET3: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

TiDB Community Server 7.3 - Test: oltp_point_select - Threads: 16 (Queries Per Second, more is better): RESET3: 87667
RESET: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

TiDB Community Server 7.3 - Test: oltp_read_write - Threads: 128 (Queries Per Second, more is better): RESET: 59739
RESET3: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

TiDB Community Server 7.3 - Test: oltp_read_write - Threads: 64 (Queries Per Second, more is better): RESET3: 94643
RESET: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

TiDB Community Server 7.3 - Test: oltp_read_write - Threads: 1 (Queries Per Second, more is better): RESET: 3204
RESET3: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

Apache Hadoop

Apache Hadoop 3.3.6 - Operation: File Status - Threads: 20 - Files: 100000 (Ops per sec, more is better): RESET: 869565, RESET3: 869565

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Encoder Mode: Speed 11 Realtime - Input: Bosphorus 1080p

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Stress-NG

Test: IO_uring

RESET: The test run did not produce a result.

RESET3: The test run did not produce a result.

Dragonflydb

Clients Per Thread: 60 - Set To Get Ratio: 1:100

RESET3: The test run did not produce a result. E: Connection error: Connection reset by peer

Clients Per Thread: 60 - Set To Get Ratio: 1:10

RESET3: The test run did not produce a result. E: Connection error: Broken pipe

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig configuration that builds all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Build: allmodconfig

RESET: The test quit with a non-zero exit status.

RESET2: The test quit with a non-zero exit status.

RESET3: The test quit with a non-zero exit status.

253 Results Shown

Apache Hadoop
Stress-NG
Apache Hadoop:
  File Status - 20 - 1000000
  File Status - 50 - 1000000
  File Status - 500 - 100000
Dragonflydb
Stress-NG
Apache Hadoop
Stress-NG:
  x86_64 RdRand
  Hash
  Vector Shuffle
  CPU Stress
  Floating Point
Neural Magic DeepSparse
Stress-NG:
  Poll
  Function Call
Dragonflydb:
  20 - 1:10
  10 - 1:10
Stress-NG
Dragonflydb
Embree
Neural Magic DeepSparse
Embree
Neural Magic DeepSparse:
  ResNet-50, Baseline - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
Dragonflydb
Neural Magic DeepSparse
Stress-NG:
  Memory Copying
  Context Switching
  Crypto
Neural Magic DeepSparse
Stress-NG:
  Glibc C String Functions
  AVL Tree
Neural Magic DeepSparse
Dragonflydb
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
Embree:
  Pathtracer - Asian Dragon Obj
  Pathtracer ISPC - Asian Dragon Obj
OSPRay
Liquid-DSP
OSPRay
Stress-NG
Embree
OSPRay:
  particle_volume/ao/real_time
  particle_volume/scivis/real_time
Neural Magic DeepSparse
Stress-NG
Embree
Neural Magic DeepSparse
BRL-CAD
Intel Open Image Denoise
Stress-NG
Intel Open Image Denoise
Blender
Intel Open Image Denoise
Stress-NG
OSPRay
Blender
Stress-NG
Blender
Stress-NG
Blender
Stress-NG:
  Fused Multiply-Add
  Wide Vector Math
Blender
Stress-NG:
  Matrix Math
  Pipe
  AVX-512 VNNI
SPECFEM3D
Liquid-DSP
SPECFEM3D
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
SPECFEM3D
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
SPECFEM3D
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    ms/batch
    items/sec
TiDB Community Server
Liquid-DSP
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
  BERT-Large, NLP Question Answering - Synchronous Single-Stream:
    ms/batch
    items/sec
SPECFEM3D
Apache Hadoop
Stress-NG
Apache Cassandra
Timed Linux Kernel Compilation
Apache Hadoop
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
TiDB Community Server
Stress-NG
TiDB Community Server
Apache Hadoop
Stress-NG
TiDB Community Server:
  oltp_update_non_index - 64
  oltp_point_select - 128
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
Remhos
Neural Magic DeepSparse:
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream:
    ms/batch
    items/sec
Kripke
TiDB Community Server
SVT-AV1
Liquid-DSP
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
SVT-AV1
Apache Hadoop
SVT-AV1
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
Liquid-DSP
SVT-AV1
TiDB Community Server
Neural Magic DeepSparse:
  ResNet-50, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
Stress-NG
SVT-AV1
TiDB Community Server
NCNN
SVT-AV1
TiDB Community Server
SVT-AV1
OSPRay
NCNN
SVT-AV1
Stress-NG
SVT-AV1
Apache Hadoop
SVT-AV1
Apache Hadoop:
  Open - 50 - 100000
  Open - 20 - 1000000
SVT-AV1
Dragonflydb
VVenC
nekRS
SVT-AV1
NCNN
VVenC
NCNN
VVenC
Stress-NG
SVT-AV1
Liquid-DSP
Stress-NG
nekRS
NCNN
Liquid-DSP:
  8 - 256 - 57
  32 - 256 - 32
Stress-NG
VVenC
Apache Hadoop
Dragonflydb
NCNN
SVT-AV1
TiDB Community Server
Liquid-DSP:
  1 - 256 - 512
  8 - 256 - 512
NCNN
Apache Hadoop:
  Create - 20 - 1000000
  Open - 500 - 1000000
Stress-NG
Liquid-DSP
Apache Hadoop:
  Rename - 500 - 1000000
  Delete - 100 - 100000
NCNN:
  CPU - googlenet
  CPU-v3-v3 - mobilenet-v3
Apache Hadoop
Liquid-DSP
Apache Hadoop
Liquid-DSP
Apache Hadoop:
  Delete - 20 - 1000000
  Create - 20 - 100000
Liquid-DSP
Apache Hadoop:
  Create - 500 - 100000
  Open - 500 - 100000
  Create - 100 - 1000000
  Rename - 50 - 100000
SVT-AV1
Liquid-DSP:
  8 - 256 - 32
  1 - 256 - 32
  2 - 256 - 32
Apache Hadoop
NCNN
Stress-NG:
  Mutex
  CPU Cache
Liquid-DSP
NCNN
Apache Hadoop
NCNN
Apache Hadoop:
  Create - 50 - 1000000
  Delete - 50 - 100000
NCNN
Liquid-DSP
NCNN
Neural Magic DeepSparse
Stress-NG
Liquid-DSP
Apache Hadoop
Liquid-DSP
Neural Magic DeepSparse
Apache Hadoop:
  Rename - 20 - 100000
  Delete - 50 - 1000000
Neural Magic DeepSparse
Apache Hadoop
NCNN
Apache Hadoop:
  Create - 100 - 100000
  Rename - 50 - 1000000
  Rename - 100 - 100000
NCNN:
  CPU - resnet50
  CPU - yolov4-tiny
TiDB Community Server
Apache Hadoop:
  Delete - 100 - 1000000
  Create - 500 - 1000000
  Rename - 100 - 1000000
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  ResNet-50, Baseline - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream
SVT-AV1
TiDB Community Server:
  oltp_point_select - 64
  oltp_point_select - 32
  oltp_point_select - 16
  oltp_read_write - 128
  oltp_read_write - 64
  oltp_read_write - 1
Apache Hadoop