dddxxx

Intel Core i7-8565U testing with a Dell 0KTW76 (1.17.0 BIOS) and Intel UHD 620 WHL GT2 15GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2308067-NE-DDDXXX42317
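The comparison above can be reproduced from a terminal; a minimal sketch, assuming the Phoronix Test Suite is installed (the `apt-get` package name is an assumption for Debian/Ubuntu systems) and the public result ID is reachable on OpenBenchmarking.org:

```shell
# Install the Phoronix Test Suite (assumption: the distribution packages it;
# otherwise it can be downloaded from phoronix-test-suite.com).
sudo apt-get install phoronix-test-suite

# Fetch this public result file and run the same tests locally, appending
# your own system as a new identifier for side-by-side comparison.
phoronix-test-suite benchmark 2308067-NE-DDDXXX42317
```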
Test Runs:
a: August 05 2023 (test duration: 15 Hours, 4 Minutes)
b: August 06 2023 (test duration: 15 Hours, 10 Minutes)


dddxxx Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: Intel Core i7-8565U @ 4.60GHz (4 Cores / 8 Threads)
Motherboard: Dell 0KTW76 (1.17.0 BIOS)
Chipset: Intel Cannon Point-LP
Memory: 16GB
Disk: SK hynix PC401 NVMe 256GB
Graphics: Intel UHD 620 WHL GT2 15GB (1150MHz)
Audio: Realtek ALC3271
Network: Qualcomm Atheros QCA6174 802.11ac
OS: Ubuntu 22.04
Kernel: 5.19.0-rc6-phx-retbleed (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1
OpenCL: OpenCL 3.0
Vulkan: 1.3.204
Compilers: GCC 11.3.0 (run a), GCC 11.4.0 (run b)
File-System: ext4
Screen Resolution: 1920x1080

System Logs / Notes:
- Transparent Huge Pages: madvise
- Compiler configure (a): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Compiler configure (b): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk mount options: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0xf0; Thermald 2.4.9
- Java (a): OpenJDK Runtime Environment (build 11.0.18+10-post-Ubuntu-0ubuntu122.04); Java (b): OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
- Python (a): 3.10.6; Python (b): 3.10.12
- Security mitigations: itlb_multihit: KVM: Mitigation of VMX disabled; l1tf: Not affected; mds: Mitigation of Clear buffers (SMT vulnerable); meltdown: Not affected; mmio_stale_data: Mitigation of Clear buffers (SMT vulnerable); retbleed: Mitigation of IBRS; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of IBRS IBPB: conditional, RSB filling; srbds: Mitigation of Microcode; tsx_async_abort: Not affected

a vs. b Comparison (Phoronix Test Suite): [bar chart of per-test percentage differences between runs a and b, ranging from about 2% to over 200%, with one outlier near 1200%; the largest swings appear in Redis 7.0.12 + memtier_benchmark, SQLite, Neural Magic DeepSparse, NCNN, and Stress-NG results, with further deltas across Apache IoTDB, SVT-AV1, Liquid-DSP, OSPRay, Embree, dav1d, VVenC, Memcached, libxsmm, and Build2. Individual results are broken out below.]
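The percentages in the comparison chart are relative differences between the two runs. A minimal sketch of how such a delta can be computed, using the Timed LLVM Compilation (Unix Makefiles) figures from this result file as the example values:

```python
def percent_delta(a: float, b: float) -> float:
    """Relative difference of run b versus run a, in percent."""
    return (b - a) / a * 100.0

# Timed LLVM Compilation (Unix Makefiles), seconds for runs a and b:
delta = percent_delta(2967.53, 2929.91)
print(round(abs(delta), 1))  # roughly a 1.3% difference between the runs
```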

dddxxx: [full side-by-side result table for runs a and b across all tests (timed compilations, Apache IoTDB, Xonotic, VVenC, Embree, OSPRay, Intel Open Image Denoise, NCNN, Neural Magic DeepSparse, Stress-NG, Liquid-DSP, SVT-AV1, dav1d, Redis + memtier_benchmark, Memcached, Cassandra, SQLite, libxsmm, z3, QuantLib, vkpeak, Opus encoding, and Build2); the per-test results are listed below.]

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0, Build System: Unix Makefiles (Seconds, Fewer Is Better):
a: 2967.53 (SE +/- 45.61, N = 2; Min: 2921.92 / Max: 3013.14)
b: 2929.91 (SE +/- 7.61, N = 2; Min: 2922.3 / Max: 2937.52)
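The SE figures reported throughout are the standard error of the mean across the N recorded runs. A minimal sketch using the two run a samples above (standard library only):

```python
import statistics

def standard_error(samples: list[float]) -> float:
    """Standard error of the mean: sample stdev divided by sqrt(N)."""
    return statistics.stdev(samples) / len(samples) ** 0.5

# Run a of the Unix Makefiles LLVM build: samples 2921.92 and 3013.14 (N = 2).
print(round(standard_error([2921.92, 3013.14]), 2))  # -> 45.61, matching the SE above
```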

Timed LLVM Compilation 16.0, Build System: Ninja (Seconds, Fewer Is Better):
a: 2892.09 (SE +/- 4.75, N = 2; Min: 2887.33 / Max: 2896.84)
b: 2908.88 (SE +/- 4.95, N = 2; Min: 2903.93 / Max: 2913.82)

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine; it is built here using the SCons build system, targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 4.0, Time To Compile (Seconds, Fewer Is Better):
a: 1372.60 (SE +/- 0.29, N = 2; Min: 1372.32 / Max: 1372.89)
b: 1384.77 (SE +/- 12.96, N = 2; Min: 1371.81 / Max: 1397.72)

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency):
a: 984.70 (MAX: 4944.15)
b: 1026.93 (MAX: 6211.26)

Apache IoTDB 1.1.2, Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, More Is Better):
a: 4948660.19
b: 4629287.79

Xonotic

This is a benchmark of Xonotic, a fork of the DarkPlaces-based Nexuiz game. Development of this open-source first-person shooter began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.6, Resolution: 1920 x 1080 - Effects Quality: Ultimate (Frames Per Second, More Is Better):
a: 59.17 (SE +/- 0.07, N = 2; frame MIN: 24 / MAX: 94; run Min: 59.1 / Max: 59.24)
b: 58.95 (SE +/- 0.06, N = 2; frame MIN: 24 / MAX: 94; run Min: 58.89 / Max: 59.01)

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.15, Time To Compile (Seconds, Fewer Is Better):
a: 555.94 (SE +/- 1.14, N = 2; Min: 554.79 / Max: 557.08)
b: 569.61 (SE +/- 4.22, N = 2; Min: 565.39 / Max: 573.83)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9, Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better):
a: 1.180 (SE +/- 0.013, N = 2; Min: 1.17 / Max: 1.19)
b: 1.170 (SE +/- 0.013, N = 2; Min: 1.16 / Max: 1.18)
Compiler notes: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto


Xonotic

Xonotic 0.8.6, Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, More Is Better):
a: 77.93 (SE +/- 0.16, N = 2; frame MIN: 34 / MAX: 120; run Min: 77.77 / Max: 78.1)
b: 77.52 (SE +/- 0.41, N = 2; frame MIN: 32 / MAX: 120; run Min: 77.11 / Max: 77.93)

Xonotic 0.8.6, Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, More Is Better):
a: 91.11 (SE +/- 0.04, N = 2; frame MIN: 39 / MAX: 130; run Min: 91.08 / Max: 91.15)
b: 90.91 (SE +/- 0.32, N = 2; frame MIN: 38 / MAX: 130; run Min: 90.59 / Max: 91.23)

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 2.0, Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, More Is Better):
a: 0.06 (SE +/- 0.00, N = 2; Min: 0.06 / Max: 0.06)
b: 0.06 (SE +/- 0.00, N = 2; Min: 0.06 / Max: 0.06)

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.

libxsmm 2-1.17-3645, M N K: 128 (GFLOPS/s, More Is Better):
a: 79.9 (SE +/- 4.60, N = 2; Min: 75.3 / Max: 84.5)
b: 83.7 (SE +/- 4.10, N = 2; Min: 79.6 / Max: 87.8)
Compiler notes: (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2

VVenC

VVenC 1.9, Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better):
a: 2.693 (SE +/- 0.052, N = 2; Min: 2.64 / Max: 2.75)
b: 2.672 (SE +/- 0.045, N = 2; Min: 2.63 / Max: 2.72)
Compiler notes: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.1, Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better):
a: 3.5218 (SE +/- 0.0068, N = 2; frame MIN: 2.81 / MAX: 4.3; run Min: 3.52 / Max: 3.53)
b: 3.4542 (SE +/- 0.0668, N = 2; frame MIN: 2.82 / MAX: 4.4; run Min: 3.39 / Max: 3.52)

Embree 4.1, Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better):
a: 3.8229 (SE +/- 0.0521, N = 2; frame MIN: 3.19 / MAX: 4.76; run Min: 3.77 / Max: 3.88)
b: 3.8503 (SE +/- 0.0122, N = 2; frame MIN: 3.22 / MAX: 4.83; run Min: 3.84 / Max: 3.86)

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency):
a: 212.43 (MAX: 2080.14)
b: 198.41 (MAX: 2427.81)

Apache IoTDB 1.1.2, Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, More Is Better):
a: 8900311.03
b: 9246224.42

Embree

Embree 4.1, Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better):
a: 3.7369 (SE +/- 0.0660, N = 2; frame MIN: 3.08 / MAX: 4.7; run Min: 3.67 / Max: 3.8)
b: 3.6962 (SE +/- 0.1040, N = 2; frame MIN: 3.06 / MAX: 4.78; run Min: 3.59 / Max: 3.8)

Xonotic

Xonotic 0.8.6, Resolution: 1920 x 1080 - Effects Quality: Low (Frames Per Second, More Is Better):
a: 205.61 (SE +/- 0.72, N = 2; frame MIN: 99 / MAX: 336; run Min: 204.89 / Max: 206.32)
b: 205.45 (SE +/- 0.60, N = 2; frame MIN: 95 / MAX: 334; run Min: 204.85 / Max: 206.05)

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency):
a: 550.47 (MAX: 3020.53)
b: 485.93 (MAX: 2859.52)

Apache IoTDB 1.1.2, Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, More Is Better):
a: 8609112.97
b: 9394851.37

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12, Benchmark: particle_volume/pathtracer/real_time (Items Per Second, More Is Better):
a: 52.59 (SE +/- 0.59, N = 2; Min: 52 / Max: 53.18)
b: 50.08 (SE +/- 0.02, N = 2; Min: 50.07 / Max: 50.1)

Embree

Embree 4.1, Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better):
a: 4.1670 (SE +/- 0.0272, N = 2; frame MIN: 3.5 / MAX: 5.07; run Min: 4.14 / Max: 4.19)
b: 3.9923 (SE +/- 0.1882, N = 2; frame MIN: 3.51 / MAX: 5.03; run Min: 3.8 / Max: 4.18)

Intel Open Image Denoise

Intel Open Image Denoise 2.0, Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better):
a: 0.12 (SE +/- 0.00, N = 2; Min: 0.12 / Max: 0.12)
b: 0.12 (SE +/- 0.00, N = 2; Min: 0.12 / Max: 0.12)

Intel Open Image Denoise 2.0, Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better):
a: 0.12 (SE +/- 0.00, N = 2; Min: 0.12 / Max: 0.12)
b: 0.12 (SE +/- 0.00, N = 2; Min: 0.11 / Max: 0.12)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517, Target: Vulkan GPU - Model: FastestDet (ms, Fewer Is Better):
a: 5.66 (SE +/- 0.20, N = 2; MIN: 5.23 / MAX: 24.86; run Min: 5.46 / Max: 5.86)
b: 5.47 (SE +/- 0.04, N = 2; MIN: 5.21 / MAX: 24.93; run Min: 5.43 / Max: 5.51)
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: vision_transformerab50100150200250SE +/- 3.20, N = 2SE +/- 3.18, N = 2240.03235.51MIN: 196.33 / MAX: 300.62MIN: 187.15 / MAX: 291.281. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: vision_transformerab4080120160200Min: 236.83 / Avg: 240.03 / Max: 243.23Min: 232.33 / Avg: 235.51 / Max: 238.681. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: regnety_400mab3691215SE +/- 0.01, N = 2SE +/- 0.07, N = 29.8210.41MIN: 9.26 / MAX: 25.33MIN: 9.74 / MAX: 26.421. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: regnety_400mab3691215Min: 9.81 / Avg: 9.82 / Max: 9.83Min: 10.34 / Avg: 10.41 / Max: 10.481. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: squeezenet_ssdab48121620SE +/- 0.47, N = 2SE +/- 0.84, N = 215.9715.48MIN: 13.93 / MAX: 35.68MIN: 13.66 / MAX: 36.391. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: squeezenet_ssdab48121620Min: 15.5 / Avg: 15.97 / Max: 16.43Min: 14.64 / Avg: 15.48 / Max: 16.321. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: yolov4-tinyab918273645SE +/- 1.02, N = 2SE +/- 1.11, N = 239.3939.17MIN: 36.4 / MAX: 60.27MIN: 36.39 / MAX: 59.831. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: yolov4-tinyab816243240Min: 38.37 / Avg: 39.39 / Max: 40.41Min: 38.06 / Avg: 39.17 / Max: 40.271. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: resnet50ab816243240SE +/- 0.55, N = 2SE +/- 1.36, N = 233.1833.60MIN: 31.12 / MAX: 58.5MIN: 30.7 / MAX: 59.91. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: resnet50ab714212835Min: 32.63 / Avg: 33.18 / Max: 33.73Min: 32.24 / Avg: 33.6 / Max: 34.961. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: alexnetab3691215SE +/- 0.03, N = 2SE +/- 0.04, N = 211.6711.69MIN: 11.06 / MAX: 27.75MIN: 10.95 / MAX: 28.311. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: alexnetab3691215Min: 11.64 / Avg: 11.67 / Max: 11.7Min: 11.65 / Avg: 11.69 / Max: 11.721. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: resnet18ab3691215SE +/- 0.04, N = 2SE +/- 0.04, N = 212.7312.56MIN: 11.94 / MAX: 28.78MIN: 11.83 / MAX: 29.031. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: resnet18ab48121620Min: 12.68 / Avg: 12.73 / Max: 12.77Min: 12.52 / Avg: 12.56 / Max: 12.61. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: vgg16ab20406080100SE +/- 0.23, N = 2SE +/- 0.11, N = 297.7096.26MIN: 94.39 / MAX: 117.31MIN: 93.43 / MAX: 114.681. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: vgg16ab20406080100Min: 97.47 / Avg: 97.7 / Max: 97.93Min: 96.15 / Avg: 96.26 / Max: 96.371. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: googlenetab48121620SE +/- 0.00, N = 2SE +/- 0.01, N = 216.2916.19MIN: 14.94 / MAX: 33.61MIN: 14.91 / MAX: 32.91. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: googlenetab48121620Min: 16.29 / Avg: 16.29 / Max: 16.29Min: 16.18 / Avg: 16.19 / Max: 16.21. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: blazefaceab0.2160.4320.6480.8641.08SE +/- 0.00, N = 2SE +/- 0.01, N = 20.940.96MIN: 0.86 / MAX: 3.07MIN: 0.8 / MAX: 3.111. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: blazefaceab246810Min: 0.93 / Avg: 0.94 / Max: 0.94Min: 0.95 / Avg: 0.96 / Max: 0.961. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: efficientnet-b0ab3691215SE +/- 0.07, N = 2SE +/- 0.02, N = 29.229.34MIN: 8.52 / MAX: 24.96MIN: 8.61 / MAX: 25.621. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: efficientnet-b0ab3691215Min: 9.14 / Avg: 9.22 / Max: 9.29Min: 9.32 / Avg: 9.34 / Max: 9.361. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: mnasnetab1.18352.3673.55054.7345.9175SE +/- 0.81, N = 2SE +/- 0.78, N = 25.235.26MIN: 4.05 / MAX: 26.68MIN: 4.06 / MAX: 22.061. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: mnasnetab246810Min: 4.42 / Avg: 5.23 / Max: 6.04Min: 4.48 / Avg: 5.26 / Max: 6.031. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: shufflenet-v2ab0.82581.65162.47743.30324.129SE +/- 0.63, N = 2SE +/- 0.62, N = 23.663.67MIN: 2.74 / MAX: 16.28MIN: 2.75 / MAX: 19.081. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: shufflenet-v2ab246810Min: 3.03 / Avg: 3.66 / Max: 4.29Min: 3.05 / Avg: 3.67 / Max: 4.281. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3ab1.14982.29963.44944.59925.749SE +/- 0.77, N = 2SE +/- 0.80, N = 25.115.11MIN: 4.07 / MAX: 20.14MIN: 4.05 / MAX: 26.71. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3ab246810Min: 4.34 / Avg: 5.11 / Max: 5.87Min: 4.31 / Avg: 5.11 / Max: 5.91. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2ab246810SE +/- 0.67, N = 2SE +/- 0.65, N = 26.676.67MIN: 5.54 / MAX: 27.83MIN: 5.58 / MAX: 27.941. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2ab3691215Min: 6 / Avg: 6.67 / Max: 7.33Min: 6.02 / Avg: 6.67 / Max: 7.321. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: mobilenetab714212835SE +/- 0.19, N = 2SE +/- 0.57, N = 228.1527.38MIN: 26.87 / MAX: 48.42MIN: 24.23 / MAX: 47.571. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: Vulkan GPU - Model: mobilenetab612182430Min: 27.96 / Avg: 28.15 / Max: 28.34Min: 26.8 / Avg: 27.38 / Max: 27.951. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12, Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better; N = 2)
  a: 1.37399 (SE +/- 0.00084; runs 1.37 to 1.37)
  b: 1.34467 (SE +/- 0.00287; runs 1.34 to 1.35)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517, Target: CPU (ms, fewer is better; N = 2 for every result; all builds: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread)

  FastestDet: a 5.39 (SE +/- 0.04; runs 5.35 to 5.43; MIN 5.09 / MAX 24.31), b 5.55 (SE +/- 0.16; runs 5.39 to 5.7; MIN 5.23 / MAX 25.23)
  vision_transformer: a 233.96 (SE +/- 0.17; runs 233.79 to 234.13; MIN 196.52 / MAX 292.9), b 232.17 (SE +/- 5.04; runs 227.13 to 237.2; MIN 187.55 / MAX 291.61)
  regnety_400m: a 10.03 (SE +/- 0.14; runs 9.89 to 10.17; MIN 8.93 / MAX 25.51), b 10.12 (SE +/- 0.04; runs 10.08 to 10.15; MIN 9.48 / MAX 25.78)
  squeezenet_ssd: a 16.55 (SE +/- 0.30; runs 16.25 to 16.84; MIN 15.48 / MAX 37.44), b 15.80 (SE +/- 0.37; runs 15.43 to 16.17; MIN 13.81 / MAX 36.58)
  yolov4-tiny: a 38.37 (SE +/- 0.08; runs 38.29 to 38.45; MIN 36.54 / MAX 54.65), b 39.31 (SE +/- 0.83; runs 38.48 to 40.13; MIN 36.44 / MAX 59.39)
  resnet50: a 36.02 (SE +/- 0.05; runs 35.97 to 36.06; MIN 31.18 / MAX 61.6), b 34.33 (SE +/- 1.96; runs 32.37 to 36.28; MIN 30.77 / MAX 59.68)
  alexnet: a 11.68 (SE +/- 0.06; runs 11.62 to 11.73; MIN 10.92 / MAX 27.91), b 11.64 (SE +/- 0.07; runs 11.57 to 11.71; MIN 10.99 / MAX 27.75)
  resnet18: a 12.61 (SE +/- 0.04; runs 12.57 to 12.65; MIN 11.89 / MAX 29.06), b 12.62 (SE +/- 0.11; runs 12.51 to 12.73; MIN 11.83 / MAX 28.8)
  vgg16: a 97.42 (SE +/- 0.26; runs 97.16 to 97.67; MIN 94.57 / MAX 115.91), b 98.75 (SE +/- 2.34; runs 96.41 to 101.08; MIN 90.66 / MAX 1242.52)
  googlenet: a 16.23 (SE +/- 0.08; runs 16.15 to 16.3; MIN 15.01 / MAX 32.85), b 16.19 (SE +/- 0.10; runs 16.09 to 16.28; MIN 14.97 / MAX 32.25)
  blazeface: a 0.91 (SE +/- 0.01; runs 0.9 to 0.91; MIN 0.78 / MAX 3.03), b 0.93 (SE +/- 0.00; runs 0.93 to 0.93; MIN 0.8 / MAX 3.09)
  efficientnet-b0: a 9.78 (SE +/- 0.65; runs 9.13 to 10.42; MIN 8.38 / MAX 32.56), b 9.43 (SE +/- 0.02; runs 9.4 to 9.45; MIN 8.62 / MAX 26.5)
  mnasnet: a 6.04 (SE +/- 0.02; runs 6.01 to 6.06; MIN 5.41 / MAX 26.6), b 4.49 (SE +/- 0.02; runs 4.46 to 4.51; MIN 4.03 / MAX 20.38)
  shufflenet-v2: a 4.29 (SE +/- 0.02; runs 4.26 to 4.31; MIN 3.99 / MAX 24.93), b 3.02 (SE +/- 0.01; runs 3 to 3.03; MIN 2.73 / MAX 18.89)
  mobilenet-v3 (Target: CPU-v3-v3): a 5.14 (SE +/- 0.80; runs 4.34 to 5.94; MIN 4.05 / MAX 26.81), b 4.31 (SE +/- 0.04; runs 4.27 to 4.34; MIN 4.05 / MAX 20.17)
  mobilenet-v2 (Target: CPU-v2-v2): a 6.69 (SE +/- 0.70; runs 5.99 to 7.39; MIN 5.6 / MAX 28.58), b 6.15 (SE +/- 0.07; runs 6.08 to 6.21; MIN 5.65 / MAX 26.38)
  mobilenet: a 25.93 (SE +/- 0.19; runs 25.74 to 26.11; MIN 23.82 / MAX 45.07), b 27.68 (SE +/- 0.03; runs 27.65 to 27.7; MIN 24.92 / MAX 47.55)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.1, Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better; N = 2)
  a: 4.2183 (SE +/- 0.0414; runs 4.18 to 4.26; MIN 3.54 / MAX 5.31)
  b: 4.1998 (SE +/- 0.0239; runs 4.18 to 4.22; MIN 3.54 / MAX 5.43)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.6, Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better; N = 2; (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq)
  a: 1.092 (SE +/- 0.023; runs 1.07 to 1.11)
  b: 0.948 (SE +/- 0.003; runs 0.95 to 0.95)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9, Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, more is better; N = 2; (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto)
  a: 3.905 (SE +/- 0.015; runs 3.89 to 3.92)
  b: 3.790 (SE +/- 0.042; runs 3.75 to 3.83)

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.
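As a hedged sketch of what timed insertions on an indexed database look like in practice (the real test profile defines its own schema, row count, and number of concurrent copies, none of which are shown here), using only Python's standard sqlite3 module:

```python
import sqlite3
import time

def timed_insertions(db_path=":memory:", rows=10_000):
    """Time a batch of INSERTs into an indexed SQLite table.

    Simplified illustration of the kind of work this test profile
    measures; the actual benchmark's schema and workload differ.
    """
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    cur.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
    cur.execute("CREATE INDEX idx_t_id ON t (id)")  # index maintained per insert
    start = time.perf_counter()
    cur.executemany(
        "INSERT INTO t VALUES (?, ?)",
        ((i, f"row-{i}") for i in range(rows)),
    )
    conn.commit()
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

print(f"inserted in {timed_insertions(rows=5_000):.3f} s")
```

Because every insert also updates the index, insertion time grows with both row count and the number of concurrent copies contending for the database.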

SQLite 3.41.2 (Seconds, fewer is better; N = 2; (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm)
  Threads / Copies: 4: a 125.83 (SE +/- 17.49; runs 108.34 to 143.32), b 173.05 (SE +/- 0.50; runs 172.54 to 173.55)
  Threads / Copies: 2: a 123.47 (SE +/- 24.93; runs 98.54 to 148.39), b 170.01 (SE +/- 1.42; runs 168.59 to 171.44)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.1, Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better; N = 2)
  a: 4.7883 (SE +/- 0.0697; runs 4.72 to 4.86; MIN 4.05 / MAX 5.89)
  b: 4.7636 (SE +/- 0.0364; runs 4.73 to 4.8; MIN 4.05 / MAX 5.98)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (N = 2)
  ms/batch (fewer is better): a 764.27 (SE +/- 113.73; runs 650.54 to 878), b 614.68 (SE +/- 2.10; runs 612.58 to 616.78)
  items/sec (more is better): a 2.6677 (SE +/- 0.3971; runs 2.27 to 3.06), b 3.2461 (SE +/- 0.0188; runs 3.23 to 3.26)

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
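The "M N K" parameter below names the square matrix dimensions of the measured small-matrix multiply. As a plain-Python sketch of the operation and the GFLOPS/s accounting (libxsmm itself JIT-generates vectorized native kernels; this only shows what is being counted):

```python
import time

def naive_gemm(a, b, n):
    """C = A * B for n x n matrices, the operation shape (M = N = K)
    that this benchmark measures at sizes such as 32 and 64."""
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    return c

def gflops_per_sec(n, seconds):
    # A dense GEMM performs 2 * M * N * K floating-point operations.
    return (2.0 * n ** 3) / seconds / 1e9

n = 32
a = [[float(i + j) for j in range(n)] for i in range(n)]
b = [[float(i - j) for j in range(n)] for i in range(n)]
t0 = time.perf_counter()
c = naive_gemm(a, b, n)
print(f"{gflops_per_sec(n, time.perf_counter() - t0):.4f} GFLOPS/s (pure Python)")
```

At these sizes the arithmetic is small relative to call and loop overhead, which is why libxsmm's specialized code generation matters for small-GEMM throughput.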

libxsmm 2-1.17-3645, M N K: 64 (GFLOPS/s, more is better; N = 2; (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2)
  a: 90.9 (SE +/- 4.45; runs 86.4 to 95.3)
  b: 90.8 (SE +/- 4.80; runs 86 to 95.6)

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.1.3, Test: Writes (Op/s, more is better; N = 2)
  a: 26422 (SE +/- 149.00; runs 26273 to 26571)
  b: 26438 (SE +/- 71.50; runs 26366 to 26509)

Z3 Theorem Prover

The Z3 Theorem Prover / SMT solver is developed by Microsoft Research under the MIT license. Learn more via the OpenBenchmarking.org test page.

Z3 Theorem Prover 4.12.1, SMT File: 2.smt2 (Seconds, fewer is better; N = 2; (CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC)
  a: 130.94 (SE +/- 0.11; runs 130.83 to 131.05)
  b: 131.41 (SE +/- 0.05; runs 131.36 to 131.45)

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.

libxsmm 2-1.17-3645, M N K: 32 (GFLOPS/s, more is better; N = 2; (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2)
  a: 47.7 (SE +/- 0.35; runs 47.3 to 48)
  b: 48.0 (SE +/- 0.55; runs 47.4 to 48.5)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12, Benchmark: particle_volume/ao/real_time (Items Per Second, more is better; N = 2)
  a: 1.49334 (SE +/- 0.09686; runs 1.4 to 1.59)
  b: 1.36920 (SE +/- 0.02805; runs 1.34 to 1.4)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (N = 2)
  ms/batch (fewer is better): a 93.85 (SE +/- 0.32; runs 93.53 to 94.17), b 58.24 (SE +/- 0.22; runs 58.02 to 58.46)
  items/sec (more is better): a 21.30 (SE +/- 0.07; runs 21.23 to 21.37), b 34.32 (SE +/- 0.13; runs 34.19 to 34.45)

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500
  Average Latency (fewer is better): a 75.40 (MAX 1528.52), b 70.46 (MAX 1429.13)
  point/sec (more is better): a 611157.73, b 637041.62

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better; N = 2)
  a: 0.622182 (SE +/- 0.014296; runs 0.61 to 0.64)
  b: 0.709986 (SE +/- 0.025213; runs 0.68 to 0.74)

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500
  Average Latency (fewer is better): a 342.21 (MAX 2349.14), b 387.15 (MAX 2248.18)
  point/sec (more is better): a 13467917.31, b 12006039.14

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
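The "Set To Get Ratio" in the results below controls the write/read command mix that memtier_benchmark issues (its --ratio option). A minimal sketch of that scheduling only, with no actual Redis traffic:

```python
import itertools

def command_mix(set_to_get="1:10"):
    """Yield an endless SET/GET command stream in the given ratio,
    mirroring how memtier_benchmark's --ratio option interleaves
    writes and reads. Illustrative only; the real tool drives live
    client connections against a running server."""
    sets, gets = (int(x) for x in set_to_get.split(":"))
    return itertools.cycle(["SET"] * sets + ["GET"] * gets)

mix = command_mix("1:10")
window = [next(mix) for _ in range(22)]
print(window.count("SET"), window.count("GET"))  # prints: 2 20
```

A 1:10 ratio is read-heavy, so throughput in these configurations is dominated by GET latency rather than write cost.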

Redis 7.0.12 + memtier_benchmark 2.0, Protocol: Redis (Ops/sec, more is better; N = 2; (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre)
  Clients: 500 - Set To Get Ratio: 1:10: a 914462.02 (SE +/- 45279.14; runs 869182.88 to 959741.15), b 1253523.69 (SE +/- 63299.47; runs 1190224.22 to 1316823.16)
  Clients: 500 - Set To Get Ratio: 1:5: a 890374.23 (SE +/- 8374.42; runs 881999.8 to 898748.65), b 1248610.70 (SE +/- 25673.44; runs 1222937.26 to 1274284.14)

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.19, Set To Get Ratio: 1:5 (Ops/sec, more is better; N = 2; (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre)
  a: 525368.39 (SE +/- 12286.93; runs 513081.46 to 537655.32)
  b: 518308.20 (SE +/- 13966.62; runs 504341.58 to 532274.82)

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOps/sec, More Is BetterRedis 7.0.12 + memtier_benchmark 2.0Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10ab300K600K900K1200K1500KSE +/- 19181.49, N = 2SE +/- 18012.65, N = 21109812.441455171.861. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterRedis 7.0.12 + memtier_benchmark 2.0Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10ab300K600K900K1200K1500KMin: 1090630.95 / Avg: 1109812.44 / Max: 1128993.92Min: 1437159.21 / Avg: 1455171.86 / Max: 1473184.511. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenBenchmarking.orgOps/sec, More Is BetterRedis 7.0.12 + memtier_benchmark 2.0Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10ab300K600K900K1200K1500KSE +/- 568.99, N = 2SE +/- 8107.22, N = 21028641.191377833.481. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterRedis 7.0.12 + memtier_benchmark 2.0Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10ab200K400K600K800K1000KMin: 1028072.2 / Avg: 1028641.19 / Max: 1029210.18Min: 1369726.26 / Avg: 1377833.48 / Max: 1385940.71. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenBenchmarking.orgOps/sec, More Is BetterRedis 7.0.12 + memtier_benchmark 2.0Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5ab300K600K900K1200K1500KSE +/- 44860.53, N = 2SE +/- 39391.55, N = 21016703.271372033.181. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterRedis 7.0.12 + memtier_benchmark 2.0Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5ab200K400K600K800K1000KMin: 971842.74 / Avg: 1016703.27 / Max: 1061563.79Min: 1332641.63 / Avg: 1372033.18 / Max: 1411424.721. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Redis 7.0.12 + memtier_benchmark 2.0 (Ops/sec, More Is Better)
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5
  a: 1039311.12 (SE +/- 43465.56, N = 2; Min: 995845.55 / Max: 1082776.68)
  b: 1398883.01 (SE +/- 4729.88, N = 2; Min: 1394153.12 / Max: 1403612.89)
  (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.19 (Ops/sec, More Is Better)
Set To Get Ratio: 1:10
  a: 502050.43 (SE +/- 9547.84, N = 2; Min: 492502.59 / Max: 511598.27)
  b: 486807.49 (SE +/- 4490.17, N = 2; Min: 482317.32 / Max: 491297.66)
  (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached 1.6.19 (Ops/sec, More Is Better)
Set To Get Ratio: 1:100
  a: 496309.76 (SE +/- 10780.90, N = 2; Min: 485528.86 / Max: 507090.65)
  b: 485922.63 (SE +/- 14781.65, N = 2; Min: 471140.98 / Max: 500704.28)
  (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 (Items Per Second, More Is Better)
Benchmark: gravity_spheres_volume/dim_512/ao/real_time
  a: 0.745146 (SE +/- 0.009807, N = 2; Min: 0.74 / Max: 0.75)
  b: 0.734783 (SE +/- 0.008961, N = 2; Min: 0.73 / Max: 0.74)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.6 (Frames Per Second, More Is Better)
Encoder Mode: Preset 8 - Input: Bosphorus 4K
  a: 9.490 (SE +/- 0.307, N = 2; Min: 9.18 / Max: 9.8)
  b: 9.436 (SE +/- 1.160, N = 2; Min: 8.28 / Max: 10.6)
  (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
  a: 30.54 (SE +/- 0.17, N = 2; Min: 30.37 / Max: 30.71)
  b: 30.57 (SE +/- 0.13, N = 2; Min: 30.43 / Max: 30.7)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
  a: 65.43 (SE +/- 0.37, N = 2; Min: 65.05 / Max: 65.8)
  b: 65.36 (SE +/- 0.29, N = 2; Min: 65.07 / Max: 65.65)
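For the DeepSparse asynchronous multi-stream entries, the paired ms/batch and items/sec metrics are two views of the same measurement, related by the stream concurrency (a Little's-law style identity: concurrency ≈ throughput × latency). A rough consistency check using run a's figures above, assuming both metrics cover the same measurement window and one item per batch per stream:

```python
# Consistency check on the DeepSparse asynchronous multi-stream
# numbers for run "a": concurrent streams in flight is approximately
# throughput (items/sec) multiplied by latency (seconds per batch).
items_per_sec = 65.43   # reported items/sec
ms_per_batch = 30.54    # reported ms/batch

streams = items_per_sec * ms_per_batch / 1000.0
assert abs(streams - 2.0) < 0.05   # consistent with ~2 concurrent streams
```

The implied concurrency of about two streams is an inference from the numbers, not something the result file states explicitly, but it is plausible for this 4-core / 8-thread CPU.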

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 (Frames Per Second, More Is Better)
Video Input: Bosphorus 1080p - Video Preset: Faster
  a: 9.882 (SE +/- 0.090, N = 2; Min: 9.79 / Max: 9.97)
  b: 9.868 (SE +/- 0.603, N = 2; Min: 9.27 / Max: 10.47)
  (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
  a: 211.98 (SE +/- 13.47, N = 2; Min: 198.51 / Max: 225.46)
  b: 179.08 (SE +/- 3.79, N = 2; Min: 175.29 / Max: 182.87)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
  a: 9.4553 (SE +/- 0.6045, N = 2; Min: 8.85 / Max: 10.06)
  b: 11.1719 (SE +/- 0.2364, N = 2; Min: 10.94 / Max: 11.41)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 (Items Per Second, More Is Better)
Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time
  a: 1.061640 (SE +/- 0.011835, N = 2; Min: 1.05 / Max: 1.07)
  b: 1.012953 (SE +/- 0.016867, N = 2; Min: 1 / Max: 1.03)

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.

SQLite 3.41.2 (Seconds, Fewer Is Better)
Threads / Copies: 1
  a: 30.33 (SE +/- 0.86, N = 2; Min: 29.46 / Max: 31.19)
  b: 88.59 (SE +/- 0.19, N = 2; Min: 88.39 / Max: 88.78)
  (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm
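The SQLite result is the widest spread between the two runs in this comparison; expressing it as a ratio makes the regression concrete (numbers taken from the block above):

```python
# Relative slowdown of run "b" versus run "a" on the single-thread
# SQLite insert test (Seconds, Fewer Is Better).
a_sec, b_sec = 30.33, 88.59

slowdown = b_sec / a_sec
assert abs(slowdown - 2.92) < 0.01   # b takes roughly 2.9x as long as a
```

Given the tiny per-run spread on both sides (SE of 0.86 s and 0.19 s), this near-3x gap is far outside run-to-run noise.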

Apache IoTDB

Apache IoTDB 1.1.2 (Average Latency, Fewer Is Better)
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200
  a: 144.53 (Max: 1992.27)
  b: 143.24 (Max: 2073.51)

Apache IoTDB 1.1.2 (point/sec, More Is Better)
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200
  a: 12613348.93
  b: 12464073.00

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
  a: 669.18 (SE +/- 12.84, N = 2; Min: 656.34 / Max: 682.02)
  b: 628.36 (SE +/- 79.47, N = 2; Min: 548.89 / Max: 707.83)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
  a: 2.9815 (SE +/- 0.0495, N = 2; Min: 2.93 / Max: 3.03)
  b: 3.2320 (SE +/- 0.4116, N = 2; Min: 2.82 / Max: 3.64)

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
  a: 979.18 (SE +/- 0.29, N = 2; Min: 978.89 / Max: 979.47)
  b: 835.85 (SE +/- 13.19, N = 2; Min: 822.67 / Max: 849.04)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
  a: 2.0309 (SE +/- 0.0085, N = 2; Min: 2.02 / Max: 2.04)
  b: 2.3883 (SE +/- 0.0327, N = 2; Min: 2.36 / Max: 2.42)

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
  a: 244.56 (SE +/- 31.56, N = 2; Min: 213 / Max: 276.12)
  b: 245.64 (SE +/- 32.45, N = 2; Min: 213.2 / Max: 278.09)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
  a: 8.3009 (SE +/- 1.0816, N = 2; Min: 7.22 / Max: 9.38)
  b: 8.2832 (SE +/- 1.0939, N = 2; Min: 7.19 / Max: 9.38)

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
  a: 69.56 (SE +/- 5.54, N = 2; Min: 64.02 / Max: 75.1)
  b: 70.82 (SE +/- 6.64, N = 2; Min: 64.18 / Max: 77.46)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
  a: 28.92 (SE +/- 2.30, N = 2; Min: 26.62 / Max: 31.22)
  b: 28.46 (SE +/- 2.67, N = 2; Min: 25.8 / Max: 31.13)

dav1d

Dav1d is an open-source, speedy AV1 video decoder supporting modern SIMD CPU features. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.2.1 (FPS, More Is Better)
Video Input: Summer Nature 4K
  a: 67.01 (SE +/- 0.31, N = 2; Min: 66.7 / Max: 67.32)
  b: 66.38 (SE +/- 2.35, N = 2; Min: 64.03 / Max: 68.73)
  (CC) gcc options: -pthread -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
  a: 855.68 (SE +/- 75.66, N = 2; Min: 780.01 / Max: 931.34)
  b: 864.99 (SE +/- 86.04, N = 2; Min: 778.96 / Max: 951.03)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
  a: 2.3527 (SE +/- 0.2053, N = 2; Min: 2.15 / Max: 2.56)
  b: 2.3352 (SE +/- 0.2323, N = 2; Min: 2.1 / Max: 2.57)

dav1d

Dav1d is an open-source, speedy AV1 video decoder supporting modern SIMD CPU features. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.2.1 (FPS, More Is Better)
Video Input: Chimera 1080p 10-bit
  a: 191.30 (SE +/- 19.41, N = 2; Min: 171.89 / Max: 210.71)
  b: 190.87 (SE +/- 12.81, N = 2; Min: 178.06 / Max: 203.68)
  (CC) gcc options: -pthread -lm

Z3 Theorem Prover

The Z3 Theorem Prover / SMT solver is developed by Microsoft Research under the MIT license. Learn more via the OpenBenchmarking.org test page.

Z3 Theorem Prover 4.12.1 (Seconds, Fewer Is Better)
SMT File: 1.smt2
  a: 43.97 (SE +/- 0.05, N = 2; Min: 43.92 / Max: 44.02)
  b: 43.42 (SE +/- 0.18, N = 2; Min: 43.24 / Max: 43.6)
  (CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
  a: 116.27 (SE +/- 15.66, N = 2; Min: 100.61 / Max: 131.92)
  b: 98.71 (SE +/- 14.87, N = 2; Min: 83.84 / Max: 113.59)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
  a: 17.51 (SE +/- 2.36, N = 2; Min: 15.15 / Max: 19.87)
  b: 20.73 (SE +/- 3.12, N = 2; Min: 17.6 / Max: 23.85)

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
  a: 162.16 (SE +/- 21.74, N = 2; Min: 140.43 / Max: 183.9)
  b: 154.26 (SE +/- 1.05, N = 2; Min: 153.21 / Max: 155.31)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
  a: 12.54 (SE +/- 1.68, N = 2; Min: 10.87 / Max: 14.22)
  b: 12.95 (SE +/- 0.10, N = 2; Min: 12.85 / Max: 13.05)

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
  a: 137.95 (SE +/- 15.16, N = 2; Min: 122.79 / Max: 153.11)
  b: 139.05 (SE +/- 17.32, N = 2; Min: 121.73 / Max: 156.37)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
  a: 14.67 (SE +/- 1.61, N = 2; Min: 13.06 / Max: 16.28)
  b: 14.61 (SE +/- 1.82, N = 2; Min: 12.79 / Max: 16.43)

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
  a: 65.95 (SE +/- 6.09, N = 2; Min: 59.86 / Max: 72.04)
  b: 68.00 (SE +/- 7.55, N = 2; Min: 60.45 / Max: 75.55)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
  a: 30.57 (SE +/- 2.83, N = 2; Min: 27.74 / Max: 33.4)
  b: 29.76 (SE +/- 3.30, N = 2; Min: 26.46 / Max: 33.06)

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
  a: 68.05 (SE +/- 6.60, N = 2; Min: 61.45 / Max: 74.64)
  b: 69.62 (SE +/- 6.42, N = 2; Min: 63.2 / Max: 76.04)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
  a: 29.66 (SE +/- 2.87, N = 2; Min: 26.78 / Max: 32.53)
  b: 28.96 (SE +/- 2.67, N = 2; Min: 26.29 / Max: 31.63)

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
  a: 10.25 (SE +/- 1.04, N = 2; Min: 9.2 / Max: 11.29)
  b: 10.34 (SE +/- 1.15, N = 2; Min: 9.19 / Max: 11.49)

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
  a: 196.78 (SE +/- 20.06, N = 2; Min: 176.73 / Max: 216.84)
  b: 195.49 (SE +/- 21.75, N = 2; Min: 173.73 / Max: 217.24)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.6 (Frames Per Second, More Is Better)
Encoder Mode: Preset 4 - Input: Bosphorus 1080p
  a: 4.260 (SE +/- 0.495, N = 2; Min: 3.77 / Max: 4.76)
  b: 4.773 (SE +/- 0.109, N = 2; Min: 4.66 / Max: 4.88)
  (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

dav1d

Dav1d is an open-source, speedy AV1 video decoder supporting modern SIMD CPU features. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.2.1 (FPS, More Is Better)
Video Input: Chimera 1080p
  a: 244.12 (SE +/- 2.76, N = 2; Min: 241.36 / Max: 246.87)
  b: 249.62 (SE +/- 1.95, N = 2; Min: 247.67 / Max: 251.57)
  (CC) gcc options: -pthread -lm

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus five times. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.4 (Seconds, Fewer Is Better)
WAV To Opus Encode
  a: 35.40 (SE +/- 0.10, N = 2; Min: 35.29 / Max: 35.5)
  b: 35.43 (SE +/- 0.06, N = 2; Min: 35.37 / Max: 35.49)
  (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

Apache IoTDB

Apache IoTDB 1.1.2 (Average Latency, Fewer Is Better)
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200
  a: 107.84 (Max: 1809.87)
  b: 104.43 (Max: 2007.51)

Apache IoTDB 1.1.2 (point/sec, More Is Better)
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200
  a: 15017343.49
  b: 15526243.84

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.30 (MFLOPS, More Is Better)
  a: 2379.8 (SE +/- 210.20, N = 2; Min: 2169.6 / Max: 2590)
  b: 2404.8 (SE +/- 234.25, N = 2; Min: 2170.5 / Max: 2639)
  (CXX) g++ options: -O3 -march=native -fPIE -pie

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: IO_uring
  a: 170492.16 (SE +/- 2497.26, N = 2; Min: 167994.9 / Max: 172989.42)
  b: 182936.68 (SE +/- 499.46, N = 2; Min: 182437.22 / Max: 183436.13)
  (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz

Apache IoTDB

Apache IoTDB 1.1.2 (Average Latency, Fewer Is Better)
Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200
  a: 15.75 (Max: 1425.54)
  b: 15.11 (Max: 1458.64)

Apache IoTDB 1.1.2 (point/sec, More Is Better)
Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200
  a: 1036429.93
  b: 1040683.13

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: Malloc
  a: 375933.99 (SE +/- 39637.02, N = 2; Min: 336296.97 / Max: 415571)
  b: 434156.10 (SE +/- 46761.51, N = 2; Min: 387394.59 / Max: 480917.6)
  (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: Cloning
  a: 669.41 (SE +/- 28.62, N = 2; Min: 640.79 / Max: 698.02)
  b: 715.58 (SE +/- 2.47, N = 2; Min: 713.11 / Max: 718.05)

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: MMAP
  a: 20.86 (SE +/- 3.19, N = 2; Min: 17.67 / Max: 24.05)
  b: 28.46 (SE +/- 3.70, N = 2; Min: 24.76 / Max: 32.16)

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: MEMFD
  a: 56.59 (SE +/- 0.79, N = 2; Min: 55.8 / Max: 57.37)
  b: 43.09 (SE +/- 0.72, N = 2; Min: 42.37 / Max: 43.81)

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: Zlib
  a: 273.05 (SE +/- 31.94, N = 2; Min: 241.11 / Max: 304.99)
  b: 335.91 (SE +/- 21.49, N = 2; Min: 314.42 / Max: 357.4)

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: Pipe
  a: 1422109.84 (SE +/- 30380.48, N = 2; Min: 1391729.36 / Max: 1452490.32)
  b: 1533355.11 (SE +/- 131648.17, N = 2; Min: 1401706.94 / Max: 1665003.28)

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: Atomic
  a: 224.51 (SE +/- 10.78, N = 2; Min: 213.73 / Max: 235.28)
  b: 249.99 (SE +/- 11.52, N = 2; Min: 238.47 / Max: 261.51)

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: NUMA
  a: 53.95 (SE +/- 5.63, N = 2; Min: 48.32 / Max: 59.57)
  b: 60.49 (SE +/- 5.27, N = 2; Min: 55.22 / Max: 65.76)

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: Pthread
  a: 34974.26 (SE +/- 4732.28, N = 2; Min: 30241.98 / Max: 39706.53)
  b: 42229.61 (SE +/- 4375.87, N = 2; Min: 37853.74 / Max: 46605.47)

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: x86_64 RdRand
  a: 3267.31 (SE +/- 13.37, N = 2; Min: 3253.94 / Max: 3280.67)
  b: 3134.04 (SE +/- 68.45, N = 2; Min: 3065.58 / Max: 3202.49)

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: Function Call
  a: 2058.70 (SE +/- 96.37, N = 2; Min: 1962.33 / Max: 2155.07)
  b: 2164.20 (SE +/- 40.41, N = 2; Min: 2123.79 / Max: 2204.6)

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: System V Message Passing
  a: 2652462.62 (SE +/- 119752.37, N = 2; Min: 2532710.25 / Max: 2772214.99)
  b: 2519916.10 (SE +/- 200089.21, N = 2; Min: 2319826.89 / Max: 2720005.31)

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: Socket Activity
  a: 2495.57 (SE +/- 249.85, N = 2; Min: 2245.72 / Max: 2745.41)
  b: 2508.63 (SE +/- 271.08, N = 2; Min: 2237.55 / Max: 2779.7)

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Matrix 3D Mathab90180270360450SE +/- 5.36, N = 2SE +/- 6.17, N = 2383.49396.031. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Matrix 3D Mathab70140210280350Min: 378.13 / Avg: 383.49 / Max: 388.85Min: 389.85 / Avg: 396.03 / Max: 402.21. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
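The samples/s figure these Liquid-DSP results report is plain filter throughput: samples pushed through the kernel divided by wall time. A minimal sketch of that bookkeeping (the `measure_samples_per_sec` helper and the stand-in kernel are illustrative, not the benchmark's actual code):

```python
import time

def measure_samples_per_sec(process, buffer_len, iterations):
    """Time `iterations` passes over a `buffer_len`-sample buffer
    and return throughput in samples per second."""
    buf = [0.0] * buffer_len
    start = time.perf_counter()
    for _ in range(iterations):
        process(buf)
    elapsed = time.perf_counter() - start
    return buffer_len * iterations / elapsed

# Trivial stand-in kernel; the real benchmark runs liquid-dsp FIR filters.
rate = measure_samples_per_sec(sum, 256, 2000)
```

The real numbers below scale the same way: bigger filter lengths mean more work per sample, so samples/s drops sharply between the Filter Length 32 and Filter Length 512 configurations.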

Liquid-DSP 1.6, samples/s (more is better):

  Threads: 8 - Buffer Length: 256 - Filter Length: 512
    a: 35719000  (SE ± 3468000, N = 2; min 32251000, max 39187000)
    b: 36523000  (SE ± 2570000, N = 2; min 33953000, max 39093000)

  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
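The SE figures throughout these tables are standard errors of the mean; with only N = 2 runs that works out to half the gap between the two runs. A quick check against the x86_64 RdRand "a" result (runs of 3253.94 and 3280.67, reported as 3267.31 with SE ± 13.37):

```python
import statistics

# The two individual runs behind the x86_64 RdRand "a" result
runs = [3253.94, 3280.67]

avg = statistics.mean(runs)                     # headline Bogo Ops/s value
se = statistics.stdev(runs) / len(runs) ** 0.5  # standard error of the mean

# avg ≈ 3267.31 and se ≈ 13.37, matching the reported figures
```

With only two samples the SE is a very rough spread estimate, which is worth keeping in mind for the noisier stressors (CPU Cache, Forking, Futex) below.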

Stress-NG 0.15.10, Bogo Ops/s (more is better):

  Test: Vector Floating Point
    a: 7289.32  (SE ± 364.11, N = 2; min 6925.21, max 7653.43)
    b: 7877.70  (SE ± 310.83, N = 2; min 7566.87, max 8188.53)

  1. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz

Liquid-DSP


Liquid-DSP 1.6, samples/s (more is better):

  Threads: 4 - Buffer Length: 256 - Filter Length: 512
    a: 25830500  (SE ± 466500, N = 2; min 25364000, max 26297000)
    b: 25812500  (SE ± 478500, N = 2; min 25334000, max 26291000)

  Threads: 2 - Buffer Length: 256 - Filter Length: 512
    a: 15179500  (SE ± 158500, N = 2; min 15021000, max 15338000)
    b: 15023500  (SE ± 149500, N = 2; min 14874000, max 15173000)

  Threads: 1 - Buffer Length: 256 - Filter Length: 32
    a: 46516500  (SE ± 451500, N = 2; min 46065000, max 46968000)
    b: 46825000  (SE ± 77000, N = 2; min 46748000, max 46902000)

  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG


Stress-NG 0.15.10, Bogo Ops/s (more is better):

  Test: AVL Tree
    a: 16.41  (SE ± 0.05, N = 2; min 16.36, max 16.46)
    b: 19.30  (SE ± 0.15, N = 2; min 19.10, max 19.40)

  Test: Floating Point
    a: 743.83  (SE ± 25.00, N = 2; min 718.83, max 768.83)
    b: 762.86  (SE ± 25.67, N = 2; min 737.19, max 788.52)

  Test: Hash
    a: 523699.70  (SE ± 38337.11, N = 2; min 485362.59, max 562036.81)
    b: 638650.51  (SE ± 38690.57, N = 2; min 599959.93, max 677341.08)

  1. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz

Liquid-DSP


Liquid-DSP 1.6, samples/s (more is better):

  Threads: 1 - Buffer Length: 256 - Filter Length: 512
    a: 7721800  (SE ± 8100, N = 2; min 7713700, max 7729900)
    b: 7710850  (SE ± 20850, N = 2; min 7690000, max 7731700)

  Threads: 2 - Buffer Length: 256 - Filter Length: 57
    a: 73795500  (SE ± 128500, N = 2; min 73667000, max 73924000)
    b: 73620500  (SE ± 620500, N = 2; min 73000000, max 74241000)

  Threads: 2 - Buffer Length: 256 - Filter Length: 32
    a: 85550500  (SE ± 873500, N = 2; min 84677000, max 86424000)
    b: 85664000  (SE ± 1161000, N = 2; min 84503000, max 86825000)

  Threads: 1 - Buffer Length: 256 - Filter Length: 57
    a: 43429000  (SE ± 225000, N = 2; min 43204000, max 43654000)
    b: 43427500  (SE ± 154500, N = 2; min 43273000, max 43582000)

  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG


Stress-NG 0.15.10, Bogo Ops/s (more is better):

  Test: Memory Copying
    a: 1078.73  (SE ± 40.86, N = 2; min 1037.87, max 1119.58)
    b: 1088.75  (SE ± 40.05, N = 2; min 1048.69, max 1128.80)

  1. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz

Liquid-DSP


Liquid-DSP 1.6, samples/s (more is better):

  Threads: 8 - Buffer Length: 256 - Filter Length: 32
    a: 177945000  (SE ± 11475000, N = 2; min 166470000, max 189420000)
    b: 185540000  (SE ± 5480000, N = 2; min 180060000, max 191020000)

  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG


Stress-NG 0.15.10, Bogo Ops/s (more is better):

  Test: Vector Shuffle
    a: 2431.22  (SE ± 33.39, N = 2; min 2397.83, max 2464.61)
    b: 2431.01  (SE ± 17.07, N = 2; min 2413.94, max 2448.08)

  Test: CPU Cache
    a: 929178.43  (SE ± 244998.38, N = 2; min 684180.05, max 1174176.81)
    b: 1153192.75  (SE ± 138960.51, N = 2; min 1014232.24, max 1292153.26)

  1. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz

Liquid-DSP


Liquid-DSP 1.6, samples/s (more is better):

  Threads: 8 - Buffer Length: 256 - Filter Length: 57
    a: 140345000  (SE ± 5615000, N = 2; min 134730000, max 145960000)
    b: 132830000  (SE ± 7270000, N = 2; min 125560000, max 140100000)

  Threads: 4 - Buffer Length: 256 - Filter Length: 57
    a: 119590000  (SE ± 6360000, N = 2; min 113230000, max 125950000)
    b: 112205000  (SE ± 3395000, N = 2; min 108810000, max 115600000)

  Threads: 4 - Buffer Length: 256 - Filter Length: 32
    a: 140280000  (SE ± 2650000, N = 2; min 137630000, max 142930000)
    b: 139950000  (SE ± 2610000, N = 2; min 137340000, max 142560000)

  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG


Stress-NG 0.15.10, Bogo Ops/s (more is better):

  Test: Forking
    a: 9551.79  (SE ± 301.28, N = 2; min 9250.51, max 9853.07)
    b: 12140.60  (SE ± 1330.94, N = 2; min 10809.66, max 13471.54)

  Test: Mutex
    a: 604156.72  (SE ± 76155.40, N = 2; min 528001.32, max 680312.12)
    b: 694562.47  (SE ± 20730.70, N = 2; min 673831.77, max 715293.16)

  Test: Glibc Qsort Data Sorting
    a: 69.13  (SE ± 5.90, N = 2; min 63.23, max 75.03)
    b: 74.13  (SE ± 3.74, N = 2; min 70.39, max 77.86)

  Test: Matrix Math
    a: 17318.70  (SE ± 1025.15, N = 2; min 16293.50, max 18343.80)
    b: 17116.79  (SE ± 854.63, N = 2; min 16262.15, max 17971.42)

  Test: CPU Stress
    a: 7310.05  (SE ± 567.82, N = 2; min 6742.23, max 7877.86)
    b: 7866.40  (SE ± 158.65, N = 2; min 7707.70, max 8025.00)

  Test: SENDFILE
    a: 36779.86  (SE ± 1669.87, N = 2; min 35109.99, max 38449.72)
    b: 43752.37  (SE ± 1151.61, N = 2; min 42600.76, max 44903.98)

  Test: Crypto
    a: 4590.53  (SE ± 47.72, N = 2; min 4542.80, max 4638.25)
    b: 5684.53  (SE ± 96.10, N = 2; min 5588.43, max 5780.63)

  Test: Wide Vector Math
    a: 147314.85  (SE ± 2993.76, N = 2; min 144321.09, max 150308.60)
    b: 148388.79  (SE ± 3329.63, N = 2; min 145059.16, max 151718.42)

  Test: Poll
    a: 284300.80  (SE ± 20478.34, N = 2; min 263822.46, max 304779.13)
    b: 318466.50  (SE ± 23847.48, N = 2; min 294619.02, max 342313.98)

  Test: Glibc C String Functions
    a: 2485856.07  (SE ± 92139.37, N = 2; min 2393716.70, max 2577995.44)
    b: 2360589.10  (SE ± 158119.57, N = 2; min 2202469.53, max 2518708.67)

  Test: Fused Multiply-Add
    a: 2978232.73  (SE ± 63230.65, N = 2; min 2915002.08, max 3041463.38)
    b: 2737483.30  (SE ± 262560.86, N = 2; min 2474922.44, max 3000044.16)

  Test: Context Switching
    a: 705297.96  (SE ± 53162.51, N = 2; min 652135.45, max 758460.47)
    b: 733269.79  (SE ± 16234.88, N = 2; min 717034.91, max 749504.66)

  Test: Vector Math
    a: 12666.19  (SE ± 425.64, N = 2; min 12240.55, max 13091.83)
    b: 12444.33  (SE ± 913.67, N = 2; min 11530.66, max 13357.99)

  Test: Semaphores
    a: 3218725.51  (SE ± 287315.12, N = 2; min 2931410.39, max 3506040.62)
    b: 3241408.21  (SE ± 70160.30, N = 2; min 3171247.91, max 3311568.51)

  Test: Futex
    a: 486078.68  (SE ± 36419.89, N = 2; min 449658.79, max 522498.57)
    b: 624014.17  (SE ± 57507.32, N = 2; min 566506.85, max 681521.49)

  1. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz

Apache IoTDB

Apache IoTDB 1.1.2:

  Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500
    Average Latency:  a: 29.65 (max 1193.78);  b: 29.75 (max 1271.26)
    point/sec (more is better):  a: 1333775.59;  b: 1298988.31

  Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200
    Average Latency:  a: 17.34 (max 866.08);  b: 16.68 (max 1251.48)
    point/sec (more is better):  a: 816344.20;  b: 812170.86

  Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500
    Average Latency:  a: 34.81 (max 1426.89);  b: 34.48 (max 1495.07)
    point/sec (more is better):  a: 996242.81;  b: 1009840.28

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
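When comparing the a and b identifiers in results like these, the useful number is usually the relative delta rather than raw FPS. Worked through for the Preset 12 / Bosphorus 4K figures below (a: 35.50 FPS, b: 28.98 FPS):

```python
a_fps, b_fps = 35.50, 28.98  # Preset 12 - Bosphorus 4K results for runs a and b

# Relative advantage of run a over run b, in percent
delta_pct = (a_fps - b_fps) / b_fps * 100
# delta_pct ≈ 22.5, i.e. run a encoded roughly 22.5% faster here
```

That said, with only two samples per run and SEs near 1 FPS on this laptop-class CPU, deltas of a few percent in the other SVT-AV1 rows are within the noise.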

SVT-AV1 1.6, Frames Per Second (more is better):

  Encoder Mode: Preset 12 - Input: Bosphorus 4K
    a: 35.50  (SE ± 0.91, N = 2; min 34.59, max 36.41)
    b: 28.98  (SE ± 0.81, N = 2; min 28.17, max 29.79)

  Encoder Mode: Preset 13 - Input: Bosphorus 4K
    a: 34.05  (SE ± 2.10, N = 2; min 31.95, max 36.15)
    b: 30.98  (SE ± 2.71, N = 2; min 28.27, max 33.68)

  Encoder Mode: Preset 8 - Input: Bosphorus 1080p
    a: 35.74  (SE ± 0.87, N = 2; min 34.87, max 36.61)
    b: 31.01  (SE ± 2.58, N = 2; min 28.43, max 33.60)

  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache IoTDB

Apache IoTDB 1.1.2:

  Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200
    Average Latency:  a: 22.62 (max 1174.88);  b: 22.93 (max 1216.89)
    point/sec (more is better):  a: 529599.00;  b: 524660.18

dav1d

Dav1d is an open-source, speedy AV1 video decoder supporting modern SIMD CPU features. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.2.1, FPS (more is better):

  Video Input: Summer Nature 1080p
    a: 273.35  (SE ± 4.15, N = 2; min 269.20, max 277.49)
    b: 310.94  (SE ± 2.55, N = 2; min 308.39, max 313.48)

  1. (CC) gcc options: -pthread -lm

vkpeak

Vkpeak is a Vulkan compute benchmark inspired by OpenCL's clpeak. Vkpeak provides Vulkan compute performance measurements for FP16 / FP32 / FP64 / INT16 / INT32 scalar and vec4 performance. Learn more via the OpenBenchmarking.org test page.
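A GFLOPS score is simply floating-point operations per second, in billions: the benchmark times a workload of known operation count and divides. The arithmetic (the operation count here is a made-up illustration, not what vkpeak actually dispatches):

```python
def gflops(total_flops, elapsed_s):
    # Billions of floating-point operations completed per second
    return total_flops / elapsed_s / 1e9

# e.g. 536.66 billion FLOPs completed in 2 s would score 268.33 GFLOPS,
# matching the fp32-scalar result below
score = gflops(536.66e9, 2.0)
```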

vkpeak 20230730, GFLOPS (more is better):

  fp32-scalar
    a: 268.33  (SE ± 0.00, N = 2; min 268.33, max 268.33)
    b: 268.33  (SE ± 0.01, N = 2; min 268.31, max 268.34)

SVT-AV1


SVT-AV1 1.6, Frames Per Second (more is better):

  Encoder Mode: Preset 12 - Input: Bosphorus 1080p
    a: 154.60  (SE ± 0.24, N = 2; min 154.36, max 154.83)
    b: 157.69  (SE ± 0.70, N = 2; min 157.00, max 158.39)

  Encoder Mode: Preset 13 - Input: Bosphorus 1080p
    a: 202.85  (SE ± 0.66, N = 2; min 202.19, max 203.51)
    b: 206.77  (SE ± 0.84, N = 2; min 205.93, max 207.60)

  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

197 Results Shown

Timed LLVM Compilation:
  Unix Makefiles
  Ninja
Timed Godot Game Engine Compilation
Apache IoTDB:
  500 - 100 - 500:
    Average Latency
    point/sec
Xonotic
Build2
VVenC
Xonotic:
  1920 x 1080 - Ultra
  1920 x 1080 - High
Intel Open Image Denoise
libxsmm
VVenC
Embree:
  Pathtracer - Crown
  Pathtracer - Asian Dragon Obj
Apache IoTDB:
  500 - 100 - 200:
    Average Latency
    point/sec
Embree
Xonotic
Apache IoTDB:
  200 - 100 - 500:
    Average Latency
    point/sec
OSPRay
Embree
Intel Open Image Denoise:
  RT.hdr_alb_nrm.3840x2160 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
NCNN:
  Vulkan GPU - FastestDet
  Vulkan GPU - vision_transformer
  Vulkan GPU - regnety_400m
  Vulkan GPU - squeezenet_ssd
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - resnet50
  Vulkan GPU - alexnet
  Vulkan GPU - resnet18
  Vulkan GPU - vgg16
  Vulkan GPU - googlenet
  Vulkan GPU - blazeface
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - mnasnet
  Vulkan GPU - shufflenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - mobilenet
OSPRay
NCNN:
  CPU - FastestDet
  CPU - vision_transformer
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
Embree
SVT-AV1
VVenC
SQLite:
  4
  2
Embree
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    ms/batch
    items/sec
libxsmm
Apache Cassandra
Z3 Theorem Prover
libxsmm
OSPRay
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache IoTDB:
  500 - 1 - 500:
    Average Latency
    point/sec
OSPRay
Apache IoTDB:
  100 - 100 - 500:
    Average Latency
    point/sec
Redis 7.0.12 + memtier_benchmark:
  Redis - 500 - 1:10
  Redis - 500 - 1:5
Memcached
Redis 7.0.12 + memtier_benchmark:
  Redis - 50 - 1:10
  Redis - 100 - 1:10
  Redis - 100 - 1:5
  Redis - 50 - 1:5
Memcached:
  1:10
  1:100
OSPRay
SVT-AV1
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
VVenC
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
OSPRay
SQLite
Apache IoTDB:
  200 - 100 - 200:
    Average Latency
    point/sec
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream:
    ms/batch
    items/sec
dav1d
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
dav1d
Z3 Theorem Prover
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
SVT-AV1
dav1d
Opus Codec Encoding
Apache IoTDB:
  100 - 100 - 200:
    Average Latency
    point/sec
QuantLib
Stress-NG
Apache IoTDB:
  500 - 1 - 200:
    Average Latency
    point/sec
Stress-NG:
  Malloc
  Cloning
  MMAP
  MEMFD
  Zlib
  Pipe
  Atomic
  NUMA
  Pthread
  x86_64 RdRand
  Function Call
  System V Message Passing
  Socket Activity
  Matrix 3D Math
Liquid-DSP
Stress-NG
Liquid-DSP:
  4 - 256 - 512
  2 - 256 - 512
  1 - 256 - 32
Stress-NG:
  AVL Tree
  Floating Point
  Hash
Liquid-DSP:
  1 - 256 - 512
  2 - 256 - 57
  2 - 256 - 32
  1 - 256 - 57
Stress-NG
Liquid-DSP
Stress-NG:
  Vector Shuffle
  CPU Cache
Liquid-DSP:
  8 - 256 - 57
  4 - 256 - 57
  4 - 256 - 32
Stress-NG:
  Forking
  Mutex
  Glibc Qsort Data Sorting
  Matrix Math
  CPU Stress
  SENDFILE
  Crypto
  Wide Vector Math
  Poll
  Glibc C String Functions
  Fused Multiply-Add
  Context Switching
  Vector Math
  Semaphores
  Futex
Apache IoTDB:
  200 - 1 - 500:
    Average Latency
    point/sec
  200 - 1 - 200:
    Average Latency
    point/sec
  100 - 1 - 500:
    Average Latency
    point/sec
SVT-AV1:
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
  Preset 8 - Bosphorus 1080p
Apache IoTDB:
  100 - 1 - 200:
    Average Latency
    point/sec
dav1d
vkpeak
SVT-AV1:
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p