newnew

Intel Core i7-1165G7 testing with a Dell 0GG9PT (3.15.0 BIOS) and Intel Xe TGL GT2 15GB on Ubuntu 23.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2308049-NE-NEWNEW95665
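As a minimal sketch of that workflow (assuming the Phoronix Test Suite is already installed and on PATH), the comparison command can be parameterized by the result ID shown above; running it downloads this result file, installs the same tests, and benchmarks the local system against runs a/b/c:

```shell
# The public ID of this result file on OpenBenchmarking.org.
RESULT_ID="2308049-NE-NEWNEW95665"

# Build the exact command the result page suggests. Echoed here as a
# dry run; remove the echo to actually install and run the tests.
CMD="phoronix-test-suite benchmark ${RESULT_ID}"
echo "${CMD}"
```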
Result Identifier | Date Run | Test Duration
a | August 03 2023 | 6 Hours, 23 Minutes
b | August 03 2023 | 6 Hours, 21 Minutes
c | August 03 2023 | 6 Hours, 43 Minutes
(average test duration: 6 Hours, 29 Minutes)



newnew Benchmarks (OpenBenchmarking.org / Phoronix Test Suite)

Processor: Intel Core i7-1165G7 @ 4.70GHz (4 Cores / 8 Threads)
Motherboard: Dell 0GG9PT (3.15.0 BIOS)
Chipset: Intel Tiger Lake-LP
Memory: 16GB
Disk: Kioxia KBG40ZNS256G NVMe 256GB
Graphics: Intel Xe TGL GT2 15GB (1300MHz)
Audio: Realtek ALC289
Network: Intel Wi-Fi 6 AX201
OS: Ubuntu 23.04
Kernel: 6.2.0-24-generic (x86_64)
Desktop: GNOME Shell 44.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 23.0.2
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1200

System Logs notes:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0xa6
- Thermald 2.5.2
- OpenJDK Runtime Environment (build 11.0.19+7-post-Ubuntu-0ubuntu123.04)
- Security mitigations: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (runs a/b/c, relative performance spanning roughly 100% to 107%): NCNN, VVenC, Apache Cassandra, Dragonflydb, Apache IoTDB, VkFFT, BRL-CAD, vkpeak, Timed GCC Compilation, VkResample
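The overview above condenses each run's results into a single relative score; the Phoronix Test Suite does this by normalizing each test against a baseline and taking a geometric mean. A minimal sketch of that aggregation (the normalized values below are illustrative, not taken from this result file):

```python
from math import prod

def geometric_mean(values):
    """Geometric mean: the n-th root of the product of n positive values."""
    n = len(values)
    return prod(values) ** (1.0 / n)

# Hypothetical normalized per-test scores for one run, where 1.0 is the
# baseline and higher is better (illustrative numbers only).
normalized = [1.00, 1.04, 0.98, 1.07]
overall = geometric_mean(normalized)
print(round(overall, 4))
```

A geometric mean is used rather than an arithmetic mean so that no single test with large absolute values dominates the composite score.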

Condensed results table for runs a, b, and c (Apache IoTDB, NCNN, Dragonflydb, VVenC, Apache Cassandra, VkFFT, BRL-CAD, Timed GCC Compilation, vkpeak, VkResample): the per-test results are charted individually below. Values from the condensed table not charted in this excerpt:

Timed GCC Compilation: Time To Compile (seconds, fewer is better): a: 2404.774 | b: 2404.320 | c: 2394.139
BRL-CAD: VGR Performance Metric (more is better): a: 52008 | b: 51967 | c: 51880
VkFFT: FFT + iFFT C2C Bluestein in single precision (Benchmark Score): a: 1033 | b: 1034 | c: 1035
VkFFT: FFT + iFFT C2C 1D batched in single precision: a: 7486 | b: 7482 | c: 7478
VkFFT: FFT + iFFT C2C 1D batched in half precision: a: 14246 | b: 14232 | c: 14241
VkFFT: FFT + iFFT C2C 1D batched in single precision, no reshuffling: a: 8176 | b: 8176 | c: 8183
vkpeak: fp32-vec4: a: 1478.58 | b: 1478.50 | c: 1479.04
vkpeak: fp32-scalar: a: 934.74 | b: 935.02 | c: 934.81
vkpeak: int16-scalar: a: 907.91 | b: 907.82 | c: 907.87
vkpeak: fp16-vec4: a: 3182.23 | b: 3182.01 | c: 3182.29
vkpeak: int16-vec4: a: 979.25 | b: 979.3 | c: 979.23
vkpeak: int32-scalar: a: 474.88 | b: 474.86 | c: 474.89
vkpeak: fp16-scalar: a: 2309.29 | b: 2309.19 | c: 2309.28
vkpeak: int32-vec4: a: 493.63 | b: 493.63 | c: 493.65
VkResample: 2x - Single: a: 100.009 | b: 100.010 | c: 100.009

Apache IoTDB

Apache IoTDB 1.1.2: Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, more is better)
a: 16383690.96 | b: 10851137.82 | c: 5724897.60

Apache IoTDB 1.1.2: Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency, fewer is better)
a: 284.41 (MAX: 2551.38) | b: 370.36 (MAX: 7948.19) | c: 757.34 (MAX: 21117.8)

Apache IoTDB 1.1.2: Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency, fewer is better)
a: 211.96 (MAX: 1593.95) | b: 200.97 (MAX: 1387.04) | c: 499.98 (MAX: 7223.32)

Apache IoTDB 1.1.2: Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, more is better)
a: 21059562.60 | b: 21812047.21 | c: 8804114.19

Apache IoTDB 1.1.2: Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, more is better)
a: 17866468.26 | b: 18637109.83 | c: 9365385.68

Apache IoTDB 1.1.2: Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency, fewer is better)
a: 91.67 (MAX: 7883.93) | b: 85.64 (MAX: 1651.27) | c: 164.99 (MAX: 11718.16)

NCNN

NCNN is a high-performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517: Target: CPU - Model: blazeface (ms, fewer is better)
a: 1.32 (SE +/- 0.03, N = 2; MIN: 1.2 / MAX: 4.08; runs: 1.29 / 1.32 / 1.34)
b: 0.94 (SE +/- 0.02, N = 2; MIN: 0.9 / MAX: 1.09; runs: 0.92 / 0.94 / 0.95)
c: 0.96 (SE +/- 0.03, N = 2; MIN: 0.9 / MAX: 5.52; runs: 0.93 / 0.96 / 0.98)
Compiled with: g++ -O3 -rdynamic -lgomp -lpthread
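The SE figures in these results come from repeated runs (N = 2 here). As a sketch of how a standard error of the mean is derived from per-run times (sample standard deviation divided by the square root of N, which for two samples reduces to half the absolute difference), using run a's blazeface min/max of 1.29 and 1.34 from above:

```python
from math import sqrt
from statistics import stdev

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return stdev(samples) / sqrt(len(samples))

# Run a of CPU blazeface: two runs at 1.29 ms and 1.34 ms.
# |1.29 - 1.34| / 2 = 0.025, which the result page rounds to 0.03.
se = standard_error([1.29, 1.34])
print(round(se, 2))
```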

NCNN 20230517: Target: CPU - Model: regnety_400m (ms, fewer is better)
a: 11.76 (SE +/- 0.10, N = 2; MIN: 11.26 / MAX: 22.16; runs: 11.66 / 11.76 / 11.86)
b: 8.61 (SE +/- 0.03, N = 2; MIN: 8.21 / MAX: 17.64; runs: 8.58 / 8.61 / 8.63)
c: 8.39 (SE +/- 0.01, N = 2; MIN: 8.14 / MAX: 18.42; runs: 8.38 / 8.39 / 8.4)

NCNN 20230517: Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
a: 8.85 (SE +/- 0.29, N = 2; MIN: 6.6 / MAX: 21.28; runs: 8.56 / 8.85 / 9.13)
b: 6.92 (SE +/- 0.04, N = 2; MIN: 6.5 / MAX: 16.04; runs: 6.87 / 6.92 / 6.96)
c: 6.64 (SE +/- 0.05, N = 2; MIN: 6.38 / MAX: 17.34; runs: 6.59 / 6.64 / 6.69)

NCNN 20230517: Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
a: 11.19 (SE +/- 0.30, N = 2; MIN: 10.38 / MAX: 21.88; runs: 10.89 / 11.19 / 11.48)
b: 8.57 (SE +/- 0.01, N = 2; MIN: 8.18 / MAX: 21.08; runs: 8.56 / 8.57 / 8.58)
c: 9.62 (SE +/- 1.13, N = 2; MIN: 8.23 / MAX: 23.19; runs: 8.49 / 9.62 / 10.74)

Apache IoTDB

Apache IoTDB 1.1.2: Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, more is better)
a: 22588608.67 | b: 21253793.53 | c: 17612257.13

NCNN


NCNN 20230517: Target: CPU - Model: googlenet (ms, fewer is better)
a: 15.82 (SE +/- 0.07, N = 2; MIN: 15.12 / MAX: 28.62; runs: 15.75 / 15.82 / 15.88)
b: 12.78 (SE +/- 0.20, N = 2; MIN: 11.9 / MAX: 22.46; runs: 12.58 / 12.78 / 12.98)
c: 12.42 (SE +/- 0.27, N = 2; MIN: 11.68 / MAX: 23.37; runs: 12.14 / 12.42 / 12.69)

NCNN 20230517: Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
a: 1.26 (SE +/- 0.02, N = 2; MIN: 1.16 / MAX: 4.12; runs: 1.24 / 1.26 / 1.27)
b: 0.99 (SE +/- 0.05, N = 2; MIN: 0.91 / MAX: 3.77; runs: 0.94 / 0.99 / 1.04)
c: 1.08 (SE +/- 0.14, N = 2; MIN: 0.91 / MAX: 3.75; runs: 0.94 / 1.08 / 1.22)

NCNN 20230517: Target: CPU - Model: resnet18 (ms, fewer is better)
a: 11.17 (SE +/- 0.04, N = 2; MIN: 10.66 / MAX: 21.33; runs: 11.13 / 11.17 / 11.21)
b: 9.25 (SE +/- 0.11, N = 2; MIN: 8.51 / MAX: 24.74; runs: 9.13 / 9.25 / 9.36)
c: 9.19 (SE +/- 0.50, N = 2; MIN: 8.38 / MAX: 20.09; runs: 8.69 / 9.19 / 9.68)

NCNN 20230517: Target: CPU - Model: mnasnet (ms, fewer is better)
a: 4.55 (SE +/- 0.63, N = 2; MIN: 3.81 / MAX: 15.97; runs: 3.92 / 4.55 / 5.18)
b: 3.86 (SE +/- 0.07, N = 2; MIN: 3.69 / MAX: 12.48; runs: 3.79 / 3.86 / 3.92)
c: 3.83 (SE +/- 0.01, N = 2; MIN: 3.68 / MAX: 12.77; runs: 3.82 / 3.83 / 3.84)

NCNN 20230517: Target: CPU - Model: alexnet (ms, fewer is better)
a: 8.61 (SE +/- 0.04, N = 2; MIN: 8.27 / MAX: 20.05; runs: 8.56 / 8.61 / 8.65)
b: 7.43 (SE +/- 0.09, N = 2; MIN: 6.9 / MAX: 16.46; runs: 7.33 / 7.43 / 7.52)
c: 7.30 (SE +/- 0.25, N = 2; MIN: 6.79 / MAX: 17.09; runs: 7.05 / 7.3 / 7.55)

NCNN 20230517: Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
a: 4.03 (SE +/- 0.57, N = 2; MIN: 3.35 / MAX: 10.89; runs: 3.46 / 4.03 / 4.6)
b: 3.47 (SE +/- 0.01, N = 2; MIN: 3.32 / MAX: 12.41; runs: 3.46 / 3.47 / 3.47)
c: 3.47 (SE +/- 0.02, N = 2; MIN: 3.36 / MAX: 13.07; runs: 3.45 / 3.47 / 3.49)

NCNN 20230517: Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
a: 15.86 (SE +/- 0.02, N = 2; MIN: 15.16 / MAX: 26.57; runs: 15.84 / 15.86 / 15.88)
b: 14.09 (SE +/- 1.53, N = 2; MIN: 12.03 / MAX: 25.74; runs: 12.56 / 14.09 / 15.62)
c: 14.01 (SE +/- 1.79, N = 2; MIN: 11.83 / MAX: 25.65; runs: 12.22 / 14.01 / 15.79)

Apache IoTDB

Apache IoTDB 1.1.2: Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency, fewer is better)
a: 68.48 (MAX: 1696.56) | b: 74.08 (MAX: 1547.17) | c: 76.87 (MAX: 9168.34)

Dragonflydb

Dragonfly is an open-source in-memory database server positioned as a "modern Redis replacement," aiming to be the fastest memory store while remaining compatible with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2: Clients Per Thread: 20 - Set To Get Ratio: 1:100 (Ops/sec, more is better)
a: 1504800.26 (SE +/- 66621.90, N = 2; runs: 1438178.36 / 1504800.26 / 1571422.16)
b: 1532061.63 (SE +/- 71249.46, N = 2; runs: 1460812.17 / 1532061.63 / 1603311.09)
c: 1655250.86 (SE +/- 168860.22, N = 2; runs: 1486390.64 / 1655250.86 / 1824111.08)
Compiled with: g++ -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

NCNN


NCNN 20230517: Target: Vulkan GPU - Model: vision_transformer (ms, fewer is better)
a: 207.46 (SE +/- 8.01, N = 2; MIN: 169.78 / MAX: 244.36; runs: 199.45 / 207.46 / 215.46)
b: 193.90 (SE +/- 0.73, N = 2; MIN: 167.64 / MAX: 231.56; runs: 193.17 / 193.9 / 194.63)
c: 188.79 (SE +/- 0.68, N = 2; MIN: 169.1 / MAX: 243.33; runs: 188.11 / 188.79 / 189.47)

NCNN 20230517: Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
a: 11.18 (SE +/- 0.02, N = 2; MIN: 10.62 / MAX: 21.34; runs: 11.15 / 11.18 / 11.2)
b: 10.22 (SE +/- 1.05, N = 2; MIN: 8.49 / MAX: 20.91; runs: 9.17 / 10.22 / 11.27)
c: 10.40 (SE +/- 0.82, N = 2; MIN: 8.57 / MAX: 21.1; runs: 9.58 / 10.4 / 11.22)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe) and is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9: Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, more is better)
a: 12.80 (SE +/- 0.08, N = 2; runs: 12.71 / 12.8 / 12.88)
b: 12.64 (SE +/- 0.11, N = 2; runs: 12.53 / 12.64 / 12.74)
c: 13.80 (SE +/- 0.79, N = 2; runs: 13.01 / 13.8 / 14.59)
Compiled with: g++ -O3 -flto=auto -fno-fat-lto-objects

NCNN


NCNN 20230517: Target: CPU - Model: vgg16 (ms, fewer is better)
a: 59.09 (SE +/- 0.11, N = 2; MIN: 57.54 / MAX: 76.29; runs: 58.98 / 59.09 / 59.2)
b: 54.25 (SE +/- 1.96, N = 2; MIN: 50.32 / MAX: 71.7; runs: 52.29 / 54.25 / 56.2)
c: 54.13 (SE +/- 2.37, N = 2; MIN: 48.55 / MAX: 72.36; runs: 51.76 / 54.13 / 56.5)

NCNN 20230517: Target: CPU - Model: resnet50 (ms, fewer is better)
a: 28.74 (SE +/- 0.18, N = 2; MIN: 27.89 / MAX: 40; runs: 28.56 / 28.74 / 28.92)
b: 26.57 (SE +/- 2.23, N = 2; MIN: 23.28 / MAX: 38.91; runs: 24.34 / 26.57 / 28.8)
c: 26.57 (SE +/- 2.20, N = 2; MIN: 23.37 / MAX: 38.7; runs: 24.37 / 26.57 / 28.76)

NCNN 20230517: Target: CPU - Model: vision_transformer (ms, fewer is better)
a: 189.40 (SE +/- 4.42, N = 2; MIN: 169.61 / MAX: 225.69; runs: 184.98 / 189.4 / 193.82)
b: 204.36 (SE +/- 3.85, N = 2; MIN: 168.8 / MAX: 231.88; runs: 200.51 / 204.36 / 208.21)
c: 200.23 (SE +/- 5.48, N = 2; MIN: 169.16 / MAX: 234.64; runs: 194.75 / 200.23 / 205.71)

NCNN 20230517: Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
a: 12.95 (SE +/- 0.03, N = 2; MIN: 12.67 / MAX: 23.3; runs: 12.92 / 12.95 / 12.98)
b: 12.23 (SE +/- 0.86, N = 2; MIN: 10.77 / MAX: 31.43; runs: 11.37 / 12.23 / 13.08)
c: 12.05 (SE +/- 0.97, N = 2; MIN: 10.72 / MAX: 23.89; runs: 11.08 / 12.05 / 13.02)

NCNN 20230517: Target: Vulkan GPU - Model: alexnet (ms, fewer is better)
a: 8.68 (SE +/- 0.04, N = 2; MIN: 8.25 / MAX: 19.5; runs: 8.64 / 8.68 / 8.71)
b: 8.08 (SE +/- 0.52, N = 2; MIN: 7.01 / MAX: 17.55; runs: 7.56 / 8.08 / 8.6)
c: 8.15 (SE +/- 0.55, N = 2; MIN: 7.01 / MAX: 17.98; runs: 7.6 / 8.15 / 8.69)

Apache IoTDB

Apache IoTDB 1.1.2: Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency, fewer is better)
a: 12.00 (MAX: 771.24) | b: 12.52 (MAX: 794.23) | c: 12.78 (MAX: 800.68)

Dragonflydb


Dragonflydb 1.6.2: Clients Per Thread: 10 - Set To Get Ratio: 1:5 (Ops/sec, more is better)
a: 1331051.90 (SE +/- 124523.27, N = 2; runs: 1206528.63 / 1331051.9 / 1455575.16)
b: 1310891.48 (SE +/- 115490.54, N = 2; runs: 1195400.94 / 1310891.48 / 1426382.01)
c: 1394989.58 (SE +/- 156954.29, N = 2; runs: 1238035.29 / 1394989.58 / 1551943.86)

Dragonflydb 1.6.2: Clients Per Thread: 20 - Set To Get Ratio: 1:10 (Ops/sec, more is better)
a: 1553942.49 (SE +/- 93795.63, N = 2; runs: 1460146.86 / 1553942.49 / 1647738.12)
b: 1532531.22 (SE +/- 144264.17, N = 2; runs: 1388267.05 / 1532531.22 / 1676795.38)
c: 1620928.16 (SE +/- 153108.99, N = 2; runs: 1467819.17 / 1620928.16 / 1774037.14)

VVenC


VVenC 1.9: Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, more is better)
a: 3.680 (SE +/- 0.014, N = 2; runs: 3.67 / 3.68 / 3.69)
b: 3.781 (SE +/- 0.014, N = 2; runs: 3.77 / 3.78 / 3.79)
c: 3.853 (SE +/- 0.081, N = 2; runs: 3.77 / 3.85 / 3.93)

Apache IoTDB

Apache IoTDB 1.1.2: Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency, fewer is better)
a: 16.78 (MAX: 906.87) | b: 16.41 (MAX: 1017.33) | c: 16.03 (MAX: 1027.73)

Apache IoTDB 1.1.2: Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency, fewer is better)
a: 22.08 (MAX: 863.43) | b: 23.10 (MAX: 906.47) | c: 22.63 (MAX: 909.27)

Apache IoTDB 1.1.2: Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec, more is better)
a: 1035973.61 | b: 1007122.62 | c: 995398.43

Dragonflydb


Dragonflydb 1.6.2: Clients Per Thread: 10 - Set To Get Ratio: 1:10 (Ops/sec, more is better)
a: 1283252.40 (SE +/- 119195.59, N = 2; runs: 1164056.81 / 1283252.4 / 1402447.99)
b: 1275557.39 (SE +/- 90004.70, N = 2; runs: 1185552.69 / 1275557.39 / 1365562.08)
c: 1325104.13 (SE +/- 143882.01, N = 2; runs: 1181222.12 / 1325104.13 / 1468986.14)

Dragonflydb 1.6.2: Clients Per Thread: 20 - Set To Get Ratio: 1:5 (Ops/sec, more is better)
a: 1620397.17 (SE +/- 138897.05, N = 2; runs: 1481500.12 / 1620397.17 / 1759294.22)
b: 1572424.89 (SE +/- 146499.45, N = 2; runs: 1425925.44 / 1572424.89 / 1718924.33)
c: 1561730.49 (SE +/- 157442.25, N = 2; runs: 1404288.24 / 1561730.49 / 1719172.73)

Dragonflydb 1.6.2: Clients Per Thread: 50 - Set To Get Ratio: 1:5 (Ops/sec, more is better)
a: 1509161.50 (SE +/- 35029.57, N = 2; runs: 1474131.93 / 1509161.5 / 1544191.07)
b: 1546553.32 (SE +/- 81765.99, N = 2; runs: 1464787.33 / 1546553.32 / 1628319.3)
c: 1559859.23 (SE +/- 148775.44, N = 2; runs: 1411083.79 / 1559859.23 / 1708634.66)

Dragonflydb 1.6.2: Clients Per Thread: 50 - Set To Get Ratio: 1:100 (Ops/sec, more is better)
a: 1590238.43 (SE +/- 113274.82, N = 2; runs: 1476963.61 / 1590238.43 / 1703513.25)
b: 1538729.61 (SE +/- 98659.37, N = 2; runs: 1440070.24 / 1538729.61 / 1637388.97)
c: 1548401.54 (SE +/- 65233.84, N = 2; runs: 1483167.7 / 1548401.54 / 1613635.38)

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.1.3: Test: Writes (Op/s, more is better)
a: 39864 (SE +/- 2472.50, N = 2; runs: 37391 / 39863.5 / 42336)
b: 40823 (SE +/- 1520.50, N = 2; runs: 39302 / 40822.5 / 42343)
c: 39536 (SE +/- 2294.50, N = 2; runs: 37241 / 39535.5 / 41830)

VkFFT

VkFFT 1.2.31: Test: FFT + iFFT C2C multidimensional in single precision (Benchmark Score, more is better)
a: 4944 (SE +/- 17.50, N = 2; runs: 4926 / 4943.5 / 4961)
b: 4937 (SE +/- 6.50, N = 2; runs: 4930 / 4936.5 / 4943)
c: 5087 (SE +/- 9.00, N = 2; runs: 5078 / 5087 / 5096)
Compiled with: g++ -O3

Dragonflydb


Dragonflydb 1.6.2: Clients Per Thread: 10 - Set To Get Ratio: 1:100 (Ops/sec, more is better)
a: 1267626.25 (SE +/- 117619.34, N = 2; runs: 1150006.91 / 1267626.25 / 1385245.59)
b: 1305698.48 (SE +/- 95110.31, N = 2; runs: 1210588.17 / 1305698.48 / 1400808.78)
c: 1297415.25 (SE +/- 141978.48, N = 2; runs: 1155436.77 / 1297415.25 / 1439393.72)

Apache IoTDB

Apache IoTDB 1.1.2: Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, more is better)
a: 1696504.02 | b: 1651562.36 | c: 1659495.29

NCNN


NCNN 20230517: Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
a: 6.99 (SE +/- 0.02, N = 2; MIN: 6.55 / MAX: 16.31; runs: 6.97 / 6.99 / 7.01)
b: 6.98 (SE +/- 0.02, N = 2; MIN: 6.57 / MAX: 17.79; runs: 6.96 / 6.98 / 6.99)
c: 6.81 (SE +/- 0.15, N = 2; MIN: 6.42 / MAX: 17.77; runs: 6.66 / 6.81 / 6.95)

VVenC


VVenC 1.9: Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, more is better)
a: 5.140 (SE +/- 0.000, N = 2; runs: 5.14 / 5.14 / 5.14)
b: 5.124 (SE +/- 0.008, N = 2; runs: 5.12 / 5.12 / 5.13)
c: 5.249 (SE +/- 0.115, N = 2; runs: 5.13 / 5.25 / 5.36)

Apache IoTDB

Apache IoTDB 1.1.2 (Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200) - point/sec, more is better
  a: 640563.22
  b: 644833.00
  c: 655887.03

NCNN


NCNN 20230517 (Target: Vulkan GPU - Model: FastestDet) - ms, fewer is better
  a: 3.97 (SE +/- 0.09, N = 2; runs 3.88 - 4.06; sample min 3.70 / max 15.04)
  b: 3.94 (SE +/- 0.00, N = 2; runs 3.93 - 3.94; sample min 3.73 / max 12.78)
  c: 4.03 (SE +/- 0.10, N = 2; runs 3.93 - 4.13; sample min 3.74 / max 10.51)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 (Target: CPU-v3-v3 - Model: mobilenet-v3) - ms, fewer is better
  a: 3.58 (SE +/- 0.03, N = 2; runs 3.55 - 3.61; sample min 3.35 / max 13.78)
  b: 3.59 (SE +/- 0.04, N = 2; runs 3.55 - 3.62; sample min 3.37 / max 14.17)
  c: 3.51 (SE +/- 0.00, N = 2; runs 3.50 - 3.51; sample min 3.32 / max 11.83)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 (Target: CPU - Model: FastestDet) - ms, fewer is better
  a: 3.97 (SE +/- 0.00, N = 2; runs 3.97 - 3.97; sample min 3.76 / max 14.59)
  b: 3.91 (SE +/- 0.00, N = 2; runs 3.91 - 3.91; sample min 3.71 / max 13.70)
  c: 3.89 (SE +/- 0.03, N = 2; runs 3.86 - 3.91; sample min 3.67 / max 13.73)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 (Target: Vulkan GPU - Model: mobilenet) - ms, fewer is better
  a: 21.07 (SE +/- 0.19, N = 2; runs 20.88 - 21.25; sample min 20.27 / max 31.83)
  b: 20.65 (SE +/- 0.03, N = 2; runs 20.62 - 20.67; sample min 20.22 / max 31.66)
  c: 20.69 (SE +/- 0.03, N = 2; runs 20.66 - 20.71; sample min 20.24 / max 31.31)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 (Target: Vulkan GPU - Model: vgg16) - ms, fewer is better
  a: 59.03 (SE +/- 0.15, N = 2; runs 58.88 - 59.18; sample min 57.60 / max 79.49)
  b: 57.88 (SE +/- 1.40, N = 2; runs 56.48 - 59.28; sample min 52.00 / max 70.84)
  c: 58.28 (SE +/- 0.65, N = 2; runs 57.63 - 58.93; sample min 52.14 / max 69.93)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Dragonflydb

Dragonfly is an open-source in-memory database server positioned as a "modern Redis replacement": it aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. Dragonfly is benchmarked here with memtier_benchmark, a NoSQL Redis/Memcache traffic-generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
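A standalone memtier_benchmark run against a local Dragonfly instance, roughly matching one of the configurations below (the thread count and host/port are assumptions, not values taken from this result file), could look like:

```shell
# Drive a Dragonfly server listening on the default Redis port:
# 4 worker threads x 50 clients per thread, with a 1:10 SET:GET ratio
memtier_benchmark -s 127.0.0.1 -p 6379 --threads=4 --clients=50 --ratio=1:10
```

memtier_benchmark prints aggregate Ops/sec along with per-command latency percentiles; the graphs below report the Ops/sec figure.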

Dragonflydb 1.6.2 (Clients Per Thread: 50 - Set To Get Ratio: 1:10) - Ops/sec, more is better
  a: 1559408.51 (SE +/- 99951.62, N = 2; runs 1459456.89 - 1659360.12)
  b: 1545294.47 (SE +/- 111709.66, N = 2; runs 1433584.81 - 1657004.13)
  c: 1574780.45 (SE +/- 128692.32, N = 2; runs 1446088.13 - 1703472.77)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

VkFFT

VkFFT 1.2.31 (Test: FFT + iFFT R2C / C2R) - Benchmark Score, more is better
  a: 5585 (SE +/- 3.50, N = 2; runs 5581 - 5588)
  b: 5589 (SE +/- 30.50, N = 2; runs 5558 - 5619)
  c: 5688 (SE +/- 69.00, N = 2; runs 5619 - 5757)
  1. (CXX) g++ options: -O3

NCNN


NCNN 20230517 (Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3) - ms, fewer is better
  a: 3.60 (SE +/- 0.03, N = 2; runs 3.57 - 3.62; sample min 3.38 / max 12.52)
  b: 3.59 (SE +/- 0.02, N = 2; runs 3.57 - 3.60; sample min 3.40 / max 12.21)
  c: 3.54 (SE +/- 0.01, N = 2; runs 3.53 - 3.55; sample min 3.33 / max 11.68)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 (Target: Vulkan GPU - Model: yolov4-tiny) - ms, fewer is better
  a: 29.60 (SE +/- 0.26, N = 2; runs 29.34 - 29.86; sample min 28.60 / max 44.50)
  b: 29.23 (SE +/- 0.15, N = 2; runs 29.08 - 29.38; sample min 28.34 / max 41.02)
  c: 29.11 (SE +/- 0.05, N = 2; runs 29.05 - 29.16; sample min 28.38 / max 40.39)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 (Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500) - Average Latency, fewer is better
  a: 24.98 (max 1064.56)
  b: 25.14 (max 1026.24)
  c: 24.73 (max 998.23)

VVenC


VVenC 1.9 (Video Input: Bosphorus 4K - Video Preset: Fast) - Frames Per Second, more is better
  a: 1.654 (SE +/- 0.037, N = 2; runs 1.62 - 1.69)
  b: 1.650 (SE +/- 0.028, N = 2; runs 1.62 - 1.68)
  c: 1.677 (SE +/- 0.036, N = 2; runs 1.64 - 1.71)
  1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

NCNN


NCNN 20230517 (Target: Vulkan GPU - Model: mnasnet) - ms, fewer is better
  a: 3.93 (SE +/- 0.01, N = 2; runs 3.92 - 3.93; sample min 3.73 / max 14.41)
  b: 3.89 (SE +/- 0.03, N = 2; runs 3.86 - 3.91; sample min 3.74 / max 12.97)
  c: 3.87 (SE +/- 0.04, N = 2; runs 3.83 - 3.91; sample min 3.71 / max 12.28)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 (Target: CPU-v2-v2 - Model: mobilenet-v2) - ms, fewer is better
  a: 4.60 (SE +/- 0.04, N = 2; runs 4.55 - 4.64; sample min 4.38 / max 12.37)
  b: 4.60 (SE +/- 0.02, N = 2; runs 4.58 - 4.62; sample min 4.34 / max 14.83)
  c: 4.53 (SE +/- 0.02, N = 2; runs 4.51 - 4.54; sample min 4.30 / max 15.17)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 (Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500) - point/sec, more is better
  a: 1265418.91
  b: 1247796.00
  c: 1266283.94

NCNN


NCNN 20230517 (Target: CPU - Model: yolov4-tiny) - ms, fewer is better
  a: 29.44 (SE +/- 0.42, N = 2; runs 29.02 - 29.85; sample min 28.43 / max 46.03)
  b: 29.03 (SE +/- 0.03, N = 2; runs 29.00 - 29.06; sample min 28.45 / max 40.14)
  c: 29.25 (SE +/- 0.03, N = 2; runs 29.22 - 29.28; sample min 28.42 / max 40.16)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 (Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2) - ms, fewer is better
  a: 4.66 (SE +/- 0.00, N = 2; runs 4.66 - 4.66; sample min 4.46 / max 14.54)
  b: 4.63 (SE +/- 0.03, N = 2; runs 4.60 - 4.65; sample min 4.44 / max 14.09)
  c: 4.60 (SE +/- 0.04, N = 2; runs 4.55 - 4.64; sample min 4.38 / max 14.00)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 (Target: Vulkan GPU - Model: resnet50) - ms, fewer is better
  a: 28.84 (SE +/- 0.19, N = 2; runs 28.65 - 29.02; sample min 27.92 / max 41.94)
  b: 28.71 (SE +/- 0.12, N = 2; runs 28.59 - 28.82; sample min 27.90 / max 39.22)
  c: 28.78 (SE +/- 0.15, N = 2; runs 28.63 - 28.93; sample min 27.96 / max 39.21)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) open-source compiler. Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 13.2 (Time To Compile) - Seconds, fewer is better
  a: 2404.77 (SE +/- 0.32, N = 2; runs 2404.45 - 2405.10)
  b: 2404.32 (SE +/- 2.12, N = 2; runs 2402.20 - 2406.44)
  c: 2394.14 (SE +/- 1.40, N = 2; runs 2392.74 - 2395.54)

NCNN


NCNN 20230517 (Target: Vulkan GPU - Model: squeezenet_ssd) - ms, fewer is better
  a: 13.05 (SE +/- 0.04, N = 2; runs 13.01 - 13.08; sample min 12.70 / max 23.56)
  b: 13.01 (SE +/- 0.01, N = 2; runs 12.99 - 13.02; sample min 12.67 / max 27.85)
  c: 13.00 (SE +/- 0.03, N = 2; runs 12.97 - 13.03; sample min 12.68 / max 23.98)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 (Target: Vulkan GPU - Model: shufflenet-v2) - ms, fewer is better
  a: 3.50 (SE +/- 0.01, N = 2; runs 3.48 - 3.51; sample min 3.35 / max 12.33)
  b: 3.49 (SE +/- 0.01, N = 2; runs 3.48 - 3.49; sample min 3.33 / max 13.62)
  c: 3.49 (SE +/- 0.02, N = 2; runs 3.47 - 3.51; sample min 3.35 / max 11.83)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
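BRL-CAD's benchmark mode renders a fixed set of reference scenes with its rt raytracer and reports the VGR performance metric shown below. With BRL-CAD installed, it is typically invoked via the bundled benchmark script (the exact script name and location can vary by packaging):

```shell
# Run BRL-CAD's standard raytrace benchmark suite;
# it prints per-scene timings and a summary VGR performance number
benchmark
```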

BRL-CAD 7.36 (VGR Performance Metric) - more is better
  a: 52008 (SE +/- 16.50, N = 2; runs 51991 - 52024)
  b: 51967 (SE +/- 147.00, N = 2; runs 51820 - 52114)
  c: 51880 (SE +/- 34.00, N = 2; runs 51846 - 51914)
  1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

VkFFT

VkFFT 1.2.31 (Test: FFT + iFFT C2C Bluestein in single precision) - Benchmark Score, more is better
  a: 1033 (SE +/- 0.50, N = 2; runs 1032 - 1033)
  b: 1034 (SE +/- 1.00, N = 2; runs 1033 - 1035)
  c: 1035 (SE +/- 1.00, N = 2; runs 1034 - 1036)
  1. (CXX) g++ options: -O3

VkFFT 1.2.31 (Test: FFT + iFFT C2C 1D batched in single precision) - Benchmark Score, more is better
  a: 7486 (SE +/- 14.00, N = 2; runs 7472 - 7500)
  b: 7482 (SE +/- 4.00, N = 2; runs 7478 - 7486)
  c: 7478 (SE +/- 3.50, N = 2; runs 7474 - 7481)
  1. (CXX) g++ options: -O3

VkFFT 1.2.31 (Test: FFT + iFFT C2C 1D batched in half precision) - Benchmark Score, more is better
  a: 14246 (SE +/- 5.00, N = 2; runs 14241 - 14251)
  b: 14232 (SE +/- 1.50, N = 2; runs 14230 - 14233)
  c: 14241 (SE +/- 4.00, N = 2; runs 14237 - 14245)
  1. (CXX) g++ options: -O3

VkFFT 1.2.31 (Test: FFT + iFFT C2C 1D batched in single precision, no reshuffling) - Benchmark Score, more is better
  a: 8176 (SE +/- 4.00, N = 2; runs 8172 - 8180)
  b: 8176 (SE +/- 0.50, N = 2; runs 8175 - 8176)
  c: 8183 (SE +/- 5.00, N = 2; runs 8178 - 8188)
  1. (CXX) g++ options: -O3

vkpeak

Vkpeak is a Vulkan compute benchmark inspired by OpenCL's clpeak. It measures peak Vulkan compute throughput for FP16 / FP32 / FP64 / INT16 / INT32 in both scalar and vec4 variants. Learn more via the OpenBenchmarking.org test page.
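vkpeak itself is a single binary that takes the index of the Vulkan device to test; a direct run on the first device (device 0 is an assumption for a single-GPU system like this one) looks like:

```shell
# Measure peak fp16/fp32/fp64 and int16/int32 throughput on Vulkan device 0
./vkpeak 0
```

It reports GFLOPS for the floating-point tests and GIOPS for the integer tests, matching the units in the graphs below.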

vkpeak 20230730 (fp32-vec4) - GFLOPS, more is better
  a: 1478.58 (SE +/- 0.52, N = 2; runs 1478.05 - 1479.10)
  b: 1478.50 (SE +/- 0.51, N = 2; runs 1477.99 - 1479.01)
  c: 1479.04 (SE +/- 0.00, N = 2; runs 1479.04 - 1479.04)

vkpeak 20230730 (fp32-scalar) - GFLOPS, more is better
  a: 934.74 (SE +/- 0.27, N = 2; runs 934.46 - 935.01)
  b: 935.02 (SE +/- 0.00, N = 2; runs 935.01 - 935.02)
  c: 934.81 (SE +/- 0.29, N = 2; runs 934.52 - 935.09)

vkpeak 20230730 (int16-scalar) - GIOPS, more is better
  a: 907.91 (SE +/- 0.10, N = 2; runs 907.81 - 908.01)
  b: 907.82 (SE +/- 0.01, N = 2; runs 907.81 - 907.82)
  c: 907.87 (SE +/- 0.03, N = 2; runs 907.84 - 907.90)

vkpeak 20230730 (fp16-vec4) - GFLOPS, more is better
  a: 3182.23 (SE +/- 0.11, N = 2; runs 3182.12 - 3182.33)
  b: 3182.01 (SE +/- 0.04, N = 2; runs 3181.97 - 3182.04)
  c: 3182.29 (SE +/- 0.12, N = 2; runs 3182.17 - 3182.41)

vkpeak 20230730 (int16-vec4) - GIOPS, more is better
  a: 979.25 (SE +/- 0.08, N = 2; runs 979.17 - 979.33)
  b: 979.25 (SE +/- 0.05, N = 2; runs 979.20 - 979.30)
  c: 979.23 (SE +/- 0.00, N = 2; runs 979.22 - 979.23)

vkpeak 20230730 (int32-scalar) - GIOPS, more is better
  a: 474.88 (SE +/- 0.03, N = 2; runs 474.85 - 474.91)
  b: 474.86 (SE +/- 0.00, N = 2; runs 474.86 - 474.86)
  c: 474.89 (SE +/- 0.00, N = 2; runs 474.88 - 474.89)

vkpeak 20230730 (fp16-scalar) - GFLOPS, more is better
  a: 2309.29 (SE +/- 0.07, N = 2; runs 2309.22 - 2309.36)
  b: 2309.19 (SE +/- 0.02, N = 2; runs 2309.17 - 2309.21)
  c: 2309.28 (SE +/- 0.04, N = 2; runs 2309.24 - 2309.31)

vkpeak 20230730 (int32-vec4) - GIOPS, more is better
  a: 493.63 (SE +/- 0.02, N = 2; runs 493.61 - 493.65)
  b: 493.63 (SE +/- 0.00, N = 2; runs 493.62 - 493.63)
  c: 493.65 (SE +/- 0.00, N = 2; runs 493.64 - 493.65)

VkResample

VkResample is a Vulkan-based image upscaling library built on VkFFT. The sample input upscales a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

VkResample 1.0 (Upscale: 2x - Precision: Single) - ms, fewer is better
  a: 100.01 (SE +/- 0.00, N = 2)
  b: 100.01 (SE +/- 0.00, N = 2)
  c: 100.01 (SE +/- 0.00, N = 2)
  1. (CXX) g++ options: -O3

NCNN


NCNN 20230517 (Target: CPU - Model: mobilenet) - ms, fewer is better
  a: 20.74 (SE +/- 0.05, N = 2; runs 20.69 - 20.79; sample min 20.17 / max 31.42)
  b: 20.74 (SE +/- 0.04, N = 2; runs 20.69 - 20.78; sample min 20.23 / max 31.90)
  c: 20.74 (SE +/- 0.07, N = 2; runs 20.66 - 20.81; sample min 20.26 / max 32.28)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

VkFFT

Test: FFT + iFFT C2C Bluestein benchmark in double precision

a, b, c: The test quit with a non-zero exit status.

Test: FFT + iFFT C2C 1D batched in double precision

a, b, c: The test quit with a non-zero exit status.

81 Results Shown

Apache IoTDB:
  200 - 100 - 500:
    point/sec
    Average Latency
  100 - 100 - 500:
    Average Latency
    point/sec
  200 - 100 - 200:
    point/sec
    Average Latency
NCNN:
  CPU - blazeface
  CPU - regnety_400m
  CPU - efficientnet-b0
  Vulkan GPU - regnety_400m
Apache IoTDB
NCNN:
  CPU - googlenet
  Vulkan GPU - blazeface
  CPU - resnet18
  CPU - mnasnet
  CPU - alexnet
  CPU - shufflenet-v2
  Vulkan GPU - googlenet
Apache IoTDB
Dragonflydb
NCNN:
  Vulkan GPU - vision_transformer
  Vulkan GPU - resnet18
VVenC
NCNN:
  CPU - vgg16
  CPU - resnet50
  CPU - vision_transformer
  CPU - squeezenet_ssd
  Vulkan GPU - alexnet
Apache IoTDB
Dragonflydb:
  10 - 1:5
  20 - 1:10
VVenC
Apache IoTDB:
  100 - 1 - 200
  200 - 1 - 500
  200 - 1 - 200
Dragonflydb:
  10 - 1:10
  20 - 1:5
  50 - 1:5
  50 - 1:100
Apache Cassandra
VkFFT
Dragonflydb
Apache IoTDB
NCNN
VVenC
Apache IoTDB
NCNN:
  Vulkan GPU - FastestDet
  CPU-v3-v3 - mobilenet-v3
  CPU - FastestDet
  Vulkan GPU - mobilenet
  Vulkan GPU - vgg16
Dragonflydb
VkFFT
NCNN:
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU - yolov4-tiny
Apache IoTDB
VVenC
NCNN:
  Vulkan GPU - mnasnet
  CPU-v2-v2 - mobilenet-v2
Apache IoTDB
NCNN:
  CPU - yolov4-tiny
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - resnet50
Timed GCC Compilation
NCNN:
  Vulkan GPU - squeezenet_ssd
  Vulkan GPU - shufflenet-v2
BRL-CAD
VkFFT:
  FFT + iFFT C2C Bluestein in single precision
  FFT + iFFT C2C 1D batched in single precision
  FFT + iFFT C2C 1D batched in half precision
  FFT + iFFT C2C 1D batched in single precision, no reshuffling
vkpeak:
  fp32-vec4
  fp32-scalar
  int16-scalar
  fp16-vec4
  int16-vec4
  int32-scalar
  fp16-scalar
  int32-vec4
VkResample
NCNN