new t

AMD Ryzen 7 7840U testing with a PHX Ray_PEU (V1.04 BIOS) and GFX1103 512MB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2308054-NE-NEWT4354621
Run Management

Result Identifier: a - Date: August 03 2023 - Test Duration: 23 Hours, 18 Minutes
Result Identifier: b - Date: August 04 2023 - Test Duration: 22 Hours, 35 Minutes


System Details

Processor: AMD Ryzen 7 7840U @ 3.30GHz (8 Cores / 16 Threads)
Motherboard: PHX Ray_PEU (V1.04 BIOS)
Chipset: AMD Device 14e8
Memory: 16GB
Disk: 1024GB Micron_3400_MTFDKBA1T0TFH
Graphics: GFX1103 512MB (2700/800MHz)
Audio: AMD Device 1640
Network: MEDIATEK Device 0616
OS: Ubuntu 22.04
Kernel: 6.4.0-060400-generic (x86_64)
Desktop: KDE Plasma 5.24.7
Display Server: X Server 1.21.1.4
OpenGL: 4.6 Mesa 22.2.5-0ubuntu0.1~22.04.3 (LLVM 15.0.7 DRM 3.52)
Vulkan: 1.3.224
Compiler: GCC 11.4.0
File-System: ext4
Screen Resolution: 3200x2000

New T Benchmarks - System Logs

- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0xa704101
- GLAMOR
- BAR1 / Visible vRAM Size: 512 MB - vBIOS Version: 113-PHXGENERIC-001
- OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
- Python 3.10.12
- Security: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

[Chart: a vs. b comparison of baseline-relative percentage deltas per test, ranging up to 214.1% (YugabyteDB CassandraBatchKeyValue, Batch 10 - 16 - 32); the individual results are presented below.]
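The percentages in the a vs. b comparison appear to be baseline-relative: the higher run's result divided by the lower run's, minus one. A minimal Python sketch (using the a and b values reported below for the YugabyteDB CassandraBatchKeyValue, Batch 10 - 16 - 32 write test) reproduces the headline 214.1% figure:

```python
# Baseline-relative delta as plotted in Phoronix comparison charts:
# the better result over the worse, expressed as a percentage gain.
def baseline_delta(x: float, y: float) -> float:
    """Percentage by which the higher of x and y exceeds the lower."""
    hi, lo = max(x, y), min(x, y)
    return (hi / lo - 1.0) * 100.0

# Run a vs. run b for YugabyteDB CassandraBatchKeyValue, Batch 10 - 16 - 32
# (Write Ops/sec): a = 7436.82, b = 2367.74.
delta = baseline_delta(7436.82, 2367.74)
print(round(delta, 1))  # 214.1
```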

[Table: condensed list of every test configuration (YugabyteDB, Dragonflydb, Redis + memtier_benchmark, NCNN, Apache IoTDB, Apache CouchDB, Cassandra, vkFFT, vkpeak, vvenc, CryptoPP, BRL-CAD, build-gcc) with the raw result values for runs a and b; the per-test graphs below present the same data with error estimates.]

YugabyteDB

YugabyteDB is a high-performance, cloud-native, transactional distributed SQL database. This test profile uses a single node of YugabyteDB on the local host. Learn more via the OpenBenchmarking.org test page.

YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 32 (Write Ops/sec, More Is Better)
a: 7436.82
b: 2367.74

YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 16 (Read Ops/sec, More Is Better)
a: 27192.29 (SE +/- 3357.44, N = 2; Min: 23834.85 / Max: 30549.73)
b: 76398.39 (SE +/- 11025.98, N = 2; Min: 65372.41 / Max: 87424.36)
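With only N = 2 samples per run, the reported average and standard error can be recovered from each run's Min and Max alone: the mean is their midpoint and the SE is half their spread. A quick sketch using the Batch 25 - 16 - 16 figures for run a:

```python
import statistics

# With N = 2 samples, mean = (x1 + x2) / 2 and
# SE = sample stdev / sqrt(N) = |x1 - x2| / 2.
samples_a = [23834.85, 30549.73]  # Min / Max for run a above

avg = statistics.mean(samples_a)
se = statistics.stdev(samples_a) / len(samples_a) ** 0.5

print(round(avg, 2))  # 27192.29
print(round(se, 2))   # 3357.44
```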

YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 16 (Read Ops/sec, More Is Better)
a: 42063.16 (SE +/- 23519.81, N = 2; Min: 18543.35 / Max: 65582.97)
b: 19662.00 (SE +/- 4479.15, N = 2; Min: 15182.85 / Max: 24141.14)

YugabyteDB 2.19 - Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 16 (Write Ops/sec, More Is Better)
a: 42448.33 (SE +/- 23107.42, N = 2; Min: 19340.91 / Max: 65555.74)
b: 25837.10 (SE +/- 10982.43, N = 2; Min: 14854.67 / Max: 36819.52)

YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 32 (Read Ops/sec, More Is Better)
a: 42619.03 (SE +/- 6339.81, N = 2; Min: 36279.22 / Max: 48958.83)
b: 68865.02 (SE +/- 11376.12, N = 2; Min: 57488.9 / Max: 80241.13)

YugabyteDB 2.19 - Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 32 (Write Ops/sec, More Is Better)
a: 32899.69 (SE +/- 6407.07, N = 2; Min: 26492.62 / Max: 39306.75)
b: 47496.53 (SE +/- 19065.84, N = 2; Min: 28430.69 / Max: 66562.37)

YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 32 (Read Ops/sec, More Is Better)
a: 32758.11 (SE +/- 2539.49, N = 2; Min: 30218.62 / Max: 35297.59)
b: 47245.32 (SE +/- 24447.26, N = 2; Min: 22798.06 / Max: 71692.57)

YugabyteDB 2.19 - Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 1 (Read Ops/sec, More Is Better)
a: 6087.76 (SE +/- 617.45, N = 2; Min: 5470.31 / Max: 6705.21)
b: 4312.29 (SE +/- 1032.62, N = 2; Min: 3279.67 / Max: 5344.9)

YugabyteDB 2.19 - Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 1 (Read Ops/sec, More Is Better)
a: 5042.93 (SE +/- 767.76, N = 2; Min: 4275.17 / Max: 5810.68)
b: 6576.40 (SE +/- 197.16, N = 2; Min: 6379.24 / Max: 6773.56)

YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 16 (Read Ops/sec, More Is Better)
a: 52428.30 (SE +/- 30353.05, N = 2; Min: 22075.25 / Max: 82781.34)
b: 66912.26 (SE +/- 10966.82, N = 2; Min: 55945.44 / Max: 77879.07)

YugabyteDB 2.19 - Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 1 (Write Ops/sec, More Is Better)
a: 4847.12 (SE +/- 986.84, N = 2; Min: 3860.28 / Max: 5833.96)
b: 6161.63 (SE +/- 335.88, N = 2; Min: 5825.75 / Max: 6497.5)

YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 32 (Read Ops/sec, More Is Better)
a: 24786.46 (SE +/- 2403.39, N = 2; Min: 22383.07 / Max: 27189.84)
b: 30787.11 (SE +/- 6788.20, N = 2; Min: 23998.91 / Max: 37575.31)

YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 32 - Num Threads Read: 1 (Read Ops/sec, More Is Better)
a: 4562.18 (SE +/- 1089.33, N = 2; Min: 3472.85 / Max: 5651.51)
b: 5622.46 (SE +/- 223.46, N = 2; Min: 5399 / Max: 5845.92)

YugabyteDB 2.19 - Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 16 (Write Ops/sec, More Is Better)
a: 5085.19 (SE +/- 668.21, N = 2; Min: 4416.98 / Max: 5753.39)
b: 6018.89 (SE +/- 379.10, N = 2; Min: 5639.79 / Max: 6397.99)

YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 16 (Write Ops/sec, More Is Better)
a: 48848.26 (SE +/- 4577.21, N = 2; Min: 44271.05 / Max: 53425.46)
b: 42497.31 (SE +/- 11007.88, N = 2; Min: 31489.43 / Max: 53505.18)

YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 1 (Read Ops/sec, More Is Better)
a: 5195.87 (SE +/- 1250.57, N = 2; Min: 3945.3 / Max: 6446.43)
b: 4712.61 (SE +/- 1490.02, N = 2; Min: 3222.59 / Max: 6202.62)

YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 1 (Read Ops/sec, More Is Better)
a: 5369.55 (SE +/- 947.68, N = 2; Min: 4421.87 / Max: 6317.23)
b: 5918.62 (SE +/- 151.68, N = 2; Min: 5766.94 / Max: 6070.3)

YugabyteDB 2.19 - Workload: CassandraKeyValue - Num Threads Write: 1 - Num Threads Read: 32 (Write Ops/sec, More Is Better)
a: 5128.71 (SE +/- 1010.82, N = 2; Min: 4117.89 / Max: 6139.53)
b: 4674.10 (SE +/- 896.94, N = 2; Min: 3777.16 / Max: 5571.03)

YugabyteDB 2.19 - Workload: CassandraKeyValue - Num Threads Write: 32 - Num Threads Read: 32 (Write Ops/sec, More Is Better)
a: 58589.84 (SE +/- 31328.12, N = 2; Min: 27261.72 / Max: 89917.95)
b: 53696.74 (SE +/- 37315.43, N = 2; Min: 16381.31 / Max: 91012.16)

Dragonflydb

Dragonfly is an open-source in-memory database server, billed as a "modern Redis replacement", that aims to be the fastest memory store while remaining compatible with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used: a NoSQL Redis/Memcache traffic-generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
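The "Set To Get Ratio" in the Dragonfly results below describes the mix of write and read commands memtier_benchmark issues. As an illustration only (not memtier_benchmark's actual implementation), a 1:10 ratio can be thought of as one SET followed by ten GETs per cycle:

```python
from collections import Counter

def workload(set_ratio: int, get_ratio: int, total_ops: int) -> list:
    """Generate a command stream with the given SET:GET ratio."""
    cycle = ["SET"] * set_ratio + ["GET"] * get_ratio
    return [cycle[i % len(cycle)] for i in range(total_ops)]

ops = workload(1, 10, 22)            # 1:10 ratio, 22 operations
counts = Counter(ops)
print(counts["SET"], counts["GET"])  # 2 20
```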

Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
a: 1637117.07 (SE +/- 43890.60, N = 2; Min: 1593226.47 / Max: 1681007.66)
b: 1502494.92 (SE +/- 20403.19, N = 2; Min: 1482091.73 / Max: 1522898.1)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

YugabyteDB


YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 1 (Read Ops/sec, More Is Better)
a: 4117.93 (SE +/- 500.57, N = 2; Min: 3617.36 / Max: 4618.49)
b: 4467.15 (SE +/- 244.63, N = 2; Min: 4222.52 / Max: 4711.77)

Dragonflydb


Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
a: 1379888.16 (SE +/- 28740.19, N = 2; Min: 1351147.97 / Max: 1408628.35)
b: 1491166.66 (SE +/- 6251.51, N = 2; Min: 1484915.15 / Max: 1497418.17)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

YugabyteDB


YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 16 (Write Ops/sec, More Is Better)
a: 26474.34 (SE +/- 3376.71, N = 2; Min: 23097.63 / Max: 29851.05)
b: 28523.23 (SE +/- 6523.89, N = 2; Min: 21999.34 / Max: 35047.11)

Redis 7.0.12 + memtier_benchmark

memtier_benchmark is a NoSQL Redis/Memcache traffic-generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
a: 3069952.67 (SE +/- 5574.79, N = 2; Min: 3064377.88 / Max: 3075527.45)
b: 2849758.37 (SE +/- 33849.60, N = 2; Min: 2815908.77 / Max: 2883607.97)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

NCNN

NCNN is a high-performance neural-network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU - Model: FastestDet (ms, Fewer Is Better)
a: 4.19 (SE +/- 0.01, N = 2; Min: 4.18 / Max: 4.2; per-inference MIN: 2.93 / MAX: 9.61)
b: 4.51 (SE +/- 0.01, N = 2; Min: 4.5 / Max: 4.52; per-inference MIN: 3.3 / MAX: 7.17)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Redis 7.0.12 + memtier_benchmark


Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
a: 3170726.69 (SE +/- 46191.93, N = 2; Min: 3124534.76 / Max: 3216918.61)
b: 2973746.29 (SE +/- 24256.88, N = 2; Min: 2949489.41 / Max: 2998003.17)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

YugabyteDB


YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 1 (Read Ops/sec, More Is Better)
a: 4966.57 (SE +/- 595.76, N = 2; Min: 4370.81 / Max: 5562.32)
b: 4678.43 (SE +/- 87.49, N = 2; Min: 4590.93 / Max: 4765.92)

YugabyteDB 2.19 - Workload: CassandraKeyValue - Num Threads Write: 16 - Num Threads Read: 16 (Write Ops/sec, More Is Better)
a: 51050.66 (SE +/- 21212.18, N = 2; Min: 29838.48 / Max: 72262.84)
b: 48138.55 (SE +/- 17997.43, N = 2; Min: 30141.12 / Max: 66135.98)

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency, Fewer Is Better)
a: 50.63 (MAX: 820.78)
b: 48.20 (MAX: 768.47)

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, More Is Better)
a: 34027351.93
b: 35441446.08

Redis 7.0.12 + memtier_benchmark


Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
a: 2779173.08 (SE +/- 25623.03, N = 2; Min: 2753550.05 / Max: 2804796.11)
b: 2884149.30 (SE +/- 89372.19, N = 2; Min: 2794777.11 / Max: 2973521.48)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency, Fewer Is Better)
a: 20.89 (MAX: 826.8)
b: 20.15 (MAX: 827.09)

YugabyteDB


YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 32 (Write Ops/sec, More Is Better)
a: 38485.27 (SE +/- 6821.24, N = 2; Min: 31664.03 / Max: 45306.5)
b: 39864.58 (SE +/- 12382.59, N = 2; Min: 27481.99 / Max: 52247.17)

Dragonflydb


Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better)
a: 1499036.96 (SE +/- 44835.52, N = 2; Min: 1454201.44 / Max: 1543872.48)
b: 1538274.60 (SE +/- 2395.11, N = 2; Min: 1535879.48 / Max: 1540669.71)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Redis 7.0.12 + memtier_benchmark


Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
a: 2847988.80 (SE +/- 6197.98, N = 2; Min: 2841790.82 / Max: 2854186.77)
b: 2918894.71 (SE +/- 65279.03, N = 2; Min: 2853615.68 / Max: 2984173.74)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, More Is Better)
a: 1434283.46
b: 1469303.82

YugabyteDB


YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 1 - Num Threads Read: 1 (Read Ops/sec, More Is Better)
a: 6052.40 (SE +/- 240.87, N = 2; Min: 5811.53 / Max: 6293.27)
b: 6184.37 (SE +/- 128.29, N = 2; Min: 6056.08 / Max: 6312.65)

NCNN


NCNN 20230517 [ms, Fewer Is Better]
Target: CPU - Model: shufflenet-v2
  a: 5.82 (SE +/- 0.02, N = 2; Min: 5.8 / Max: 5.84; MIN: 3.66 / MAX: 12.83)
  b: 5.70 (SE +/- 0.12, N = 2; Min: 5.58 / Max: 5.82; MIN: 3.56 / MAX: 10.37)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
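NCNN results are mean per-inference latency in milliseconds (fewer is better). When comparing against throughput-oriented results elsewhere in this file, a reciprocal conversion to inferences per second can help; a minimal sketch using the shufflenet-v2 CPU averages above:

```python
# NCNN reports mean per-inference latency in milliseconds (fewer is better).
# The reciprocal gives an approximate single-stream inference rate.
def inferences_per_second(latency_ms: float) -> float:
    return 1000.0 / latency_ms

# shufflenet-v2 on the CPU target, averages from the result above:
a_ms, b_ms = 5.82, 5.70
print(f"a: {inferences_per_second(a_ms):.1f} inf/s, "
      f"b: {inferences_per_second(b_ms):.1f} inf/s")
```

This is plain arithmetic on the reported averages, not an additional measurement.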

NCNN 20230517 [ms, Fewer Is Better]
Target: CPU - Model: resnet50
  a: 38.24 (SE +/- 0.01, N = 2; Min: 38.23 / Max: 38.24; MIN: 38.01 / MAX: 39.6)
  b: 38.98 (SE +/- 0.62, N = 2; Min: 38.36 / Max: 39.6; MIN: 37.88 / MAX: 66.93)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

YugabyteDB

YugabyteDB 2.19 [Read Ops/sec, More Is Better]
Workload: CassandraBatchKeyValue, Batch 25 - Num Threads Write: 16 - Num Threads Read: 32
  a: 34204.91 (SE +/- 436.58, N = 2; Min: 33768.33 / Max: 34641.49)
  b: 34813.01 (SE +/- 1624.51, N = 2; Min: 33188.5 / Max: 36437.52)

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
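Bulk insertion in CouchDB goes through the `/{db}/_bulk_docs` HTTP endpoint, which accepts a JSON body of the form `{"docs": [...]}` and inserts the whole array in one request. A minimal sketch of building such a payload; the document field layout here is illustrative, and the actual documents used by the test profile may differ:

```python
import json

# CouchDB's /{db}/_bulk_docs endpoint accepts {"docs": [...]} and inserts
# the whole array in one request. The field layout below is illustrative;
# the documents used by the actual test profile may differ.
def make_bulk_payload(batch_size: int, start_id: int = 0) -> str:
    docs = [
        {"_id": f"doc-{start_id + i}", "value": i}
        for i in range(batch_size)
    ]
    return json.dumps({"docs": docs})

payload = make_bulk_payload(300)  # matches "Bulk Size: 300" below
assert len(json.loads(payload)["docs"]) == 300
# POST this to http://<host>:5984/<db>/_bulk_docs with
# Content-Type: application/json to perform one bulk-insert round.
```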

Apache CouchDB 3.3.2 [Seconds, Fewer Is Better]
Bulk Size: 300 - Inserts: 1000 - Rounds: 30
  a: 134.34 (SE +/- 0.53, N = 2; Min: 133.8 / Max: 134.87)
  b: 136.36 (SE +/- 0.53, N = 2; Min: 135.83 / Max: 136.9)
  1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache IoTDB

Apache IoTDB 1.1.2 [Average Latency, Fewer Is Better]
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500
  a: 154.17 (MAX: 1065.85)
  b: 151.94 (MAX: 946.73)

Apache IoTDB 1.1.2 [point/sec, More Is Better]
Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500
  a: 30434143.68
  b: 30869603.97

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement"; it aims to be the fastest in-memory store while remaining compatible with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used as the NoSQL traffic-generation and benchmarking tool, developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 [Ops/sec, More Is Better]
Clients Per Thread: 20 - Set To Get Ratio: 1:10
  a: 3869729.38 (SE +/- 15714.53, N = 2; Min: 3854014.85 / Max: 3885443.91)
  b: 3922671.60 (SE +/- 25444.16, N = 2; Min: 3897227.44 / Max: 3948115.76)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
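When scanning a/b deltas across this many results, it is handy to reduce each pair to a relative difference. A small sketch using the Dragonfly 20 clients / 1:10 averages above:

```python
# Relative difference between the two run identifiers, using the
# Dragonfly "Clients Per Thread: 20 - Set To Get Ratio: 1:10"
# averages reported above (Ops/sec, more is better).
a = 3869729.38
b = 3922671.60

pct = (b - a) / a * 100.0
print(f"b is {pct:+.2f}% vs a")
```

Deltas of this size (a percent or two) are within normal run-to-run noise for memory-store benchmarks; the Min/Max spreads reported per run give a sense of that noise.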

YugabyteDB

YugabyteDB 2.19 [Write Ops/sec, More Is Better]
Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 1 - Num Threads Read: 32
  a: 38593.13 (SE +/- 1572.24, N = 2; Min: 37020.89 / Max: 40165.37)
  b: 39107.77 (SE +/- 2895.08, N = 2; Min: 36212.69 / Max: 42002.84)

Apache IoTDB

Apache IoTDB 1.1.2 [point/sec, More Is Better]
Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200
  a: 764198.22
  b: 754295.46

VkFFT

VkFFT 1.2.31 [Benchmark Score, More Is Better]
Test: FFT + iFFT C2C multidimensional in single precision
  a: 166 (SE +/- 2.00, N = 2; Min: 164 / Max: 168)
  b: 168 (SE +/- 0.00, N = 2; Min: 168 / Max: 168)
  1. (CXX) g++ options: -O3

vkpeak

Vkpeak is a Vulkan compute benchmark inspired by OpenCL's clpeak. It measures Vulkan compute performance for FP16 / FP32 / FP64 / INT16 / INT32 in both scalar and vec4 forms. Learn more via the OpenBenchmarking.org test page.

vkpeak 20230730 [GIOPS, More Is Better]
Test: int32-vec4
  a: 241.85 (SE +/- 0.88, N = 2; Min: 240.97 / Max: 242.73)
  b: 244.74 (SE +/- 0.52, N = 2; Min: 244.22 / Max: 245.25)

Apache IoTDB

Apache IoTDB 1.1.2 [point/sec, More Is Better]
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500
  a: 37173315.01
  b: 36760107.65

Apache IoTDB 1.1.2 [Average Latency, Fewer Is Better]
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200
  a: 48.74 (MAX: 979.36)
  b: 49.27 (MAX: 969.67)

Dragonflydb

Dragonflydb 1.6.2 [Ops/sec, More Is Better]
Clients Per Thread: 60 - Set To Get Ratio: 1:5
  a: 3586089.56 (SE +/- 9932.93, N = 2; Min: 3576156.63 / Max: 3596022.49)
  b: 3624676.16 (SE +/- 16896.18, N = 2; Min: 3607779.98 / Max: 3641572.34)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

YugabyteDB

YugabyteDB 2.19 [Read Ops/sec, More Is Better]
Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 16 - Num Threads Read: 16
  a: 50925.69 (SE +/- 26450.23, N = 2; Min: 24475.46 / Max: 77375.91)
  b: 50394.90 (SE +/- 28220.15, N = 2; Min: 22174.7 / Max: 78615)

Apache IoTDB

Apache IoTDB 1.1.2 [Average Latency, Fewer Is Better]
Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200
  a: 12.25 (MAX: 798.47)
  b: 12.37 (MAX: 767.51)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe) and is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 [Frames Per Second, More Is Better]
Video Input: Bosphorus 4K - Video Preset: Faster
  a: 2.418 (SE +/- 0.005, N = 2; Min: 2.41 / Max: 2.42)
  b: 2.441 (SE +/- 0.002, N = 2; Min: 2.44 / Max: 2.44)
  1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Dragonflydb

Dragonflydb 1.6.2 [Ops/sec, More Is Better]
Clients Per Thread: 50 - Set To Get Ratio: 1:100
  a: 3755227.91 (SE +/- 1776.60, N = 2; Min: 3753451.3 / Max: 3757004.51)
  b: 3789965.37 (SE +/- 6036.38, N = 2; Min: 3783928.99 / Max: 3796001.75)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

VkFFT

VkFFT 1.2.31 [Benchmark Score, More Is Better]
Test: FFT + iFFT C2C 1D batched in double precision
  a: 218 (SE +/- 2.00, N = 2; Min: 216 / Max: 220)
  b: 220 (SE +/- 0.50, N = 2; Min: 219 / Max: 220)
  1. (CXX) g++ options: -O3

NCNN

NCNN 20230517 [ms, Fewer Is Better]
Target: CPU - Model: yolov4-tiny
  a: 37.21 (SE +/- 0.01, N = 2; Min: 37.2 / Max: 37.22; MIN: 36.57 / MAX: 38.53)
  b: 37.55 (SE +/- 0.34, N = 2; Min: 37.21 / Max: 37.89; MIN: 36.59 / MAX: 41.96)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Dragonflydb

Dragonflydb 1.6.2 [Ops/sec, More Is Better]
Clients Per Thread: 60 - Set To Get Ratio: 1:100
  a: 3676368.91 (SE +/- 12630.85, N = 2; Min: 3663738.06 / Max: 3688999.76)
  b: 3645464.56 (SE +/- 6659.80, N = 2; Min: 3638804.76 / Max: 3652124.36)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb 1.6.2 [Ops/sec, More Is Better]
Clients Per Thread: 20 - Set To Get Ratio: 1:100
  a: 3937888.94 (SE +/- 22785.71, N = 2; Min: 3915103.23 / Max: 3960674.64)
  b: 3970717.74 (SE +/- 7414.05, N = 2; Min: 3963303.68 / Max: 3978131.79)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

VkFFT

VkFFT 1.2.31 [Benchmark Score, More Is Better]
Test: FFT + iFFT R2C / C2R
  a: 141 (SE +/- 1.50, N = 2; Min: 139 / Max: 142)
  b: 142 (SE +/- 0.00, N = 2; Min: 142 / Max: 142)
  1. (CXX) g++ options: -O3

NCNN

NCNN 20230517 [ms, Fewer Is Better]
Target: Vulkan GPU - Model: yolov4-tiny
  a: 37.46 (SE +/- 0.29, N = 2; Min: 37.17 / Max: 37.75; MIN: 36.54 / MAX: 40.91)
  b: 37.23 (SE +/- 0.00, N = 2; Min: 37.23 / Max: 37.23; MIN: 36.61 / MAX: 38.34)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.1.3 [Op/s, More Is Better]
Test: Writes
  a: 44343 (SE +/- 14.00, N = 2; Min: 44329 / Max: 44357)
  b: 44610 (SE +/- 22.50, N = 2; Min: 44587 / Max: 44632)

vkpeak

vkpeak 20230730 [GFLOPS, More Is Better]
Test: fp64-vec4
  a: 96.34 (SE +/- 0.68, N = 2; Min: 95.66 / Max: 97.01)
  b: 96.91 (SE +/- 0.27, N = 2; Min: 96.64 / Max: 97.17)

Apache IoTDB

Apache IoTDB 1.1.2 [point/sec, More Is Better]
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200
  a: 31156415.06
  b: 30973284.30

NCNN

NCNN 20230517 [ms, Fewer Is Better]
Target: Vulkan GPU - Model: shufflenet-v2
  a: 5.86 (SE +/- 0.01, N = 2; Min: 5.85 / Max: 5.86; MIN: 3.68 / MAX: 7.77)
  b: 5.83 (SE +/- 0.06, N = 2; Min: 5.77 / Max: 5.88; MIN: 3.64 / MAX: 7.72)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Redis 7.0.12 + memtier_benchmark

Redis 7.0.12 + memtier_benchmark 2.0 [Ops/sec, More Is Better]
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10
  a: 2880502.94 (SE +/- 9394.44, N = 2; Min: 2871108.49 / Max: 2889897.38)
  b: 2895297.95 (SE +/- 22072.88, N = 2; Min: 2873225.07 / Max: 2917370.83)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.8 [MiB/second, More Is Better]
Test: Keyed Algorithms
  a: 819.76 (SE +/- 3.99, N = 2; Min: 815.77 / Max: 823.75)
  b: 815.60 (SE +/- 0.41, N = 2; Min: 815.19 / Max: 816.01)
  1. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

NCNN

NCNN 20230517 [ms, Fewer Is Better]
Target: CPU - Model: mnasnet
  a: 8.05 (SE +/- 0.02, N = 2; Min: 8.03 / Max: 8.07; MIN: 7.97 / MAX: 13.58)
  b: 8.01 (SE +/- 0.00, N = 2; Min: 8.01 / Max: 8.01; MIN: 7.93 / MAX: 8.98)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 [ms, Fewer Is Better]
Target: Vulkan GPU - Model: vision_transformer
  a: 291.38 (SE +/- 0.70, N = 2; Min: 290.68 / Max: 292.08; MIN: 288.57 / MAX: 361.11)
  b: 289.97 (SE +/- 0.24, N = 2; Min: 289.72 / Max: 290.21; MIN: 288.54 / MAX: 305.82)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 [ms, Fewer Is Better]
Target: CPU - Model: alexnet
  a: 11.63 (SE +/- 0.04, N = 2; Min: 11.59 / Max: 11.67; MIN: 11.5 / MAX: 12.25)
  b: 11.68 (SE +/- 0.02, N = 2; Min: 11.65 / Max: 11.7; MIN: 11.54 / MAX: 12.94)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache CouchDB

Apache CouchDB 3.3.2 [Seconds, Fewer Is Better]
Bulk Size: 100 - Inserts: 1000 - Rounds: 30
  a: 81.50 (SE +/- 0.10, N = 2; Min: 81.4 / Max: 81.59)
  b: 81.18 (SE +/- 1.27, N = 2; Min: 79.91 / Max: 82.44)
  1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache IoTDB

Apache IoTDB 1.1.2 [Average Latency, Fewer Is Better]
Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500
  a: 118.12 (MAX: 1000.25)
  b: 118.58 (MAX: 1240.13)

NCNN

NCNN 20230517 [ms, Fewer Is Better]
Target: CPU - Model: efficientnet-b0
  a: 11.35 (SE +/- 0.02, N = 2; Min: 11.33 / Max: 11.37; MIN: 11.24 / MAX: 11.96)
  b: 11.31 (SE +/- 0.01, N = 2; Min: 11.3 / Max: 11.32; MIN: 11.19 / MAX: 12.56)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

VVenC

VVenC 1.9 [Frames Per Second, More Is Better]
Video Input: Bosphorus 1080p - Video Preset: Faster
  a: 8.466 (SE +/- 0.005, N = 2; Min: 8.46 / Max: 8.47)
  b: 8.494 (SE +/- 0.004, N = 2; Min: 8.49 / Max: 8.5)
  1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

NCNN

NCNN 20230517 [ms, Fewer Is Better]
Target: Vulkan GPU - Model: mobilenet
  a: 27.61 (SE +/- 0.10, N = 2; Min: 27.51 / Max: 27.71; MIN: 27.4 / MAX: 36.87)
  b: 27.52 (SE +/- 0.02, N = 2; Min: 27.5 / Max: 27.53; MIN: 27.38 / MAX: 30.29)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.36 [VGR Performance Metric, More Is Better]
  a: 41630 (SE +/- 149.00, N = 2; Min: 41481 / Max: 41779)
  b: 41495 (SE +/- 10.00, N = 2; Min: 41485 / Max: 41505)
  1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

NCNN

NCNN 20230517 [ms, Fewer Is Better]
Target: CPU - Model: squeezenet_ssd
  a: 16.14 (SE +/- 0.06, N = 2; Min: 16.08 / Max: 16.19; MIN: 15.87 / MAX: 25.06)
  b: 16.09 (SE +/- 0.01, N = 2; Min: 16.08 / Max: 16.09; MIN: 15.91 / MAX: 17.12)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache CouchDB

Apache CouchDB 3.3.2 [Seconds, Fewer Is Better]
Bulk Size: 300 - Inserts: 3000 - Rounds: 30
  a: 444.12 (SE +/- 2.47, N = 2; Min: 441.65 / Max: 446.58)
  b: 445.48 (SE +/- 1.03, N = 2; Min: 444.45 / Max: 446.5)
  1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Dragonflydb

Dragonflydb 1.6.2 [Ops/sec, More Is Better]
Clients Per Thread: 50 - Set To Get Ratio: 1:10
  a: 3697587.29 (SE +/- 8680.17, N = 2; Min: 3688907.12 / Max: 3706267.46)
  b: 3708754.43 (SE +/- 1947.97, N = 2; Min: 3706806.45 / Max: 3710702.4)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

vkpeak

vkpeak 20230730 [GFLOPS, More Is Better]
Test: fp32-vec4
  a: 194.96 (SE +/- 0.52, N = 2; Min: 194.44 / Max: 195.47)
  b: 194.40 (SE +/- 0.08, N = 2; Min: 194.32 / Max: 194.47)

NCNN

NCNN 20230517 [ms, Fewer Is Better]
Target: CPU - Model: mobilenet
  a: 27.50 (SE +/- 0.04, N = 2; Min: 27.46 / Max: 27.54; MIN: 27.35 / MAX: 29.65)
  b: 27.43 (SE +/- 0.02, N = 2; Min: 27.41 / Max: 27.44; MIN: 27.28 / MAX: 28.56)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 [ms, Fewer Is Better]
Target: Vulkan GPU - Model: mnasnet
  a: 8.02 (SE +/- 0.01, N = 2; Min: 8.01 / Max: 8.03; MIN: 7.95 / MAX: 8.44)
  b: 8.04 (SE +/- 0.03, N = 2; Min: 8.01 / Max: 8.06; MIN: 7.94 / MAX: 17.44)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Dragonflydb

Dragonflydb 1.6.2 [Ops/sec, More Is Better]
Clients Per Thread: 50 - Set To Get Ratio: 1:5
  a: 3688737.76 (SE +/- 2662.62, N = 2; Min: 3686075.14 / Max: 3691400.38)
  b: 3697778.34 (SE +/- 22969.50, N = 2; Min: 3674808.84 / Max: 3720747.83)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache IoTDB

Apache IoTDB 1.1.2 [point/sec, More Is Better]
Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200
  a: 1148481.56
  b: 1151261.34

vkpeak

vkpeak 20230730 [GFLOPS, More Is Better]
Test: fp64-scalar
  a: 161.17 (SE +/- 0.00, N = 2; Min: 161.16 / Max: 161.17)
  b: 161.56 (SE +/- 0.14, N = 2; Min: 161.42 / Max: 161.7)

NCNN

NCNN 20230517 [ms, Fewer Is Better]
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
  a: 8.36 (SE +/- 0.00, N = 2; Min: 8.35 / Max: 8.36; MIN: 8.25 / MAX: 9.44)
  b: 8.34 (SE +/- 0.02, N = 2; Min: 8.32 / Max: 8.36; MIN: 8.22 / MAX: 11)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Dragonflydb

Dragonflydb 1.6.2 [Ops/sec, More Is Better]
Clients Per Thread: 60 - Set To Get Ratio: 1:10
  a: 3591856.98 (SE +/- 16965.66, N = 2; Min: 3574891.32 / Max: 3608822.64)
  b: 3600224.35 (SE +/- 717.79, N = 2; Min: 3599506.56 / Max: 3600942.14)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

NCNN

NCNN 20230517 [ms, Fewer Is Better]
Target: Vulkan GPU - Model: FastestDet
  a: 4.38 (SE +/- 0.17, N = 2; Min: 4.21 / Max: 4.55; MIN: 2.95 / MAX: 7.51)
  b: 4.37 (SE +/- 0.15, N = 2; Min: 4.22 / Max: 4.51; MIN: 2.97 / MAX: 7.44)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

vkpeak

vkpeak 20230730 [GIOPS, More Is Better]
Test: int16-vec4
  a: 708.28 (SE +/- 0.75, N = 2; Min: 707.53 / Max: 709.03)
  b: 709.85 (SE +/- 0.24, N = 2; Min: 709.61 / Max: 710.09)

Dragonflydb

Dragonflydb 1.6.2 [Ops/sec, More Is Better]
Clients Per Thread: 20 - Set To Get Ratio: 1:5
  a: 3840222.05 (SE +/- 1520.81, N = 2; Min: 3838701.23 / Max: 3841742.86)
  b: 3831976.08 (SE +/- 25609.70, N = 2; Min: 3806366.38 / Max: 3857585.77)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

NCNN

NCNN 20230517 [ms, Fewer Is Better]
Target: Vulkan GPU - Model: squeezenet_ssd
  a: 16.06 (SE +/- 0.00, N = 2; Min: 16.06 / Max: 16.06; MIN: 15.88 / MAX: 16.5)
  b: 16.09 (SE +/- 0.01, N = 2; Min: 16.08 / Max: 16.09; MIN: 15.86 / MAX: 27.09)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.8 - Test: Unkeyed Algorithms (MiB/second, more is better)
  a: 548.99 (SE +/- 0.06, N = 2; Min/Max: 548.93 / 549.04)
  b: 550.00 (SE +/- 1.95, N = 2; Min/Max: 548.05 / 551.95)
  1. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

vkpeak


vkpeak 20230730 - fp16-scalar (GFLOPS, more is better)
  a: 44.53 (SE +/- 0.05, N = 2; Min/Max: 44.48 / 44.57)
  b: 44.61 (SE +/- 0.00, N = 2; Min/Max: 44.6 / 44.61)

vkpeak 20230730 - fp32-scalar (GFLOPS, more is better)
  a: 191.61 (SE +/- 1.59, N = 2; Min/Max: 190.02 / 193.2)
  b: 191.92 (SE +/- 1.02, N = 2; Min/Max: 190.9 / 192.93)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. It makes use of SIMD Everywhere (SIMDe) and is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, more is better)
  a: 3.731 (SE +/- 0.002, N = 2; Min/Max: 3.73 / 3.73)
  b: 3.737 (SE +/- 0.006, N = 2; Min/Max: 3.73 / 3.74)
  1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency, More Is Better)
  a: 19.26 (MAX: 691.58)
  b: 19.29 (MAX: 690.69)

NCNN


NCNN 20230517 - Target: CPU - Model: googlenet (ms, fewer is better)
  a: 21.15 (SE +/- 0.01, N = 2; run Min/Max: 21.14 / 21.16; MIN: 15.49 / MAX: 22.42)
  b: 21.12 (SE +/- 0.03, N = 2; run Min/Max: 21.09 / 21.15; MIN: 15.32 / MAX: 24.17)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
  a: 21.15 (SE +/- 0.01, N = 2; run Min/Max: 21.14 / 21.16; MIN: 15.65 / MAX: 22.53)
  b: 21.18 (SE +/- 0.03, N = 2; run Min/Max: 21.15 / 21.21; MIN: 15.57 / MAX: 22.52)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache CouchDB

This is a bulk-insertion benchmark of Apache CouchDB, a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.3.2 - Bulk Size: 100 - Inserts: 3000 - Rounds: 30 (Seconds, fewer is better)
  a: 267.24 (SE +/- 1.20, N = 2; Min/Max: 266.04 / 268.44)
  b: 266.89 (SE +/- 0.02, N = 2; Min/Max: 266.88 / 266.91)
  1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

NCNN


NCNN 20230517 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  a: 8.03 (SE +/- 0.00, N = 2; run Min/Max: 8.02 / 8.03; MIN: 7.93 / MAX: 12.66)
  b: 8.04 (SE +/- 0.01, N = 2; run Min/Max: 8.03 / 8.05; MIN: 7.95 / MAX: 9.13)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, more is better)
  a: 3050497.79 (SE +/- 1005.72, N = 2; Min/Max: 3049492.07 / 3051503.5)
  b: 3054116.42 (SE +/- 4007.75, N = 2; Min/Max: 3050108.66 / 3058124.17)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, more is better)
  a: 1904407.96
  b: 1902392.91

vkpeak


vkpeak 20230730 - int32-scalar (GIOPS, more is better)
  a: 269.09 (SE +/- 0.72, N = 2; Min/Max: 268.37 / 269.81)
  b: 269.36 (SE +/- 0.19, N = 2; Min/Max: 269.17 / 269.54)

NCNN


NCNN 20230517 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
  a: 11.37 (SE +/- 0.00, N = 2; run Min/Max: 11.36 / 11.37; MIN: 11.27 / MAX: 12.47)
  b: 11.36 (SE +/- 0.01, N = 2; run Min/Max: 11.34 / 11.37; MIN: 11.2 / MAX: 12.57)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

VVenC


VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, more is better)
  a: 1.154 (SE +/- 0.000, N = 2; Min/Max: 1.15 / 1.15)
  b: 1.155 (SE +/- 0.001, N = 2; Min/Max: 1.15 / 1.16)
  1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

NCNN


NCNN 20230517 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better)
  a: 11.69 (SE +/- 0.03, N = 2; run Min/Max: 11.66 / 11.72; MIN: 11.55 / MAX: 12.58)
  b: 11.70 (SE +/- 0.03, N = 2; run Min/Max: 11.67 / 11.72; MIN: 11.58 / MAX: 12.59)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better)
  a: 38.23 (SE +/- 0.01, N = 2; run Min/Max: 38.22 / 38.24; MIN: 37.97 / MAX: 39.58)
  b: 38.26 (SE +/- 0.03, N = 2; run Min/Max: 38.23 / 38.28; MIN: 38.05 / MAX: 44.2)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU - Model: resnet18 (ms, fewer is better)
  a: 15.28 (SE +/- 0.00, N = 2; run Min/Max: 15.27 / 15.28; MIN: 15.13 / MAX: 15.85)
  b: 15.29 (SE +/- 0.02, N = 2; run Min/Max: 15.27 / 15.3; MIN: 15.1 / MAX: 19.8)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
  a: 15.31 (SE +/- 0.03, N = 2; run Min/Max: 15.28 / 15.33; MIN: 15.12 / MAX: 24.13)
  b: 15.30 (SE +/- 0.00, N = 2; run Min/Max: 15.3 / 15.3; MIN: 15.14 / MAX: 16.4)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

vkpeak


vkpeak 20230730 - fp16-vec4 (GFLOPS, more is better)
  a: 114.59 (SE +/- 0.12, N = 2; Min/Max: 114.47 / 114.71)
  b: 114.66 (SE +/- 0.00, N = 2; Min/Max: 114.65 / 114.66)

NCNN


NCNN 20230517 - Target: CPU - Model: vision_transformer (ms, fewer is better)
  a: 290.59 (SE +/- 0.34, N = 2; run Min/Max: 290.25 / 290.92; MIN: 288.05 / MAX: 302.16)
  b: 290.71 (SE +/- 0.65, N = 2; run Min/Max: 290.06 / 291.35; MIN: 288.79 / MAX: 298.39)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) open-source compiler. Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 13.2 - Time To Compile (Seconds, fewer is better)
  a: 2640.78 (SE +/- 2.15, N = 2; Min/Max: 2638.62 / 2642.93)
  b: 2641.58 (SE +/- 1.22, N = 2; Min/Max: 2640.35 / 2642.8)
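Compile times of this length read more easily as minutes; a small formatting helper (illustrative only, not part of the test suite) converts the seconds above:

```python
def fmt_duration(seconds: float) -> str:
    """Format a duration in seconds as h/m/s, matching how run lengths are quoted."""
    total = round(seconds)
    h, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    return f"{h}h {m:02d}m {s:02d}s" if h else f"{m}m {s:02d}s"

# Timed GCC Compilation 13.2, identifier "a"
print(fmt_duration(2640.78))  # 44m 01s
```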

vkpeak


vkpeak 20230730 - int16-scalar (GIOPS, more is better)
  a: 267.34 (SE +/- 0.06, N = 2; Min/Max: 267.28 / 267.4)
  b: 267.27 (SE +/- 0.31, N = 2; Min/Max: 266.96 / 267.58)

NCNN


NCNN 20230517 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better)
  a: 75.36 (SE +/- 0.02, N = 2; run Min/Max: 75.34 / 75.37; MIN: 74.92 / MAX: 80.26)
  b: 75.37 (SE +/- 0.00, N = 2; run Min/Max: 75.37 / 75.37; MIN: 74.9 / MAX: 79.35)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU - Model: vgg16 (ms, fewer is better)
  a: 75.37 (SE +/- 0.04, N = 2; run Min/Max: 75.33 / 75.4; MIN: 74.95 / MAX: 79.65)
  b: 75.36 (SE +/- 0.09, N = 2; run Min/Max: 75.26 / 75.45; MIN: 74.85 / MAX: 86.03)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

YugabyteDB

YugabyteDB is a high-performance, cloud-native, transactional distributed SQL database. This test profile uses a single YugabyteDB node on the local host. Learn more via the OpenBenchmarking.org test page.

YugabyteDB 2.19 - Workload: CassandraBatchKeyValue, Batch 10 - Num Threads Write: 32 - Num Threads Read: 32 (Write Ops/sec, more is better)
  b: 76379.98

NCNN


NCNN 20230517 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
  a: 17.71 (SE +/- 0.05, N = 2; run Min/Max: 17.66 / 17.76; MIN: 17.56 / MAX: 26.84)
  b: 17.71 (SE +/- 0.04, N = 2; run Min/Max: 17.67 / 17.75; MIN: 17.54 / MAX: 28.84)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
  a: 1.79 (SE +/- 0.00, N = 2; run Min/Max: 1.79 / 1.79; MIN: 1.39 / MAX: 2.89)
  b: 1.79 (SE +/- 0.01, N = 2; run Min/Max: 1.78 / 1.79; MIN: 1.39 / MAX: 2.78)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU - Model: regnety_400m (ms, fewer is better)
  a: 17.66 (SE +/- 0.01, N = 2; run Min/Max: 17.65 / 17.67; MIN: 17.55 / MAX: 18.2)
  b: 17.66 (SE +/- 0.01, N = 2; run Min/Max: 17.65 / 17.67; MIN: 17.54 / MAX: 18.75)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU - Model: blazeface (ms, fewer is better)
  a: 1.78 (SE +/- 0.01, N = 2; run Min/Max: 1.77 / 1.78; MIN: 1.39 / MAX: 2.49)
  b: 1.78 (SE +/- 0.01, N = 2; run Min/Max: 1.77 / 1.78; MIN: 1.38 / MAX: 2.54)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  a: 8.03 (SE +/- 0.00, N = 2; run Min/Max: 8.02 / 8.03; MIN: 7.96 / MAX: 8.48)
  b: 8.03 (SE +/- 0.00, N = 2; run Min/Max: 8.03 / 8.03; MIN: 7.94 / MAX: 8.52)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  a: 8.32 (SE +/- 0.01, N = 2; run Min/Max: 8.31 / 8.33; MIN: 8.22 / MAX: 8.87)
  b: 8.32 (SE +/- 0.00, N = 2; run Min/Max: 8.31 / 8.32; MIN: 8.21 / MAX: 8.83)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency, More Is Better)
  a: 10.47 (MAX: 649)
  b: 10.47 (MAX: 667.98)

Apache CouchDB


Apache CouchDB 3.3.2 - Bulk Size: 500 - Inserts: 3000 - Rounds: 30 (Seconds, fewer is better)
  a: 942.50 (SE +/- 322.76, N = 2)
  1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB 3.3.2 - Bulk Size: 500 - Inserts: 1000 - Rounds: 30 (Seconds, fewer is better)
  a: 238.90 (SE +/- 14.83, N = 2)
  1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

VkFFT

VkFFT 1.2.31 - Test: FFT + iFFT C2C 1D batched in single precision, no reshuffling (Benchmark Score, more is better)
  a: 303 (SE +/- 0.00, N = 2)
  b: 303 (SE +/- 0.00, N = 2)
  1. (CXX) g++ options: -O3

VkFFT 1.2.31 - Test: FFT + iFFT C2C 1D batched in single precision (Benchmark Score, more is better)
  a: 280 (SE +/- 0.50, N = 2; Min/Max: 279 / 280)
  b: 280 (SE +/- 0.50, N = 2; Min/Max: 279 / 280)
  1. (CXX) g++ options: -O3

VkFFT 1.2.31 - Test: FFT + iFFT C2C 1D batched in half precision (Benchmark Score, more is better)
  a: 259 (SE +/- 1.00, N = 2; Min/Max: 258 / 260)
  b: 259 (SE +/- 0.00, N = 2; Min/Max: 259 / 259)
  1. (CXX) g++ options: -O3

VkResample

VkResample is a Vulkan-based image-upscaling library built on VkFFT. The sample input file upscales a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

Upscale: 2x - Precision: Single

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

VkFFT

Test: FFT + iFFT C2C Bluestein benchmark in double precision

a: The test quit with a non-zero exit status. E: VkFFT System: 67x67x67 Buffer: 4 MB avg_time_per_step: 45.100 ms std_error: 0.470 num_iter: 892 benchmark: 104 bandwidth: 1.2

b: The test quit with a non-zero exit status. E: VkFFT System: 67x67x67 Buffer: 4 MB avg_time_per_step: 45.173 ms std_error: 0.499 num_iter: 892 benchmark: 104 bandwidth: 1.2

Test: FFT + iFFT C2C Bluestein in single precision

a: The test quit with a non-zero exit status. E: VkFFT System: 67x67x67 Buffer: 2 MB avg_time_per_step: 42.473 ms std_error: 0.276 num_iter: 1000 benchmark: 55 bandwidth: 0.6

b: The test quit with a non-zero exit status. E: VkFFT System: 67x67x67 Buffer: 2 MB avg_time_per_step: 42.475 ms std_error: 0.423 num_iter: 1000 benchmark: 55 bandwidth: 0.6

128 Results Shown

YugabyteDB:
  CassandraBatchKeyValue, Batch 10 - 16 - 32
  CassandraBatchKeyValue, Batch 25 - 16 - 16
  CassandraBatchKeyValue, Batch 25 - 32 - 16
  CassandraKeyValue - 32 - 16
  CassandraBatchKeyValue, Batch 10 - 32 - 32
  CassandraKeyValue - 16 - 32
  CassandraBatchKeyValue, Batch 10 - 16 - 32
  CassandraKeyValue - 32 - 1
  CassandraKeyValue - 16 - 1
  CassandraBatchKeyValue, Batch 10 - 32 - 16
  CassandraKeyValue - 1 - 1
  CassandraBatchKeyValue, Batch 25 - 32 - 32
  CassandraBatchKeyValue, Batch 25 - 32 - 1
  CassandraKeyValue - 1 - 16
  CassandraBatchKeyValue, Batch 25 - 1 - 16
  CassandraBatchKeyValue, Batch 25 - 16 - 1
  CassandraBatchKeyValue, Batch 10 - 16 - 1
  CassandraKeyValue - 1 - 32
  CassandraKeyValue - 32 - 32
Dragonflydb
YugabyteDB
Dragonflydb
YugabyteDB
Redis 7.0.12 + memtier_benchmark
NCNN
Redis 7.0.12 + memtier_benchmark
YugabyteDB:
  CassandraBatchKeyValue, Batch 10 - 1 - 1
  CassandraKeyValue - 16 - 16
Apache IoTDB:
  200 - 100 - 200:
    Average Latency
    point/sec
Redis 7.0.12 + memtier_benchmark
Apache IoTDB
YugabyteDB
Dragonflydb
Redis 7.0.12 + memtier_benchmark
Apache IoTDB
YugabyteDB
NCNN:
  CPU - shufflenet-v2
  CPU - resnet50
YugabyteDB
Apache CouchDB
Apache IoTDB:
  200 - 100 - 500:
    Average Latency
    point/sec
Dragonflydb
YugabyteDB
Apache IoTDB
VkFFT
vkpeak
Apache IoTDB:
  100 - 100 - 500
  100 - 100 - 200
Dragonflydb
YugabyteDB
Apache IoTDB
VVenC
Dragonflydb
VkFFT
NCNN
Dragonflydb:
  60 - 1:100
  20 - 1:100
VkFFT
NCNN
Apache Cassandra
vkpeak
Apache IoTDB
NCNN
Redis 7.0.12 + memtier_benchmark
Crypto++
NCNN:
  CPU - mnasnet
  Vulkan GPU - vision_transformer
  CPU - alexnet
Apache CouchDB
Apache IoTDB
NCNN
VVenC
NCNN
BRL-CAD
NCNN
Apache CouchDB
Dragonflydb
vkpeak
NCNN:
  CPU - mobilenet
  Vulkan GPU - mnasnet
Dragonflydb
Apache IoTDB
vkpeak
NCNN
Dragonflydb
NCNN
vkpeak
Dragonflydb
NCNN
Crypto++
vkpeak:
  fp16-scalar
  fp32-scalar
VVenC
Apache IoTDB
NCNN:
  CPU - googlenet
  Vulkan GPU - googlenet
Apache CouchDB
NCNN
Redis 7.0.12 + memtier_benchmark
Apache IoTDB
vkpeak
NCNN
VVenC
NCNN:
  Vulkan GPU - alexnet
  Vulkan GPU - resnet50
  CPU - resnet18
  Vulkan GPU - resnet18
vkpeak
NCNN
Timed GCC Compilation
vkpeak
NCNN:
  Vulkan GPU - vgg16
  CPU - vgg16
YugabyteDB
NCNN:
  Vulkan GPU - regnety_400m
  Vulkan GPU - blazeface
  CPU - regnety_400m
  CPU - blazeface
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
Apache IoTDB
Apache CouchDB:
  500 - 3000 - 30
  500 - 1000 - 30
VkFFT:
  FFT + iFFT C2C 1D batched in single precision, no reshuffling
  FFT + iFFT C2C 1D batched in single precision
  FFT + iFFT C2C 1D batched in half precision