auggy

AMD Ryzen 5 4500U testing with a LENOVO LNVNB161216 (EECN20WW BIOS) and AMD Renoir 512MB on Pop 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2308051-NE-AUGGY363552
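The comparison can be reproduced locally; a minimal CLI sketch (assumes the Phoronix Test Suite is already installed and the public result ID is reachable):

```shell
# Fetch this result file and benchmark the local system against it
phoronix-test-suite benchmark 2308051-NE-AUGGY363552

# Inspect the merged result file afterwards
phoronix-test-suite show-result 2308051-NE-AUGGY363552
```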
Runs in this result file:

Run a: August 04 2023, test duration 14 Hours, 7 Minutes
Run b: August 04 2023, test duration 6 Hours, 59 Minutes
Run AMD Renoir - AMD Ryzen 5 4500U: August 05 2023, test duration 13 Hours, 45 Minutes

Only show results where is faster than
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


System details:

Processor: AMD Ryzen 5 4500U @ 2.38GHz (6 Cores)
Motherboard: LENOVO LNVNB161216 (EECN20WW BIOS)
Chipset: AMD Renoir/Cezanne
Memory: 16GB
Disk: 256GB SK hynix HFM256GDHTNI-87A0B
Graphics: AMD Renoir 512MB (1500/400MHz) / AMD Renoir 512MB (1500MHz)
Audio: AMD Renoir Radeon HD Audio
Network: Realtek RTL8822CE 802.11ac PCIe
OS: Pop 22.04
Kernel: 5.17.5-76051705-generic (x86_64)
Desktop: GNOME Shell 42.1
Display Server: X Server 1.21.1.3
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.44)
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

System notes:
- Transparent Huge Pages: madvise
- GCC configure: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- Platform Profile: balanced
- CPU Microcode: 0x8600102
- ACPI Profile: balanced
- GLAMOR
- BAR1 / Visible vRAM Size: 512 MB
- vBIOS Version: 113-RENOIR-025
- OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
- Python 3.10.6
- Security: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: disabled RSB filling; srbds: Not affected; tsx_async_abort: Not affected

[Result overview chart: runs a, b, and AMD Renoir - AMD Ryzen 5 4500U compared across BRL-CAD, VVenC, Timed GCC Compilation, vkpeak, Apache CouchDB, Dragonflydb, Neural Magic DeepSparse, NCNN, Apache Cassandra, and VkResample; normalized spread roughly 100% to 117%.]
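Overview percentages of this kind are typically normalized so the slowest run on each test scores 100%. A minimal sketch of that normalization for a higher-is-better metric (the FPS values here are illustrative, not taken from this result file):

```shell
# Express a run's result as a percentage of the slowest run's result
normalize() {
    awk -v v="$1" -v worst="$2" 'BEGIN { printf "%.1f%%\n", 100 * v / worst }'
}

normalize 13.36 13.36   # the slowest run itself
normalize 14.53 13.36   # a faster run, relative to the slowest
```

For fewer-is-better metrics (latency, encode time) the ratio is inverted, so the largest value maps to 100%.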

[Detailed result table: per-test figures for runs a, b, and AMD Renoir - AMD Ryzen 5 4500U across the DeepSparse, Dragonflydb, Apache IoTDB, VVenC, NCNN, BRL-CAD, vkpeak, CouchDB, Cassandra, VkResample, and Timed GCC Compilation tests; the individual results are charted below.]

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 4.7072; a: 6.3938; b: 6.7742. SE +/- 0.0029 and +/- 0.0017, N = 2.

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 211.81; a: 155.96; b: 147.22. SE +/- 0.14 and +/- 0.03, N = 2.

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 4.7697; a: 4.7561; b: 3.6558. SE +/- 0.0053 and +/- 0.0029, N = 2.

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 625.28; a: 628.08; b: 814.77. SE +/- 0.89 and +/- 0.37, N = 2.
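Each result above reports the standard error of the mean (SE) over N trial runs. For N = 2 that reduces to half the absolute difference between the two trials, equivalently the sample standard deviation divided by sqrt(N). A sketch with illustrative values (not trial data from this file):

```shell
# Standard error of the mean for exactly two trials: |x1 - x2| / 2
se_two_runs() {
    awk -v a="$1" -v b="$2" 'BEGIN { d = a - b; if (d < 0) d = -d; printf "%.4f\n", d / 2 }'
}

se_two_runs 6.3909 6.3967
```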

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement," aiming to be the fastest in-memory store while remaining compatible with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.
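The "Clients Per Thread" and "Set To Get Ratio" parameters in the charts below map onto memtier_benchmark flags roughly as sketched here (assumes a Dragonfly instance on the default Redis port; the exact flags used by the test profile may differ):

```shell
# 20 clients per thread, 1:100 set:get ratio, Redis protocol
memtier_benchmark --server=127.0.0.1 --port=6379 \
    --protocol=redis --threads=6 --clients=20 \
    --ratio=1:100 --hide-histogram
```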

Dragonflydb 1.6.2 - Clients Per Thread: 20 - Set To Get Ratio: 1:100 (Ops/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 635238.89; b: 549095.84; a: 521281.07. SE +/- 16580.85 and +/- 24782.09, N = 2. Built with g++: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre.

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, more is better): a: 15836698.9; AMD Renoir - AMD Ryzen 5 4500U: 13387890.2.

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500

b: Test failed to run.

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
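The "Video Preset" values in the charts below correspond to the encoder's preset option; a hypothetical vvencapp invocation (the input filename is a placeholder, and flag spellings should be checked against vvencapp --help):

```shell
# Encode a raw 1080p YUV clip with the "fast" preset
vvencapp --preset fast -i Bosphorus_1080p.yuv -s 1920x1080 -o out.266
```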

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, more is better): AMD Renoir - AMD Ryzen 5 4500U: 4.108; b: 3.970; a: 3.482. SE +/- 0.045 and +/- 0.066, N = 2. Built with g++: -O3 -flto -fno-fat-lto-objects -flto=auto.

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, more is better): AMD Renoir - AMD Ryzen 5 4500U: 6.081; b: 5.828; a: 5.164. SE +/- 0.061 and +/- 0.105, N = 2. Built with g++: -O3 -flto -fno-fat-lto-objects -flto=auto.

Dragonflydb


Dragonflydb 1.6.2 - Clients Per Thread: 50 - Set To Get Ratio: 1:100 (Ops/sec, more is better): b: 710309.00; AMD Renoir - AMD Ryzen 5 4500U: 652808.07; a: 603815.39. SE +/- 25536.45 and +/- 10719.53, N = 2. Built with g++: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre.

NCNN

NCNN is a high-performance neural-network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better): a: 4.43 (MIN 3.95 / MAX 19.63); AMD Renoir - AMD Ryzen 5 4500U: 4.52 (MIN 3.96 / MAX 20.36); b: 5.21 (MIN 4.05 / MAX 23.72). SE +/- 0.27 and +/- 0.10, N = 2. Built with g++: -O3 -rdynamic -lgomp -lpthread.

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.36 - VGR Performance Metric (more is better): b: 62517; AMD Renoir - AMD Ryzen 5 4500U: 62248; a: 53415. SE +/- 83.00 and +/- 2174.50, N = 2. Built with g++: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6.

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 337.16 (MAX: 5976.26); a: 288.87 (MAX: 3332.52).

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500

b: Test failed to run.

VVenC


VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, more is better): AMD Renoir - AMD Ryzen 5 4500U: 1.817; b: 1.802; a: 1.561. SE +/- 0.031 and +/- 0.056, N = 2. Built with g++: -O3 -flto -fno-fat-lto-objects -flto=auto.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 10.2327; a: 9.0426; b: 8.8655. SE +/- 0.0180 and +/- 0.0693, N = 2.

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 97.71; a: 110.58; b: 112.78. SE +/- 0.17 and +/- 0.85, N = 2.

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 15.36; b: 15.30; AMD Renoir - AMD Ryzen 5 4500U: 13.31. SE +/- 1.07 and +/- 0.68, N = 2.

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): b: 195.84; a: 196.05; AMD Renoir - AMD Ryzen 5 4500U: 225.73. SE +/- 13.53 and +/- 11.53, N = 2.

NCNN


NCNN 20230517 - Target: CPU - Model: blazeface (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 1.26 (MIN 1.19 / MAX 5.91); b: 1.36 (MIN 1.24 / MAX 7.56); a: 1.45 (MIN 1.27 / MAX 7.7). SE +/- 0.01 and +/- 0.06, N = 2. Built with g++: -O3 -rdynamic -lgomp -lpthread.

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, more is better): a: 16236661.97; AMD Renoir - AMD Ryzen 5 4500U: 14164673.03.

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200

b: Test failed to run.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): b: 296.57; a: 302.46; AMD Renoir - AMD Ryzen 5 4500U: 336.98. SE +/- 7.05 and +/- 2.66, N = 2.

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better): b: 3.3718; a: 3.3080; AMD Renoir - AMD Ryzen 5 4500U: 2.9676. SE +/- 0.0772 and +/- 0.0234, N = 2.

NCNN


NCNN 20230517 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better): b: 1.25 (MIN 1.22 / MAX 1.35); a: 1.34 (MIN 1.25 / MAX 5.84); AMD Renoir - AMD Ryzen 5 4500U: 1.42 (MIN 1.21 / MAX 42.23). SE +/- 0.03 and +/- 0.17, N = 2. Built with g++: -O3 -rdynamic -lgomp -lpthread.

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) open-source compiler. Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 13.2 - Time To Compile (Seconds, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 2409.24; b: 2604.39; a: 2725.00. SE +/- 46.09 and +/- 9.38, N = 2.

Dragonflydb


Dragonflydb 1.6.2 - Clients Per Thread: 50 - Set To Get Ratio: 1:10 (Ops/sec, more is better): b: 649063.96; a: 634736.01; AMD Renoir - AMD Ryzen 5 4500U: 576996.27. SE +/- 23316.32 and +/- 20222.31, N = 2. Built with g++: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre.

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, more is better): a: 15789875.81; AMD Renoir - AMD Ryzen 5 4500U: 14130374.99.

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200

b: Test failed to run.

Dragonflydb


Dragonflydb 1.6.2 - Clients Per Thread: 60 - Set To Get Ratio: 1:100 (Ops/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 692262.13; a: 672849.08; b: 624768.18. SE +/- 81324.68 and +/- 33803.70, N = 2. Built with g++: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre.

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, more is better): AMD Renoir - AMD Ryzen 5 4500U: 14.67; b: 14.53; a: 13.36. SE +/- 0.11 and +/- 0.39, N = 2. Built with g++: -O3 -flto -fno-fat-lto-objects -flto=auto.

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 118.24 (MAX: 4097.41); a: 107.71 (MAX: 2081.88).

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200

b: Test failed to run.

Dragonflydb


Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:5 (Ops/sec, more is better): a: 560321.90; AMD Renoir - AMD Ryzen 5 4500U: 514823.83; b: 511381.84. SE +/- 4625.99 and +/- 7066.41, N = 2. Built with g++: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre.

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, more is better): a: 13117829.31; AMD Renoir - AMD Ryzen 5 4500U: 11981302.92.

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500

b: Test failed to run.

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 113.87 (MAX: 5638.35); a: 104.65 (MAX: 2607.58).

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200

b: Test failed to run.

Dragonflydb


Dragonflydb 1.6.2 - Clients Per Thread: 60 - Set To Get Ratio: 1:5 (Ops/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 673078.78; a: 639312.80; b: 620626.04. SE +/- 2499.83 and +/- 5949.45, N = 2. Built with g++: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre.

vkpeak

Vkpeak is a Vulkan compute benchmark inspired by OpenCL's clpeak. Vkpeak provides Vulkan compute performance measurements for FP16 / FP32 / FP64 / INT16 / INT32 scalar and vec4 performance. Learn more via the OpenBenchmarking.org test page.
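vkpeak itself takes only an optional Vulkan device index on the command line; a minimal invocation sketch:

```shell
# Run the vkpeak compute tests on the first Vulkan device (index 0)
vkpeak 0
```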

vkpeak 20230730 - fp64-vec4 (GFLOPS, more is better): a: 80.60; AMD Renoir - AMD Ryzen 5 4500U: 79.70; b: 74.40. SE +/- 3.81 and +/- 3.03, N = 2.

NCNN


NCNN 20230517 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 4.81 (MIN 3.93 / MAX 21.18); a: 5.00 (MIN 3.96 / MAX 21.02); b: 5.19 (MIN 4.05 / MAX 19.52). SE +/- 0.08 and +/- 0.11, N = 2. Built with g++: -O3 -rdynamic -lgomp -lpthread.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): b: 34.96; a: 35.90; AMD Renoir - AMD Ryzen 5 4500U: 37.60. SE +/- 0.34 and +/- 0.48, N = 2.

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better): b: 28.59; a: 27.85; AMD Renoir - AMD Ryzen 5 4500U: 26.59. SE +/- 0.26 and +/- 0.34, N = 2.

NCNN


NCNN 20230517 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): a: 5.95 (MIN 5.17 / MAX 18.42); AMD Renoir - AMD Ryzen 5 4500U: 6.37 (MIN 5.15 / MAX 27.13); b: 6.39 (MIN 5.27 / MAX 18.54). SE +/- 0.36 and +/- 0.17, N = 2. Built with g++: -O3 -rdynamic -lgomp -lpthread.

NCNN 20230517 - Target: CPU - Model: mnasnet (ms, fewer is better): a: 5.88 (MIN 5.21 / MAX 22.5); AMD Renoir - AMD Ryzen 5 4500U: 5.93 (MIN 5.06 / MAX 22.98); b: 6.31 (MIN 5.18 / MAX 18.06). SE +/- 0.25 and +/- 0.36, N = 2. Built with g++: -O3 -rdynamic -lgomp -lpthread.

NCNN 20230517 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better): a: 5.51 (MIN 5.23 / MAX 42.79); AMD Renoir - AMD Ryzen 5 4500U: 5.81 (MIN 5 / MAX 22.14); b: 5.91 (MIN 5.25 / MAX 22.88). SE +/- 0.14 and +/- 0.30, N = 2. Built with g++: -O3 -rdynamic -lgomp -lpthread.

Dragonflydb


Dragonflydb 1.6.2 - Clients Per Thread: 20 - Set To Get Ratio: 1:5 (Ops/sec, more is better): b: 595869.89; AMD Renoir - AMD Ryzen 5 4500U: 587975.21; a: 555633.33. SE +/- 15736.96 and +/- 13564.23, N = 2. Built with g++: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre.

NCNN


NCNN 20230517 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 15.17 (MIN 14.49 / MAX 32.36); a: 15.19 (MIN 14.46 / MAX 30.44); b: 16.19 (MIN 14.7 / MAX 65.71). SE +/- 0.02 and +/- 0.09, N = 2. Built with g++: -O3 -rdynamic -lgomp -lpthread.

vkpeak


vkpeak 20230730 - fp64-scalar (GFLOPS, more is better): a: 59.69; AMD Renoir - AMD Ryzen 5 4500U: 59.63; b: 56.03. SE +/- 2.11 and +/- 2.16, N = 2.

vkpeak 20230730 - int32-scalar (GIOPS, more is better): a: 70.92; AMD Renoir - AMD Ryzen 5 4500U: 70.87; b: 66.78. SE +/- 1.69 and +/- 2.27, N = 2.

vkpeak 20230730 — int16-scalar (GIOPS, more is better): a: 89.32 · AMD Renoir - AMD Ryzen 5 4500U: 89.15 · b: 84.22

Apache IoTDB

Apache IoTDB 1.1.2 — Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency): AMD Renoir - AMD Ryzen 5 4500U: 375.23 · a: 354.12

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500

b: Test failed to run.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 239.35 · b: 250.62 · a: 253.17

Neural Magic DeepSparse 1.5 — Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (items/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 4.1777 · b: 3.9900 · a: 3.9498

Neural Magic DeepSparse 1.5 — Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): b: 70.30 · AMD Renoir - AMD Ryzen 5 4500U: 73.59 · a: 74.22

Neural Magic DeepSparse 1.5 — Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better): b: 42.64 · AMD Renoir - AMD Ryzen 5 4500U: 40.72 · a: 40.39

Dragonflydb

Dragonflydb 1.6.2 — Clients Per Thread: 20 - Set To Get Ratio: 1:10 (Ops/sec, more is better): a: 619544.49 · AMD Renoir - AMD Ryzen 5 4500U: 602510.40 · b: 587558.64

Apache IoTDB

Apache IoTDB 1.1.2 — Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, more is better): a: 11896529.96 · AMD Renoir - AMD Ryzen 5 4500U: 11298528.21

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200

b: Test failed to run.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 25.63 · AMD Renoir - AMD Ryzen 5 4500U: 24.59 · b: 24.37

Neural Magic DeepSparse 1.5 — Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 39.01 · AMD Renoir - AMD Ryzen 5 4500U: 40.66 · b: 41.01

vkpeak

vkpeak 20230730 — int16-vec4 (GIOPS, more is better): a: 229.38 · AMD Renoir - AMD Ryzen 5 4500U: 228.89 · b: 218.43

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
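
The "Bulk Size" in these runs is the number of documents posted per request to CouchDB's `_bulk_docs` endpoint (`POST /{db}/_bulk_docs`). A minimal sketch of building such a request body in stdlib Python (database name and document contents are illustrative):

```python
import json

def bulk_docs_body(docs):
    """Payload for POST /{db}/_bulk_docs: CouchDB inserts every
    document in the "docs" array in a single request."""
    return json.dumps({"docs": docs})

# A "Bulk Size: 100" round posts 100 documents per request.
body = bulk_docs_body([{"_id": "doc-%d" % i, "value": i} for i in range(100)])
print(len(json.loads(body)["docs"]))   # 100
```

A run with Inserts: 3000 therefore issues 30 such requests per round at bulk size 100, or 6 at bulk size 500.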

Apache CouchDB 3.3.2 — Bulk Size: 100 - Inserts: 3000 - Rounds: 30 (Seconds, fewer is better): a: 493.65 · AMD Renoir - AMD Ryzen 5 4500U: 512.40 · b: 517.79

NCNN

NCNN 20230517 — Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): a: 7.67 · AMD Renoir - AMD Ryzen 5 4500U: 7.89 · b: 8.02

Dragonflydb

Dragonflydb 1.6.2 — Clients Per Thread: 10 - Set To Get Ratio: 1:100 (Ops/sec, more is better): a: 538437.76 · AMD Renoir - AMD Ryzen 5 4500U: 517858.51 · b: 515626.76

vkpeak

vkpeak 20230730 — int32-vec4 (GIOPS, more is better): a: 163.19 · AMD Renoir - AMD Ryzen 5 4500U: 163.13 · b: 156.50

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): b: 67.15 · AMD Renoir - AMD Ryzen 5 4500U: 67.78 · a: 69.99

Neural Magic DeepSparse 1.5 — Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 90.26 · b: 90.48 · a: 94.04

Apache IoTDB

Apache IoTDB 1.1.2 — Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 465319.75 · a: 446723.62

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200

b: Test failed to run.

NCNN

NCNN 20230517 — Target: CPU - Model: efficientnet-b0 (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 11.41 · a: 11.64 · b: 11.87

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 33.21 · b: 33.12 · a: 31.93

Neural Magic DeepSparse 1.5 — Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better): b: 14.89 · AMD Renoir - AMD Ryzen 5 4500U: 14.75 · a: 14.32

Dragonflydb

Dragonflydb 1.6.2 — Clients Per Thread: 10 - Set To Get Ratio: 1:10 (Ops/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 536437.47 · a: 517164.96 · b: 515883.69

NCNN

NCNN 20230517 — Target: CPU - Model: regnety_400m (ms, fewer is better): a: 9.58 · AMD Renoir - AMD Ryzen 5 4500U: 9.83 · b: 9.94

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 19.00 · b: 18.45 · a: 18.33

Neural Magic DeepSparse 1.5 — Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 52.62 · b: 54.18 · a: 54.53

Apache CouchDB

Apache CouchDB 3.3.2 — Bulk Size: 100 - Inserts: 1000 - Rounds: 30 (Seconds, fewer is better): a: 155.07 · AMD Renoir - AMD Ryzen 5 4500U: 157.99 · b: 160.43

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 275.04 · a: 279.97 · b: 284.33

vkpeak

vkpeak 20230730 — fp16-vec4 (GFLOPS, more is better): a: 4.01 · AMD Renoir - AMD Ryzen 5 4500U: 4.00 · b: 3.88

NCNN

NCNN 20230517 — Target: Vulkan GPU - Model: googlenet (ms, fewer is better): a: 20.33 · AMD Renoir - AMD Ryzen 5 4500U: 20.46 · b: 21.01

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 10.89 · a: 10.70 · b: 10.54

Neural Magic DeepSparse 1.5 — Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 51.53 · a: 52.24 · b: 53.23

Neural Magic DeepSparse 1.5 — Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 19.40 · a: 19.14 · b: 18.78

NCNN

NCNN 20230517 — Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 9.65 · a: 9.88 · b: 9.96

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): b: 3.5558 · a: 3.5012 · AMD Renoir - AMD Ryzen 5 4500U: 3.4493

NCNN

NCNN 20230517 — Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better): b: 17.87 · a: 18.31 · AMD Renoir - AMD Ryzen 5 4500U: 18.42

Dragonflydb

Dragonflydb 1.6.2 — Clients Per Thread: 50 - Set To Get Ratio: 1:5 (Ops/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 618630.54 · a: 602213.78 · b: 600771.43

NCNN

NCNN 20230517 — Target: CPU - Model: googlenet (ms, fewer is better): a: 20.66 · AMD Renoir - AMD Ryzen 5 4500U: 20.95 · b: 21.27

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 20.70 · a: 20.40 · b: 20.12

Neural Magic DeepSparse 1.5 — Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 144.78 · a: 146.86 · b: 148.87

NCNN

NCNN 20230517 — Target: CPU - Model: squeezenet_ssd (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 18.00 · b: 18.40 · a: 18.50

NCNN 20230517 — Target: Vulkan GPU - Model: FastestDet (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 5.09 · a: 5.11 · b: 5.23

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): b: 842.48 · a: 851.96 · AMD Renoir - AMD Ryzen 5 4500U: 865.64

Neural Magic DeepSparse 1.5 — Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 30.38 · b: 29.97 · AMD Renoir - AMD Ryzen 5 4500U: 29.57

Apache IoTDB

Apache IoTDB 1.1.2 — Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 6836941.96 · a: 6656162.13

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500

b: Test failed to run.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 98.69 · b: 100.02 · AMD Renoir - AMD Ryzen 5 4500U: 101.34

NCNN

NCNN 20230517 — Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): a: 7.41 · AMD Renoir - AMD Ryzen 5 4500U: 7.43 · b: 7.60

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): b: 15.20 · a: 15.45 · AMD Renoir - AMD Ryzen 5 4500U: 15.57

Neural Magic DeepSparse 1.5 — Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 25.13 · b: 25.26 · a: 25.75

Neural Magic DeepSparse 1.5 — Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better): b: 65.75 · a: 64.70 · AMD Renoir - AMD Ryzen 5 4500U: 64.19

Neural Magic DeepSparse 1.5 — Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 39.77 · b: 39.58 · a: 38.83

Apache CouchDB

Apache CouchDB 3.3.2 — Bulk Size: 300 - Inserts: 1000 - Rounds: 30 (Seconds, fewer is better): a: 277.68 · b: 280.54 · AMD Renoir - AMD Ryzen 5 4500U: 284.35

NCNN

NCNN 20230517 — Target: Vulkan GPU - Model: vision_transformer (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 240.25 · a: 241.97 · b: 245.80

Apache IoTDB

Apache IoTDB 1.1.2 — Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency): a: 42.32 · AMD Renoir - AMD Ryzen 5 4500U: 41.38

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500

b: Test failed to run.

NCNN

NCNN 20230517 — Target: CPU - Model: vision_transformer (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 237.12 · b: 241.22 · a: 242.25

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 32.15 · a: 32.65 · b: 32.84

Neural Magic DeepSparse 1.5 — Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 31.09 · a: 30.62 · b: 30.44

Apache CouchDB

Apache CouchDB 3.3.2 — Bulk Size: 300 - Inserts: 3000 - Rounds: 30 (Seconds, fewer is better): a: 878.65 · AMD Renoir - AMD Ryzen 5 4500U: 891.19 · b: 896.38

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 280.42 · b: 285.37 · AMD Renoir - AMD Ryzen 5 4500U: 285.87

Neural Magic DeepSparse 1.5 — Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 3.5660 · b: 3.5041 · AMD Renoir - AMD Ryzen 5 4500U: 3.4981

NCNN

NCNN 20230517 — Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better): b: 40.23 · AMD Renoir - AMD Ryzen 5 4500U: 40.38 · a: 41.00

Apache IoTDB

Apache IoTDB 1.1.2 — Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency): a: 720.44 · AMD Renoir - AMD Ryzen 5 4500U: 706.95

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500

b: Test failed to run.

NCNN

NCNN 20230517 — Target: CPU - Model: FastestDet (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 5.05 · b: 5.13 · a: 5.14

Apache IoTDB

Apache IoTDB 1.1.2 — Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec, more is better): a: 870872.73 · AMD Renoir - AMD Ryzen 5 4500U: 855795.41

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200

b: Test failed to run.

Apache CouchDB

Apache CouchDB 3.3.2 — Bulk Size: 500 - Inserts: 1000 - Rounds: 30 (Seconds, fewer is better): a: 401.31 · b: 404.39 · AMD Renoir - AMD Ryzen 5 4500U: 407.85

Apache IoTDB

Apache IoTDB 1.1.2 — Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency): a: 71.38 · AMD Renoir - AMD Ryzen 5 4500U: 70.26

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500

b: Test failed to run.

Apache IoTDB 1.1.2 — Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency): AMD Renoir - AMD Ryzen 5 4500U: 19.64 · a: 19.34

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200

b: Test failed to run.

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.1.3 — Test: Writes (Op/s, more is better): b: 23535 · a: 23236 · AMD Renoir - AMD Ryzen 5 4500U: 23180

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better): b: 21.21 · AMD Renoir - AMD Ryzen 5 4500U: 21.19 · a: 20.90

Neural Magic DeepSparse 1.5 — Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): b: 141.30 · AMD Renoir - AMD Ryzen 5 4500U: 141.31 · a: 143.26

Neural Magic DeepSparse 1.5 — Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): b: 35.40 · a: 35.50 · AMD Renoir - AMD Ryzen 5 4500U: 35.86

Neural Magic DeepSparse 1.5 — Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): b: 84.60 · a: 84.37 · AMD Renoir - AMD Ryzen 5 4500U: 83.54

NCNN

NCNN 20230517 — Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better): a: 11.30 · AMD Renoir - AMD Ryzen 5 4500U: 11.43 · b: 11.44

Apache IoTDB

Apache IoTDB 1.1.2 — Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 994052.25 · a: 982415.93

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500

b: Test failed to run.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 3.6825 · b: 3.6482 · AMD Renoir - AMD Ryzen 5 4500U: 3.6397

NCNN

NCNN 20230517 — Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 6.34 · a: 6.40 · b: 6.41

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 717.18 · b: 718.17 · a: 723.99

NCNN

NCNN 20230517 — Target: Vulkan GPU - Model: resnet50 (ms, fewer is better): b: 35.17 · AMD Renoir - AMD Ryzen 5 4500U: 35.24 · a: 35.50

Dragonflydb

Dragonflydb 1.6.2 — Clients Per Thread: 60 - Set To Get Ratio: 1:10 (Ops/sec, more is better): a: 638619.97 · b: 636341.63 · AMD Renoir - AMD Ryzen 5 4500U: 632802.22

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample input file is upscaling a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.
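
The FFT-based upscaling VkResample performs rests on a standard identity: zero-padding the middle of a signal's spectrum raises the sample count while the original samples reappear at even output indices. A 1-D sketch of that idea in pure Python (naive DFT, purely illustrative — the library itself runs 2-D transforms on the GPU via VkFFT):

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spec):
    """Inverse transform, normalized by the length."""
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def upscale2x(x):
    """Insert zeros into the middle of the spectrum to double the
    resolution; original samples land on the even output indices."""
    spec = dft(x)
    half = len(x) // 2
    padded = [2 * v for v in spec[:half]] + [0] * len(x) + [2 * v for v in spec[half:]]
    return [v.real for v in idft(padded)]

signal = [0.0, 1.0, 2.0, 1.0]
up = upscale2x(signal)
print([round(up[2 * i], 6) + 0.0 for i in range(4)])   # [0.0, 1.0, 2.0, 1.0]
```

The benchmarked 4K-to-8K upscale is the 2-D analogue of this doubling, with single or half precision controlling the transform's arithmetic.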

VkResample 1.0 — Upscale: 2x - Precision: Single (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 55.25 · a: 55.60 · b: 55.73

NCNN

NCNN 20230517 — Target: CPU - Model: mobilenet (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 26.79 · a: 26.92 · b: 27.02

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 809.27 · b: 815.38 · AMD Renoir - AMD Ryzen 5 4500U: 816.05

Neural Magic DeepSparse 1.5 — Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 10.39 · a: 10.43 · b: 10.47

Neural Magic DeepSparse 1.5 — Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): AMD Renoir - AMD Ryzen 5 4500U: 288.08 · a: 286.90 · b: 285.74

NCNN

NCNN 20230517 — Target: CPU - Model: resnet18 (ms, fewer is better): a: 15.35 · b: 15.36 · AMD Renoir - AMD Ryzen 5 4500U: 15.47

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 — Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 67.67 · b: 67.85 · a: 68.18

Apache CouchDB

Apache CouchDB 3.3.2 — Bulk Size: 500 - Inserts: 3000 - Rounds: 30 (Seconds, fewer is better): a: 1252.60 · AMD Renoir - AMD Ryzen 5 4500U: 1254.98 · b: 1261.98

NCNN

NCNN 20230517 — Target: CPU - Model: alexnet (ms, fewer is better): AMD Renoir - AMD Ryzen 5 4500U: 10.74 · b: 10.79 · a: 10.82

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  AMD Renoir - AMD Ryzen 5 4500U: 44.28
  b: 44.18
  a: 43.96
  SE +/- 0.17 / 0.09, N = 2

NCNN

NCNN 20230517, Target: CPU - Model: resnet50 (ms, fewer is better):
  a: 35.45 (MIN: 34.36 / MAX: 53)
  AMD Renoir - AMD Ryzen 5 4500U: 35.63 (MIN: 34.42 / MAX: 98.87)
  b: 35.71 (MIN: 34.37 / MAX: 52.21)
  SE +/- 0.08 / 0.25, N = 2; (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  AMD Renoir - AMD Ryzen 5 4500U: 24.53
  b: 24.64
  a: 24.70
  SE +/- 0.06 / 0.08, N = 2

Neural Magic DeepSparse 1.5, Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (items/sec, more is better):
  AMD Renoir - AMD Ryzen 5 4500U: 40.75
  b: 40.56
  a: 40.46
  SE +/- 0.10 / 0.13, N = 2

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency, fewer is better):
  a: 156.19 (MAX: 2583.42)
  AMD Renoir - AMD Ryzen 5 4500U: 155.14 (MAX: 4202.36)

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200

b: Test failed to run.

NCNN

NCNN 20230517, Target: Vulkan GPU - Model: mobilenet (ms, fewer is better):
  AMD Renoir - AMD Ryzen 5 4500U: 26.80 (MIN: 25.92 / MAX: 84.56)
  a: 26.95 (MIN: 26 / MAX: 45.37)
  b: 26.98 (MIN: 25.99 / MAX: 42.84)
  SE +/- 0.08 / 0.12, N = 2; (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517, Target: Vulkan GPU - Model: vgg16 (ms, fewer is better):
  b: 92.98 (MIN: 90.89 / MAX: 125.87)
  AMD Renoir - AMD Ryzen 5 4500U: 93.50 (MIN: 91.41 / MAX: 107.45)
  a: 93.57 (MIN: 91.4 / MAX: 112.49)
  SE +/- 0.02 / 0.06, N = 2; (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency, fewer is better):
  a: 28.53 (MAX: 1820.94)
  AMD Renoir - AMD Ryzen 5 4500U: 28.36 (MAX: 1210)

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200

b: Test failed to run.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  AMD Renoir - AMD Ryzen 5 4500U: 4.1565
  b: 4.1495
  a: 4.1321
  SE +/- 0.0284 / 0.0054, N = 2

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, more is better):
  a: 818281.01
  AMD Renoir - AMD Ryzen 5 4500U: 813547.71

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500

b: Test failed to run.

NCNN

NCNN 20230517, Target: Vulkan GPU - Model: alexnet (ms, fewer is better):
  AMD Renoir - AMD Ryzen 5 4500U: 10.67 (MIN: 10.37 / MAX: 21.61)
  b: 10.71 (MIN: 10.41 / MAX: 19.42)
  a: 10.73 (MIN: 10.47 / MAX: 16.09)
  SE +/- 0.00 / 0.03, N = 2; (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency, fewer is better):
  a: 21.11 (MAX: 1265.96)
  AMD Renoir - AMD Ryzen 5 4500U: 21.00 (MAX: 1320.28)

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200

b: Test failed to run.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, more is better):
  AMD Renoir - AMD Ryzen 5 4500U: 4.2291
  a: 4.2141
  b: 4.2071
  SE +/- 0.0064 / 0.0081, N = 2

Neural Magic DeepSparse 1.5, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  AMD Renoir - AMD Ryzen 5 4500U: 236.44
  a: 237.28
  b: 237.68
  SE +/- 0.35 / 0.45, N = 2

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  AMD Renoir - AMD Ryzen 5 4500U: 43.98
  a: 43.81
  b: 43.76
  SE +/- 0.03 / 0.07, N = 2

vkpeak

Vkpeak is a Vulkan compute benchmark inspired by OpenCL's clpeak. Vkpeak provides Vulkan compute performance measurements for FP16 / FP32 / FP64 / INT16 / INT32 scalar and vec4 performance. Learn more via the OpenBenchmarking.org test page.
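The conversion from raw throughput to a GFLOPS figure follows a common clpeak-style convention, which is assumed here rather than taken from vkpeak's source: each fused multiply-add (FMA) counts as two floating-point operations, and a vec4 operation processes four lanes per instruction. A minimal sketch of that arithmetic:

```python
def gflops(fma_count, lanes, seconds):
    # FMA = multiply + add = 2 floating-point ops; `lanes` is 1 for
    # scalar kernels, 4 for vec4. Figures below are illustrative only.
    return fma_count * lanes * 2 / seconds / 1e9

# e.g. 1e9 vec4 FMAs completed in 0.5 s:
print(gflops(1_000_000_000, 4, 0.5))  # -> 16.0
```

Under this convention the roughly 2.8x gap between the fp32-vec4 and fp32-scalar results above reflects how well the shader compiler can keep all four lanes busy, not a difference in clock speed.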

vkpeak 20230730, fp32-vec4 (GFLOPS, more is better):
  a: 177.28
  AMD Renoir - AMD Ryzen 5 4500U: 176.83
  b: 176.39
  SE +/- 2.40 / 2.79, N = 2

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, more is better):
  a: 640326.81
  AMD Renoir - AMD Ryzen 5 4500U: 637397.90

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500

b: Test failed to run.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  AMD Renoir - AMD Ryzen 5 4500U: 68.14
  a: 68.41
  b: 68.43
  SE +/- 0.06 / 0.12, N = 2

NCNN

NCNN 20230517, Target: CPU - Model: vgg16 (ms, fewer is better):
  a: 93.43 (MIN: 91.03 / MAX: 140.17)
  b: 93.69 (MIN: 91.65 / MAX: 138.85)
  AMD Renoir - AMD Ryzen 5 4500U: 93.72 (MIN: 91.26 / MAX: 109.98)
  SE +/- 0.16 / 0.09, N = 2; (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec, more is better):
  a: 686458.89
  AMD Renoir - AMD Ryzen 5 4500U: 684869.04

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200

b: Test failed to run.

Apache IoTDB 1.1.2, Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency, fewer is better):
  a: 45.12 (MAX: 1423.03)
  AMD Renoir - AMD Ryzen 5 4500U: 45.06 (MAX: 1511.68)

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500

b: Test failed to run.

NCNN

NCNN 20230517, Target: CPU - Model: yolov4-tiny (ms, fewer is better):
  AMD Renoir - AMD Ryzen 5 4500U: 40.64 (MIN: 39.51 / MAX: 87.1)
  b: 40.66 (MIN: 39.41 / MAX: 82.29)
  a: 40.68 (MIN: 39.56 / MAX: 80.12)
  SE +/- 0.30 / 0.11, N = 2; (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

vkpeak

vkpeak 20230730, fp32-scalar (GFLOPS, more is better):
  b: 62.25
  a: 62.21
  AMD Renoir - AMD Ryzen 5 4500U: 62.20
  SE +/- 0.05 / 0.00, N = 2

vkpeak 20230730, fp16-scalar (GFLOPS, more is better):
  AMD Renoir - AMD Ryzen 5 4500U: 4.21
  b: 4.21
  a: 4.21
  SE +/- 0.01 / 0.01, N = 2

154 Results Shown

Neural Magic DeepSparse:
  ResNet-50, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    items/sec
    ms/batch
Dragonflydb
Apache IoTDB
VVenC:
  Bosphorus 4K - Faster
  Bosphorus 1080p - Fast
Dragonflydb
NCNN
BRL-CAD
Apache IoTDB
VVenC
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
NCNN
Apache IoTDB
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
NCNN
Timed GCC Compilation
Dragonflydb
Apache IoTDB
Dragonflydb
VVenC
Apache IoTDB
Dragonflydb
Apache IoTDB:
  200 - 100 - 500
  100 - 100 - 200
Dragonflydb
vkpeak
NCNN
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
NCNN:
  Vulkan GPU-v3-v3 - mobilenet-v3
  CPU - mnasnet
  Vulkan GPU - mnasnet
Dragonflydb
NCNN
vkpeak:
  fp64-scalar
  int32-scalar
  int16-scalar
Apache IoTDB
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Dragonflydb
Apache IoTDB
Neural Magic DeepSparse:
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream:
    items/sec
    ms/batch
vkpeak
Apache CouchDB
NCNN
Dragonflydb
vkpeak
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
Apache IoTDB
NCNN
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream
Dragonflydb
NCNN
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
Apache CouchDB
Neural Magic DeepSparse
vkpeak
NCNN
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream
NCNN
Neural Magic DeepSparse
NCNN
Dragonflydb
NCNN
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
NCNN:
  CPU - squeezenet_ssd
  Vulkan GPU - FastestDet
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream
Apache IoTDB
Neural Magic DeepSparse
NCNN
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream
Apache CouchDB
NCNN
Apache IoTDB
NCNN
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
Apache CouchDB
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
NCNN
Apache IoTDB
NCNN
Apache IoTDB
Apache CouchDB
Apache IoTDB:
  500 - 1 - 500
  500 - 1 - 200
Apache Cassandra
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
NCNN
Apache IoTDB
Neural Magic DeepSparse
NCNN
Neural Magic DeepSparse
NCNN
Dragonflydb
VkResample
NCNN
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream
NCNN
Neural Magic DeepSparse
Apache CouchDB
NCNN
Neural Magic DeepSparse
NCNN
Neural Magic DeepSparse:
  ResNet-50, Baseline - Synchronous Single-Stream:
    ms/batch
    items/sec
Apache IoTDB
NCNN:
  Vulkan GPU - mobilenet
  Vulkan GPU - vgg16
Apache IoTDB
Neural Magic DeepSparse
Apache IoTDB
NCNN
Apache IoTDB
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
vkpeak
Apache IoTDB
Neural Magic DeepSparse
NCNN
Apache IoTDB:
  200 - 1 - 200
  100 - 1 - 500
NCNN
vkpeak:
  fp32-scalar
  fp16-scalar