AMD Ryzen 5 4500U testing with a LENOVO LNVNB161216 (EECN20WW BIOS) and AMD Renoir 512MB on Pop 22.04 via the Phoronix Test Suite.
a Processor: AMD Ryzen 5 4500U @ 2.38GHz (6 Cores), Motherboard: LENOVO LNVNB161216 (EECN20WW BIOS), Chipset: AMD Renoir/Cezanne, Memory: 16GB, Disk: 256GB SK hynix HFM256GDHTNI-87A0B, Graphics: AMD Renoir 512MB (1500/400MHz), Audio: AMD Renoir Radeon HD Audio, Network: Realtek RTL8822CE 802.11ac PCIe
OS: Pop 22.04, Kernel: 5.17.5-76051705-generic (x86_64), Desktop: GNOME Shell 42.1, Display Server: X Server 1.21.1.3, OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.44), Vulkan: 1.2.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - Platform Profile: balanced - CPU Microcode: 0x8600102 - ACPI Profile: balanced
Graphics Notes: GLAMOR - BAR1 / Visible vRAM Size: 512 MB - vBIOS Version: 113-RENOIR-025
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.6
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected
b: Changed Graphics to AMD Renoir 512MB (1500MHz).
AMD Renoir - AMD Ryzen 5 4500U: Changed Graphics to AMD Renoir 512MB (1500/400MHz).
Auggy Benchmarks (auggy) - OpenBenchmarking.org - Phoronix Test Suite
Result Overview - a vs. b vs. AMD Renoir - AMD Ryzen 5 4500U (Phoronix Test Suite): BRL-CAD, VVenC, Timed GCC Compilation, vkpeak, Apache CouchDB, Dragonflydb, Neural Magic DeepSparse, NCNN, Apache Cassandra, VkResample
Combined results table for configurations a, b, and AMD Renoir - AMD Ryzen 5 4500U across all tests (Neural Magic DeepSparse, Dragonflydb, Apache IoTDB, VVenC, NCNN, BRL-CAD, Timed GCC Compilation, vkpeak, Apache CouchDB, Apache Cassandra, VkResample); the individual results are presented per test below.
Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 0.0017 / 0.0029, N = 2)
a: 6.3938; b: 6.7742; AMD Renoir - AMD Ryzen 5 4500U: 4.7072

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.03 / 0.14, N = 2)
a: 155.96; b: 147.22; AMD Renoir - AMD Ryzen 5 4500U: 211.81

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better; SE +/- 0.0029 / 0.0053, N = 2)
a: 4.7561; b: 3.6558; AMD Renoir - AMD Ryzen 5 4500U: 4.7697

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better; SE +/- 0.37 / 0.89, N = 2)
a: 628.08; b: 814.77; AMD Renoir - AMD Ryzen 5 4500U: 625.28
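The DeepSparse charts report each configuration twice, once as ms/batch and once as items/sec. For the synchronous single-stream scenario the two figures are essentially reciprocals of one another, as the minimal Python sketch below illustrates; run_batch() is a hypothetical stand-in for a single engine invocation, not the DeepSparse API itself.

    # Sketch of how ms/batch and items/sec relate in a synchronous single-stream run.
    # run_batch() is a hypothetical stand-in for one inference call, not DeepSparse's API.
    import time

    def run_batch():
        time.sleep(0.0064)  # pretend one batch takes ~6.4 ms, roughly result "a" above

    ITERATIONS = 100
    BATCH_SIZE = 1  # single-stream: one item in flight at a time

    start = time.perf_counter()
    for _ in range(ITERATIONS):
        run_batch()
    elapsed = time.perf_counter() - start

    ms_per_batch = elapsed / ITERATIONS * 1000.0
    items_per_sec = ITERATIONS * BATCH_SIZE / elapsed
    print(f"{ms_per_batch:.4f} ms/batch -> {items_per_sec:.2f} items/sec")

For example, configuration a's 6.3938 ms/batch corresponds to roughly 1000 / 6.3938, or about 156 items/sec, matching its 155.96 items/sec result; in the asynchronous multi-stream scenario several streams overlap, so the two units are no longer simple reciprocals.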
Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 20 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better; SE +/- 24782.09 / 16580.85, N = 2)
a: 521281.07; b: 549095.84; AMD Renoir - AMD Ryzen 5 4500U: 635238.89
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
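As a rough illustration of the Set To Get Ratio parameter, the following Python sketch drives a Redis-protocol server (such as Dragonfly, assumed here to be listening on localhost:6379) with approximately one SET per hundred GETs using the redis-py client. This is only a single-connection approximation, not the multi-threaded, pipelined memtier_benchmark workload behind the figures above.

    # Rough single-connection approximation of a 1:100 SET:GET mix against a
    # Redis-protocol server such as Dragonfly; localhost:6379 is an assumption.
    import time
    import redis  # pip install redis

    client = redis.Redis(host="localhost", port=6379)

    TOTAL_OPS = 10_000
    start = time.perf_counter()
    for i in range(TOTAL_OPS):
        key = f"key:{i % 1000}"
        if i % 101 == 0:
            client.set(key, "x" * 32)   # ~1 write ...
        else:
            client.get(key)             # ... per ~100 reads (may miss early on)
    elapsed = time.perf_counter() - start

    print(f"{TOTAL_OPS / elapsed:,.0f} ops/sec (unpipelined, one connection)")

memtier_benchmark reaches the Ops/sec figures shown above by running many client connections per thread and pipelining requests, so a plain loop like this will report far lower numbers.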
Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, More Is Better)
a: 15836698.9; AMD Renoir - AMD Ryzen 5 4500U: 13387890.2
b: Test failed to run.
VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better; SE +/- 0.066 / 0.045, N = 2)
a: 3.482; b: 3.970; AMD Renoir - AMD Ryzen 5 4500U: 4.108
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, More Is Better; SE +/- 0.105 / 0.061, N = 2)
a: 5.164; b: 5.828; AMD Renoir - AMD Ryzen 5 4500U: 6.081
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
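The VVenC figures are frames per second for a fixed test clip, i.e. frame count divided by wall-clock encode time. Below is a minimal sketch of deriving such a figure by timing an encoder process; the vvencapp invocation and input file are illustrative assumptions only (consult vvencapp --help for the actual option names of your build), and the test profile itself may compute FPS differently.

    # Sketch: derive a frames-per-second figure by timing an encode run.
    # The command line and input file below are illustrative assumptions only;
    # check `vvencapp --help` for the real options of your vvenc build.
    import subprocess
    import time

    FRAME_COUNT = 600  # assumed number of frames in the raw test clip
    cmd = ["vvencapp", "--preset", "faster", "-i", "input_1080p.yuv",
           "-s", "1920x1080", "-o", "bitstream.266"]

    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    elapsed = time.perf_counter() - start

    print(f"{FRAME_COUNT / elapsed:.3f} FPS")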
Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 50 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better; SE +/- 10719.53 / 25536.45, N = 2)
a: 603815.39; b: 710309.00; AMD Renoir - AMD Ryzen 5 4500U: 652808.07
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, Fewer Is Better; SE +/- 0.27 / 0.10, N = 2)
a: 4.43 (MIN: 3.95 / MAX: 19.63); b: 5.21 (MIN: 4.05 / MAX: 23.72); AMD Renoir - AMD Ryzen 5 4500U: 4.52 (MIN: 3.96 / MAX: 20.36)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
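The NCNN numbers are per-inference latencies in milliseconds, with the MIN/MAX annotations giving the fastest and slowest observed runs. Below is a sketch of collecting such timings with NCNN's Python bindings; the model files, blob names, and input shape are placeholders, and the exact binding calls may differ between ncnn releases.

    # Sketch of per-inference latency measurement in the spirit of the NCNN results.
    # Assumes the ncnn Python bindings (pip install ncnn); the .param/.bin files and
    # the "in0"/"out0" blob names are placeholders for a real exported model.
    import time
    import numpy as np
    import ncnn

    net = ncnn.Net()
    net.load_param("shufflenet_v2.param")  # placeholder model files
    net.load_model("shufflenet_v2.bin")

    data = np.random.rand(3, 224, 224).astype(np.float32)
    timings_ms = []
    for _ in range(50):
        ex = net.create_extractor()
        start = time.perf_counter()
        ex.input("in0", ncnn.Mat(data))    # placeholder input blob name
        _, out = ex.extract("out0")        # placeholder output blob name
        timings_ms.append((time.perf_counter() - start) * 1000.0)

    avg = sum(timings_ms) / len(timings_ms)
    print(f"avg {avg:.2f} ms, MIN {min(timings_ms):.2f} / MAX {max(timings_ms):.2f}")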
BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
BRL-CAD 7.36 - VGR Performance Metric (More Is Better; SE +/- 2174.50 / 83.00, N = 2)
a: 53415; b: 62517; AMD Renoir - AMD Ryzen 5 4500U: 62248
1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency, More Is Better)
a: 288.87 (MAX: 3332.52); AMD Renoir - AMD Ryzen 5 4500U: 337.16 (MAX: 5976.26)
b: Test failed to run.
VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better; SE +/- 0.056 / 0.031, N = 2)
a: 1.561; b: 1.802; AMD Renoir - AMD Ryzen 5 4500U: 1.817
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.0693 / 0.0180, N = 2)
a: 9.0426; b: 8.8655; AMD Renoir - AMD Ryzen 5 4500U: 10.2327

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 0.85 / 0.17, N = 2)
a: 110.58; b: 112.78; AMD Renoir - AMD Ryzen 5 4500U: 97.71

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better; SE +/- 1.07 / 0.68, N = 2)
a: 15.36; b: 15.30; AMD Renoir - AMD Ryzen 5 4500U: 13.31

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better; SE +/- 13.53 / 11.53, N = 2)
a: 196.05; b: 195.84; AMD Renoir - AMD Ryzen 5 4500U: 225.73
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU - Model: blazeface (ms, Fewer Is Better; SE +/- 0.06 / 0.01, N = 2)
a: 1.45 (MIN: 1.27 / MAX: 7.7); b: 1.36 (MIN: 1.24 / MAX: 7.56); AMD Renoir - AMD Ryzen 5 4500U: 1.26 (MIN: 1.19 / MAX: 5.91)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, More Is Better)
a: 16236661.97; AMD Renoir - AMD Ryzen 5 4500U: 14164673.03
b: Test failed to run.
Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 7.05 / 2.66, N = 2)
a: 302.46; b: 296.57; AMD Renoir - AMD Ryzen 5 4500U: 336.98

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.0772 / 0.0234, N = 2)
a: 3.3080; b: 3.3718; AMD Renoir - AMD Ryzen 5 4500U: 2.9676
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: Vulkan GPU - Model: blazeface (ms, Fewer Is Better; SE +/- 0.03 / 0.17, N = 2)
a: 1.34 (MIN: 1.25 / MAX: 5.84); b: 1.25 (MIN: 1.22 / MAX: 1.35); AMD Renoir - AMD Ryzen 5 4500U: 1.42 (MIN: 1.21 / MAX: 42.23)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 50 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better; SE +/- 23316.32 / 20222.31, N = 2)
a: 634736.01; b: 649063.96; AMD Renoir - AMD Ryzen 5 4500U: 576996.27
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, More Is Better)
a: 15789875.81; AMD Renoir - AMD Ryzen 5 4500U: 14130374.99
b: Test failed to run.
Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 60 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better; SE +/- 33803.70 / 81324.68, N = 2)
a: 672849.08; b: 624768.18; AMD Renoir - AMD Ryzen 5 4500U: 692262.13
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, More Is Better; SE +/- 0.39 / 0.11, N = 2)
a: 13.36; b: 14.53; AMD Renoir - AMD Ryzen 5 4500U: 14.67
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency, More Is Better)
a: 107.71 (MAX: 2081.88); AMD Renoir - AMD Ryzen 5 4500U: 118.24 (MAX: 4097.41)
b: Test failed to run.
Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better; SE +/- 4625.99 / 7066.41, N = 2)
a: 560321.90; b: 511381.84; AMD Renoir - AMD Ryzen 5 4500U: 514823.83
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, More Is Better)
a: 13117829.31; AMD Renoir - AMD Ryzen 5 4500U: 11981302.92
b: Test failed to run.
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency, More Is Better)
a: 104.65 (MAX: 2607.58); AMD Renoir - AMD Ryzen 5 4500U: 113.87 (MAX: 5638.35)
b: Test failed to run.
Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 60 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better; SE +/- 5949.45 / 2499.83, N = 2)
a: 639312.80; b: 620626.04; AMD Renoir - AMD Ryzen 5 4500U: 673078.78
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better; SE +/- 0.11 / 0.08, N = 2)
a: 5.00 (MIN: 3.96 / MAX: 21.02); b: 5.19 (MIN: 4.05 / MAX: 19.52); AMD Renoir - AMD Ryzen 5 4500U: 4.81 (MIN: 3.93 / MAX: 21.18)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 0.34 / 0.48, N = 2)
a: 35.90; b: 34.96; AMD Renoir - AMD Ryzen 5 4500U: 37.60

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.26 / 0.34, N = 2)
a: 27.85; b: 28.59; AMD Renoir - AMD Ryzen 5 4500U: 26.59
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better; SE +/- 0.36 / 0.17, N = 2)
a: 5.95 (MIN: 5.17 / MAX: 18.42); b: 6.39 (MIN: 5.27 / MAX: 18.54); AMD Renoir - AMD Ryzen 5 4500U: 6.37 (MIN: 5.15 / MAX: 27.13)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU - Model: mnasnet (ms, Fewer Is Better; SE +/- 0.25 / 0.36, N = 2)
a: 5.88 (MIN: 5.21 / MAX: 22.5); b: 6.31 (MIN: 5.18 / MAX: 18.06); AMD Renoir - AMD Ryzen 5 4500U: 5.93 (MIN: 5.06 / MAX: 22.98)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: Vulkan GPU - Model: mnasnet (ms, Fewer Is Better; SE +/- 0.14 / 0.30, N = 2)
a: 5.51 (MIN: 5.23 / MAX: 42.79); b: 5.91 (MIN: 5.25 / MAX: 22.88); AMD Renoir - AMD Ryzen 5 4500U: 5.81 (MIN: 5 / MAX: 22.14)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 20 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better; SE +/- 13564.23 / 15736.96, N = 2)
a: 555633.33; b: 595869.89; AMD Renoir - AMD Ryzen 5 4500U: 587975.21
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: Vulkan GPU - Model: resnet18 (ms, Fewer Is Better; SE +/- 0.09 / 0.02, N = 2)
a: 15.19 (MIN: 14.46 / MAX: 30.44); b: 16.19 (MIN: 14.7 / MAX: 65.71); AMD Renoir - AMD Ryzen 5 4500U: 15.17 (MIN: 14.49 / MAX: 32.36)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency, More Is Better)
a: 354.12 (MAX: 2940.05); AMD Renoir - AMD Ryzen 5 4500U: 375.23 (MAX: 4161.26)
b: Test failed to run.
Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 1.06 / 0.21, N = 2)
a: 253.17; b: 250.62; AMD Renoir - AMD Ryzen 5 4500U: 239.35

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.0165 / 0.0037, N = 2)
a: 3.9498; b: 3.9900; AMD Renoir - AMD Ryzen 5 4500U: 4.1777

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better; SE +/- 1.57 / 0.58, N = 2)
a: 74.22; b: 70.30; AMD Renoir - AMD Ryzen 5 4500U: 73.59

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better; SE +/- 0.86 / 0.34, N = 2)
a: 40.39; b: 42.64; AMD Renoir - AMD Ryzen 5 4500U: 40.72
Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 20 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better; SE +/- 3012.94 / 3797.00, N = 2)
a: 619544.49; b: 587558.64; AMD Renoir - AMD Ryzen 5 4500U: 602510.40
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, More Is Better)
a: 11896529.96; AMD Renoir - AMD Ryzen 5 4500U: 11298528.21
b: Test failed to run.
Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.46 / 0.06, N = 2)
a: 25.63; b: 24.37; AMD Renoir - AMD Ryzen 5 4500U: 24.59

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 0.70 / 0.10, N = 2)
a: 39.01; b: 41.01; AMD Renoir - AMD Ryzen 5 4500U: 40.66
Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
Apache CouchDB 3.3.2 - Bulk Size: 100 - Inserts: 3000 - Rounds: 30 (Seconds, Fewer Is Better; SE +/- 2.87 / 0.56, N = 2)
a: 493.65; b: 517.79; AMD Renoir - AMD Ryzen 5 4500U: 512.40
1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD
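The CouchDB test's Bulk Size and Inserts parameters describe a bulk-document insertion workload. Below is a minimal sketch of bulk insertion against CouchDB's standard _bulk_docs endpoint; the server URL, credentials, database name, and the exact mapping of Bulk Size/Inserts onto requests are assumptions for illustration, not the test profile's implementation.

    # Sketch of CouchDB bulk insertion via the _bulk_docs endpoint.
    # URL, credentials, database name, and the Bulk Size/Inserts mapping are assumptions.
    import time
    import requests  # pip install requests

    BASE = "http://admin:password@localhost:5984"
    DB = "bench"
    BULK_SIZE = 100   # documents per _bulk_docs request
    INSERTS = 1000    # bulk requests in this round

    requests.put(f"{BASE}/{DB}")  # create the database; 412 means it already exists

    start = time.perf_counter()
    for i in range(INSERTS):
        docs = [{"batch": i, "n": j, "payload": "x" * 64} for j in range(BULK_SIZE)]
        requests.post(f"{BASE}/{DB}/_bulk_docs", json={"docs": docs}).raise_for_status()
    print(f"round completed in {time.perf_counter() - start:.2f} seconds")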
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better; SE +/- 0.19 / 0.16, N = 2)
a: 7.67 (MIN: 7.1 / MAX: 21.52); b: 8.02 (MIN: 7.08 / MAX: 22.24); AMD Renoir - AMD Ryzen 5 4500U: 7.89 (MIN: 6.89 / MAX: 23.06)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better; SE +/- 587.74 / 427.13, N = 2)
a: 538437.76; b: 515626.76; AMD Renoir - AMD Ryzen 5 4500U: 517858.51
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 3.28 / 0.74, N = 2)
a: 69.99; b: 67.15; AMD Renoir - AMD Ryzen 5 4500U: 67.78

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better; SE +/- 3.95 / 0.04, N = 2)
a: 94.04; b: 90.48; AMD Renoir - AMD Ryzen 5 4500U: 90.26

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec, More Is Better)
a: 446723.62; AMD Renoir - AMD Ryzen 5 4500U: 465319.75
b: Test failed to run.
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better; SE +/- 0.19 / 0.15, N = 2)
a: 11.64 (MIN: 10.62 / MAX: 33.04); b: 11.87 (MIN: 10.56 / MAX: 29.61); AMD Renoir - AMD Ryzen 5 4500U: 11.41 (MIN: 10.41 / MAX: 63.48)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better; SE +/- 1.33 / 0.02, N = 2)
a: 31.93; b: 33.12; AMD Renoir - AMD Ryzen 5 4500U: 33.21

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.67 / 0.16, N = 2)
a: 14.32; b: 14.89; AMD Renoir - AMD Ryzen 5 4500U: 14.75
Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better; SE +/- 5831.39 / 11755.01, N = 2)
a: 517164.96; b: 515883.69; AMD Renoir - AMD Ryzen 5 4500U: 536437.47
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better; SE +/- 0.07 / 0.24, N = 2)
a: 9.58 (MIN: 9.2 / MAX: 14.22); b: 9.94 (MIN: 9.16 / MAX: 28.5); AMD Renoir - AMD Ryzen 5 4500U: 9.83 (MIN: 9.12 / MAX: 28.66)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.11 / 0.12, N = 2)
a: 18.33; b: 18.45; AMD Renoir - AMD Ryzen 5 4500U: 19.00

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 0.32 / 0.32, N = 2)
a: 54.53; b: 54.18; AMD Renoir - AMD Ryzen 5 4500U: 52.62
Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
Apache CouchDB 3.3.2 - Bulk Size: 100 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better; SE +/- 0.49 / 0.87, N = 2)
a: 155.07; b: 160.43; AMD Renoir - AMD Ryzen 5 4500U: 157.99
1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better; SE +/- 1.57 / 0.31, N = 2)
a: 279.97; b: 284.33; AMD Renoir - AMD Ryzen 5 4500U: 275.04
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better; SE +/- 0.12 / 0.21, N = 2)
a: 20.33 (MIN: 19.72 / MAX: 47.75); b: 21.01 (MIN: 19.64 / MAX: 46.59); AMD Renoir - AMD Ryzen 5 4500U: 20.46 (MIN: 19.54 / MAX: 43.69)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better; SE +/- 0.06 / 0.01, N = 2)
a: 10.70; b: 10.54; AMD Renoir - AMD Ryzen 5 4500U: 10.89

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 0.12 / 0.07, N = 2)
a: 52.24; b: 53.23; AMD Renoir - AMD Ryzen 5 4500U: 51.53

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.04 / 0.03, N = 2)
a: 19.14; b: 18.78; AMD Renoir - AMD Ryzen 5 4500U: 19.40
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: Vulkan GPU - Model: regnety_400m (ms, Fewer Is Better; SE +/- 0.12 / 0.08, N = 2)
a: 9.88 (MIN: 9.26 / MAX: 64.49); b: 9.96 (MIN: 9.13 / MAX: 28.6); AMD Renoir - AMD Ryzen 5 4500U: 9.65 (MIN: 9.16 / MAX: 29.12)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better; SE +/- 0.0136 / 0.0062, N = 2)
a: 3.5012; b: 3.5558; AMD Renoir - AMD Ryzen 5 4500U: 3.4493
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, Fewer Is Better; SE +/- 0.08 / 0.10, N = 2)
a: 18.31 (MIN: 17.85 / MAX: 33.97); b: 17.87 (MIN: 17.59 / MAX: 22.8); AMD Renoir - AMD Ryzen 5 4500U: 18.42 (MIN: 17.92 / MAX: 32.6)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 - Clients Per Thread: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better; SE +/- 6829.92 / 24601.03, N = 2)
a: 602213.78; b: 600771.43; AMD Renoir - AMD Ryzen 5 4500U: 618630.54
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU - Model: googlenet (ms, Fewer Is Better; SE +/- 0.04 / 0.66, N = 2)
a: 20.66 (MIN: 19.67 / MAX: 46.35); b: 21.27 (MIN: 19.64 / MAX: 48.18); AMD Renoir - AMD Ryzen 5 4500U: 20.95 (MIN: 19.48 / MAX: 65.77)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better; SE +/- 0.05 / 0.02, N = 2)
a: 20.40; b: 20.12; AMD Renoir - AMD Ryzen 5 4500U: 20.70

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better; SE +/- 0.34 / 0.21, N = 2)
a: 146.86; b: 148.87; AMD Renoir - AMD Ryzen 5 4500U: 144.78
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better; SE +/- 0.03 / 0.10, N = 2)
a: 18.50 (MIN: 17.94 / MAX: 33.27); b: 18.40 (MIN: 17.89 / MAX: 32.97); AMD Renoir - AMD Ryzen 5 4500U: 18.00 (MIN: 17.57 / MAX: 23.86)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: Vulkan GPU - Model: FastestDet (ms, Fewer Is Better; SE +/- 0.03 / 0.04, N = 2)
a: 5.11 (MIN: 5 / MAX: 10.03); b: 5.23 (MIN: 4.92 / MAX: 16.39); AMD Renoir - AMD Ryzen 5 4500U: 5.09 (MIN: 4.9 / MAX: 16.22)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better; SE +/- 1.82 / 2.42, N = 2)
a: 851.96; b: 842.48; AMD Renoir - AMD Ryzen 5 4500U: 865.64

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better; SE +/- 0.06 / 0.17, N = 2)
a: 30.38; b: 29.97; AMD Renoir - AMD Ryzen 5 4500U: 29.57

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, More Is Better)
a: 6656162.13; AMD Renoir - AMD Ryzen 5 4500U: 6836941.96
b: Test failed to run.
Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better; SE +/- 0.17 / 0.60, N = 2)
a: 98.69; b: 100.02; AMD Renoir - AMD Ryzen 5 4500U: 101.34
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better; SE +/- 0.07 / 0.01, N = 2)
a: 7.41 (MIN: 7.11 / MAX: 22.18); b: 7.60 (MIN: 6.99 / MAX: 21.57); AMD Renoir - AMD Ryzen 5 4500U: 7.43 (MIN: 6.93 / MAX: 45.86)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 0.12 / 0.08, N = 2)
a: 15.45; b: 15.20; AMD Renoir - AMD Ryzen 5 4500U: 15.57

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 0.33 / 0.00, N = 2)
a: 25.75; b: 25.26; AMD Renoir - AMD Ryzen 5 4500U: 25.13

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.51 / 0.34, N = 2)
a: 64.70; b: 65.75; AMD Renoir - AMD Ryzen 5 4500U: 64.19

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.50 / 0.00, N = 2)
a: 38.83; b: 39.58; AMD Renoir - AMD Ryzen 5 4500U: 39.77
Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
Apache CouchDB 3.3.2 - Bulk Size: 300 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better; SE +/- 5.40 / 1.56, N = 2)
a: 277.68; b: 280.54; AMD Renoir - AMD Ryzen 5 4500U: 284.35
1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: Vulkan GPU - Model: vision_transformer (ms, Fewer Is Better; SE +/- 4.64 / 1.62, N = 2)
a: 241.97 (MIN: 235.91 / MAX: 304.26); b: 245.80 (MIN: 243.93 / MAX: 280.56); AMD Renoir - AMD Ryzen 5 4500U: 240.25 (MIN: 236.95 / MAX: 294.82)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency, More Is Better)
a: 42.32 (MAX: 1338.68); AMD Renoir - AMD Ryzen 5 4500U: 41.38 (MAX: 1478.21)
b: Test failed to run.
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU - Model: vision_transformer (ms, Fewer Is Better; SE +/- 1.68 / 0.75, N = 2)
a: 242.25 (MIN: 238.66 / MAX: 266.75); b: 241.22 (MIN: 238.33 / MAX: 292.19); AMD Renoir - AMD Ryzen 5 4500U: 237.12 (MIN: 234.17 / MAX: 277.13)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 0.16 / 0.08, N = 2)
a: 32.65; b: 32.84; AMD Renoir - AMD Ryzen 5 4500U: 32.15

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.15 / 0.07, N = 2)
a: 30.62; b: 30.44; AMD Renoir - AMD Ryzen 5 4500U: 31.09
Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
Apache CouchDB 3.3.2 - Bulk Size: 300 - Inserts: 3000 - Rounds: 30 (Seconds, Fewer Is Better; SE +/- 4.72 / 1.01, N = 2)
a: 878.65; b: 896.38; AMD Renoir - AMD Ryzen 5 4500U: 891.19
1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better; SE +/- 0.95 / 1.62, N = 2)
a: 280.42; b: 285.37; AMD Renoir - AMD Ryzen 5 4500U: 285.87

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better; SE +/- 0.0120 / 0.0199, N = 2)
a: 3.5660; b: 3.5041; AMD Renoir - AMD Ryzen 5 4500U: 3.4981
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better; SE +/- 0.07 / 0.02, N = 2)
a: 41.00 (MIN: 39.84 / MAX: 94.71); b: 40.23 (MIN: 39.46 / MAX: 45.74); AMD Renoir - AMD Ryzen 5 4500U: 40.38 (MIN: 39.42 / MAX: 56.16)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency, More Is Better)
a: 720.44 (MAX: 4246.32); AMD Renoir - AMD Ryzen 5 4500U: 706.95 (MAX: 4244.39)
b: Test failed to run.
NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU - Model: FastestDet (ms, Fewer Is Better; SE +/- 0.04 / 0.01, N = 2)
a: 5.14 (MIN: 5.03 / MAX: 9.54); b: 5.13 (MIN: 4.94 / MAX: 16.65); AMD Renoir - AMD Ryzen 5 4500U: 5.05 (MIN: 4.94 / MAX: 9.13)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec, More Is Better)
a: 870872.73; AMD Renoir - AMD Ryzen 5 4500U: 855795.41
b: Test failed to run.
Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
Apache CouchDB 3.3.2 - Bulk Size: 500 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better; SE +/- 7.43 / 3.34, N = 2)
a: 401.31; b: 404.39; AMD Renoir - AMD Ryzen 5 4500U: 407.85
1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency, More Is Better)
a: 71.38 (MAX: 1486.72); AMD Renoir - AMD Ryzen 5 4500U: 70.26 (MAX: 1642.8)
b: Test failed to run.
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency, More Is Better)
a: 19.34 (MAX: 1470.64); AMD Renoir - AMD Ryzen 5 4500U: 19.64 (MAX: 1530.17)
b: Test failed to run.
Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better; SE +/- 0.12 / 0.00, N = 2)
a: 20.90; b: 21.21; AMD Renoir - AMD Ryzen 5 4500U: 21.19

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better; SE +/- 0.80 / 0.03, N = 2)
a: 143.26; b: 141.30; AMD Renoir - AMD Ryzen 5 4500U: 141.31

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better; SE +/- 0.17 / 0.29, N = 2)
a: 35.50; b: 35.40; AMD Renoir - AMD Ryzen 5 4500U: 35.86

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better; SE +/- 0.40 / 0.67, N = 2)
a: 84.37; b: 84.60; AMD Renoir - AMD Ryzen 5 4500U: 83.54
NCNN
NCNN 20230517 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better): a = 11.30 (MIN: 10.52 / MAX: 28.17), b = 11.44 (MIN: 10.56 / MAX: 28.99), AMD Renoir - AMD Ryzen 5 4500U = 11.43 (MIN: 10.5 / MAX: 29.38); SE +/- 0.01, N = 2; SE +/- 0.12, N = 2. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, more is better): a = 982415.93, AMD Renoir - AMD Ryzen 5 4500U = 994052.25. b: Test failed to run.
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a = 3.6825, b = 3.6482, AMD Renoir - AMD Ryzen 5 4500U = 3.6397 (SE +/- 0.0254, N = 2; SE +/- 0.0051, N = 2)
NCNN
NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): a = 6.40 (MIN: 5.23 / MAX: 21.11), b = 6.41 (MIN: 5.28 / MAX: 22.1), AMD Renoir - AMD Ryzen 5 4500U = 6.34 (MIN: 5.17 / MAX: 24.25); SE +/- 0.15, N = 2; SE +/- 0.11, N = 2. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a = 723.99, b = 718.17, AMD Renoir - AMD Ryzen 5 4500U = 717.18 (SE +/- 0.36, N = 2; SE +/- 1.73, N = 2)
NCNN
NCNN 20230517 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better): a = 35.50 (MIN: 34.39 / MAX: 63.28), b = 35.17 (MIN: 34.08 / MAX: 63.6), AMD Renoir - AMD Ryzen 5 4500U = 35.24 (MIN: 34.4 / MAX: 51.03); SE +/- 0.25, N = 2; SE +/- 0.15, N = 2. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Dragonflydb Dragonfly is an open-source, in-memory database server positioned as a "modern Redis replacement"; it aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.
Dragonflydb 1.6.2 - Clients Per Thread: 60 - Set To Get Ratio: 1:10 (Ops/sec, more is better): a = 638619.97, b = 636341.63, AMD Renoir - AMD Ryzen 5 4500U = 632802.22 (SE +/- 66150.22, N = 2; SE +/- 8371.13, N = 2). 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
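The Dragonfly numbers come from memtier_benchmark driving a mixed workload in which one SET is issued for every ten GETs. The toy client below sketches that 1:10 ratio against any Redis-protocol server (Dragonfly included) using redis-py; the host, port, and key space are placeholders, and this is not a substitute for memtier_benchmark.

```python
# Toy 1:10 SET-to-GET mix against a Redis-protocol server such as Dragonfly.
# Assumes redis-py and a server on localhost:6379; not a replacement for memtier_benchmark.
import random
import redis

r = redis.Redis(host="localhost", port=6379)

operations = 11_000           # 1,000 SETs and 10,000 GETs -> 1:10 ratio
for i in range(operations):
    key = f"key:{random.randrange(10_000)}"
    if i % 11 == 0:           # every 11th operation is a SET
        r.set(key, "value")
    else:                     # the remaining ten are GETs
        r.get(key)
```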
VkResample VkResample is a Vulkan-based image upscaling library built on VkFFT. The sample workload upscales a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.
VkResample 1.0 - Upscale: 2x - Precision: Single (ms, fewer is better): a = 55.60, b = 55.73, AMD Renoir - AMD Ryzen 5 4500U = 55.25 (SE +/- 0.09, N = 2; SE +/- 0.02, N = 2). 1. (CXX) g++ options: -O3
NCNN
NCNN 20230517 - Target: CPU - Model: mobilenet (ms, fewer is better): a = 26.92 (MIN: 26.1 / MAX: 45.94), b = 27.02 (MIN: 26.45 / MAX: 32.06), AMD Renoir - AMD Ryzen 5 4500U = 26.79 (MIN: 25.89 / MAX: 86.67); SE +/- 0.15, N = 2; SE +/- 0.05, N = 2. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a = 809.27, b = 815.38, AMD Renoir - AMD Ryzen 5 4500U = 816.05 (SE +/- 5.58, N = 2; SE +/- 0.10, N = 2)
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a = 286.90, b = 285.74, AMD Renoir - AMD Ryzen 5 4500U = 288.08 (SE +/- 0.47, N = 2; SE +/- 0.44, N = 2)
NCNN
NCNN 20230517 - Target: CPU - Model: resnet18 (ms, fewer is better): a = 15.35 (MIN: 14.65 / MAX: 35.35), b = 15.36 (MIN: 14.62 / MAX: 31.58), AMD Renoir - AMD Ryzen 5 4500U = 15.47 (MIN: 14.36 / MAX: 32.25); SE +/- 0.22, N = 2; SE +/- 0.28, N = 2. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a = 68.18, b = 67.85, AMD Renoir - AMD Ryzen 5 4500U = 67.67 (SE +/- 0.15, N = 2; SE +/- 0.26, N = 2)
Apache CouchDB
Apache CouchDB 3.3.2 - Bulk Size: 500 - Inserts: 3000 - Rounds: 30 (Seconds, fewer is better): a = 1252.60, b = 1261.98, AMD Renoir - AMD Ryzen 5 4500U = 1254.98 (SE +/- 5.65, N = 2; SE +/- 3.01, N = 2). 1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD
NCNN
NCNN 20230517 - Target: CPU - Model: alexnet (ms, fewer is better): a = 10.82 (MIN: 10.44 / MAX: 22.89), b = 10.79 (MIN: 10.5 / MAX: 13.68), AMD Renoir - AMD Ryzen 5 4500U = 10.74 (MIN: 10.36 / MAX: 20.55); SE +/- 0.03, N = 2; SE +/- 0.01, N = 2. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a = 43.96, b = 44.18, AMD Renoir - AMD Ryzen 5 4500U = 44.28 (SE +/- 0.09, N = 2; SE +/- 0.17, N = 2)
NCNN
NCNN 20230517 - Target: CPU - Model: resnet50 (ms, fewer is better): a = 35.45 (MIN: 34.36 / MAX: 53), b = 35.71 (MIN: 34.37 / MAX: 52.21), AMD Renoir - AMD Ryzen 5 4500U = 35.63 (MIN: 34.42 / MAX: 98.87); SE +/- 0.08, N = 2; SE +/- 0.25, N = 2. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a = 24.70, b = 24.64, AMD Renoir - AMD Ryzen 5 4500U = 24.53 (SE +/- 0.08, N = 2; SE +/- 0.06, N = 2)
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Average Latency (fewer is better): a = 156.19 (MAX: 2583.42), AMD Renoir - AMD Ryzen 5 4500U = 155.14 (MAX: 4202.36). b: Test failed to run.
NCNN
NCNN 20230517 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better): a = 26.95 (MIN: 26 / MAX: 45.37), b = 26.98 (MIN: 25.99 / MAX: 42.84), AMD Renoir - AMD Ryzen 5 4500U = 26.80 (MIN: 25.92 / MAX: 84.56); SE +/- 0.12, N = 2; SE +/- 0.08, N = 2. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20230517 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better): a = 93.57 (MIN: 91.4 / MAX: 112.49), b = 92.98 (MIN: 90.89 / MAX: 125.87), AMD Renoir - AMD Ryzen 5 4500U = 93.50 (MIN: 91.41 / MAX: 107.45); SE +/- 0.06, N = 2; SE +/- 0.02, N = 2. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 - Average Latency (fewer is better): a = 28.53 (MAX: 1820.94), AMD Renoir - AMD Ryzen 5 4500U = 28.36 (MAX: 1210). b: Test failed to run.
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a = 4.1321, b = 4.1495, AMD Renoir - AMD Ryzen 5 4500U = 4.1565 (SE +/- 0.0054, N = 2; SE +/- 0.0284, N = 2)
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, more is better): a = 818281.01, AMD Renoir - AMD Ryzen 5 4500U = 813547.71. b: Test failed to run.
NCNN
NCNN 20230517 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better): a = 10.73 (MIN: 10.47 / MAX: 16.09), b = 10.71 (MIN: 10.41 / MAX: 19.42), AMD Renoir - AMD Ryzen 5 4500U = 10.67 (MIN: 10.37 / MAX: 21.61); SE +/- 0.03, N = 2; SE +/- 0.00, N = 2. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 - Average Latency (fewer is better): a = 21.11 (MAX: 1265.96), AMD Renoir - AMD Ryzen 5 4500U = 21.00 (MAX: 1320.28). b: Test failed to run.
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, more is better): a = 4.2141, b = 4.2071, AMD Renoir - AMD Ryzen 5 4500U = 4.2291 (SE +/- 0.0081, N = 2; SE +/- 0.0064, N = 2)
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a = 237.28, b = 237.68, AMD Renoir - AMD Ryzen 5 4500U = 236.44 (SE +/- 0.45, N = 2; SE +/- 0.35, N = 2)
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a = 43.81, b = 43.76, AMD Renoir - AMD Ryzen 5 4500U = 43.98 (SE +/- 0.07, N = 2; SE +/- 0.03, N = 2)
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, more is better): a = 640326.81, AMD Renoir - AMD Ryzen 5 4500U = 637397.90. b: Test failed to run.
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a = 68.41, b = 68.43, AMD Renoir - AMD Ryzen 5 4500U = 68.14 (SE +/- 0.12, N = 2; SE +/- 0.06, N = 2)
NCNN
NCNN 20230517 - Target: CPU - Model: vgg16 (ms, fewer is better): a = 93.43 (MIN: 91.03 / MAX: 140.17), b = 93.69 (MIN: 91.65 / MAX: 138.85), AMD Renoir - AMD Ryzen 5 4500U = 93.72 (MIN: 91.26 / MAX: 109.98); SE +/- 0.16, N = 2; SE +/- 0.09, N = 2. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec, more is better): a = 686458.89, AMD Renoir - AMD Ryzen 5 4500U = 684869.04. b: Test failed to run.
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 - Average Latency (fewer is better): a = 45.12 (MAX: 1423.03), AMD Renoir - AMD Ryzen 5 4500U = 45.06 (MAX: 1511.68). b: Test failed to run.
NCNN
NCNN 20230517 - Target: CPU - Model: yolov4-tiny (ms, fewer is better): a = 40.68 (MIN: 39.56 / MAX: 80.12), b = 40.66 (MIN: 39.41 / MAX: 82.29), AMD Renoir - AMD Ryzen 5 4500U = 40.64 (MIN: 39.51 / MAX: 87.1); SE +/- 0.11, N = 2; SE +/- 0.30, N = 2. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
vkpeak 20230730 - fp16-scalar (GFLOPS, more is better): a = 4.21, b = 4.21, AMD Renoir - AMD Ryzen 5 4500U = 4.21 (SE +/- 0.01, N = 2; SE +/- 0.01, N = 2)
a Processor: AMD Ryzen 5 4500U @ 2.38GHz (6 Cores), Motherboard: LENOVO LNVNB161216 (EECN20WW BIOS), Chipset: AMD Renoir/Cezanne, Memory: 16GB, Disk: 256GB SK hynix HFM256GDHTNI-87A0B, Graphics: AMD Renoir 512MB (1500/400MHz), Audio: AMD Renoir Radeon HD Audio, Network: Realtek RTL8822CE 802.11ac PCIe
OS: Pop 22.04, Kernel: 5.17.5-76051705-generic (x86_64), Desktop: GNOME Shell 42.1, Display Server: X Server 1.21.1.3, OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.44), Vulkan: 1.2.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - Platform Profile: balanced - CPU Microcode: 0x8600102 - ACPI Profile: balanced
Graphics Notes: GLAMOR - BAR1 / Visible vRAM Size: 512 MB - vBIOS Version: 113-RENOIR-025
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.6
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 4 August 2023 09:48 by user phoronix.
b Processor: AMD Ryzen 5 4500U @ 2.38GHz (6 Cores), Motherboard: LENOVO LNVNB161216 (EECN20WW BIOS), Chipset: AMD Renoir/Cezanne, Memory: 16GB, Disk: 256GB SK hynix HFM256GDHTNI-87A0B, Graphics: AMD Renoir 512MB (1500MHz), Audio: AMD Renoir Radeon HD Audio, Network: Realtek RTL8822CE 802.11ac PCIe
OS: Pop 22.04, Kernel: 5.17.5-76051705-generic (x86_64), Desktop: GNOME Shell 42.1, Display Server: X Server 1.21.1.3, OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.44), Vulkan: 1.2.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - Platform Profile: balanced - CPU Microcode: 0x8600102 - ACPI Profile: balanced
Graphics Notes: GLAMOR - BAR1 / Visible vRAM Size: 512 MB - vBIOS Version: 113-RENOIR-025
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.6
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 4 August 2023 17:29 by user phoronix.
AMD Renoir - AMD Ryzen 5 4500U Processor: AMD Ryzen 5 4500U @ 2.38GHz (6 Cores), Motherboard: LENOVO LNVNB161216 (EECN20WW BIOS), Chipset: AMD Renoir/Cezanne, Memory: 16GB, Disk: 256GB SK hynix HFM256GDHTNI-87A0B, Graphics: AMD Renoir 512MB (1500/400MHz), Audio: AMD Renoir Radeon HD Audio, Network: Realtek RTL8822CE 802.11ac PCIe
OS: Pop 22.04, Kernel: 5.17.5-76051705-generic (x86_64), Desktop: GNOME Shell 42.1, Display Server: X Server 1.21.1.3, OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.44), Vulkan: 1.2.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - Platform Profile: balanced - CPU Microcode: 0x8600102 - ACPI Profile: balanced
Graphics Notes: GLAMOR - BAR1 / Visible vRAM Size: 512 MB - vBIOS Version: 113-RENOIR-025
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.6
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 5 August 2023 04:37 by user phoronix.