7763 2204

AMD EPYC 7763 64-Core testing with an AMD DAYTONA_X (RYM1009B BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2308059-NE-77632204529&rdt&grs.

7763 2204 - System Details (configurations a, b, and c; identical hardware/software)

Processor: AMD EPYC 7763 64-Core @ 2.45GHz (64 Cores / 128 Threads)
Motherboard: AMD DAYTONA_X (RYM1009B BIOS)
Chipset: AMD Starship/Matisse
Memory: 256GB
Disk: 800GB INTEL SSDPF21Q800GB
Graphics: ASPEED
Monitor: VE228
Network: 2 x Mellanox MT27710
OS: Ubuntu 22.04
Kernel: 6.2.0-phx (x86_64)
Desktop: GNOME Shell 42.5
Display Server: X Server 1.21.1.3
Vulkan: 1.3.224
Compiler: GCC 11.3.0 + LLVM 14.0.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa001173
Java Details: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

7763 2204 - Result Overview

The side-by-side overview table of all results for runs a, b, and c (sorted by greatest result spread) did not survive export; the individual results for each test are presented below.

NCNN

Target: CPU - Model: regnety_400m

OpenBenchmarking.org - NCNN 20230517 - ms, Fewer Is Better:
a: 35.24 (SE +/- 5.99, N = 2; MIN: 27.86 / MAX: 47.9)
b: 27.54 (SE +/- 0.15, N = 2; MIN: 26.96 / MAX: 33.56)
c: 27.59 (SE +/- 0.07, N = 2; MIN: 26.64 / MAX: 33.41)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
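Each figure above is the mean of two runs (N = 2) plus a standard error. Assuming the standard error shown is the usual standard error of the mean (sample standard deviation divided by sqrt(N)), for N = 2 it reduces to half the absolute difference between the two runs. A minimal sketch; the two run times below are hypothetical values chosen to reproduce configuration a's 35.24 +/- 5.99:

```python
import math

def stderr_n2(x1, x2):
    """Standard error of the mean for two samples:
    sample stdev / sqrt(N); for N = 2 this is |x1 - x2| / 2."""
    n = 2
    mean = (x1 + x2) / n
    var = ((x1 - mean) ** 2 + (x2 - mean) ** 2) / (n - 1)  # sample variance
    return math.sqrt(var) / math.sqrt(n)

# Hypothetical pair of runs consistent with configuration a's result:
x1, x2 = 41.23, 29.25
print(round((x1 + x2) / 2, 2))        # mean -> 35.24
print(round(stderr_n2(x1, x2), 2))    # SE   -> 5.99
```

This also explains why a's SE is so large relative to b and c: its two regnety_400m runs were far apart.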

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500

Apache IoTDB 1.1.2 - Average Latency, Fewer Is Better:
a: 27.1 (MAX: 934.45)
b: 26.0 (MAX: 873.88)
c: 31.4 (MAX: 890.05)

NCNN

Target: CPU - Model: shufflenet-v2

NCNN 20230517 - ms, Fewer Is Better:
a: 9.09 (SE +/- 1.23, N = 2; MIN: 7.74 / MAX: 15.92)
b: 7.93 (SE +/- 0.31, N = 2; MIN: 7.51 / MAX: 11.45)
c: 7.60 (SE +/- 0.04, N = 2; MIN: 7.44 / MAX: 11.6)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 - Average Latency, Fewer Is Better:
a: 15.24 (MAX: 583.94)
b: 13.60 (MAX: 586.94)
c: 16.07 (MAX: 592.48)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500

Apache IoTDB 1.1.2 - point/sec, More Is Better:
a: 1636128.73
b: 1686943.16
c: 1446487.70
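The export URL ends in &grs, OpenBenchmarking.org's "greatest result spread" ordering, which is why this ingestion result appears near the top: its three runs disagree the most. A sketch of that spread as the max-to-min percent difference, using the three throughput values above (this formula is an assumption for illustration; the exact ranking metric OpenBenchmarking.org uses may differ):

```python
def percent_spread(values):
    """Percent difference between the best and worst result in a group."""
    lo, hi = min(values), max(values)
    return (hi - lo) / lo * 100.0

# Apache IoTDB 500 - 1 - 500 throughput (point/sec) for runs a, b, c:
runs = [1636128.73, 1686943.16, 1446487.70]
print(round(percent_spread(runs), 2))  # ~16.62% spread between b and c
```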

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 - Average Latency, Fewer Is Better:
a: 13.54 (MAX: 856.65)
b: 11.63 (MAX: 860.78)
c: 11.83 (MAX: 836.9)

NCNN

Target: CPU - Model: blazeface

NCNN 20230517 - ms, Fewer Is Better:
a: 3.97 (SE +/- 0.09, N = 2; MIN: 3.5 / MAX: 7.61)
b: 3.43 (SE +/- 0.01, N = 2; MIN: 3.35 / MAX: 3.83)
c: 3.48 (SE +/- 0.05, N = 2; MIN: 3.32 / MAX: 8.58)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 - point/sec, More Is Better:
a: 1182440.62
b: 1365831.50
c: 1367763.49

NCNN

Target: CPU - Model: FastestDet

NCNN 20230517 - ms, Fewer Is Better:
a: 10.25 (SE +/- 0.94, N = 2; MIN: 8.95 / MAX: 17.14)
b: 9.04 (SE +/- 0.11, N = 2; MIN: 8.65 / MAX: 15.19)
c: 8.88 (SE +/- 0.01, N = 2; MIN: 8.58 / MAX: 13.34)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 - point/sec, More Is Better:
a: 898967.08
b: 978176.76
c: 870795.92

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

NCNN 20230517 - ms, Fewer Is Better:
a: 7.00 (SE +/- 0.55, N = 2; MIN: 6.3 / MAX: 10.2)
b: 6.55 (SE +/- 0.18, N = 2; MIN: 6.24 / MAX: 7.61)
c: 6.34 (SE +/- 0.04, N = 2; MIN: 6.15 / MAX: 11.57)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 - Average Latency, Fewer Is Better:
a: 17.45 (MAX: 645.35)
b: 17.28 (MAX: 644.33)
c: 16.35 (MAX: 668.86)

srsRAN Project

Test: Downlink Processor Benchmark

srsRAN Project 23.5 - Mbps, More Is Better:
a: 657.7 (SE +/- 17.85, N = 2)
b: 658.1 (SE +/- 27.75, N = 2)
c: 619.3 (SE +/- 0.35, N = 2)
1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 - Average Latency, Fewer Is Better:
a: 35.05 (MAX: 2157.23)
b: 36.10 (MAX: 1990.15)
c: 37.12 (MAX: 2182.81)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 - point/sec, More Is Better:
a: 56935634.55
b: 59505306.55
c: 56463717.54

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 - Average Latency, Fewer Is Better:
a: 36.04 (MAX: 804.01)
b: 36.85 (MAX: 721.27)
c: 35.01 (MAX: 746.4)

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500

Apache IoTDB 1.1.2 - Average Latency, Fewer Is Better:
a: 34.36 (MAX: 704.53)
b: 32.92 (MAX: 728.63)
c: 34.08 (MAX: 699.28)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 - point/sec, More Is Better:
a: 51341708.85
b: 50045888.98
c: 49201448.81

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 - Average Latency, Fewer Is Better:
a: 81.81 (MAX: 3018.16)
b: 79.83 (MAX: 1607.86)
c: 83.14 (MAX: 2932.1)

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 - point/sec, More Is Better:
a: 39287432.92
b: 38401769.17
c: 39945212.99

NCNN

Target: CPU - Model: mnasnet

NCNN 20230517 - ms, Fewer Is Better:
a: 6.09 (SE +/- 0.11, N = 2; MIN: 5.89 / MAX: 10.35)
b: 5.90 (SE +/- 0.00, N = 2; MIN: 5.81 / MAX: 12.33)
c: 5.86 (SE +/- 0.05, N = 2; MIN: 5.73 / MAX: 11.67)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 - Average Latency, Fewer Is Better:
a: 81.20 (MAX: 1009.28)
b: 82.26 (MAX: 864.29)
c: 79.16 (MAX: 1006.03)

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 - Average Latency, Fewer Is Better:
a: 109.38 (MAX: 3597.09)
b: 110.88 (MAX: 3569.78)
c: 106.73 (MAX: 3485.91)

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 - point/sec, More Is Better:
a: 51316464.44
b: 50507747.12
c: 52464142.83

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 - point/sec, More Is Better:
a: 644019.72
b: 648308.27
c: 667880.96

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 - Average Latency, Fewer Is Better:
a: 35.09 (MAX: 804.64)
b: 33.84 (MAX: 773.52)
c: 34.79 (MAX: 780.01)

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500

Apache IoTDB 1.1.2 - Average Latency, Fewer Is Better:
a: 33.50 (MAX: 690.29)
b: 33.86 (MAX: 659.59)
c: 32.71 (MAX: 725.08)

NCNN

Target: CPU - Model: squeezenet_ssd

NCNN 20230517 - ms, Fewer Is Better:
a: 14.17 (SE +/- 0.01, N = 2; MIN: 13.53 / MAX: 18.45)
b: 14.11 (SE +/- 0.09, N = 2; MIN: 13.32 / MAX: 18.63)
c: 14.59 (SE +/- 0.62, N = 2; MIN: 13.37 / MAX: 277.61)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 - point/sec, More Is Better:
a: 42048733.22
b: 41987111.39
c: 43363203.76

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500

Apache IoTDB 1.1.2 - point/sec, More Is Better:
a: 1038515.62
b: 1069145.79
c: 1044153.44

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

NCNN 20230517 - ms, Fewer Is Better:
a: 6.35 (SE +/- 0.01, N = 2; MIN: 6.19 / MAX: 12.45)
b: 6.26 (SE +/- 0.00, N = 2; MIN: 6.11 / MAX: 12.76)
c: 6.17 (SE +/- 0.07, N = 2; MIN: 6.02 / MAX: 6.83)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500

Apache IoTDB 1.1.2 - point/sec, More Is Better:
a: 1232509.19
b: 1226219.88
c: 1261385.89

NCNN

Target: CPU - Model: efficientnet-b0

NCNN 20230517 - ms, Fewer Is Better:
a: 9.98 (SE +/- 0.03, N = 2; MIN: 9.82 / MAX: 10.96)
b: 9.78 (SE +/- 0.01, N = 2; MIN: 9.64 / MAX: 16.01)
c: 9.75 (SE +/- 0.04, N = 2; MIN: 9.58 / MAX: 13.38)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 - point/sec, More Is Better:
a: 46437377.67
b: 47245476.78
c: 46674344.69

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

VVenC 1.9 - Frames Per Second, More Is Better:
a: 10.65 (SE +/- 0.18, N = 2)
b: 10.82 (SE +/- 0.02, N = 2)
c: 10.82 (SE +/- 0.00, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better:
a: 489.77 (SE +/- 0.68, N = 2)
b: 486.17 (SE +/- 1.03, N = 2)
c: 482.13 (SE +/- 7.21, N = 2)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
a: 65.28 (SE +/- 0.10, N = 2)
b: 65.74 (SE +/- 0.10, N = 2)
c: 66.28 (SE +/- 0.99, N = 2)

NCNN

Target: CPU - Model: yolov4-tiny

NCNN 20230517 - ms, Fewer Is Better:
a: 20.66 (SE +/- 0.02, N = 2; MIN: 20.04 / MAX: 25.04)
b: 20.51 (SE +/- 0.08, N = 2; MIN: 19.87 / MAX: 24.86)
c: 20.80 (SE +/- 0.13, N = 2; MIN: 20.01 / MAX: 96.45)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache Cassandra

Test: Writes

Apache Cassandra 4.1.3 - Op/s, More Is Better:
a: 236650 (SE +/- 633.50, N = 2)
b: 238161 (SE +/- 817.50, N = 2)
c: 234887 (SE +/- 669.00, N = 2)

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Thread

srsRAN Project 23.5 - Mbps, More Is Better:
a: 211.1 (SE +/- 0.10, N = 2)
b: 208.2 (SE +/- 1.90, N = 2)
c: 210.8 (SE +/- 0.20, N = 2)
1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

NCNN

Target: CPU - Model: resnet50

NCNN 20230517 - ms, Fewer Is Better:
a: 15.49 (SE +/- 0.08, N = 2; MIN: 15.24 / MAX: 21.82)
b: 15.34 (SE +/- 0.10, N = 2; MIN: 15.07 / MAX: 21.69)
c: 15.54 (SE +/- 0.14, N = 2; MIN: 15.15 / MAX: 27.2)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
a: 1.3784 (SE +/- 0.0205, N = 2)
b: 1.3623 (SE +/- 0.0055, N = 2)
c: 1.3634 (SE +/- 0.0010, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better:
a: 723.71 (SE +/- 10.71, N = 2)
b: 732.11 (SE +/- 2.91, N = 2)
c: 731.51 (SE +/- 0.48, N = 2)

NCNN

Target: CPU - Model: vgg16

NCNN 20230517 - ms, Fewer Is Better:
a: 23.84 (SE +/- 0.05, N = 2; MIN: 23.45 / MAX: 28.55)
b: 23.64 (SE +/- 0.06, N = 2; MIN: 23.33 / MAX: 28.05)
c: 23.91 (SE +/- 0.08, N = 2; MIN: 23.48 / MAX: 30.63)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better:
a: 173.95 (SE +/- 0.98, N = 2)
b: 172.75 (SE +/- 0.63, N = 2)
c: 172.07 (SE +/- 0.80, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
a: 5.7470 (SE +/- 0.0325, N = 2)
b: 5.7863 (SE +/- 0.0212, N = 2)
c: 5.8092 (SE +/- 0.0271, N = 2)

NCNN

Target: CPU - Model: resnet18

NCNN 20230517 - ms, Fewer Is Better:
a: 8.50 (SE +/- 0.03, N = 2; MIN: 8.33 / MAX: 14.69)
b: 8.42 (SE +/- 0.04, N = 2; MIN: 8.27 / MAX: 14.6)
c: 8.51 (SE +/- 0.03, N = 2; MIN: 8.3 / MAX: 13.61)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: googlenet

NCNN 20230517 - ms, Fewer Is Better:
a: 14.62 (SE +/- 0.03, N = 2; MIN: 14.46 / MAX: 25.51)
b: 14.53 (SE +/- 0.02, N = 2; MIN: 14.31 / MAX: 24.11)
c: 14.47 (SE +/- 0.06, N = 2; MIN: 14.26 / MAX: 20.57)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: mobilenet

NCNN 20230517 - ms, Fewer Is Better:
a: 14.11 (SE +/- 0.04, N = 2; MIN: 13.76 / MAX: 19.75)
b: 13.97 (SE +/- 0.03, N = 2; MIN: 13.64 / MAX: 19.68)
c: 14.03 (SE +/- 0.09, N = 2; MIN: 13.68 / MAX: 18.14)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 3.6 - Seconds, Fewer Is Better:
a: 27.27 (SE +/- 0.06, N = 2)
b: 27.50 (SE +/- 0.15, N = 2)
c: 27.24 (SE +/- 0.04, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better:
a: 97.36 (SE +/- 0.04, N = 2)
b: 97.61 (SE +/- 0.17, N = 2)
c: 96.88 (SE +/- 0.08, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
a: 10.26 (SE +/- 0.00, N = 2)
b: 10.24 (SE +/- 0.02, N = 2)
c: 10.31 (SE +/- 0.01, N = 2)

NCNN

Target: CPU - Model: vision_transformer

NCNN 20230517 - ms, Fewer Is Better:
a: 48.79 (SE +/- 0.07, N = 2; MIN: 47.65 / MAX: 78.36)
b: 48.49 (SE +/- 0.04, N = 2; MIN: 47.44 / MAX: 58.53)
c: 48.43 (SE +/- 0.30, N = 2; MIN: 47.33 / MAX: 85.35)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better:
a: 46.72 (SE +/- 0.10, N = 2)
b: 46.57 (SE +/- 0.03, N = 2)
c: 46.90 (SE +/- 0.02, N = 2)

BRL-CAD

VGR Performance Metric

BRL-CAD 7.36 - VGR Performance Metric, More Is Better:
a: 734386 (SE +/- 1805.50, N = 2)
b: 729876 (SE +/- 357.50, N = 2)
c: 730434 (SE +/- 963.50, N = 2)
1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
a: 10.58 (SE +/- 0.01, N = 2)
b: 10.63 (SE +/- 0.07, N = 2)
c: 10.57 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better:
a: 94.51 (SE +/- 0.05, N = 2)
b: 94.04 (SE +/- 0.64, N = 2)
c: 94.55 (SE +/- 0.18, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better:
a: 159.85 (SE +/- 0.43, N = 2)
b: 159.75 (SE +/- 0.22, N = 2)
c: 160.59 (SE +/- 0.80, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
a: 6.2525 (SE +/- 0.0166, N = 2)
b: 6.2566 (SE +/- 0.0087, N = 2)
c: 6.2238 (SE +/- 0.0308, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
a: 40.90 (SE +/- 0.01, N = 2)
b: 40.70 (SE +/- 0.01, N = 2)
c: 40.76 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better:
a: 24.44 (SE +/- 0.01, N = 2)
b: 24.57 (SE +/- 0.01, N = 2)
c: 24.53 (SE +/- 0.00, N = 2)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better:
a: 159.97 (SE +/- 0.24, N = 2)
b: 159.92 (SE +/- 0.32, N = 2)
c: 160.70 (SE +/- 0.62, N = 2)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
a: 6.2481 (SE +/- 0.0097, N = 2)
b: 6.2495 (SE +/- 0.0121, N = 2)
c: 6.2195 (SE +/- 0.0244, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better:
a: 18.59 (SE +/- 0.03, N = 2)
b: 18.61 (SE +/- 0.00, N = 2)
c: 18.53 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better:
a: 53.78 (SE +/- 0.08, N = 2)
b: 53.70 (SE +/- 0.01, N = 2)
c: 53.95 (SE +/- 0.10, N = 2)

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Total

srsRAN Project 23.5 - Mbps, More Is Better:
a: 9682.1 (SE +/- 13.30, N = 2)
b: 9718.6 (SE +/- 44.35, N = 2)
c: 9727.1 (SE +/- 56.45, N = 2)
1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 3.6 (Seconds, Fewer Is Better)
a: 84.55 (SE +/- 0.40, N = 2)
b: 84.35 (SE +/- 0.14, N = 2)
c: 84.17 (SE +/- 0.04, N = 2)

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 3.6 (Seconds, Fewer Is Better)
a: 68.80 (SE +/- 0.14, N = 2)
b: 68.50 (SE +/- 0.03, N = 2)
c: 68.70 (SE +/- 0.13, N = 2)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 50.13 (SE +/- 0.02, N = 2)
b: 49.92 (SE +/- 0.11, N = 2)
c: 49.93 (SE +/- 0.05, N = 2)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 19.95 (SE +/- 0.01, N = 2)
b: 20.03 (SE +/- 0.04, N = 2)
c: 20.02 (SE +/- 0.02, N = 2)

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

VVenC 1.9 (Frames Per Second, More Is Better)
a: 29.35 (SE +/- 0.07, N = 2)
b: 29.39 (SE +/- 0.11, N = 2)
c: 29.47 (SE +/- 0.07, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 39.42 (SE +/- 0.06, N = 2)
b: 39.56 (SE +/- 0.11, N = 2)
c: 39.51 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 25.36 (SE +/- 0.04, N = 2)
b: 25.27 (SE +/- 0.07, N = 2)
c: 25.30 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 143.07 (SE +/- 0.01, N = 2)
b: 143.21 (SE +/- 0.18, N = 2)
c: 143.57 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 120.36 (SE +/- 0.19, N = 2)
b: 120.18 (SE +/- 0.02, N = 2)
c: 120.58 (SE +/- 0.08, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 8.3056 (SE +/- 0.0127, N = 2)
b: 8.3182 (SE +/- 0.0014, N = 2)
c: 8.2903 (SE +/- 0.0053, N = 2)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 11.54 (SE +/- 0.08, N = 2)
b: 11.57 (SE +/- 0.01, N = 2)
c: 11.58 (SE +/- 0.06, N = 2)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 86.61 (SE +/- 0.59, N = 2)
b: 86.39 (SE +/- 0.08, N = 2)
c: 86.32 (SE +/- 0.42, N = 2)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 223.47 (SE +/- 0.03, N = 2)
b: 223.37 (SE +/- 0.26, N = 2)
c: 222.82 (SE +/- 0.05, N = 2)

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

VVenC 1.9 (Frames Per Second, More Is Better)
a: 5.991 (SE +/- 0.001, N = 2)
b: 5.993 (SE +/- 0.006, N = 2)
c: 5.976 (SE +/- 0.002, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 8.3665 (SE +/- 0.0013, N = 2)
b: 8.3441 (SE +/- 0.0212, N = 2)
c: 8.3468 (SE +/- 0.0366, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 3814.52 (SE +/- 0.54, N = 2)
b: 3824.30 (SE +/- 10.54, N = 2)
c: 3823.08 (SE +/- 17.23, N = 2)
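For the asynchronous multi-stream scenarios, throughput and ms/batch are not simple reciprocals; their product instead reflects how many items are in flight at once (Little's law). A hedged back-of-envelope sketch using the run "a" figures from the two ResNet-50, Sparse INT8 multi-stream charts above; the in-flight count is inferred from the numbers, not reported by the benchmark, and assumes one item per batch:

```python
def inflight_items(items_per_sec: float, ms_per_batch: float) -> float:
    """Little's law: average items in flight = throughput * latency (s)."""
    return items_per_sec * (ms_per_batch / 1000.0)

# Run "a" figures from the ResNet-50, Sparse INT8 async multi-stream charts.
estimate = inflight_items(3814.52, 8.3665)  # comes out near 32
```

The estimate landing near a round power of two is consistent with a fixed number of concurrent streams, though the benchmark output here does not state the stream count.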

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 53.48 (SE +/- 0.00, N = 2)
b: 53.55 (SE +/- 0.05, N = 2)
c: 53.62 (SE +/- 0.00, N = 2)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 597.97 (SE +/- 0.05, N = 2)
b: 596.81 (SE +/- 0.12, N = 2)
c: 596.53 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 37.61 (SE +/- 0.09, N = 2)
b: 37.53 (SE +/- 0.01, N = 2)
c: 37.58 (SE +/- 0.02, N = 2)

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

VVenC 1.9 (Frames Per Second, More Is Better)
a: 16.08 (SE +/- 0.02, N = 2)
b: 16.09 (SE +/- 0.02, N = 2)
c: 16.06 (SE +/- 0.02, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 225.31 (SE +/- 0.21, N = 2)
b: 225.46 (SE +/- 0.10, N = 2)
c: 225.80 (SE +/- 0.22, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 681.28 (SE +/- 0.37, N = 2)
b: 679.82 (SE +/- 0.17, N = 2)
c: 679.81 (SE +/- 0.34, N = 2)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 49.81 (SE +/- 0.05, N = 2)
b: 49.91 (SE +/- 0.00, N = 2)
c: 49.86 (SE +/- 0.06, N = 2)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 20.07 (SE +/- 0.02, N = 2)
b: 20.03 (SE +/- 0.00, N = 2)
c: 20.05 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 37.61 (SE +/- 0.04, N = 2)
b: 37.54 (SE +/- 0.01, N = 2)
c: 37.58 (SE +/- 0.02, N = 2)

NCNN

Target: CPU - Model: alexnet

NCNN 20230517 (ms, Fewer Is Better)
a: 5.23 (SE +/- 0.01, N = 2, MIN: 5.12 / MAX: 11.62)
b: 5.22 (SE +/- 0.02, N = 2, MIN: 5.11 / MAX: 5.77)
c: 5.22 (SE +/- 0.02, N = 2, MIN: 5.12 / MAX: 7.84)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 1105.38 (SE +/- 1.07, N = 2)
b: 1104.04 (SE +/- 1.15, N = 2)
c: 1103.36 (SE +/- 0.21, N = 2)

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 3.6 (Seconds, Fewer Is Better)
a: 33.70 (SE +/- 0.02, N = 2)
b: 33.76 (SE +/- 0.25, N = 2)
c: 33.72 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 28.92 (SE +/- 0.03, N = 2)
b: 28.95 (SE +/- 0.03, N = 2)
c: 28.96 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 141.70 (SE +/- 0.13, N = 2)
b: 141.62 (SE +/- 0.09, N = 2)
c: 141.48 (SE +/- 0.05, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 8.3423 (SE +/- 0.0056, N = 2)
b: 8.3368 (SE +/- 0.0012, N = 2)
c: 8.3300 (SE +/- 0.0003, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 119.80 (SE +/- 0.08, N = 2)
b: 119.88 (SE +/- 0.02, N = 2)
c: 119.98 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 841.48 (SE +/- 0.49, N = 2)
b: 840.26 (SE +/- 0.01, N = 2)
c: 840.44 (SE +/- 0.44, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 166.00 (SE +/- 0.05, N = 2)
b: 166.22 (SE +/- 0.06, N = 2)
c: 166.06 (SE +/- 0.08, N = 2)

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 3.6 (Seconds, Fewer Is Better)
a: 253.49 (SE +/- 0.11, N = 2)
b: 253.77 (SE +/- 0.23, N = 2)
c: 253.43 (SE +/- 0.52, N = 2)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 467.98 (SE +/- 0.36, N = 2)
b: 468.11 (SE +/- 1.23, N = 2)
c: 467.64 (SE +/- 0.26, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 192.35 (SE +/- 0.06, N = 2)
b: 192.16 (SE +/- 0.00, N = 2)
c: 192.21 (SE +/- 0.21, N = 2)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 68.32 (SE +/- 0.04, N = 2)
b: 68.27 (SE +/- 0.13, N = 2)
c: 68.34 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 840.54 (SE +/- 1.02, N = 2)
b: 840.95 (SE +/- 0.16, N = 2)
c: 840.12 (SE +/- 0.59, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 227.42 (SE +/- 0.11, N = 2)
b: 227.64 (SE +/- 0.19, N = 2)
c: 227.57 (SE +/- 0.22, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 140.40 (SE +/- 0.07, N = 2)
b: 140.26 (SE +/- 0.12, N = 2)
c: 140.30 (SE +/- 0.23, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 468.14 (SE +/- 0.19, N = 2)
b: 467.97 (SE +/- 0.62, N = 2)
c: 468.33 (SE +/- 0.09, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 326.31 (SE +/- 0.12, N = 2)
b: 326.54 (SE +/- 0.48, N = 2)
c: 326.41 (SE +/- 0.49, N = 2)

Timed GCC Compilation

Time To Compile

Timed GCC Compilation 13.2 (Seconds, Fewer Is Better)
a: 1020.13 (SE +/- 1.81, N = 2)
b: 1020.85 (SE +/- 0.03, N = 2)
c: 1020.22 (SE +/- 0.66, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 574.96 (SE +/- 0.03, N = 2)
b: 575.29 (SE +/- 0.68, N = 2)
c: 575.12 (SE +/- 0.31, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 68.29 (SE +/- 0.04, N = 2)
b: 68.28 (SE +/- 0.04, N = 2)
c: 68.25 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 55.58 (SE +/- 0.03, N = 2)
b: 55.56 (SE +/- 0.06, N = 2)
c: 55.58 (SE +/- 0.04, N = 2)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better)
a: 28.63 (SE +/- 0.03, N = 2)
b: 28.63 (SE +/- 0.04, N = 2)
c: 28.64 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 34.91 (SE +/- 0.03, N = 2)
b: 34.91 (SE +/- 0.05, N = 2)
c: 34.90 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better)
a: 97.87 (SE +/- 0.00, N = 2)
b: 97.84 (SE +/- 0.15, N = 2)
c: 97.87 (SE +/- 0.16, N = 2)

Apache CouchDB

Bulk Size: 500 - Inserts: 3000 - Rounds: 30

Apache CouchDB 3.3.2 (Seconds, Fewer Is Better)
a: 2390.93
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB

Bulk Size: 500 - Inserts: 1000 - Rounds: 30

Apache CouchDB 3.3.2 (Seconds, Fewer Is Better)
a: 339.97 (SE +/- 8.78, N = 2)
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB

Bulk Size: 300 - Inserts: 3000 - Rounds: 30

Apache CouchDB 3.3.2 (Seconds, Fewer Is Better)
a: 572.13 (SE +/- 0.52, N = 2)
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB

Bulk Size: 300 - Inserts: 1000 - Rounds: 30

Apache CouchDB 3.3.2 (Seconds, Fewer Is Better)
a: 169.51 (SE +/- 0.64, N = 2)
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB

Bulk Size: 100 - Inserts: 3000 - Rounds: 30

Apache CouchDB 3.3.2 (Seconds, Fewer Is Better)
a: 346.09 (SE +/- 0.25, N = 2)
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB

Bulk Size: 100 - Inserts: 1000 - Rounds: 30

Apache CouchDB 3.3.2 (Seconds, Fewer Is Better)
a: 101.58 (SE +/- 0.50, N = 2)
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD


Phoronix Test Suite v10.8.5