7763 2204

AMD EPYC 7763 64-Core testing with an AMD DAYTONA_X (RYM1009B BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2308059-NE-77632204529&rdt.

System under test (identical for configurations a, b, and c):

Processor: AMD EPYC 7763 64-Core @ 2.45GHz (64 Cores / 128 Threads)
Motherboard: AMD DAYTONA_X (RYM1009B BIOS)
Chipset: AMD Starship/Matisse
Memory: 256GB
Disk: 800GB INTEL SSDPF21Q800GB
Graphics: ASPEED
Monitor: VE228
Network: 2 x Mellanox MT27710
OS: Ubuntu 22.04
Kernel: 6.2.0-phx (x86_64)
Desktop: GNOME Shell 42.5
Display Server: X Server 1.21.1.3
Vulkan: 1.3.224
Compiler: GCC 11.3.0 + LLVM 14.0.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa001173
Java Details: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Results summary (flattened table from the HTML export; individual results are presented per test below). Benchmarks covered: srsRAN Project (srsran), VVenC (vvenc), Timed GCC Compilation (build-gcc), Apache CouchDB (couchdb), Apache IoTDB (apache-iotdb), Neural Magic DeepSparse (deepsparse), NCNN (ncnn), Blender (blender), Apache Cassandra (cassandra), and BRL-CAD (brl-cad).

srsRAN Project

Test: Downlink Processor Benchmark

srsRAN Project 23.5 (Mbps, More Is Better):
a: 657.7 (SE +/- 17.85, N = 2)
b: 658.1 (SE +/- 27.75, N = 2)
c: 619.3 (SE +/- 0.35, N = 2)
1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
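Each result above is the mean of N runs, with "SE +/-" giving the standard error of the mean (sample standard deviation divided by the square root of N; for N = 2 this reduces to half the absolute difference between the two runs). A minimal sketch of that calculation — the raw run values below are hypothetical, chosen only to reproduce the mean and SE reported for configuration a:

```python
import math
import statistics

def standard_error(runs):
    """Standard error of the mean: sample stddev / sqrt(N)."""
    return statistics.stdev(runs) / math.sqrt(len(runs))

# Hypothetical pair of runs whose mean is 657.7 Mbps and whose
# standard error is 17.85, matching the 'a' line above; the actual
# raw samples are not included in this export.
runs_a = [639.85, 675.55]

print(round(statistics.mean(runs_a), 2))   # 657.7
print(round(standard_error(runs_a), 2))    # 17.85
```

With only two runs the SE is a very coarse variability estimate, which is why near-identical means (e.g. a vs. b here) should not be read as a real difference.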

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Total

srsRAN Project 23.5 (Mbps, More Is Better):
a: 9682.1 (SE +/- 13.30, N = 2)
b: 9718.6 (SE +/- 44.35, N = 2)
c: 9727.1 (SE +/- 56.45, N = 2)
1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Thread

srsRAN Project 23.5 (Mbps, More Is Better):
a: 211.1 (SE +/- 0.10, N = 2)
b: 208.2 (SE +/- 1.90, N = 2)
c: 210.8 (SE +/- 0.20, N = 2)
1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

VVenC 1.9 (Frames Per Second, More Is Better):
a: 5.991 (SE +/- 0.001, N = 2)
b: 5.993 (SE +/- 0.006, N = 2)
c: 5.976 (SE +/- 0.002, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

VVenC 1.9 (Frames Per Second, More Is Better):
a: 10.65 (SE +/- 0.18, N = 2)
b: 10.82 (SE +/- 0.02, N = 2)
c: 10.82 (SE +/- 0.00, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

VVenC 1.9 (Frames Per Second, More Is Better):
a: 16.08 (SE +/- 0.02, N = 2)
b: 16.09 (SE +/- 0.02, N = 2)
c: 16.06 (SE +/- 0.02, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

VVenC 1.9 (Frames Per Second, More Is Better):
a: 29.35 (SE +/- 0.07, N = 2)
b: 29.39 (SE +/- 0.11, N = 2)
c: 29.47 (SE +/- 0.07, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Timed GCC Compilation

Time To Compile

Timed GCC Compilation 13.2 (Seconds, Fewer Is Better):
a: 1020.13 (SE +/- 1.81, N = 2)
b: 1020.85 (SE +/- 0.03, N = 2)
c: 1020.22 (SE +/- 0.66, N = 2)

Apache CouchDB

Bulk Size: 100 - Inserts: 1000 - Rounds: 30

Apache CouchDB 3.3.2 (Seconds, Fewer Is Better):
a: 101.58 (SE +/- 0.50, N = 2)
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB

Bulk Size: 100 - Inserts: 3000 - Rounds: 30

Apache CouchDB 3.3.2 (Seconds, Fewer Is Better):
a: 346.09 (SE +/- 0.25, N = 2)
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB

Bulk Size: 300 - Inserts: 1000 - Rounds: 30

Apache CouchDB 3.3.2 (Seconds, Fewer Is Better):
a: 169.51 (SE +/- 0.64, N = 2)
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB

Bulk Size: 300 - Inserts: 3000 - Rounds: 30

Apache CouchDB 3.3.2 (Seconds, Fewer Is Better):
a: 572.13 (SE +/- 0.52, N = 2)
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB

Bulk Size: 500 - Inserts: 1000 - Rounds: 30

Apache CouchDB 3.3.2 (Seconds, Fewer Is Better):
a: 339.97 (SE +/- 8.78, N = 2)
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB

Bulk Size: 500 - Inserts: 3000 - Rounds: 30

Apache CouchDB 3.3.2 (Seconds, Fewer Is Better):
a: 2390.93
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 (point/sec, More Is Better):
a: 644019.72
b: 648308.27
c: 667880.96

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 (Average Latency, More Is Better):
a: 17.45 (MAX: 645.35)
b: 17.28 (MAX: 644.33)
c: 16.35 (MAX: 668.86)

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500

Apache IoTDB 1.1.2 (point/sec, More Is Better):
a: 1038515.62
b: 1069145.79
c: 1044153.44

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500

Apache IoTDB 1.1.2 (Average Latency, More Is Better):
a: 34.36 (MAX: 704.53)
b: 32.92 (MAX: 728.63)
c: 34.08 (MAX: 699.28)

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 (point/sec, More Is Better):
a: 898967.08
b: 978176.76
c: 870795.92

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 (Average Latency, More Is Better):
a: 15.24 (MAX: 583.94)
b: 13.60 (MAX: 586.94)
c: 16.07 (MAX: 592.48)

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500

Apache IoTDB 1.1.2 (point/sec, More Is Better):
a: 1232509.19
b: 1226219.88
c: 1261385.89

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500

Apache IoTDB 1.1.2 (Average Latency, More Is Better):
a: 33.50 (MAX: 690.29)
b: 33.86 (MAX: 659.59)
c: 32.71 (MAX: 725.08)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 (point/sec, More Is Better):
a: 1182440.62
b: 1365831.50
c: 1367763.49

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 (Average Latency, More Is Better):
a: 13.54 (MAX: 856.65)
b: 11.63 (MAX: 860.78)
c: 11.83 (MAX: 836.9)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500

Apache IoTDB 1.1.2 (point/sec, More Is Better):
a: 1636128.73
b: 1686943.16
c: 1446487.70

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500

Apache IoTDB 1.1.2 (Average Latency, More Is Better):
a: 27.1 (MAX: 934.45)
b: 26.0 (MAX: 873.88)
c: 31.4 (MAX: 890.05)

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 (point/sec, More Is Better):
a: 39287432.92
b: 38401769.17
c: 39945212.99

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 (Average Latency, More Is Better):
a: 36.04 (MAX: 804.01)
b: 36.85 (MAX: 721.27)
c: 35.01 (MAX: 746.4)

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 (point/sec, More Is Better):
a: 51316464.44
b: 50507747.12
c: 52464142.83

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 (Average Latency, More Is Better):
a: 81.20 (MAX: 1009.28)
b: 82.26 (MAX: 864.29)
c: 79.16 (MAX: 1006.03)

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 (point/sec, More Is Better):
a: 46437377.67
b: 47245476.78
c: 46674344.69

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 (Average Latency, More Is Better):
a: 35.09 (MAX: 804.64)
b: 33.84 (MAX: 773.52)
c: 34.79 (MAX: 780.01)

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 (point/sec, More Is Better):
a: 42048733.22
b: 41987111.39
c: 43363203.76

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 (Average Latency, More Is Better):
a: 109.38 (MAX: 3597.09)
b: 110.88 (MAX: 3569.78)
c: 106.73 (MAX: 3485.91)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 (point/sec, More Is Better):
a: 51341708.85
b: 50045888.98
c: 49201448.81

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 (Average Latency, More Is Better):
a: 35.05 (MAX: 2157.23)
b: 36.10 (MAX: 1990.15)
c: 37.12 (MAX: 2182.81)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 (point/sec, More Is Better):
a: 56935634.55
b: 59505306.55
c: 56463717.54

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 (Average Latency, More Is Better):
a: 81.81 (MAX: 3018.16)
b: 79.83 (MAX: 1607.86)
c: 83.14 (MAX: 2932.1)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 37.61 (SE +/- 0.09, N = 2)
b: 37.53 (SE +/- 0.01, N = 2)
c: 37.58 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 840.54 (SE +/- 1.02, N = 2)
b: 840.95 (SE +/- 0.16, N = 2)
c: 840.12 (SE +/- 0.59, N = 2)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 19.95 (SE +/- 0.01, N = 2)
b: 20.03 (SE +/- 0.04, N = 2)
c: 20.02 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 50.13 (SE +/- 0.02, N = 2)
b: 49.92 (SE +/- 0.11, N = 2)
c: 49.93 (SE +/- 0.05, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 1105.38 (SE +/- 1.07, N = 2)
b: 1104.04 (SE +/- 1.15, N = 2)
c: 1103.36 (SE +/- 0.21, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 28.92 (SE +/- 0.03, N = 2)
b: 28.95 (SE +/- 0.03, N = 2)
c: 28.96 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 173.95 (SE +/- 0.98, N = 2)
b: 172.75 (SE +/- 0.63, N = 2)
c: 172.07 (SE +/- 0.80, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 5.7470 (SE +/- 0.0325, N = 2)
b: 5.7863 (SE +/- 0.0212, N = 2)
c: 5.8092 (SE +/- 0.0271, N = 2)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 489.77 (SE +/- 0.68, N = 2)
b: 486.17 (SE +/- 1.03, N = 2)
c: 482.13 (SE +/- 7.21, N = 2)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 65.28 (SE +/- 0.10, N = 2)
b: 65.74 (SE +/- 0.10, N = 2)
c: 66.28 (SE +/- 0.99, N = 2)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 86.61 (SE +/- 0.59, N = 2)
b: 86.39 (SE +/- 0.08, N = 2)
c: 86.32 (SE +/- 0.42, N = 2)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 11.54 (SE +/- 0.08, N = 2)
b: 11.57 (SE +/- 0.01, N = 2)
c: 11.58 (SE +/- 0.06, N = 2)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 143.07 (SE +/- 0.01, N = 2)
b: 143.21 (SE +/- 0.18, N = 2)
c: 143.57 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 223.47 (SE +/- 0.03, N = 2)
b: 223.37 (SE +/- 0.26, N = 2)
c: 222.82 (SE +/- 0.05, N = 2)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 39.42 (SE +/- 0.06, N = 2)
b: 39.56 (SE +/- 0.11, N = 2)
c: 39.51 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 25.36 (SE +/- 0.04, N = 2)
b: 25.27 (SE +/- 0.07, N = 2)
c: 25.30 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 468.14 (SE +/- 0.19, N = 2)
b: 467.97 (SE +/- 0.62, N = 2)
c: 468.33 (SE +/- 0.09, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 68.29 (SE +/- 0.04, N = 2)
b: 68.28 (SE +/- 0.04, N = 2)
c: 68.25 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 159.85 (SE +/- 0.43, N = 2)
b: 159.75 (SE +/- 0.22, N = 2)
c: 160.59 (SE +/- 0.80, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 6.2525 (SE +/- 0.0166, N = 2)
b: 6.2566 (SE +/- 0.0087, N = 2)
c: 6.2238 (SE +/- 0.0308, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 3814.52 (SE +/- 0.54, N = 2)
b: 3824.30 (SE +/- 10.54, N = 2)
c: 3823.08 (SE +/- 17.23, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 8.3665 (SE +/- 0.0013, N = 2)
b: 8.3441 (SE +/- 0.0212, N = 2)
c: 8.3468 (SE +/- 0.0366, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 723.71 (SE +/- 10.71, N = 2)
b: 732.11 (SE +/- 2.91, N = 2)
c: 731.51 (SE +/- 0.48, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 1.3784 (SE +/- 0.0205, N = 2)
b: 1.3623 (SE +/- 0.0055, N = 2)
c: 1.3634 (SE +/- 0.0010, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 225.31 (SE +/- 0.21, N = 2)
b: 225.46 (SE +/- 0.10, N = 2)
c: 225.80 (SE +/- 0.22, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 (ms/batch, Fewer Is Better):
a: 141.70 (SE +/- 0.13, N = 2)
b: 141.62 (SE +/- 0.09, N = 2)
c: 141.48 (SE +/- 0.05, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.5 (items/sec, More Is Better):
a: 119.80 (SE +/- 0.08, N = 2)
b: 119.88 (SE +/- 0.02, N = 2)
c: 119.98 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
a: 8.3423 (SE +/- 0.0056, N = 2)
b: 8.3368 (SE +/- 0.0012, N = 2)
c: 8.3300 (SE +/- 0.0003, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
a: 46.72 (SE +/- 0.10, N = 2)
b: 46.57 (SE +/- 0.03, N = 2)
c: 46.90 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
a: 681.28 (SE +/- 0.37, N = 2)
b: 679.82 (SE +/- 0.17, N = 2)
c: 679.81 (SE +/- 0.34, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream
a: 24.44 (SE +/- 0.01, N = 2)
b: 24.57 (SE +/- 0.01, N = 2)
c: 24.53 (SE +/- 0.00, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream
a: 40.90 (SE +/- 0.01, N = 2)
b: 40.70 (SE +/- 0.01, N = 2)
c: 40.76 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
a: 467.98 (SE +/- 0.36, N = 2)
b: 468.11 (SE +/- 1.23, N = 2)
c: 467.64 (SE +/- 0.26, N = 2)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
a: 68.32 (SE +/- 0.04, N = 2)
b: 68.27 (SE +/- 0.13, N = 2)
c: 68.34 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
a: 159.97 (SE +/- 0.24, N = 2)
b: 159.92 (SE +/- 0.32, N = 2)
c: 160.70 (SE +/- 0.62, N = 2)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
a: 6.2481 (SE +/- 0.0097, N = 2)
b: 6.2495 (SE +/- 0.0121, N = 2)
c: 6.2195 (SE +/- 0.0244, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
a: 227.42 (SE +/- 0.11, N = 2)
b: 227.64 (SE +/- 0.19, N = 2)
c: 227.57 (SE +/- 0.22, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
a: 140.40 (SE +/- 0.07, N = 2)
b: 140.26 (SE +/- 0.12, N = 2)
c: 140.30 (SE +/- 0.23, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
a: 120.36 (SE +/- 0.19, N = 2)
b: 120.18 (SE +/- 0.02, N = 2)
c: 120.58 (SE +/- 0.08, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
a: 8.3056 (SE +/- 0.0127, N = 2)
b: 8.3182 (SE +/- 0.0014, N = 2)
c: 8.2903 (SE +/- 0.0053, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
a: 326.31 (SE +/- 0.12, N = 2)
b: 326.54 (SE +/- 0.48, N = 2)
c: 326.41 (SE +/- 0.49, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
a: 97.87 (SE +/- 0.00, N = 2)
b: 97.84 (SE +/- 0.15, N = 2)
c: 97.87 (SE +/- 0.16, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
a: 97.36 (SE +/- 0.04, N = 2)
b: 97.61 (SE +/- 0.17, N = 2)
c: 96.88 (SE +/- 0.08, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
a: 10.26 (SE +/- 0.00, N = 2)
b: 10.24 (SE +/- 0.02, N = 2)
c: 10.31 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
a: 53.48 (SE +/- 0.00, N = 2)
b: 53.55 (SE +/- 0.05, N = 2)
c: 53.62 (SE +/- 0.00, N = 2)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
a: 597.97 (SE +/- 0.05, N = 2)
b: 596.81 (SE +/- 0.12, N = 2)
c: 596.53 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
a: 28.63 (SE +/- 0.03, N = 2)
b: 28.63 (SE +/- 0.04, N = 2)
c: 28.64 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
a: 34.91 (SE +/- 0.03, N = 2)
b: 34.91 (SE +/- 0.05, N = 2)
c: 34.90 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
a: 574.96 (SE +/- 0.03, N = 2)
b: 575.29 (SE +/- 0.68, N = 2)
c: 575.12 (SE +/- 0.31, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
a: 55.58 (SE +/- 0.03, N = 2)
b: 55.56 (SE +/- 0.06, N = 2)
c: 55.58 (SE +/- 0.04, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
a: 94.51 (SE +/- 0.05, N = 2)
b: 94.04 (SE +/- 0.64, N = 2)
c: 94.55 (SE +/- 0.18, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
a: 10.58 (SE +/- 0.01, N = 2)
b: 10.63 (SE +/- 0.07, N = 2)
c: 10.57 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
a: 166.00 (SE +/- 0.05, N = 2)
b: 166.22 (SE +/- 0.06, N = 2)
c: 166.06 (SE +/- 0.08, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
a: 192.35 (SE +/- 0.06, N = 2)
b: 192.16 (SE +/- 0.00, N = 2)
c: 192.21 (SE +/- 0.21, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
a: 53.78 (SE +/- 0.08, N = 2)
b: 53.70 (SE +/- 0.01, N = 2)
c: 53.95 (SE +/- 0.10, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
a: 18.59 (SE +/- 0.03, N = 2)
b: 18.61 (SE +/- 0.00, N = 2)
c: 18.53 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
a: 37.61 (SE +/- 0.04, N = 2)
b: 37.54 (SE +/- 0.01, N = 2)
c: 37.58 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
a: 841.48 (SE +/- 0.49, N = 2)
b: 840.26 (SE +/- 0.01, N = 2)
c: 840.44 (SE +/- 0.44, N = 2)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
a: 20.07 (SE +/- 0.02, N = 2)
b: 20.03 (SE +/- 0.00, N = 2)
c: 20.05 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
a: 49.81 (SE +/- 0.05, N = 2)
b: 49.91 (SE +/- 0.00, N = 2)
c: 49.86 (SE +/- 0.06, N = 2)

NCNN

Target: CPU - Model: mobilenet

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: mobilenet
a: 14.11 (SE +/- 0.04, N = 2; MIN: 13.76 / MAX: 19.75)
b: 13.97 (SE +/- 0.03, N = 2; MIN: 13.64 / MAX: 19.68)
c: 14.03 (SE +/- 0.09, N = 2; MIN: 13.68 / MAX: 18.14)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2
a: 6.35 (SE +/- 0.01, N = 2; MIN: 6.19 / MAX: 12.45)
b: 6.26 (SE +/- 0.00, N = 2; MIN: 6.11 / MAX: 12.76)
c: 6.17 (SE +/- 0.07, N = 2; MIN: 6.02 / MAX: 6.83)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3
a: 7.00 (SE +/- 0.55, N = 2; MIN: 6.3 / MAX: 10.2)
b: 6.55 (SE +/- 0.18, N = 2; MIN: 6.24 / MAX: 7.61)
c: 6.34 (SE +/- 0.04, N = 2; MIN: 6.15 / MAX: 11.57)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: shufflenet-v2

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: shufflenet-v2
a: 9.09 (SE +/- 1.23, N = 2; MIN: 7.74 / MAX: 15.92)
b: 7.93 (SE +/- 0.31, N = 2; MIN: 7.51 / MAX: 11.45)
c: 7.60 (SE +/- 0.04, N = 2; MIN: 7.44 / MAX: 11.6)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: mnasnet

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: mnasnet
a: 6.09 (SE +/- 0.11, N = 2; MIN: 5.89 / MAX: 10.35)
b: 5.90 (SE +/- 0.00, N = 2; MIN: 5.81 / MAX: 12.33)
c: 5.86 (SE +/- 0.05, N = 2; MIN: 5.73 / MAX: 11.67)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: efficientnet-b0

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: efficientnet-b0
a: 9.98 (SE +/- 0.03, N = 2; MIN: 9.82 / MAX: 10.96)
b: 9.78 (SE +/- 0.01, N = 2; MIN: 9.64 / MAX: 16.01)
c: 9.75 (SE +/- 0.04, N = 2; MIN: 9.58 / MAX: 13.38)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: blazeface

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: blazeface
a: 3.97 (SE +/- 0.09, N = 2; MIN: 3.5 / MAX: 7.61)
b: 3.43 (SE +/- 0.01, N = 2; MIN: 3.35 / MAX: 3.83)
c: 3.48 (SE +/- 0.05, N = 2; MIN: 3.32 / MAX: 8.58)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: googlenet

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: googlenet
a: 14.62 (SE +/- 0.03, N = 2; MIN: 14.46 / MAX: 25.51)
b: 14.53 (SE +/- 0.02, N = 2; MIN: 14.31 / MAX: 24.11)
c: 14.47 (SE +/- 0.06, N = 2; MIN: 14.26 / MAX: 20.57)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: vgg16

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: vgg16
a: 23.84 (SE +/- 0.05, N = 2; MIN: 23.45 / MAX: 28.55)
b: 23.64 (SE +/- 0.06, N = 2; MIN: 23.33 / MAX: 28.05)
c: 23.91 (SE +/- 0.08, N = 2; MIN: 23.48 / MAX: 30.63)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: resnet18

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: resnet18
a: 8.50 (SE +/- 0.03, N = 2; MIN: 8.33 / MAX: 14.69)
b: 8.42 (SE +/- 0.04, N = 2; MIN: 8.27 / MAX: 14.6)
c: 8.51 (SE +/- 0.03, N = 2; MIN: 8.3 / MAX: 13.61)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: alexnet

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: alexnet
a: 5.23 (SE +/- 0.01, N = 2; MIN: 5.12 / MAX: 11.62)
b: 5.22 (SE +/- 0.02, N = 2; MIN: 5.11 / MAX: 5.77)
c: 5.22 (SE +/- 0.02, N = 2; MIN: 5.12 / MAX: 7.84)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: resnet50

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: resnet50
a: 15.49 (SE +/- 0.08, N = 2; MIN: 15.24 / MAX: 21.82)
b: 15.34 (SE +/- 0.10, N = 2; MIN: 15.07 / MAX: 21.69)
c: 15.54 (SE +/- 0.14, N = 2; MIN: 15.15 / MAX: 27.2)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: yolov4-tiny

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: yolov4-tiny
a: 20.66 (SE +/- 0.02, N = 2; MIN: 20.04 / MAX: 25.04)
b: 20.51 (SE +/- 0.08, N = 2; MIN: 19.87 / MAX: 24.86)
c: 20.80 (SE +/- 0.13, N = 2; MIN: 20.01 / MAX: 96.45)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: squeezenet_ssd

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: squeezenet_ssd
a: 14.17 (SE +/- 0.01, N = 2; MIN: 13.53 / MAX: 18.45)
b: 14.11 (SE +/- 0.09, N = 2; MIN: 13.32 / MAX: 18.63)
c: 14.59 (SE +/- 0.62, N = 2; MIN: 13.37 / MAX: 277.61)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: regnety_400m

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: regnety_400m
a: 35.24 (SE +/- 5.99, N = 2; MIN: 27.86 / MAX: 47.9)
b: 27.54 (SE +/- 0.15, N = 2; MIN: 26.96 / MAX: 33.56)
c: 27.59 (SE +/- 0.07, N = 2; MIN: 26.64 / MAX: 33.41)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: vision_transformer

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: vision_transformer
a: 48.79 (SE +/- 0.07, N = 2; MIN: 47.65 / MAX: 78.36)
b: 48.49 (SE +/- 0.04, N = 2; MIN: 47.44 / MAX: 58.53)
c: 48.43 (SE +/- 0.30, N = 2; MIN: 47.33 / MAX: 85.35)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: FastestDet

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20230517 - Target: CPU - Model: FastestDet
a: 10.25 (SE +/- 0.94, N = 2; MIN: 8.95 / MAX: 17.14)
b: 9.04 (SE +/- 0.11, N = 2; MIN: 8.65 / MAX: 15.19)
c: 8.88 (SE +/- 0.01, N = 2; MIN: 8.58 / MAX: 13.34)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Blender

Blend File: BMW27 - Compute: CPU-Only

OpenBenchmarking.org - Seconds, Fewer Is Better
Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only
a: 27.27 (SE +/- 0.06, N = 2)
b: 27.50 (SE +/- 0.15, N = 2)
c: 27.24 (SE +/- 0.04, N = 2)

Blender

Blend File: Classroom - Compute: CPU-Only

OpenBenchmarking.org - Seconds, Fewer Is Better
Blender 3.6 - Blend File: Classroom - Compute: CPU-Only
a: 68.80 (SE +/- 0.14, N = 2)
b: 68.50 (SE +/- 0.03, N = 2)
c: 68.70 (SE +/- 0.13, N = 2)

Blender

Blend File: Fishy Cat - Compute: CPU-Only

OpenBenchmarking.org - Seconds, Fewer Is Better
Blender 3.6 - Blend File: Fishy Cat - Compute: CPU-Only
a: 33.70 (SE +/- 0.02, N = 2)
b: 33.76 (SE +/- 0.25, N = 2)
c: 33.72 (SE +/- 0.01, N = 2)

Blender

Blend File: Barbershop - Compute: CPU-Only

OpenBenchmarking.org - Seconds, Fewer Is Better
Blender 3.6 - Blend File: Barbershop - Compute: CPU-Only
a: 253.49 (SE +/- 0.11, N = 2)
b: 253.77 (SE +/- 0.23, N = 2)
c: 253.43 (SE +/- 0.52, N = 2)

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

OpenBenchmarking.org - Seconds, Fewer Is Better
Blender 3.6 - Blend File: Pabellon Barcelona - Compute: CPU-Only
a: 84.55 (SE +/- 0.40, N = 2)
b: 84.35 (SE +/- 0.14, N = 2)
c: 84.17 (SE +/- 0.04, N = 2)

Apache Cassandra

Test: Writes

OpenBenchmarking.org - Op/s, More Is Better
Apache Cassandra 4.1.3 - Test: Writes
a: 236650 (SE +/- 633.50, N = 2)
b: 238161 (SE +/- 817.50, N = 2)
c: 234887 (SE +/- 669.00, N = 2)

BRL-CAD

VGR Performance Metric

OpenBenchmarking.org - VGR Performance Metric, More Is Better
BRL-CAD 7.36 - VGR Performance Metric
a: 734386 (SE +/- 1805.50, N = 2)
b: 729876 (SE +/- 357.50, N = 2)
c: 730434 (SE +/- 963.50, N = 2)
1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
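Each result above is the mean of N benchmark runs with a "SE +/-" annotation. Assuming the conventional standard-error-of-the-mean definition (sample standard deviation divided by the square root of N; this export does not show the exact formula the Phoronix Test Suite applies), the arithmetic can be sketched as follows. The two run values here are hypothetical, chosen only so their mean and SE match the BRL-CAD "a" result (734386, SE +/- 1805.50):

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample stddev / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# With N = 2 runs this reduces to half the absolute difference
# between the two run values (hypothetical runs for illustration):
runs = [732580.5, 736191.5]
print(statistics.mean(runs))        # 734386.0
print(standard_error(runs))         # 1805.5
```

Note how tightly N = 2 limits what the SE can say: with only two samples the estimate of run-to-run variance is itself very noisy, which is consistent with the occasional large SE values in the tables above (e.g. regnety_400m).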


Phoronix Test Suite v10.8.5