dfgg

AMD Ryzen Threadripper 3970X 32-Core testing with an ASUS ROG ZENITH II EXTREME (1802 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2403165-NE-DFGG2384950&grt&sor.
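For anyone who wants to rerun this comparison locally, the Phoronix Test Suite can be pointed at a public OpenBenchmarking.org result ID to execute the same test selection and merge the new numbers with the posted a/b/c runs. A minimal sketch, assuming the phoronix-test-suite CLI is installed and that the result ID above is still publicly available:

    # Fetch the posted result and run the same tests locally for a side-by-side comparison
    phoronix-test-suite benchmark 2403165-NE-DFGG2384950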

Tested configuration (runs a, b, and c share the same hardware and software):

Processor: AMD Ryzen Threadripper 3970X 32-Core @ 3.70GHz (32 Cores / 64 Threads)
Motherboard: ASUS ROG ZENITH II EXTREME (1802 BIOS)
Chipset: AMD Starship/Matisse
Memory: 4 x 16GB DDR4-3600MT/s Corsair CMT64GX4M4Z3600C16
Disk: Samsung SSD 980 PRO 500GB
Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: ASUS VP28U
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 6.5.0-21-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.54)
Vulkan: 1.2.204
Compiler: GCC 11.4.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x830107a
Python Details: Python 3.10.12
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Consolidated a/b/c results table covering Google Draco, Neural Magic DeepSparse, Parallel BZIP2 Compression, Primesieve, srsRAN Project, SVT-AV1, and Timed Linux Kernel Compilation; the individual per-test results are listed below.

Google Draco

Model: Lion

Google Draco 1.5.6 - ms, Fewer Is Better
c: 5199 | a: 5258 | b: 5275 (SE +/- 8.19, N = 3)
1. (CXX) g++ options: -O3

Google Draco

Model: Church Facade

Google Draco 1.5.6 - ms, Fewer Is Better
a: 7989 | c: 8165 | b: 8191 (SE +/- 22.55, N = 3)
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
b: 30.54 | c: 30.44 | a: 30.30 (SE +/- 0.12, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
b: 522.35 | c: 524.87 | a: 525.40 (SE +/- 2.27, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
b: 17.71 | c: 17.70 | a: 17.64 (SE +/- 0.01, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
b: 56.44 | c: 56.49 | a: 56.67 (SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
c: 672.72 | a: 672.34 | b: 669.96 (SE +/- 0.46, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
c: 23.76 | a: 23.77 | b: 23.86 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
c: 171.45 | b: 170.46 | a: 168.53 (SE +/- 0.52, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
c: 5.8272 | b: 5.8612 | a: 5.9275 (SE +/- 0.0182, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
c: 324.37 | b: 322.23 | a: 320.60 (SE +/- 0.69, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
c: 49.30 | b: 49.63 | a: 49.88 (SE +/- 0.11, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
c: 148.16 | b: 147.89 | a: 143.45 (SE +/- 0.39, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
c: 6.7418 | b: 6.7539 | a: 6.9635 (SE +/- 0.0188, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
c: 1833.36 | b: 1813.59 | a: 1782.40 (SE +/- 19.60, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
c: 8.7013 | b: 8.7970 | a: 8.9514 (SE +/- 0.0977, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
c: 774.06 | a: 754.33 | b: 748.35 (SE +/- 0.57, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
c: 1.2875 | a: 1.3212 | b: 1.3317 (SE +/- 0.0011, N = 3)

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
b: 2.4955 | a: 2.4922 | c: 2.4912 (SE +/- 0.0102, N = 3)

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
b: 5875.85 | a: 5886.95 | c: 5888.22 (SE +/- 22.16, N = 3)

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
b: 5.4596 | c: 5.4574 | a: 5.4552 (SE +/- 0.0010, N = 3)

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
b: 183.14 | c: 183.21 | a: 183.29 (SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
c: 323.61 | b: 322.04 | a: 321.29 (SE +/- 0.63, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
c: 49.42 | b: 49.66 | a: 49.76 (SE +/- 0.11, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
b: 149.20 | c: 146.23 | a: 144.24 (SE +/- 0.38, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
b: 6.6951 | c: 6.8307 | a: 6.9254 (SE +/- 0.0182, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
a: 158.70 | b: 158.50 | c: 158.04 (SE +/- 1.03, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
a: 100.69 | b: 100.91 | c: 101.10 (SE +/- 0.70, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
c: 93.87 | b: 93.79 | a: 93.01 (SE +/- 0.10, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
c: 10.65 | b: 10.65 | a: 10.74 (SE +/- 0.01, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
c: 245.97 | b: 245.35 | a: 244.46 (SE +/- 0.41, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
c: 65.02 | b: 65.18 | a: 65.41 (SE +/- 0.11, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
c: 85.70 | b: 84.45 | a: 81.15 (SE +/- 0.28, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
c: 11.66 | b: 11.83 | a: 12.32 (SE +/- 0.04, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
b: 33.45 | c: 33.33 | a: 33.16 (SE +/- 0.16, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
b: 477.03 | c: 479.29 | a: 481.31 (SE +/- 2.17, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
b: 21.43 | c: 21.42 | a: 21.13 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
b: 46.64 | c: 46.67 | a: 47.32 (SE +/- 0.04, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
b: 344.92 | c: 344.66 | a: 344.29 (SE +/- 0.16, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
b: 46.35 | c: 46.36 | a: 46.42 (SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
b: 86.50 | a: 86.39 | c: 86.00 (SE +/- 0.20, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
b: 11.55 | a: 11.57 | c: 11.62 (SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
b: 30.22 | c: 30.07 | a: 30.07 (SE +/- 0.22, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
b: 525.92 | a: 529.09 | c: 530.65 (SE +/- 3.49, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - items/sec, More Is Better
b: 17.63 | a: 17.63 | c: 17.54 (SE +/- 0.06, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.7 - ms/batch, Fewer Is Better
b: 56.72 | a: 56.72 | c: 57.00 (SE +/- 0.19, N = 3)

Parallel BZIP2 Compression

FreeBSD-13.0-RELEASE-amd64-memstick.img Compression

Parallel BZIP2 Compression 1.1.13 - Seconds, Fewer Is Better
b: 2.811454 | a: 2.845905 | c: 2.894649 (SE +/- 0.034154, N = 15)
1. (CXX) g++ options: -O2 -pthread -lbz2 -lpthread

Primesieve

Length: 1e12

Primesieve 12.1 - Seconds, Fewer Is Better
a: 5.503 | c: 5.513 | b: 5.521 (SE +/- 0.005, N = 3)
1. (CXX) g++ options: -O3

Primesieve

Length: 1e13

Primesieve 12.1 - Seconds, Fewer Is Better
c: 68.13 | b: 68.22 | a: 68.35 (SE +/- 0.11, N = 3)
1. (CXX) g++ options: -O3

srsRAN Project

Test: PDSCH Processor Benchmark, Throughput Total

srsRAN Project 23.10.1-20240219 - Mbps, More Is Better
a: 11658.0 | b: 11575.9 | c: 11512.1 (SE +/- 24.76, N = 3)
1. (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -O3 -fno-trapping-math -fno-math-errno -ldl

srsRAN Project

Test: PDSCH Processor Benchmark, Throughput Thread

srsRAN Project 23.10.1-20240219 - Mbps, More Is Better
b: 490.6 | a: 484.5 | c: 484.4 (SE +/- 1.73, N = 3)
1. (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -O3 -fno-trapping-math -fno-math-errno -ldl

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 2.0 - Frames Per Second, More Is Better
c: 4.762 | b: 4.746 | a: 4.698 (SE +/- 0.034, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 2.0 - Frames Per Second, More Is Better
b: 46.03 | c: 45.72 | a: 45.49 (SE +/- 0.14, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 2.0 - Frames Per Second, More Is Better
c: 108.61 | b: 107.34 | a: 107.18 (SE +/- 0.17, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 2.0 - Frames Per Second, More Is Better
c: 108.23 | b: 107.78 | a: 106.59 (SE +/- 0.60, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 2.0 - Frames Per Second, More Is Better
c: 12.59 | a: 12.56 | b: 12.27 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 2.0 - Frames Per Second, More Is Better
b: 84.22 | c: 83.69 | a: 83.47 (SE +/- 0.49, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 2.0 - Frames Per Second, More Is Better
c: 298.84 | b: 298.40 | a: 297.09 (SE +/- 2.26, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

SVT-AV1 2.0 - Frames Per Second, More Is Better
a: 369.31 | b: 368.72 | c: 365.61 (SE +/- 0.76, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Timed Linux Kernel Compilation

Build: defconfig

Timed Linux Kernel Compilation 6.8 - Seconds, Fewer Is Better
a: 46.70 | c: 49.24 | b: 49.30 (SE +/- 0.46, N = 6)

Timed Linux Kernel Compilation

Build: allmodconfig

Timed Linux Kernel Compilation 6.8 - Seconds, Fewer Is Better
a: 488.90 | c: 490.08 | b: 490.32 (SE +/- 1.04, N = 3)


Phoronix Test Suite v10.8.5