ds
AMD Ryzen 9 7950X 16-Core testing with an ASUS ROG STRIX X670E-E GAMING WIFI (1416 BIOS) and NVIDIA NV174 8GB on Ubuntu 23.10 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2312110-PTS-DS58174320

a:
Kernel Notes: Transparent Huge Pages: madvise
Processor Notes: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - CPU Microcode: 0xa601203
Python Notes: Python 3.11.6
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Vulnerable: Safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
b, c, d:
Processor: AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores / 32 Threads), Motherboard: ASUS ROG STRIX X670E-E GAMING WIFI (1416 BIOS), Chipset: AMD Device 14d8, Memory: 32GB, Disk: 2000GB Samsung SSD 980 PRO 2TB + 4001GB Western Digital WD_BLACK SN850X 4000GB, Graphics: NVIDIA NV174 8GB, Audio: NVIDIA GA104 HD Audio, Monitor: DELL U2723QE, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 23.10, Kernel: 6.7.0-060700rc2daily20231127-generic (x86_64), Desktop: GNOME Shell 45.1, Display Server: X Server 1.21.1.7 + Wayland, Display Driver: nouveau, OpenGL: 4.3 Mesa 24.0~git2311260600.945288~oibaf~m (git-945288f 2023-11-26 mantic-oibaf-ppa), Compiler: GCC 13.2.0 + LLVM 16.0.6, File-System: ext4, Screen Resolution: 3840x2160
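As a rough sketch, reproducing this comparison locally might look like the following (a minimal example assuming the DeepSparse test profile is published as pts/deepsparse on OpenBenchmarking.org; subcommand behaviour can vary between Phoronix Test Suite releases):

# install the DeepSparse test profile and its dependencies (pts/deepsparse is assumed here)
phoronix-test-suite install pts/deepsparse
# run the test and compare the local numbers against this public result file
phoronix-test-suite benchmark 2312110-PTS-DS58174320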
Result Overview: across all 48 Neural Magic DeepSparse test combinations, runs a, b, c, and d land within roughly 1% of one another (100% to 101% relative performance).
ds detailed result table (Neural Magic DeepSparse; runs a, b, c, d):
Model - Scenario | Metric | a | b | c | d
CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream | ms/batch | 59.8641 | 60.0283 | 60.5733 | 60.2946
CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream | items/sec | 133.5376 | 133.1707 | 132.0251 | 132.5933
ResNet-50, Sparse INT8 - Asynchronous Multi-Stream | items/sec | 2314.2272 | 2315.5240 | 2317.2555 | 2293.2480
ResNet-50, Sparse INT8 - Asynchronous Multi-Stream | ms/batch | 3.4471 | 3.4442 | 3.4420 | 3.4780
ResNet-50, Sparse INT8 - Synchronous Single-Stream | items/sec | 1295.8078 | 1295.2310 | 1308.7670 | 1299.0909
ResNet-50, Sparse INT8 - Synchronous Single-Stream | ms/batch | 0.7695 | 0.7698 | 0.7620 | 0.7672
NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream | items/sec | 116.3673 | 116.5816 | 116.7136 | 117.2571
NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream | ms/batch | 8.5884 | 8.5729 | 8.5631 | 8.5234
ResNet-50, Baseline - Synchronous Single-Stream | items/sec | 197.0798 | 198.4779 | 198.2479 | 198.3620
ResNet-50, Baseline - Synchronous Single-Stream | ms/batch | 5.0670 | 5.0320 | 5.0382 | 5.0356
CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream | ms/batch | 204.8572 | 205.5998 | 204.5755 | 204.2205
CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream | items/sec | 38.9835 | 38.8700 | 39.0428 | 39.1305
CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream | ms/batch | 62.6288 | 62.8602 | 62.7323 | 62.4691
CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream | items/sec | 127.7126 | 127.2462 | 127.4662 | 128.0169
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream | ms/batch | 3.3312 | 3.3138 | 3.3173 | 3.3119
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream | items/sec | 299.9092 | 301.5141 | 301.1738 | 301.6484
BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream | ms/batch | 9.0307 | 9.0802 | 9.0770 | 9.0414
BERT-Large, NLP Question Answering - Synchronous Single-Stream | ms/batch | 47.3214 | 47.2956 | 47.3165 | 47.0684
BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream | items/sec | 110.6228 | 110.0314 | 110.0550 | 110.4956
NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream | items/sec | 22.7389 | 22.6899 | 22.7000 | 22.6176
BERT-Large, NLP Question Answering - Synchronous Single-Stream | items/sec | 21.1279 | 21.1400 | 21.1302 | 21.2410
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream | items/sec | 983.3126 | 988.2981 | 986.6769 | 987.4091
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream | ms/batch | 8.1243 | 8.0835 | 8.0963 | 8.0908
NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream | ms/batch | 350.7492 | 351.5559 | 351.4011 | 352.4834
NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream | ms/batch | 349.9757 | 351.5076 | 351.4431 | 351.4079
CV Detection, YOLOv5s COCO - Synchronous Single-Stream | items/sec | 101.8661 | 101.7672 | 101.8841 | 101.4802
CV Detection, YOLOv5s COCO - Synchronous Single-Stream | ms/batch | 9.8076 | 9.8180 | 9.8068 | 9.8453
NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream | items/sec | 22.7592 | 22.7193 | 22.6850 | 22.7096
CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream | items/sec | 296.0322 | 295.7606 | 295.2366 | 295.5731
CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream | ms/batch | 5.0217 | 5.0331 | 5.0250 | 5.0198
CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream | items/sec | 198.8967 | 198.4507 | 198.7653 | 198.9679
CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream | ms/batch | 27.0107 | 27.0344 | 27.0809 | 27.0539
BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream | items/sec | 462.4186 | 462.6802 | 461.5344 | 462.4074
ResNet-50, Baseline - Asynchronous Multi-Stream | ms/batch | 27.0091 | 27.0574 | 27.0386 | 27.0742
ResNet-50, Baseline - Asynchronous Multi-Stream | items/sec | 296.0378 | 295.5210 | 295.7285 | 295.3355
BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream | ms/batch | 17.2878 | 17.2779 | 17.3189 | 17.2882
NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream | items/sec | 203.9666 | 203.4893 | 203.9185 | 203.6193
NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream | ms/batch | 39.2065 | 39.2941 | 39.2164 | 39.2783
NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream | ms/batch | 51.1656 | 51.1975 | 51.1932 | 51.2514
NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream | items/sec | 19.5417 | 19.5292 | 19.5308 | 19.5097
BERT-Large, NLP Question Answering - Asynchronous Multi-Stream | items/sec | 29.3517 | 29.3894 | 29.3969 | 29.3591
CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream | ms/batch | 9.6870 | 9.6883 | 9.6736 | 9.6809
NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream | items/sec | 19.5685 | 19.5391 | 19.5454 | 19.5678
CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream | items/sec | 103.1665 | 103.1635 | 103.3168 | 103.2328
NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream | ms/batch | 51.0955 | 51.1690 | 51.1551 | 51.0971
BERT-Large, NLP Question Answering - Asynchronous Multi-Stream | ms/batch | 272.1665 | 271.9143 | 271.7780 | 272.1527
CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream | ms/batch | 30.8613 | 30.8326 | 30.8484 | 30.8462
CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream | items/sec | 32.3888 | 32.4189 | 32.4022 | 32.4054
Neural Magic DeepSparse
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
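For reference, a hand-run invocation of the deepsparse.benchmark utility looks roughly like the sketch below; the model path is a placeholder and the flag names are from memory, so they may differ between DeepSparse releases (deepsparse.benchmark --help lists the current options):

# install DeepSparse, then benchmark an ONNX model (or a SparseZoo stub) in both scenarios used here
pip install deepsparse
deepsparse.benchmark ./model.onnx --scenario sync --batch_size 1
deepsparse.benchmark ./model.onnx --scenario async --batch_size 1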
Neural Magic DeepSparse 1.6 per-test results (each value is the mean of N = 3 runs; SE = standard error):
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 59.86 (SE +/- 0.13), b: 60.03 (SE +/- 0.27), c: 60.57 (SE +/- 0.21), d: 60.29 (SE +/- 0.21)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 133.54 (SE +/- 0.30), b: 133.17 (SE +/- 0.59), c: 132.03 (SE +/- 0.45), d: 132.59 (SE +/- 0.43)
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 2314.23 (SE +/- 1.66), b: 2315.52 (SE +/- 6.99), c: 2317.26 (SE +/- 4.26), d: 2293.25 (SE +/- 8.04)
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 3.4471 (SE +/- 0.0021), b: 3.4442 (SE +/- 0.0106), c: 3.4420 (SE +/- 0.0064), d: 3.4780 (SE +/- 0.0122)
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 1295.81 (SE +/- 10.93), b: 1295.23 (SE +/- 7.10), c: 1308.77 (SE +/- 1.41), d: 1299.09 (SE +/- 0.19)
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 0.7695 (SE +/- 0.0065), b: 0.7698 (SE +/- 0.0041), c: 0.7620 (SE +/- 0.0008), d: 0.7672 (SE +/- 0.0001)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 116.37 (SE +/- 0.35), b: 116.58 (SE +/- 0.80), c: 116.71 (SE +/- 0.84), d: 117.26 (SE +/- 0.43)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 8.5884 (SE +/- 0.0256), b: 8.5729 (SE +/- 0.0593), c: 8.5631 (SE +/- 0.0624), d: 8.5234 (SE +/- 0.0316)
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 197.08 (SE +/- 0.82), b: 198.48 (SE +/- 0.91), c: 198.25 (SE +/- 0.03), d: 198.36 (SE +/- 0.35)
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 5.0670 (SE +/- 0.0211), b: 5.0320 (SE +/- 0.0228), c: 5.0382 (SE +/- 0.0010), d: 5.0356 (SE +/- 0.0087)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 204.86 (SE +/- 0.47), b: 205.60 (SE +/- 1.77), c: 204.58 (SE +/- 0.32), d: 204.22 (SE +/- 0.62)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 38.98 (SE +/- 0.08), b: 38.87 (SE +/- 0.36), c: 39.04 (SE +/- 0.07), d: 39.13 (SE +/- 0.13)
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 62.63 (SE +/- 0.30), b: 62.86 (SE +/- 0.28), c: 62.73 (SE +/- 0.37), d: 62.47 (SE +/- 0.12)
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 127.71 (SE +/- 0.61), b: 127.25 (SE +/- 0.56), c: 127.47 (SE +/- 0.72), d: 128.02 (SE +/- 0.24)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 3.3312 (SE +/- 0.0038), b: 3.3138 (SE +/- 0.0056), c: 3.3173 (SE +/- 0.0115), d: 3.3119 (SE +/- 0.0036)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 299.91 (SE +/- 0.35), b: 301.51 (SE +/- 0.53), c: 301.17 (SE +/- 1.04), d: 301.65 (SE +/- 0.32)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 9.0307 (SE +/- 0.0337), b: 9.0802 (SE +/- 0.0759), c: 9.0770 (SE +/- 0.0179), d: 9.0414 (SE +/- 0.0640)
Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 47.32 (SE +/- 0.03), b: 47.30 (SE +/- 0.09), c: 47.32 (SE +/- 0.05), d: 47.07 (SE +/- 0.01)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 110.62 (SE +/- 0.42), b: 110.03 (SE +/- 0.94), c: 110.06 (SE +/- 0.22), d: 110.50 (SE +/- 0.78)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 22.74 (SE +/- 0.03), b: 22.69 (SE +/- 0.03), c: 22.70 (SE +/- 0.08), d: 22.62 (SE +/- 0.03)
Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 21.13 (SE +/- 0.01), b: 21.14 (SE +/- 0.04), c: 21.13 (SE +/- 0.02), d: 21.24 (SE +/- 0.01)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 983.31 (SE +/- 0.48), b: 988.30 (SE +/- 0.99), c: 986.68 (SE +/- 1.16), d: 987.41 (SE +/- 1.44)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 8.1243 (SE +/- 0.0039), b: 8.0835 (SE +/- 0.0081), c: 8.0963 (SE +/- 0.0093), d: 8.0908 (SE +/- 0.0118)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 350.75 (SE +/- 0.16), b: 351.56 (SE +/- 0.09), c: 351.40 (SE +/- 0.75), d: 352.48 (SE +/- 0.20)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 349.98 (SE +/- 1.12), b: 351.51 (SE +/- 0.23), c: 351.44 (SE +/- 0.11), d: 351.41 (SE +/- 0.17)
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 101.87 (SE +/- 0.06), b: 101.77 (SE +/- 0.06), c: 101.88 (SE +/- 0.04), d: 101.48 (SE +/- 0.19)
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 9.8076 (SE +/- 0.0059), b: 9.8180 (SE +/- 0.0053), c: 9.8068 (SE +/- 0.0035), d: 9.8453 (SE +/- 0.0182)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 22.76 (SE +/- 0.07), b: 22.72 (SE +/- 0.03), c: 22.69 (SE +/- 0.02), d: 22.71 (SE +/- 0.01)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 296.03 (SE +/- 0.22), b: 295.76 (SE +/- 0.22), c: 295.24 (SE +/- 0.26), d: 295.57 (SE +/- 0.38)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 5.0217 (SE +/- 0.0028), b: 5.0331 (SE +/- 0.0078), c: 5.0250 (SE +/- 0.0044), d: 5.0198 (SE +/- 0.0116)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 198.90 (SE +/- 0.11), b: 198.45 (SE +/- 0.31), c: 198.77 (SE +/- 0.17), d: 198.97 (SE +/- 0.45)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 27.01 (SE +/- 0.02), b: 27.03 (SE +/- 0.02), c: 27.08 (SE +/- 0.02), d: 27.05 (SE +/- 0.03)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 462.42 (SE +/- 0.23), b: 462.68 (SE +/- 0.43), c: 461.53 (SE +/- 0.34), d: 462.41 (SE +/- 0.35)
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 27.01 (SE +/- 0.01), b: 27.06 (SE +/- 0.01), c: 27.04 (SE +/- 0.01), d: 27.07 (SE +/- 0.01)
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 296.04 (SE +/- 0.11), b: 295.52 (SE +/- 0.12), c: 295.73 (SE +/- 0.15), d: 295.34 (SE +/- 0.10)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 17.29 (SE +/- 0.01), b: 17.28 (SE +/- 0.02), c: 17.32 (SE +/- 0.01), d: 17.29 (SE +/- 0.01)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 203.97 (SE +/- 0.15), b: 203.49 (SE +/- 0.06), c: 203.92 (SE +/- 0.24), d: 203.62 (SE +/- 0.23)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 39.21 (SE +/- 0.03), b: 39.29 (SE +/- 0.01), c: 39.22 (SE +/- 0.04), d: 39.28 (SE +/- 0.05)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 51.17 (SE +/- 0.02), b: 51.20 (SE +/- 0.03), c: 51.19 (SE +/- 0.07), d: 51.25 (SE +/- 0.08)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 19.54 (SE +/- 0.01), b: 19.53 (SE +/- 0.01), c: 19.53 (SE +/- 0.03), d: 19.51 (SE +/- 0.03)
Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 29.35 (SE +/- 0.01), b: 29.39 (SE +/- 0.01), c: 29.40 (SE +/- 0.01), d: 29.36 (SE +/- 0.01)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 9.6870 (SE +/- 0.0055), b: 9.6883 (SE +/- 0.0056), c: 9.6736 (SE +/- 0.0004), d: 9.6809 (SE +/- 0.0125)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 19.57 (SE +/- 0.02), b: 19.54 (SE +/- 0.01), c: 19.55 (SE +/- 0.01), d: 19.57 (SE +/- 0.01)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 103.17 (SE +/- 0.06), b: 103.16 (SE +/- 0.06), c: 103.32 (SE +/- 0.00), d: 103.23 (SE +/- 0.14)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 51.10 (SE +/- 0.06), b: 51.17 (SE +/- 0.02), c: 51.16 (SE +/- 0.04), d: 51.10 (SE +/- 0.01)
Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 272.17 (SE +/- 0.01), b: 271.91 (SE +/- 0.13), c: 271.78 (SE +/- 0.09), d: 272.15 (SE +/- 0.09)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 30.86 (SE +/- 0.05), b: 30.83 (SE +/- 0.02), c: 30.85 (SE +/- 0.01), d: 30.85 (SE +/- 0.04)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 32.39 (SE +/- 0.05), b: 32.42 (SE +/- 0.02), c: 32.40 (SE +/- 0.01), d: 32.41 (SE +/- 0.04)
a:
Kernel Notes: Transparent Huge Pages: madvise
Processor Notes: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - CPU Microcode: 0xa601203
Python Notes: Python 3.11.6
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Vulnerable: Safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 11 December 2023 22:07 by user pts.
b:
Kernel Notes: Transparent Huge Pages: madvise
Processor Notes: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - CPU Microcode: 0xa601203
Python Notes: Python 3.11.6
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Vulnerable: Safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 11 December 2023 23:04 by user pts.
c:
Kernel Notes: Transparent Huge Pages: madvise
Processor Notes: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - CPU Microcode: 0xa601203
Python Notes: Python 3.11.6
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Vulnerable: Safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 12 December 2023 00:03 by user pts.
d:
Processor: AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores / 32 Threads), Motherboard: ASUS ROG STRIX X670E-E GAMING WIFI (1416 BIOS), Chipset: AMD Device 14d8, Memory: 32GB, Disk: 2000GB Samsung SSD 980 PRO 2TB + 4001GB Western Digital WD_BLACK SN850X 4000GB, Graphics: NVIDIA NV174 8GB, Audio: NVIDIA GA104 HD Audio, Monitor: DELL U2723QE, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 23.10, Kernel: 6.7.0-060700rc2daily20231127-generic (x86_64), Desktop: GNOME Shell 45.1, Display Server: X Server 1.21.1.7 + Wayland, Display Driver: nouveau, OpenGL: 4.3 Mesa 24.0~git2311260600.945288~oibaf~m (git-945288f 2023-11-26 mantic-oibaf-ppa), Compiler: GCC 13.2.0 + LLVM 16.0.6, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Processor Notes: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - CPU Microcode: 0xa601203
Python Notes: Python 3.11.6
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Vulnerable: Safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 12 December 2023 01:02 by user pts.