eps

2 x AMD EPYC 9684X 96-Core testing with an AMD Titanite_4G (RTI1007B BIOS) and ASPEED graphics on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2312241-NE-EPS60637430&grs&sro.

System Details (identical for configurations a and b):
Processor: 2 x AMD EPYC 9684X 96-Core @ 2.55GHz (192 Cores / 384 Threads)
Motherboard: AMD Titanite_4G (RTI1007B BIOS)
Chipset: AMD Device 14a4
Memory: 1520GB
Disk: 3201GB Micron_7450_MTFDKCB3T2TFS
Graphics: ASPEED
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 23.10
Kernel: 6.5.0-13-generic (x86_64)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 800x600

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa10113e
Java Details: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

[Results overview table from the HTML export, flattened beyond recovery here; its per-test values for configurations a and b are shown in the graphs below. Covered suites: PyTorch 2.1, SVT-AV1 1.8, WebP2 Image Encode, Apache Spark TPC-H 3.5 (scale factors 1/10/50, including per-query times Q01-Q22 and geometric means), Neural Magic DeepSparse 1.6, Xmrig 6.21, Java SciMark 2.2, OpenSSL, and LCZero (Eigen/BLAS backends).]

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

PyTorch 2.1 (batches/sec, more is better; SE +/- 0.05, N = 3): a: 8.96 (MIN: 4.84 / MAX: 9.24), b: 9.65 (MIN: 4.98 / MAX: 9.85)
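Each graph in this result file reports a mean across N runs plus a standard error (the "SE +/-" figure). As a minimal sketch with hypothetical run values (not taken from this result file), the SE is the sample standard deviation divided by the square root of N:

```python
import math
import statistics

def standard_error(samples: list[float]) -> float:
    """Standard error of the mean: sample stddev / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical batches/sec readings from three runs (illustrative only).
runs = [8.91, 8.96, 9.01]
print(round(statistics.mean(runs), 2))   # mean shown on the graph -> 8.96
print(round(standard_error(runs), 3))    # the "SE +/-" value -> 0.029
```

A small SE relative to the gap between configurations a and b is what makes a result like 8.96 vs 9.65 meaningful rather than run-to-run noise.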

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, more is better; SE +/- 0.05, N = 3): a: 6.40 (MIN: 2.93 / MAX: 6.73), b: 6.74 (MIN: 3.48 / MAX: 6.89)

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.8 (Frames Per Second, more is better; SE +/- 1.61, N = 15): a: 176.67, b: 184.35
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.8 (Frames Per Second, more is better; SE +/- 1.43, N = 3): a: 178.91, b: 186.61

WebP2 Image Encode

Encode Settings: Quality 100, Compression Effort 5

WebP2 Image Encode 20220823 (MP/s, more is better; SE +/- 0.04, N = 3): a: 6.51, b: 6.28
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

PyTorch 2.1 (batches/sec, more is better; SE +/- 0.31, N = 3): a: 21.29 (MIN: 13.22 / MAX: 22.39), b: 20.60 (MIN: 13.89 / MAX: 21.35)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 (batches/sec, more is better; SE +/- 0.08, N = 3): a: 10.16 (MIN: 4.56 / MAX: 10.94), b: 10.43 (MIN: 4.8 / MAX: 11.36)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 (batches/sec, more is better; SE +/- 0.19, N = 15): a: 23.57 (MIN: 11.38 / MAX: 25.62), b: 23.12 (MIN: 12.17 / MAX: 24.33)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 (batches/sec, more is better; SE +/- 0.25, N = 3): a: 21.16 (MIN: 12.26 / MAX: 22.24), b: 21.57 (MIN: 14.06 / MAX: 22.29)

Apache Spark TPC-H

Scale Factor: 1 - Geometric Mean Of All Queries

Apache Spark TPC-H 3.5 (Seconds, fewer is better; SE +/- 0.02040294, N = 3): a: 2.44964916 (MIN: 0.73 / MAX: 10.03), b: 2.49517747 (MIN: 0.86 / MAX: 9.56)
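The TPC-H figures here are the geometric mean of the individual query times (Q01-Q22, reported separately in the overview). As a sketch with hypothetical per-query seconds (not the actual query results), the geometric mean is the nth root of the product, which damps outlier queries more than an arithmetic mean would:

```python
import math

def geometric_mean(values: list[float]) -> float:
    """nth root of the product, computed via logs for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-query times in seconds (illustrative only).
query_times = [0.8, 1.6, 3.2, 6.4]
print(round(geometric_mean(query_times), 2))  # -> 2.26
```

Using the geometric mean keeps one slow outlier query (compare the MIN/MAX spread above) from dominating the composite score the way a plain average would.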

PyTorch

Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, more is better; SE +/- 0.00, N = 3): a: 2.32 (MIN: 1.83 / MAX: 2.8), b: 2.28 (MIN: 1.71 / MAX: 2.84)

WebP2 Image Encode

Encode Settings: Default

WebP2 Image Encode 20220823 (MP/s, more is better; SE +/- 0.08, N = 3): a: 9.48, b: 9.63

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.8 (Frames Per Second, more is better; SE +/- 1.87, N = 3): a: 165.10, b: 162.56

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.0118, N = 3): a: 4.7637, b: 4.8290

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 0.52, N = 3): a: 209.80, b: 206.97

WebP2 Image Encode

Encode Settings: Quality 75, Compression Effort 7

WebP2 Image Encode 20220823 (MP/s, more is better; SE +/- 0.00, N = 3): a: 0.83, b: 0.82

Xmrig

Variant: CryptoNight-Femto UPX2 - Hash Count: 1M

Xmrig 6.21 (H/s, more is better; SE +/- 220.87, N = 3): a: 123199.0, b: 122070.3
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.1 (batches/sec, more is better; SE +/- 0.10, N = 3): a: 8.90 (MIN: 4.8 / MAX: 9.23), b: 8.98 (MIN: 5.1 / MAX: 9.29)

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, more is better; SE +/- 0.01, N = 3): a: 2.32 (MIN: 1.77 / MAX: 2.81), b: 2.34 (MIN: 1.78 / MAX: 2.78)

Java SciMark

Computational Test: Sparse Matrix Multiply

Java SciMark 2.2 (Mflops, more is better; SE +/- 3.16, N = 3): a: 2809.01, b: 2792.09

Xmrig

Variant: CryptoNight-Heavy - Hash Count: 1M

Xmrig 6.21 (H/s, more is better; SE +/- 33.09, N = 3): a: 123041.6, b: 123777.7

Apache Spark TPC-H

Scale Factor: 10 - Geometric Mean Of All Queries

Apache Spark TPC-H 3.5 (Seconds, fewer is better; SE +/- 0.02, N = 3): a: 10.72 (MIN: 5.7 / MAX: 33.03), b: 10.66 (MIN: 5.44 / MAX: 32.7)

Java SciMark

Computational Test: Dense LU Matrix Factorization

Java SciMark 2.2 (Mflops, more is better; SE +/- 31.70, N = 3): a: 13358.53, b: 13434.09

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.8 (Frames Per Second, more is better; SE +/- 0.13, N = 3): a: 21.42, b: 21.31

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.0103, N = 3): a: 4.8022, b: 4.7775

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 0.44, N = 3): a: 208.12, b: 209.20

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

SVT-AV1 1.8 (Frames Per Second, more is better; SE +/- 8.75, N = 3): a: 635.81, b: 639.09

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.8 (Frames Per Second, more is better; SE +/- 0.041, N = 3): a: 8.248, b: 8.208

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.8 (Frames Per Second, more is better; SE +/- 0.17, N = 3): a: 86.43, b: 86.84

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 6.37, N = 3): a: 2608.01, b: 2596.10

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 (batches/sec, more is better; SE +/- 0.10, N = 3): a: 8.93 (MIN: 4.75 / MAX: 9.39), b: 8.97 (MIN: 4.96 / MAX: 9.11)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.09, N = 3): a: 36.75, b: 36.91

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.1 (batches/sec, more is better; SE +/- 0.20, N = 3): a: 21.00 (MIN: 11.39 / MAX: 21.87), b: 21.09 (MIN: 13.93 / MAX: 21.71)

Xmrig

Variant: GhostRider - Hash Count: 1M

Xmrig 6.21 (H/s, more is better; SE +/- 24.02, N = 3): a: 31859.7, b: 31728.9

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 0.41, N = 3): a: 248.58, b: 249.50

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 0.01, N = 3): a: 224.58, b: 225.40

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.0001, N = 3): a: 4.4503, b: 4.4341

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.0055, N = 3): a: 5.5955, b: 5.6159

Xmrig

Variant: Wownero - Hash Count: 1M

Xmrig 6.21 (H/s, more is better; SE +/- 621.69, N = 3): a: 131141.9, b: 131613.6

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 4.21, N = 3): a: 715.04, b: 717.59

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 16.76, N = 3): a: 17108.46, b: 17047.64

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.25, N = 3): a: 122.03, b: 121.61

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.8 (Frames Per Second, more is better; SE +/- 1.39, N = 3): a: 571.88, b: 569.96

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 0.05, N = 3): a: 48.49, b: 48.33

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.02, N = 3): a: 20.62, b: 20.68

Xmrig

Variant: Monero - Hash Count: 1M

Xmrig 6.21 (H/s, more is better; SE +/- 404.54, N = 3): a: 123352.8, b: 122971.0

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.01, N = 3): a: 15.32, b: 15.37

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 0.04, N = 3): a: 65.21, b: 65.01

Java SciMark

Computational Test: Composite

Java SciMark 2.2 (Mflops, more is better; SE +/- 6.24, N = 3): a: 3984.62, b: 3996.76

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 1.42, N = 3): a: 784.52, b: 786.89

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 0.66, N = 3): a: 132.66, b: 132.27

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 0.03, N = 3): a: 132.05, b: 132.42

Java SciMark

Computational Test: Fast Fourier Transform

Java SciMark 2.2 (Mflops, more is better; SE +/- 0.36, N = 3): a: 420.74, b: 421.91

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 0.02, N = 3): a: 48.45, b: 48.33

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.01, N = 3): a: 20.63, b: 20.69

OpenSSL

Algorithm: SHA512

OpenSSL (byte/s, more is better; SE +/- 191332047.54, N = 3): a: 91630925473, b: 91835961470
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 1.53, N = 3): a: 719.28, b: 717.98

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 1.54, N = 3): a: 796.07, b: 797.41

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.61, N = 3): a: 383.20, b: 382.56

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 0.06, N = 3): a: 190.80, b: 191.10

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.0015, N = 3): a: 5.2377, b: 5.2296

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.24, N = 3): a: 120.21, b: 120.04

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.07, N = 3): a: 54.41, b: 54.49

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 0.03, N = 3): a: 32.02, b: 31.98

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, fewer is better; SE +/- 0.03, N = 3): a: 31.22, b: 31.26

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, more is better; SE +/- 2.24, N = 3): a: 1761.40, b: 1759.07

OpenSSL

Algorithm: SHA256

OpenSSL (byte/s, more is better; SE +/- 548972949.20, N = 3): a: 281869895760, b: 282211175400

Xmrig

Variant: KawPow - Hash Count: 1M

Xmrig 6.21 (H/s, more is better; SE +/- 87.00, N = 3): a: 123558.6, b: 123411.1

Apache Spark TPC-H

Scale Factor: 50 - Geometric Mean Of All Queries

Apache Spark TPC-H 3.5 (Seconds, fewer is better; SE +/- 0.05, N = 3): a: 19.59 (MIN: 9.71 / MAX: 103.64), b: 19.56 (MIN: 9.48 / MAX: 77.71)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

a = 1758.59, b = 1756.56 (items/sec, more is better; SE +/- 1.91, N = 3)

OpenSSL

Algorithm: RSA4096

a = 98622.0, b = 98528.8 (sign/s, more is better; SE +/- 53.45, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

a = 54.51, b = 54.55 (ms/batch, fewer is better; SE +/- 0.06, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

a = 1.2413, b = 1.2404 (ms/batch, fewer is better; SE +/- 0.0046, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

a = 804.18, b = 804.75 (items/sec, more is better; SE +/- 3.00, N = 3)

Java SciMark 2.2

Computational Test: Monte Carlo

a = 1631.42, b = 1632.45 (Mflops, more is better; SE +/- 0.75, N = 3)

OpenSSL

Algorithm: RSA4096

a = 3244390.3, b = 3243345.2 (verify/s, more is better; SE +/- 1292.47, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

a = 4.7126, b = 4.7117 (ms/batch, fewer is better; SE +/- 0.0079, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

a = 4.7188, b = 4.7180 (ms/batch, fewer is better; SE +/- 0.0110, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

a = 212.10, b = 212.13 (items/sec, more is better; SE +/- 0.35, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

a = 211.74, b = 211.77 (items/sec, more is better; SE +/- 0.50, N = 3)

Java SciMark

Computational Test: Jacobi Successive Over-Relaxation

a = 1703.42, b = 1703.25 (Mflops, more is better; SE +/- 0.16, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

a = 156.42, b = 156.43 (items/sec, more is better; SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

a = 1136.71, b = 1136.64 (items/sec, more is better; SE +/- 2.45, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

a = 84.25, b = 84.25 (ms/batch, fewer is better; SE +/- 0.19, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

a = 68.27, b = 68.26 (items/sec, more is better; SE +/- 0.11, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

a = 14.64, b = 14.64 (ms/batch, fewer is better; SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

a = 17.30, b = 17.30 (ms/batch, fewer is better; SE +/- 0.01, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

a = 5540.63, b = 5540.52 (items/sec, more is better; SE +/- 5.02, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

a = 607.57, b = 607.57 (ms/batch, fewer is better; SE +/- 0.37, N = 3)

PyTorch 2.1

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

a = 2.32 (min 1.86 / max 2.8), b = 2.32 (min 1.93 / max 2.71) (batches/sec, more is better; SE +/- 0.01, N = 3)

WebP2 Image Encode 20220823

Encode Settings: Quality 100, Lossless Compression

a = 0.11, b = 0.11 (MP/s, more is better; SE +/- 0.00, N = 3)
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 95, Compression Effort 7

a = 0.45, b = 0.45 (MP/s, more is better; SE +/- 0.00, N = 3)

Apache Spark TPC-H 3.5

Scale Factor: 50 - Individual Query Times (Seconds, fewer is better; N = 3)

Query   a            b            SE +/-
Q01     12.01        12.87        0.21
Q02     14.25        14.53        0.35
Q03     26.19        29.69        0.94
Q04     21.00        21.82        0.46
Q05     29.84        31.20        0.67
Q06     5.90309207   5.88382483   0.04522799
Q07     24.86        25.86        0.23
Q08     26.74        26.63        0.26
Q09     36.66        36.67        0.34
Q10     24.36        24.69        0.32
Q11     13.58        13.31        0.32
Q12     19.41        17.70        1.19
Q13     12.76        13.04        0.08
Q14     12.70        12.57        0.22
Q15     9.77733866   9.48287773   0.05343306
Q16     14.22        14.98        0.26
Q17     24.31        24.56        0.55
Q18     34.51        33.74        0.33
Q19     10.45        12.09        0.12
Q20     20.80        21.05        0.10
Q21     87.90        77.71        7.93
Q22     10.69        10.87        0.12

Apache Spark TPC-H 3.5

Scale Factor: 10 - Individual Query Times (Seconds, fewer is better; N = 3)

Query   a            b            SE +/-
Q01     7.58889151   7.28826714   0.23898111
Q02     7.43104283   7.39245987   0.13824959
Q03     13.97        14.29        0.31
Q04     12.35        11.26        0.21
Q05     16.44        18.86        0.48
Q06     2.05104745   1.85595930   0.23574646
Q07     14.65        14.90        0.33
Q08     15.52        14.56        0.41
Q09     21.91        22.53        0.51
Q10     15.17        14.78        0.31
Q11     8.00292349   8.27814293   0.04584382
Q12     9.94400438   10.03829002  0.16460967
Q13     7.37728373   7.94083786   0.09689769
Q14     7.07622369   6.90602303   0.33271668
Q15     5.84138076   5.43870592   0.10221447
Q16     6.87131294   6.95270681   0.33462632
Q17     12.77        13.01        0.07
Q18     18.47        17.31        0.50
Q19     6.20677837   6.06041670   0.15363169
Q20     11.44        11.54        0.14
Q21     32.91        32.70        0.11
Q22     6.05411895   6.04430914   0.13164085

Apache Spark TPC-H 3.5

Scale Factor: 1 - Individual Query Times (Seconds, fewer is better; N = 3)

Query   a            b            SE +/-
Q01     4.32006081   4.44657946   0.17358727
Q02     2.06179071   2.08224201   0.02016184
Q03     3.86442184   3.86610818   0.11371323
Q04     3.92525745   3.75427246   0.09899955
Q05     4.13122161   3.69217634   0.18898243
Q06     0.46822915   0.35801557   0.03244463
Q07     4.01044806   3.87790275   0.02085439
Q08     2.65584644   2.60907817   0.02941830
Q09     5.70969407   5.89775848   0.08828966
Q10     3.81359665   3.81245542   0.13264795
Q11     1.27338135   1.13998687   0.06007206
Q12     2.17542648   2.26641607   0.15180813
Q13     1.58815936   1.74074161   0.15789062
Q14     2.06485331   2.21146965   0.16557850
Q15     2.50185966   2.58714175   0.11136502
Q16     1.38147259   1.51779914   0.06760680
Q17     2.95993924   2.88348198   0.10612827
Q18     5.62853845   5.13171148   0.11078356
Q19     0.79092395   0.85797596   0.03922839
Q20     3.05739617   3.05001688   0.12035470
Q21     9.64531231   9.55909538   0.26119238
Q22     1.00769047   1.06679213   0.03222070

LeelaChessZero 0.30

Backend: Eigen

a = 704, b = 715 (Nodes Per Second, more is better; SE +/- 17.59, N = 8)
1. (CXX) g++ options: -flto -pthread

LeelaChessZero

Backend: BLAS

a = 853, b = 871 (Nodes Per Second, more is better; SE +/- 18.54, N = 9)
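When comparing runs a and b, it helps to express the gap as a relative difference and weigh it against the reported SE; for the BLAS backend, the roughly 2% spread between the two runs is of the same order as its SE of 18.54 over 9 runs. A minimal sketch (the helper is hypothetical, not part of the Phoronix Test Suite):

```python
def percent_delta(a, b):
    """Relative difference of run b versus run a, in percent
    (hypothetical helper for eyeballing run-to-run spread)."""
    return (b - a) / a * 100.0

# LeelaChessZero BLAS nodes-per-second results from above:
print(round(percent_delta(853.0, 871.0), 2))  # -> 2.11
```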


Phoronix Test Suite v10.8.4