xmas

Intel Core i9-10980XE testing with an ASRock X299 Steel Legend (P1.50 BIOS) and llvmpipe graphics on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2312258-PTS-XMAS815235&sor.
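To run a local comparison against this result, the Phoronix Test Suite client can generally be pointed at the public OpenBenchmarking.org identifier (a sketch assuming a standard client installation; the exact invocation may vary by client version):

    phoronix-test-suite benchmark 2312258-PTS-XMAS815235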

System Details (configurations a, b, c - identical hardware and software unless noted)

Processor: Intel Core i9-10980XE @ 4.80GHz (18 Cores / 36 Threads)
Motherboard: ASRock X299 Steel Legend (P1.50 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 32GB
Disk: Samsung SSD 970 PRO 512GB
Graphics: llvmpipe
Audio: Realtek ALC1220
Network: Intel I219-V + Intel I211
OS: Ubuntu 22.04
Kernel: 6.2.0-36-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.4
OpenGL: 4.5 Mesa 22.0.1 (LLVM 13.0.1 256 bits)
Vulkan: 1.2.204
Compiler: GCC 11.4.0
File-System: ext4
Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0x5003604
Java Details: a: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04); b: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04); c: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.12
Security Details: gather_data_sampling: Mitigation of Microcode + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

Result Overview - Configurations: a, b, c. The combined summary table covers LeelaChessZero, Xmrig, Java SciMark, WebP2 Image Encode, Embree, SVT-AV1, Apache Spark TPC-H, Neural Magic DeepSparse, and ScyllaDB; the individual results for each test follow below.

LeelaChessZero

Backend: BLAS

LeelaChessZero 0.30 - Backend: BLAS - Nodes Per Second, More Is Better: c: 107, b: 105, a: 99. (CXX) g++ options: -flto -pthread

LeelaChessZero

Backend: Eigen

LeelaChessZero 0.30 - Backend: Eigen - Nodes Per Second, More Is Better: b: 86, a: 82, c: 80. (CXX) g++ options: -flto -pthread

Xmrig

Variant: KawPow - Hash Count: 1M

Xmrig 6.21 - Variant: KawPow - Hash Count: 1M - H/s, More Is Better: c: 4307.1, a: 4305.9, b: 4302.8. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: Monero - Hash Count: 1M

Xmrig 6.21 - Variant: Monero - Hash Count: 1M - H/s, More Is Better: b: 4308.7, a: 4307.8, c: 4302.2. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: Wownero - Hash Count: 1M

Xmrig 6.21 - Variant: Wownero - Hash Count: 1M - H/s, More Is Better: a: 9424.3, b: 9409.8, c: 9344.4. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: GhostRider - Hash Count: 1M

Xmrig 6.21 - Variant: GhostRider - Hash Count: 1M - H/s, More Is Better: c: 1081.0, a: 1065.3, b: 1047.9. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Heavy - Hash Count: 1M

Xmrig 6.21 - Variant: CryptoNight-Heavy - Hash Count: 1M - H/s, More Is Better: c: 4310.8, b: 4308.7, a: 4303.8. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Femto UPX2 - Hash Count: 1M

Xmrig 6.21 - Variant: CryptoNight-Femto UPX2 - Hash Count: 1M - H/s, More Is Better: a: 4305.2, b: 4301.2, c: 4299.7. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Java SciMark

Computational Test: Composite

Java SciMark 2.2 - Computational Test: Composite - Mflops, More Is Better: c: 2451.21, a: 2329.08, b: 2275.42

Java SciMark

Computational Test: Monte Carlo

Java SciMark 2.2 - Computational Test: Monte Carlo - Mflops, More Is Better: a: 1169.97, c: 1169.33, b: 1169.02

Java SciMark

Computational Test: Fast Fourier Transform

Java SciMark 2.2 - Computational Test: Fast Fourier Transform - Mflops, More Is Better: c: 711.85, a: 710.66, b: 684.24

Java SciMark

Computational Test: Sparse Matrix Multiply

Java SciMark 2.2 - Computational Test: Sparse Matrix Multiply - Mflops, More Is Better: c: 2453.43, a: 1833.48, b: 1820.44

Java SciMark

Computational Test: Dense LU Matrix Factorization

Java SciMark 2.2 - Computational Test: Dense LU Matrix Factorization - Mflops, More Is Better: a: 6206.96, c: 6196.15, b: 5979.07

Java SciMark

Computational Test: Jacobi Successive Over-Relaxation

Java SciMark 2.2 - Computational Test: Jacobi Successive Over-Relaxation - Mflops, More Is Better: c: 1725.30, b: 1724.33, a: 1724.33

WebP2 Image Encode

Encode Settings: Default

WebP2 Image Encode 20220823 - Encode Settings: Default - MP/s, More Is Better: b: 8.27, a: 8.08, c: 7.98. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 75, Compression Effort 7

WebP2 Image Encode 20220823 - Encode Settings: Quality 75, Compression Effort 7 - MP/s, More Is Better: c: 0.19, b: 0.19, a: 0.19. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 95, Compression Effort 7

WebP2 Image Encode 20220823 - Encode Settings: Quality 95, Compression Effort 7 - MP/s, More Is Better: c: 0.09, b: 0.09, a: 0.09. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 100, Compression Effort 5

WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Compression Effort 5 - MP/s, More Is Better: a: 5.35, b: 5.28, c: 5.18. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 100, Lossless Compression

WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Lossless Compression - MP/s, More Is Better: c: 0.02, b: 0.02, a: 0.02. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3 - Binary: Pathtracer - Model: Crown - Frames Per Second, More Is Better: a: 20.21 (MIN: 20 / MAX: 20.52), b: 20.19 (MIN: 19.96 / MAX: 20.52), c: 20.19 (MIN: 19.96 / MAX: 20.55)

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown - Frames Per Second, More Is Better: c: 17.76 (MIN: 17.5 / MAX: 18.06), b: 17.63 (MIN: 17.37 / MAX: 17.86), a: 17.56 (MIN: 17.33 / MAX: 17.82)

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon - Frames Per Second, More Is Better: b: 23.08 (MIN: 22.92 / MAX: 23.31), c: 23.05 (MIN: 22.91 / MAX: 23.24), a: 23.03 (MIN: 22.87 / MAX: 23.24)

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon Obj - Frames Per Second, More Is Better: b: 20.73 (MIN: 20.6 / MAX: 20.92), a: 20.72 (MIN: 20.57 / MAX: 20.97), c: 20.66 (MIN: 20.52 / MAX: 20.94)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon - Frames Per Second, More Is Better: b: 22.96 (MIN: 22.78 / MAX: 23.14), c: 22.90 (MIN: 22.72 / MAX: 23.13), a: 22.86 (MIN: 22.69 / MAX: 23.08)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj - Frames Per Second, More Is Better: b: 19.66 (MIN: 19.51 / MAX: 19.86), c: 19.60 (MIN: 19.43 / MAX: 19.81), a: 19.58 (MIN: 19.38 / MAX: 19.81)

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.8 - Encoder Mode: Preset 4 - Input: Bosphorus 4K - Frames Per Second, More Is Better: a: 3.128, c: 3.111, b: 3.110. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.8 - Encoder Mode: Preset 8 - Input: Bosphorus 4K - Frames Per Second, More Is Better: a: 32.26, b: 32.18, c: 32.04. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 4K - Frames Per Second, More Is Better: c: 109.85, b: 109.56, a: 108.78. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 4K - Frames Per Second, More Is Better: a: 109.94, c: 109.54, b: 108.58. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.8 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p - Frames Per Second, More Is Better: a: 8.484, b: 8.461, c: 8.432. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.8 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p - Frames Per Second, More Is Better: c: 46.47, a: 46.14, b: 45.48. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p - Frames Per Second, More Is Better: b: 168.78, a: 166.80, c: 165.48. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p - Frames Per Second, More Is Better: c: 193.61, b: 193.53, a: 190.61. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Spark TPC-H

Scale Factor: 1 - Geometric Mean Of All Queries

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Geometric Mean Of All Queries - Seconds, Fewer Is Better: c: 1.72994856 (MIN: 0.93 / MAX: 6.27), b: 1.74252536 (MIN: 1.06 / MAX: 6.77), a: 1.97200676 (MIN: 1.15 / MAX: 7.52)

Apache Spark TPC-H

Scale Factor: 1 - Q01

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q01 - Seconds, Fewer Is Better: c: 3.73476624, b: 3.83936620, a: 3.99205661

Apache Spark TPC-H

Scale Factor: 1 - Q02

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q02 - Seconds, Fewer Is Better: b: 1.88546371, a: 2.08580303, c: 2.20615315

Apache Spark TPC-H

Scale Factor: 1 - Q03

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q03 - Seconds, Fewer Is Better: c: 2.29606605, b: 2.35663772, a: 2.50698948

Apache Spark TPC-H

Scale Factor: 1 - Q04

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q04 - Seconds, Fewer Is Better: b: 1.97677004, c: 2.02678061, a: 2.41870141

Apache Spark TPC-H

Scale Factor: 1 - Q05

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q05 - Seconds, Fewer Is Better: b: 2.34203458, c: 2.50918603, a: 2.90483642

Apache Spark TPC-H

Scale Factor: 1 - Q06

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q06 - Seconds, Fewer Is Better: b: 0.73341227, c: 0.77613777, a: 0.80232513

Apache Spark TPC-H

Scale Factor: 1 - Q07

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q07 - Seconds, Fewer Is Better: b: 2.26433444, c: 2.29646873, a: 2.53732586

Apache Spark TPC-H

Scale Factor: 1 - Q08

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q08 - Seconds, Fewer Is Better: b: 1.87203193, c: 1.90735745, a: 1.94572318

Apache Spark TPC-H

Scale Factor: 1 - Q09

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q09 - Seconds, Fewer Is Better: b: 3.01985335, c: 3.19596457, a: 3.75430512

Apache Spark TPC-H

Scale Factor: 1 - Q10

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q10 - Seconds, Fewer Is Better: b: 1.94452596, c: 2.05001473, a: 2.39821315

Apache Spark TPC-H

Scale Factor: 1 - Q11

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q11 - Seconds, Fewer Is Better: c: 0.93094862, b: 1.13731420, a: 1.18394649

Apache Spark TPC-H

Scale Factor: 1 - Q12

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q12 - Seconds, Fewer Is Better: b: 1.48578966, c: 1.52121258, a: 1.53979170

Apache Spark TPC-H

Scale Factor: 1 - Q13

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q13 - Seconds, Fewer Is Better: c: 1.21259868, b: 1.23447478, a: 1.63602674

Apache Spark TPC-H

Scale Factor: 1 - Q14

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q14 - Seconds, Fewer Is Better: b: 1.09110844, c: 1.12718666, a: 1.20057631

Apache Spark TPC-H

Scale Factor: 1 - Q15

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q15 - Seconds, Fewer Is Better: b: 1.25461686, c: 1.33410132, a: 1.50106275

Apache Spark TPC-H

Scale Factor: 1 - Q16

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q16 - Seconds, Fewer Is Better: b: 1.42126691, c: 1.49203193, a: 1.64840019

Apache Spark TPC-H

Scale Factor: 1 - Q17

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q17 - Seconds, Fewer Is Better: b: 2.11436844, c: 2.15034795, a: 2.26328945

Apache Spark TPC-H

Scale Factor: 1 - Q18

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q18 - Seconds, Fewer Is Better: c: 2.73754430, b: 2.80264425, a: 3.00142813

Apache Spark TPC-H

Scale Factor: 1 - Q19

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q19 - Seconds, Fewer Is Better: b: 1.06441545, c: 1.10046744, a: 1.15408731

Apache Spark TPC-H

Scale Factor: 1 - Q20

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q20 - Seconds, Fewer Is Better: b: 1.70455909, c: 1.72570264, a: 1.95133698

Apache Spark TPC-H

Scale Factor: 1 - Q21

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q21 - Seconds, Fewer Is Better: c: 6.27021313, b: 6.77273893, a: 7.52318430

Apache Spark TPC-H

Scale Factor: 1 - Q22

Apache Spark TPC-H 3.5 - Scale Factor: 1 - Q22 - Seconds, Fewer Is Better: c: 1.10208726, b: 1.27632964, a: 1.48474264

Apache Spark TPC-H

Scale Factor: 10 - Geometric Mean Of All Queries

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Geometric Mean Of All Queries - Seconds, Fewer Is Better: b: 7.76683404 (MIN: 2.6 / MAX: 31.07), c: 7.81377484 (MIN: 2.48 / MAX: 31.42), a: 8.05234849 (MIN: 2.65 / MAX: 31.77)

Apache Spark TPC-H

Scale Factor: 10 - Q01

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q01 - Seconds, Fewer Is Better: a: 9.47071648, c: 10.40988636, b: 10.54642296

Apache Spark TPC-H

Scale Factor: 10 - Q02

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q02 - Seconds, Fewer Is Better: b: 4.93175220, c: 5.13556099, a: 5.99031353

Apache Spark TPC-H

Scale Factor: 10 - Q03

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q03 - Seconds, Fewer Is Better: a: 11.96, c: 12.30, b: 12.32

Apache Spark TPC-H

Scale Factor: 10 - Q04

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q04 - Seconds, Fewer Is Better: a: 9.45135117, c: 9.57772446, b: 9.97940636

Apache Spark TPC-H

Scale Factor: 10 - Q05

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q05 - Seconds, Fewer Is Better: b: 12.20, a: 12.44, c: 12.86

Apache Spark TPC-H

Scale Factor: 10 - Q06

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q06 - Seconds, Fewer Is Better: b: 5.80461121, c: 5.88889503, a: 5.94231749

Apache Spark TPC-H

Scale Factor: 10 - Q07

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q07 - Seconds, Fewer Is Better: a: 9.72101307, b: 10.20696068, c: 10.30357361

Apache Spark TPC-H

Scale Factor: 10 - Q08

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q08 - Seconds, Fewer Is Better: b: 9.92212391, a: 10.65014458, c: 10.74215603

Apache Spark TPC-H

Scale Factor: 10 - Q09

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q09 - Seconds, Fewer Is Better: b: 13.02, c: 13.23, a: 13.57

Apache Spark TPC-H

Scale Factor: 10 - Q10

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q10 - Seconds, Fewer Is Better: c: 9.42293262, a: 9.68672657, b: 9.72717571

Apache Spark TPC-H

Scale Factor: 10 - Q11

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q11 - Seconds, Fewer Is Better: c: 2.48492455, b: 2.59831715, a: 2.64832497

Apache Spark TPC-H

Scale Factor: 10 - Q12

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q12 - Seconds, Fewer Is Better: c: 7.66917181, a: 7.73613167, b: 7.77267551

Apache Spark TPC-H

Scale Factor: 10 - Q13

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q13 - Seconds, Fewer Is Better: b: 4.14102268, c: 4.26253653, a: 4.26676846

Apache Spark TPC-H

Scale Factor: 10 - Q14

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q14 - Seconds, Fewer Is Better: c: 6.64122009, b: 6.81118393, a: 6.97334051

Apache Spark TPC-H

Scale Factor: 10 - Q15

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q15 - Seconds, Fewer Is Better: b: 6.53355455, c: 6.55197144, a: 6.55926418

Apache Spark TPC-H

Scale Factor: 10 - Q16

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q16 - Seconds, Fewer Is Better: b: 3.24821448, c: 3.48719382, a: 4.00392866

Apache Spark TPC-H

Scale Factor: 10 - Q17

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q17 - Seconds, Fewer Is Better: c: 16.19, b: 16.20, a: 16.38

Apache Spark TPC-H

Scale Factor: 10 - Q18

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q18 - Seconds, Fewer Is Better: b: 16.56, c: 16.66, a: 16.95

Apache Spark TPC-H

Scale Factor: 10 - Q19

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q19 - Seconds, Fewer Is Better: b: 6.55740166, a: 6.64607334, c: 6.74200249

Apache Spark TPC-H

Scale Factor: 10 - Q20

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q20 - Seconds, Fewer Is Better: b: 8.39497089, c: 8.46767712, a: 8.68782139

Apache Spark TPC-H

Scale Factor: 10 - Q21

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q21 - Seconds, Fewer Is Better: b: 31.07, c: 31.42, a: 31.77

Apache Spark TPC-H

Scale Factor: 10 - Q22

Apache Spark TPC-H 3.5 - Scale Factor: 10 - Q22 - Seconds, Fewer Is Better: c: 3.09575415, b: 3.23901415, a: 3.39848256

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: b: 18.44, c: 18.26, a: 18.04

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: b: 485.85, c: 492.47, a: 497.07

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - items/sec, More Is Better: c: 16.25, b: 16.01, a: 15.91

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - ms/batch, Fewer Is Better: c: 61.52, b: 62.45, a: 62.83

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a: 456.68, b: 456.12, c: 455.25

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a: 19.68, b: 19.71, c: 19.75

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream - items/sec, More Is Better: c: 148.27, a: 146.96, b: 145.39

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream - ms/batch, Fewer Is Better: c: 6.7349, a: 6.7945, b: 6.8689

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: c: 258.83, b: 257.86, a: 255.58

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: c: 34.75, b: 34.88, a: 35.19

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream - items/sec, More Is Better: b: 134.83, a: 126.26, c: 123.73

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream - ms/batch, Fewer Is Better: b: 7.4046, a: 7.9078, c: 8.0707

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: b: 1203.57, c: 1201.85, a: 1200.72

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: b: 7.4584, c: 7.4689, a: 7.4763

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream - items/sec, More Is Better: b: 584.81, c: 569.80, a: 566.71

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream - ms/batch, Fewer Is Better: b: 1.7004, c: 1.7464, a: 1.7553

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: b: 105.60, c: 105.42, a: 104.29

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: b: 85.20, c: 85.34, a: 86.27

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream - items/sec, More Is Better: b: 84.21, a: 83.53, c: 82.78

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream - ms/batch, Fewer Is Better: b: 11.86, a: 11.96, c: 12.06

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: b: 23.57, c: 23.48, a: 23.48

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: b: 381.84, c: 383.23, a: 383.34

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream - items/sec, More Is Better: b: 15.10, a: 15.01, c: 14.94

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream - ms/batch, Fewer Is Better: b: 66.21, a: 66.63, c: 66.91

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a: 260.43, c: 257.21, b: 256.18

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a: 34.52, c: 34.97, b: 35.11

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream - items/sec, More Is Better: a: 130.11, c: 129.20, b: 121.15

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream - ms/batch, Fewer Is Better: a: 7.6740, c: 7.7283, b: 8.2423

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a: 112.27, c: 110.71, b: 110.23

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a: 80.10, c: 81.27, b: 81.63

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream - items/sec, More Is Better: c: 84.45, b: 84.27, a: 84.21

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream - ms/batch, Fewer Is Better: c: 11.83, b: 11.86, a: 11.86

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: c: 153.24, a: 151.55, b: 148.28

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: c: 58.71, a: 59.37, b: 60.67

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - items/sec, More Is Better: c: 89.47, a: 88.75, b: 87.13

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - ms/batch, Fewer Is Better: c: 11.17, a: 11.26, b: 11.47

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a: 22.33, c: 22.27, b: 21.89

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a: 403.05, c: 404.03, b: 408.05

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream - items/sec, More Is Better: c: 21.94, a: 21.89, b: 21.14

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream - ms/batch, Fewer Is Better: c: 45.55, a: 45.66, b: 47.29

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: c: 210.21, a: 209.78, b: 206.48

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: c: 42.79, a: 42.88, b: 43.56

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream - items/sec, More Is Better: b: 57.40, c: 56.75, a: 52.67

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream - ms/batch, Fewer Is Better: b: 17.41, c: 17.61, a: 18.98

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - items/sec, More Is Better: a: 18.61, c: 18.20, b: 18.18

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - ms/batch, Fewer Is Better: a: 481.55, c: 493.53, b: 493.56

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - items/sec, More Is Better: c: 16.48, a: 16.40, b: 16.30

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - ms/batch, Fewer Is Better: c: 60.68, a: 60.96, b: 61.36

ScyllaDB

Test: Writes

ScyllaDB 5.2.9 - Test: Writes - Op/s, More Is Better: c: 150956, a: 150834, b: 90383


Phoronix Test Suite v10.8.4