dfgg

Intel Core i7-1065G7 testing with a Dell 06CDVY motherboard (1.0.9 BIOS) and Intel Iris Plus ICL GT2 16GB graphics on Ubuntu 23.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2312179-NE-DFGG1028382&sor&grr.
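To reproduce this comparison locally, the Phoronix Test Suite can be pointed at the public result ID from the URL above. A minimal sketch, assuming the phoronix-test-suite client is already installed and on the PATH; the Python wrapper itself is illustrative and not part of the exported result:

```python
import subprocess

# Public OpenBenchmarking.org identifier taken from the export URL above.
RESULT_ID = "2312179-NE-DFGG1028382"

def rerun_comparison(result_id: str) -> None:
    # "phoronix-test-suite benchmark <result-id>" installs the same test
    # selection and appends the local system's numbers to the comparison.
    subprocess.run(["phoronix-test-suite", "benchmark", result_id], check=True)

if __name__ == "__main__":
    rerun_comparison(RESULT_ID)
```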

System Details (identical for configurations a and b)

Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads)
Motherboard: Dell 06CDVY (1.0.9 BIOS)
Chipset: Intel Ice Lake-LP DRAM
Memory: 16GB
Disk: Toshiba KBG40ZPZ512G NVMe 512GB
Graphics: Intel Iris Plus ICL GT2 16GB (1100MHz)
Audio: Realtek ALC289
Network: Intel Ice Lake-LP PCH CNVi WiFi
OS: Ubuntu 23.04
Kernel: 6.2.0-36-generic (x86_64)
Desktop: GNOME Shell 44.3
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 23.0.4-0ubuntu1~23.04.1
OpenCL: OpenCL 3.0
Compiler: GCC 12.3.0
File-System: ext4
Screen Resolution: 1920x1200

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-DAPbBt/gcc-12-12.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-DAPbBt/gcc-12-12.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xc2 - Thermald 2.5.2
Java Details: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu123.04)
Python Details: Python 3.11.4
Security Details: gather_data_sampling: Mitigation of Microcode + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Mitigation of Microcode + tsx_async_abort: Not affected

Result Overview: a condensed side-by-side table of all results for configurations a and b (XMRig, WebP2 Image Encode, Apache Spark TPC-H at scale factors 1 and 10, LeelaChessZero, SVT-AV1, OpenSSL, ScyllaDB, Neural Magic DeepSparse, and Java SciMark). The individual results are broken out below.

Xmrig

Variant: GhostRider - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better): a: 187.4, b: 185.2
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

WebP2 Image Encode

Encode Settings: Quality 95, Compression Effort 7

WebP2 Image Encode 20220823 (MP/s, More Is Better): a: 0.01, b: 0.01
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

Apache Spark TPC-H

Scale Factor: 10 - Q22

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 12.72, b: 13.09

Apache Spark TPC-H

Scale Factor: 10 - Q21

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 196.26, b: 189.40

Apache Spark TPC-H

Scale Factor: 10 - Q20

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 49.49, b: 47.05

Apache Spark TPC-H

Scale Factor: 10 - Q19

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 39.19, b: 38.37

Apache Spark TPC-H

Scale Factor: 10 - Q18

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 106.44, b: 103.68

Apache Spark TPC-H

Scale Factor: 10 - Q17

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 100.78, b: 94.20

Apache Spark TPC-H

Scale Factor: 10 - Q16

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 11.41, b: 11.61

Apache Spark TPC-H

Scale Factor: 10 - Q15

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 37.03, b: 35.77

Apache Spark TPC-H

Scale Factor: 10 - Q14

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 38.60, b: 38.96

Apache Spark TPC-H

Scale Factor: 10 - Q13

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 20.44, b: 21.21

Apache Spark TPC-H

Scale Factor: 10 - Q12

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 48.54, b: 47.39

Apache Spark TPC-H

Scale Factor: 10 - Q11

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 10.60, b: 10.96

Apache Spark TPC-H

Scale Factor: 10 - Q10

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 51.23, b: 51.32

Apache Spark TPC-H

Scale Factor: 10 - Q09

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 73.76, b: 73.32

Apache Spark TPC-H

Scale Factor: 10 - Q08

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 62.19, b: 60.07

Apache Spark TPC-H

Scale Factor: 10 - Q07

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 54.83, b: 55.61

Apache Spark TPC-H

Scale Factor: 10 - Q06

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 36.56, b: 37.06

Apache Spark TPC-H

Scale Factor: 10 - Q05

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 62.50, b: 66.50

Apache Spark TPC-H

Scale Factor: 10 - Q04

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 48.48, b: 48.52

Apache Spark TPC-H

Scale Factor: 10 - Q03

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 55.94, b: 55.58

Apache Spark TPC-H

Scale Factor: 10 - Q02

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 12.79, b: 12.19

Apache Spark TPC-H

Scale Factor: 10 - Q01

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 41.61, b: 44.68

Apache Spark TPC-H

Scale Factor: 10 - Geometric Mean Of All Queries

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 41.82 (MIN: 10.6 / MAX: 196.26), b: 41.34 (MIN: 10.96 / MAX: 189.4)
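The composite above is a geometric mean of the 22 per-query times rather than an arithmetic average, which keeps a single very long query such as Q21 from dominating the summary. A minimal sketch of that calculation for run a, using the rounded per-query times listed above; the published composite is produced by the Phoronix Test Suite from its own higher-precision samples, so this recomputation lands near, but not exactly on, the reported figure:

```python
from statistics import geometric_mean

# Rounded Scale Factor 10 query times (seconds) for run "a", Q01 through Q22,
# copied from the individual results above.
run_a_times = [
    41.61, 12.79, 55.94, 48.48, 62.50, 36.56, 54.83, 62.19, 73.76, 51.23, 10.60,
    48.54, 20.44, 38.60, 37.03, 11.41, 100.78, 106.44, 39.19, 49.49, 196.26, 12.72,
]

# Geometric mean = (t1 * t2 * ... * t22) ** (1 / 22)
print(f"Composite: {geometric_mean(run_a_times):.2f} seconds")
```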

WebP2 Image Encode

Encode Settings: Quality 75, Compression Effort 7

WebP2 Image Encode 20220823 (MP/s, More Is Better): a: 0.03, b: 0.03
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

Xmrig

Variant: CryptoNight-Heavy - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better): a: 1298.8, b: 1282.1
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Femto UPX2 - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better): a: 1302.3, b: 1281.7
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: Monero - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better): a: 1308.1, b: 1290.3
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: KawPow - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better): a: 1296.4, b: 1333.9
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: Wownero - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better): a: 1573.6, b: 1524.2
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

LeelaChessZero

Backend: Eigen

LeelaChessZero 0.30 (Nodes Per Second, More Is Better): a: 20, b: 24
1. (CXX) g++ options: -flto -pthread

LeelaChessZero

Backend: BLAS

LeelaChessZero 0.30 (Nodes Per Second, More Is Better): a: 32, b: 45
1. (CXX) g++ options: -flto -pthread

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.8 (Frames Per Second, More Is Better): a: 0.892, b: 0.858
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenSSL

Algorithm: SHA256

OpenSSL (byte/s, More Is Better): a: 2139384230, b: 2104039050
1. OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)

OpenSSL

Algorithm: AES-128-GCM

OpenSSL (byte/s, More Is Better): a: 15202890820, b: 15081739060
1. OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)

OpenSSL

Algorithm: ChaCha20-Poly1305

OpenSSL (byte/s, More Is Better): a: 10536056700, b: 9830306880
1. OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)

OpenSSL

Algorithm: AES-256-GCM

OpenSSL (byte/s, More Is Better): a: 13364107950, b: 12548378350
1. OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)

OpenSSL

Algorithm: ChaCha20

OpenSSL (byte/s, More Is Better): a: 15375647270, b: 14830093930
1. OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)

OpenSSL

Algorithm: SHA512

OpenSSL (byte/s, More Is Better): a: 856037790, b: 813051380
1. OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)

ScyllaDB

Test: Writes

ScyllaDB 5.2.9 (Op/s, More Is Better): a: 27851, b: 25475

Apache Spark TPC-H

Scale Factor: 1 - Q22

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 2.24996281, b: 2.04710698

Apache Spark TPC-H

Scale Factor: 1 - Q21

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 20.62, b: 20.04

Apache Spark TPC-H

Scale Factor: 1 - Q20

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 4.86529493, b: 5.14292622

Apache Spark TPC-H

Scale Factor: 1 - Q19

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 4.04202414, b: 3.57351255

Apache Spark TPC-H

Scale Factor: 1 - Q18

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 10.34, b: 10.20

Apache Spark TPC-H

Scale Factor: 1 - Q17

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 8.40441418, b: 8.57745552

Apache Spark TPC-H

Scale Factor: 1 - Q16

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 1.72381175, b: 2.04605532

Apache Spark TPC-H

Scale Factor: 1 - Q15

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 3.91055250, b: 3.84038305

Apache Spark TPC-H

Scale Factor: 1 - Q14

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 3.69340801, b: 3.97235441

Apache Spark TPC-H

Scale Factor: 1 - Q13

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 3.23068547, b: 2.99688220

Apache Spark TPC-H

Scale Factor: 1 - Q12

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 5.02531958, b: 4.83623695

Apache Spark TPC-H

Scale Factor: 1 - Q11

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 1.93730366, b: 2.02040172

Apache Spark TPC-H

Scale Factor: 1 - Q10

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 5.49699688, b: 5.96171904

Apache Spark TPC-H

Scale Factor: 1 - Q09

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 7.87298393, b: 8.69179630

Apache Spark TPC-H

Scale Factor: 1 - Q08

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 4.97744513, b: 5.44389009

Apache Spark TPC-H

Scale Factor: 1 - Q07

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 6.48225832, b: 5.84497738

Apache Spark TPC-H

Scale Factor: 1 - Q06

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 3.51144052, b: 3.24464250

Apache Spark TPC-H

Scale Factor: 1 - Q05

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 6.38540220, b: 7.11330652

Apache Spark TPC-H

Scale Factor: 1 - Q04

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 6.71648932, b: 7.08578300

Apache Spark TPC-H

Scale Factor: 1 - Q03

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 6.39238787, b: 7.07467794

Apache Spark TPC-H

Scale Factor: 1 - Q02

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 3.88890910, b: 3.41797328

Apache Spark TPC-H

Scale Factor: 1 - Q01

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 7.15123463, b: 7.08784056

Apache Spark TPC-H

Scale Factor: 1 - Geometric Mean Of All Queries

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better): a: 4.7325257 (MIN: 1.72 / MAX: 20.62), b: 4.8087436 (MIN: 2.02 / MAX: 20.04)

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.8 (Frames Per Second, More Is Better): a: 7.044, b: 6.765
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 737.65, b: 752.24

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 2.7047, b: 2.6521

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 373.77, b: 378.91

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 2.6753, b: 2.6391
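For the synchronous single-stream scenarios, DeepSparse reports both a per-batch latency (ms/batch) and a throughput (items/sec), and with one item in flight the two are effectively reciprocals. A small sanity check of that relationship using the BERT-Large figures just above; the batch-size-of-one assumption is mine, not stated in the export:

```python
# Single-stream BERT-Large NLP Question Answering figures from the two results above.
latencies_ms = {"a": 373.77, "b": 378.91}          # ms/batch
reported_throughput = {"a": 2.6753, "b": 2.6391}   # items/sec

for run, ms in latencies_ms.items():
    derived = 1000.0 / ms  # one item per batch, so items/sec is the inverse latency
    print(f"run {run}: derived {derived:.4f} items/sec vs reported {reported_throughput[run]}")
```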

OpenSSL

Algorithm: RSA4096

OpenSSL (verify/s, More Is Better): a: 45019.3, b: 43229.0
1. OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)

OpenSSL

Algorithm: RSA4096

OpenSSL (sign/s, More Is Better): a: 769.0, b: 727.2
1. OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 53.52, b: 54.04

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 37.34, b: 36.98

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 23.57, b: 23.94

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 84.71, b: 83.38

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 926.22, b: 931.34

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 2.153, b: 2.142

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 13.09, b: 13.31

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 76.32, b: 75.06

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 29.60, b: 29.89

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 33.76, b: 33.44

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 946.79, b: 1010.44

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 2.1073, b: 1.9736

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 475.16, b: 480.05

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 4.2078, b: 4.1657

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 229.59, b: 234.54

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 4.3550, b: 4.2633

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 465.05, b: 468.48

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 2.1502, b: 2.1345

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 465.03, b: 476.67

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 2.1503, b: 2.0978

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.8 (Frames Per Second, More Is Better): a: 3.684, b: 3.605
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 103.17, b: 103.50

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 19.38, b: 19.30

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 54.87, b: 55.43

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 18.22, b: 18.04

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 58.77, b: 60.41

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 34.00, b: 33.08

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 8.5472, b: 8.7547

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 233.27, b: 227.73

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 142.62, b: 147.35

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 14.02, b: 13.57

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 141.23, b: 143.49

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 14.16, b: 13.91

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 58.30, b: 59.93

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 34.28, b: 33.34

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 80.29, b: 80.51

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 12.45, b: 12.42

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 77.81, b: 78.72

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 12.85, b: 12.70

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 30.02, b: 30.51

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 33.29, b: 32.75

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 29.98, b: 30.75

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 33.34, b: 32.51

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): a: 4.7530, b: 5.1288

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): a: 209.81, b: 194.45

WebP2 Image Encode

Encode Settings: Quality 100, Compression Effort 5

WebP2 Image Encode 20220823 (MP/s, More Is Better): a: 0.95, b: 0.95
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

Java SciMark

Computational Test: Composite

Java SciMark 2.2 (Mflops, More Is Better): a: 2605.95, b: 2611.98

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.8 (Frames Per Second, More Is Better): a: 26.04, b: 26.97
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.8 (Frames Per Second, More Is Better): a: 31.31, b: 28.23
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.8 (Frames Per Second, More Is Better): a: 33.86, b: 31.87
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

WebP2 Image Encode

Encode Settings: Default

WebP2 Image Encode 20220823 (MP/s, More Is Better): a: 2.60, b: 2.45
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.8 (Frames Per Second, More Is Better): a: 192.19, b: 192.67
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

SVT-AV1 1.8 (Frames Per Second, More Is Better): a: 254.96, b: 256.08
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Java SciMark

Computational Test: Jacobi Successive Over-Relaxation

Java SciMark 2.2 (Mflops, More Is Better): a: 1403.14, b: 1401.22

Java SciMark

Computational Test: Dense LU Matrix Factorization

Java SciMark 2.2 (Mflops, More Is Better): a: 8173.69, b: 8211.45

Java SciMark

Computational Test: Sparse Matrix Multiply

Java SciMark 2.2 (Mflops, More Is Better): a: 1999.02, b: 1994.16

Java SciMark

Computational Test: Fast Fourier Transform

Java SciMark 2.2 (Mflops, More Is Better): a: 518.57, b: 516.53

Java SciMark

Computational Test: Monte Carlo

Java SciMark 2.2 (Mflops, More Is Better): a: 935.32, b: 936.54


Phoronix Test Suite v10.8.5