eps

2 x AMD EPYC 9684X 96-Core testing with an AMD Titanite_4G (RTI1007B BIOS) motherboard and ASPEED graphics on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2312241-NE-EPS60637430
Test runs:

  a   December 24 2023   Test duration: 1 Day, 26 Minutes
  b   December 25 2023   Test duration: 7 Hours, 39 Minutes
  Average test duration: 16 Hours, 3 Minutes


eps Performance - OpenBenchmarking.org - Phoronix Test Suite

  Processor:          2 x AMD EPYC 9684X 96-Core @ 2.55GHz (192 Cores / 384 Threads)
  Motherboard:        AMD Titanite_4G (RTI1007B BIOS)
  Chipset:            AMD Device 14a4
  Memory:             1520GB
  Disk:               3201GB Micron_7450_MTFDKCB3T2TFS
  Graphics:           ASPEED
  Network:            Broadcom NetXtreme BCM5720 PCIe
  OS:                 Ubuntu 23.10
  Kernel:             6.5.0-13-generic (x86_64)
  Compiler:           GCC 13.2.0
  File-System:        ext4
  Screen Resolution:  800x600

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa10113e
- OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)
- Python 3.11.6
- Security: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

a vs. b Comparison (Phoronix Test Suite): [inline bar chart of per-test percentage differences between runs a and b, ranging from the baseline up to +30.8%, covering the PyTorch, SVT-AV1, WebP2 Image Encode, LeelaChessZero, and Apache Spark TPC-H results; chart omitted]
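The percentages in the comparison chart are relative differences between the two runs. A minimal sketch of that calculation (the convention of reporting how far the larger result sits above the smaller one is an assumption about the chart, not stated in this file):

```python
def percent_delta(x, y):
    """Relative difference between two results, as a percentage.

    Assumes the chart reports the larger value's lead over the smaller
    one; this convention is an assumption, not taken from the file.
    """
    lo, hi = sorted((x, y))
    return (hi / lo - 1.0) * 100.0

# LeelaChessZero BLAS nodes/sec from the two runs: b = 871, a = 853.
print(round(percent_delta(853, 871), 1))  # 2.1
```

The 2.1% result appears to correspond to the LeelaChessZero entry visible in the chart.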

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine driven by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.30 (Nodes Per Second, More Is Better)
  Backend: BLAS    b: 871    a: 853    SE +/- 18.54, N = 9
  Backend: Eigen   b: 715    a: 704    SE +/- 17.59, N = 8
  1. (CXX) g++ options: -flto -pthread

PyTorch

PyTorch 2.1 (batches/sec, More Is Better)
  Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l   b: 2.28 (MIN 1.71 / MAX 2.84)   a: 2.32 (MIN 1.83 / MAX 2.8)    SE +/- 0.00, N = 3
  Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l    b: 2.34 (MIN 1.78 / MAX 2.78)   a: 2.32 (MIN 1.77 / MAX 2.81)   SE +/- 0.01, N = 3
  Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l    b: 2.32 (MIN 1.93 / MAX 2.71)   a: 2.32 (MIN 1.86 / MAX 2.8)    SE +/- 0.01, N = 3

Apache Spark TPC-H

This is a benchmark of Apache Spark using the TPC-H data set. Apache Spark is an open-source unified analytics engine for large-scale data processing and big data workloads. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of https://github.com/ssavvides/tpch-spark/ for facilitating the TPC-H benchmark. Learn more via the OpenBenchmarking.org test page.
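Concretely, a single-system tpch-spark run is launched through spark-submit. The sketch below assembles such an invocation; the jar path and the main.scala.TpchQuery entry class are assumptions based on the tpch-spark project layout, not values taken from this result file:

```python
# Hypothetical sketch: assemble a spark-submit invocation for tpch-spark.
# The jar path and entry class are assumptions; adjust for your build.
def tpch_submit_args(jar_path, query_num=None):
    args = ["spark-submit", "--class", "main.scala.TpchQuery", jar_path]
    if query_num is not None:
        # With a query number, only that query runs instead of all 22.
        args.append(str(query_num))
    return args

print(" ".join(tpch_submit_args("target/spark-tpc-h-queries.jar", 1)))
```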

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better) - Scale Factor: 50

  Query   b             a             SE +/-        N
  Q22     10.87         10.69         0.12          3
  Q21     77.71         87.90         7.93          3
  Q20     21.05         20.80         0.10          3
  Q19     12.09         10.45         0.12          3
  Q18     33.74         34.51         0.33          3
  Q17     24.56         24.31         0.55          3
  Q16     14.98         14.22         0.26          3
  Q15     9.48287773    9.77733866    0.05343306    3
  Q14     12.57         12.70         0.22          3
  Q13     13.04         12.76         0.08          3
  Q12     17.70         19.41         1.19          3
  Q11     13.31         13.58         0.32          3
  Q10     24.69         24.36         0.32          3
  Q09     36.67         36.66         0.34          3
  Q08     26.63         26.74         0.26          3
  Q07     25.86         24.86         0.23          3
  Q06     5.88382483    5.90309207    0.04522799    3
  Q05     31.20         29.84         0.67          3
  Q04     21.82         21.00         0.46          3
  Q03     29.69         26.19         0.94          3
  Q02     14.53         14.25         0.35          3
  Q01     12.87         12.01         0.21          3

  Geometric Mean Of All Queries: b: 19.56 (MIN 9.48 / MAX 77.71), a: 19.59 (MIN 9.71 / MAX 103.64); SE +/- 0.05, N = 3
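The "Geometric Mean Of All Queries" row and the per-result "SE +/-" figures are standard aggregations of the raw timings. A minimal sketch of both (the sample values below are illustrative, not taken from this run):

```python
import math
import statistics

def geometric_mean(times):
    """Geometric mean of per-query runtimes, as in the summary row."""
    return math.exp(sum(math.log(t) for t in times) / len(times))

def standard_error(samples):
    """Standard error of the mean across repeated runs (the 'SE +/-' value)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Illustrative values only.
print(geometric_mean([10.0, 20.0, 40.0]))   # cube root of 8000 = 20.0
print(standard_error([12.0, 12.5, 13.0]))   # spread across N = 3 runs
```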

Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better) - Scale Factor: 10

  Query   b             a             SE +/-        N
  Q22     6.04430914    6.05411895    0.13164085    3
  Q21     32.70         32.91         0.11          3
  Q20     11.54         11.44         0.14          3
  Q19     6.06041670    6.20677837    0.15363169    3
  Q18     17.31         18.47         0.50          3
  Q17     13.01         12.77         0.07          3
  Q16     6.95270681    6.87131294    0.33462632    3
  Q15     5.43870592    5.84138076    0.10221447    3
  Q14     6.90602303    7.07622369    0.33271668    3
  Q13     7.94083786    7.37728373    0.09689769    3
  Q12     10.03829002   9.94400438    0.16460967    3
  Q11     8.27814293    8.00292349    0.04584382    3
  Q10     14.78         15.17         0.31          3
  Q09     22.53         21.91         0.51          3
  Q08     14.56         15.52         0.41          3
  Q07     14.90         14.65         0.33          3
  Q06     1.85595930    2.05104745    0.23574646    3
  Q05     18.86         16.44         0.48          3
  Q04     11.26         12.35         0.21          3
  Q03     14.29         13.97         0.31          3
  Q02     7.39245987    7.43104283    0.13824959    3
  Q01     7.28826714    7.58889151    0.23898111    3

  Geometric Mean Of All Queries: b: 10.66 (MIN 5.44 / MAX 32.7), a: 10.72 (MIN 5.7 / MAX 33.03); SE +/- 0.02, N = 3

PyTorch

PyTorch 2.1 (batches/sec, More Is Better)
  Device: CPU - Batch Size: 32 - Model: ResNet-152    b: 8.98 (MIN 5.1 / MAX 9.29)    a: 8.90 (MIN 4.8 / MAX 9.23)    SE +/- 0.10, N = 3
  Device: CPU - Batch Size: 16 - Model: ResNet-152    b: 8.97 (MIN 4.96 / MAX 9.11)   a: 8.93 (MIN 4.75 / MAX 9.39)   SE +/- 0.10, N = 3
  Device: CPU - Batch Size: 256 - Model: ResNet-152   b: 9.65 (MIN 4.98 / MAX 9.85)   a: 8.96 (MIN 4.84 / MAX 9.24)   SE +/- 0.05, N = 3

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.
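Since the input is a 6000x4000 (24 megapixel) image, the MP/s figures translate directly into an implied per-image encode time; a quick back-of-the-envelope conversion:

```python
def seconds_per_image(width_px, height_px, mp_per_sec):
    """Encode time implied by a throughput reported in megapixels per second."""
    megapixels = width_px * height_px / 1e6
    return megapixels / mp_per_sec

# 24 MP input at the 0.11 MP/s reported below for Quality 100 lossless:
print(round(seconds_per_image(6000, 4000, 0.11), 1))  # about 218.2 seconds per image
```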

WebP2 Image Encode 20220823 (MP/s, More Is Better)
  Encode Settings: Quality 100, Lossless Compression   b: 0.11   a: 0.11   SE +/- 0.00, N = 3
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

PyTorch

PyTorch 2.1 (batches/sec, More Is Better)
  Device: CPU - Batch Size: 1 - Model: ResNet-50   b: 23.12 (MIN 12.17 / MAX 24.33)   a: 23.57 (MIN 11.38 / MAX 25.62)   SE +/- 0.19, N = 15

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile relies on benchmarking the system/OS-supplied openssl binary, rather than the pts/openssl test profile that uses a locally-built OpenSSL. Learn more via the OpenBenchmarking.org test page.

Algorithm: AES-256-GCM

a: The test run did not produce a result. E: 40270E64087F0000:error:1C800066:Provider routines:ossl_gcm_stream_update:cipher operation failed:../providers/implementations/ciphers/ciphercommon_gcm.c:320:

b: The test run did not produce a result. E: 408712FE017F0000:error:1C800066:Provider routines:ossl_gcm_stream_update:cipher operation failed:../providers/implementations/ciphers/ciphercommon_gcm.c:320:

Algorithm: AES-128-GCM

a: The test run did not produce a result. E: 4097A6F7B77F0000:error:1C800066:Provider routines:ossl_gcm_stream_update:cipher operation failed:../providers/implementations/ciphers/ciphercommon_gcm.c:320:

b: The test run did not produce a result. E: 40B7EFA3BE7F0000:error:1C800066:Provider routines:ossl_gcm_stream_update:cipher operation failed:../providers/implementations/ciphers/ciphercommon_gcm.c:320:

OpenSSL (byte/s, More Is Better) - Algorithm: SHA512
  b: 91835961470   a: 91630925473   SE +/- 191332047.54, N = 3
  1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)

Algorithm: ChaCha20-Poly1305

a: The test run did not produce a result.

b: The test run did not produce a result.

Algorithm: ChaCha20

a: The test run did not produce a result.

b: The test run did not produce a result.

OpenSSL (byte/s, More Is Better) - Algorithm: SHA256
  b: 282211175400   a: 281869895760   SE +/- 548972949.20, N = 3
  1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)

PyTorch

PyTorch 2.1 (batches/sec, More Is Better)
  Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l   b: 6.74 (MIN 3.48 / MAX 6.89)     a: 6.40 (MIN 2.93 / MAX 6.73)     SE +/- 0.05, N = 3
  Device: CPU - Batch Size: 1 - Model: ResNet-152          b: 10.43 (MIN 4.8 / MAX 11.36)    a: 10.16 (MIN 4.56 / MAX 10.94)   SE +/- 0.08, N = 3
  Device: CPU - Batch Size: 256 - Model: ResNet-50         b: 20.60 (MIN 13.89 / MAX 21.35)  a: 21.29 (MIN 13.22 / MAX 22.39)  SE +/- 0.31, N = 3
  Device: CPU - Batch Size: 32 - Model: ResNet-50          b: 21.09 (MIN 13.93 / MAX 21.71)  a: 21.00 (MIN 11.39 / MAX 21.87)  SE +/- 0.20, N = 3
  Device: CPU - Batch Size: 16 - Model: ResNet-50          b: 21.57 (MIN 14.06 / MAX 22.29)  a: 21.16 (MIN 12.26 / MAX 22.24)  SE +/- 0.25, N = 3

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
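For the synchronous single-stream scenarios below, the ms/batch and items/sec columns are two views of the same measurement: throughput is roughly the reciprocal of per-batch latency. A sanity-check sketch (a batch size of one item and exact harness accounting are assumptions):

```python
def items_per_sec(ms_per_batch, items_per_batch=1):
    """Approximate throughput implied by a per-batch latency in milliseconds."""
    return items_per_batch * 1000.0 / ms_per_batch

# Run b's BERT-Large QA single-stream latency is 31.26 ms/batch;
# the report's 31.98 items/sec is close to its reciprocal:
print(round(items_per_sec(31.26), 2))  # 31.99
```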

Neural Magic DeepSparse 1.6
  Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream
    ms/batch (Fewer Is Better):    b: 31.26    a: 31.22    SE +/- 0.03, N = 3
    items/sec (More Is Better):    b: 31.98    a: 32.02    SE +/- 0.03, N = 3
  Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
    ms/batch (Fewer Is Better):    b: 607.57   a: 607.57   SE +/- 0.37, N = 3
    items/sec (More Is Better):    b: 156.43   a: 156.42   SE +/- 0.02, N = 3

Apache Spark TPC-H


Apache Spark TPC-H 3.5 (Seconds, Fewer Is Better) - Scale Factor: 1

  Query   b             a             SE +/-        N
  Q22     1.06679213    1.00769047    0.03222070    3
  Q21     9.55909538    9.64531231    0.26119238    3
  Q20     3.05001688    3.05739617    0.12035470    3
  Q19     0.85797596    0.79092395    0.03922839    3
  Q18     5.13171148    5.62853845    0.11078356    3
  Q17     2.88348198    2.95993924    0.10612827    3
  Q16     1.51779914    1.38147259    0.06760680    3
  Q15     2.58714175    2.50185966    0.11136502    3
  Q14     2.21146965    2.06485331    0.16557850    3
  Q13     1.74074161    1.58815936    0.15789062    3
  Q12     2.26641607    2.17542648    0.15180813    3
  Q11     1.13998687    1.27338135    0.06007206    3
  Q10     3.81245542    3.81359665    0.13264795    3
  Q09     5.89775848    5.70969407    0.08828966    3
  Q08     2.60907817    2.65584644    0.02941830    3
  Q07     3.87790275    4.01044806    0.02085439    3
  Q06     0.35801557    0.46822915    0.03244463    3
  Q05     3.69217634    4.13122161    0.18898243    3
  Q04     3.75427246    3.92525745    0.09899955    3
  Q03     3.86610818    3.86442184    0.11371323    3
  Q02     2.08224201    2.06179071    0.02016184    3
  Q01     4.44657946    4.32006081    0.17358727    3

  Geometric Mean Of All Queries: b: 2.49517747 (MIN 0.86 / MAX 9.56), a: 2.44964916 (MIN 0.73 / MAX 10.03); SE +/- 0.02040294, N = 3

OpenSSL


OpenSSL (More Is Better) - Algorithm: RSA4096
  verify/s   b: 3243345.2   a: 3244390.3   SE +/- 1292.47, N = 3
  sign/s     b: 98528.8     a: 98622.0     SE +/- 53.45, N = 3
  1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)

WebP2 Image Encode


WebP2 Image Encode 20220823 (MP/s, More Is Better)
  Encode Settings: Quality 95, Compression Effort 7   b: 0.45   a: 0.45   SE +/- 0.00, N = 3
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

Neural Magic DeepSparse


OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Streamba1.17852.3573.53554.7145.8925SE +/- 0.0015, N = 35.22965.2377

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Streamba4080120160200SE +/- 0.06, N = 3191.10190.80

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamba150300450600750SE +/- 4.21, N = 3717.59715.04

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamba306090120150SE +/- 0.66, N = 3132.27132.66

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Streamba816243240SE +/- 0.09, N = 336.9136.75

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Streamba6001200180024003000SE +/- 6.37, N = 32596.102608.01

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streamba160320480640800SE +/- 1.53, N = 3717.98719.28

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streamba306090120150SE +/- 0.03, N = 3132.42132.05

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamba48121620SE +/- 0.01, N = 317.3017.30

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamba12002400360048006000SE +/- 5.02, N = 35540.525540.63

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Streamba510152025SE +/- 0.02, N = 320.6820.62

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Streamba1122334455SE +/- 0.05, N = 348.3348.49

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Streamba48121620SE +/- 0.02, N = 314.6414.64

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Streamba1530456075SE +/- 0.11, N = 368.2668.27

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Streamba510152025SE +/- 0.01, N = 320.6920.63

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Streamba1122334455SE +/- 0.02, N = 348.3348.45

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Streamba80160240320400SE +/- 0.61, N = 3382.56383.20

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Streamba50100150200250SE +/- 0.41, N = 3249.50248.58

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streamba20406080100SE +/- 0.19, N = 384.2584.25

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streamba2004006008001000SE +/- 2.45, N = 31136.641136.71

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Streamba48121620SE +/- 0.01, N = 315.3715.32

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Streamba1530456075SE +/- 0.04, N = 365.0165.21

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Streamba1.00132.00263.00394.00525.0065SE +/- 0.0001, N = 34.43414.4503

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Streamba50100150200250SE +/- 0.01, N = 3225.40224.58

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamba306090120150SE +/- 0.25, N = 3121.61122.03

Neural Magic DeepSparse 1.6 (N = 3 runs per result; b listed first, then a):

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
    b: 786.89, a: 784.52, SE +/- 1.42
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
    b: 120.04, a: 120.21, SE +/- 0.24
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
    b: 797.41, a: 796.07, SE +/- 1.54
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
    b: 54.55, a: 54.51, SE +/- 0.06
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
    b: 1756.56, a: 1758.59, SE +/- 1.91
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
    b: 1.2404, a: 1.2413, SE +/- 0.0046
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better)
    b: 804.75, a: 804.18, SE +/- 3.00
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
    b: 5.6159, a: 5.5955, SE +/- 0.0055
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
    b: 17047.64, a: 17108.46, SE +/- 16.76
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
    b: 54.49, a: 54.41, SE +/- 0.07
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
    b: 1759.07, a: 1761.40, SE +/- 2.24
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
    b: 4.7180, a: 4.7188, SE +/- 0.0110
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better)
    b: 211.77, a: 211.74, SE +/- 0.50
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
    b: 4.7117, a: 4.7126, SE +/- 0.0079
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better)
    b: 212.13, a: 212.10, SE +/- 0.35
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
    b: 4.7775, a: 4.8022, SE +/- 0.0103
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better)
    b: 209.20, a: 208.12, SE +/- 0.44
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
    b: 4.8290, a: 4.7637, SE +/- 0.0118
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (items/sec, more is better)
    b: 206.97, a: 209.80, SE +/- 0.52
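Each result above is reported with a standard error over its N runs; the SE figure is the sample standard deviation of the runs divided by the square root of N. A minimal sketch of that calculation, using hypothetical per-run throughput samples (the actual per-run values behind these results are not published in this file):

```python
import statistics
from math import sqrt

# Hypothetical per-run throughput samples (items/sec); the real per-run
# values behind each "SE +/- x, N = 3" figure are not listed here.
runs = [784.1, 786.5, 789.1]

n = len(runs)
mean = statistics.mean(runs)
# Standard error of the mean: sample standard deviation / sqrt(N)
se = statistics.stdev(runs) / sqrt(n)
print(f"mean = {mean:.2f}, SE +/- {se:.2f}, N = {n}")
# prints: mean = 786.57, SE +/- 1.44, N = 3
```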

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT) effort; development has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, more is better)
    b: 184.35, a: 176.67, SE +/- 1.61, N = 15
    (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure Xmrig's CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21 - Variant: GhostRider - Hash Count: 1M (H/s, more is better)
    b: 31728.9, a: 31859.7, SE +/- 24.02, N = 3
    (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

WebP2 Image Encode

This is a test of Google's libwebp2 library using the WebP2 image encode utility with a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development, ultimately intended as the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 75, Compression Effort 7 (MP/s, more is better)
    b: 0.82, a: 0.83, SE +/- 0.00, N = 3
    (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
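WebP2's MP/s figure is megapixels of input encoded per second; since the sample input is a fixed 6000x4000 (24 MP) JPEG, the throughput converts directly into wall-clock time per image. A quick sketch of that conversion, using the b-run Quality 75 / Compression Effort 7 result:

```python
# Convert a WebP2 encode rate (megapixels/second) into seconds per image
# for the fixed 6000 x 4000 sample input used by this test profile.
width, height = 6000, 4000
megapixels = width * height / 1e6  # 24.0 MP

mps = 0.82  # b's "Quality 75, Compression Effort 7" result
seconds = megapixels / mps
print(f"{megapixels:.0f} MP at {mps} MP/s -> {seconds:.1f} s per image")
# prints: 24 MP at 0.82 MP/s -> 29.3 s per image
```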

Java SciMark

This test runs the Java version of SciMark 2, a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. The benchmark is made up of Fast Fourier Transform, Jacobi Successive Over-Relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization kernels. Learn more via the OpenBenchmarking.org test page.

Java SciMark 2.2 - Computational Test: Composite (Mflops, more is better)
    b: 3996.76, a: 3984.62, SE +/- 6.24, N = 3

SVT-AV1


SVT-AV1 1.8 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better)
    b: 8.208, a: 8.248, SE +/- 0.041, N = 3
SVT-AV1 1.8 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better)
    b: 21.31, a: 21.42, SE +/- 0.13, N = 3
SVT-AV1 1.8 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better)
    b: 86.84, a: 86.43, SE +/- 0.17, N = 3
    (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Xmrig


Xmrig 6.21 - Variant: CryptoNight-Heavy - Hash Count: 1M (H/s, more is better)
    b: 123777.7, a: 123041.6, SE +/- 33.09, N = 3
Xmrig 6.21 - Variant: CryptoNight-Femto UPX2 - Hash Count: 1M (H/s, more is better)
    b: 122070.3, a: 123199.0, SE +/- 220.87, N = 3
Xmrig 6.21 - Variant: Monero - Hash Count: 1M (H/s, more is better)
    b: 122971.0, a: 123352.8, SE +/- 404.54, N = 3
Xmrig 6.21 - Variant: KawPow - Hash Count: 1M (H/s, more is better)
    b: 123411.1, a: 123558.6, SE +/- 87.00, N = 3
Xmrig 6.21 - Variant: Wownero - Hash Count: 1M (H/s, more is better)
    b: 131613.6, a: 131141.9, SE +/- 621.69, N = 3
    (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

SVT-AV1


SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better)
    b: 186.61, a: 178.91, SE +/- 1.43, N = 3
SVT-AV1 1.8 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better)
    b: 162.56, a: 165.10, SE +/- 1.87, N = 3
    (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

WebP2 Image Encode


WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Compression Effort 5 (MP/s, more is better)
    b: 6.28, a: 6.51, SE +/- 0.04, N = 3
WebP2 Image Encode 20220823 - Encode Settings: Default (MP/s, more is better)
    b: 9.63, a: 9.48, SE +/- 0.08, N = 3
    (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

SVT-AV1


SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better)
    b: 569.96, a: 571.88, SE +/- 1.39, N = 3
SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, more is better)
    b: 639.09, a: 635.81, SE +/- 8.75, N = 3
    (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Java SciMark


Java SciMark 2.2 - Computational Test: Jacobi Successive Over-Relaxation (Mflops, more is better)
    b: 1703.25, a: 1703.42, SE +/- 0.16, N = 3
Java SciMark 2.2 - Computational Test: Dense LU Matrix Factorization (Mflops, more is better)
    b: 13434.09, a: 13358.53, SE +/- 31.70, N = 3
Java SciMark 2.2 - Computational Test: Sparse Matrix Multiply (Mflops, more is better)
    b: 2792.09, a: 2809.01, SE +/- 3.16, N = 3
Java SciMark 2.2 - Computational Test: Fast Fourier Transform (Mflops, more is better)
    b: 421.91, a: 420.74, SE +/- 0.36, N = 3
Java SciMark 2.2 - Computational Test: Monte Carlo (Mflops, more is better)
    b: 1632.45, a: 1631.42, SE +/- 0.75, N = 3
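The SciMark Composite score reported earlier is the arithmetic mean of the five kernel scores above; averaging the b-run kernel results reproduces the 3996.76 Mflops Composite figure (the a-run values likewise average to 3984.62):

```python
# Java SciMark 2.2: Composite = arithmetic mean of the five kernel scores.
# Values are the b-run Mflops results listed above.
kernels = {
    "Jacobi Successive Over-Relaxation": 1703.25,
    "Dense LU Matrix Factorization": 13434.09,
    "Sparse Matrix Multiply": 2792.09,
    "Fast Fourier Transform": 421.91,
    "Monte Carlo": 1632.45,
}
composite = sum(kernels.values()) / len(kernels)
print(f"Composite: {composite:.2f} Mflops")
# prints: Composite: 3996.76 Mflops
```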

160 Results Shown

LeelaChessZero:
  BLAS
  Eigen
PyTorch:
  CPU - 256 - Efficientnet_v2_l
  CPU - 16 - Efficientnet_v2_l
  CPU - 32 - Efficientnet_v2_l
Apache Spark TPC-H:
  50 - Q22
  50 - Q21
  50 - Q20
  50 - Q19
  50 - Q18
  50 - Q17
  50 - Q16
  50 - Q15
  50 - Q14
  50 - Q13
  50 - Q12
  50 - Q11
  50 - Q10
  50 - Q09
  50 - Q08
  50 - Q07
  50 - Q06
  50 - Q05
  50 - Q04
  50 - Q03
  50 - Q02
  50 - Q01
  50 - Geometric Mean Of All Queries
  10 - Q22
  10 - Q21
  10 - Q20
  10 - Q19
  10 - Q18
  10 - Q17
  10 - Q16
  10 - Q15
  10 - Q14
  10 - Q13
  10 - Q12
  10 - Q11
  10 - Q10
  10 - Q09
  10 - Q08
  10 - Q07
  10 - Q06
  10 - Q05
  10 - Q04
  10 - Q03
  10 - Q02
  10 - Q01
  10 - Geometric Mean Of All Queries
PyTorch:
  CPU - 32 - ResNet-152
  CPU - 16 - ResNet-152
  CPU - 256 - ResNet-152
WebP2 Image Encode
PyTorch
OpenSSL:
  SHA512
  SHA256
PyTorch:
  CPU - 1 - Efficientnet_v2_l
  CPU - 1 - ResNet-152
  CPU - 256 - ResNet-50
  CPU - 32 - ResNet-50
  CPU - 16 - ResNet-50
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering - Synchronous Single-Stream:
    ms/batch
    items/sec
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Apache Spark TPC-H:
  1 - Q22
  1 - Q21
  1 - Q20
  1 - Q19
  1 - Q18
  1 - Q17
  1 - Q16
  1 - Q15
  1 - Q14
  1 - Q13
  1 - Q12
  1 - Q11
  1 - Q10
  1 - Q09
  1 - Q08
  1 - Q07
  1 - Q06
  1 - Q05
  1 - Q04
  1 - Q03
  1 - Q02
  1 - Q01
  1 - Geometric Mean Of All Queries
OpenSSL:
  RSA4096:
    verify/s
    sign/s
WebP2 Image Encode
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Synchronous Single-Stream:
    ms/batch
    items/sec
SVT-AV1
Xmrig
WebP2 Image Encode
Java SciMark
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 4 - Bosphorus 1080p
  Preset 8 - Bosphorus 4K
Xmrig:
  CryptoNight-Heavy - 1M
  CryptoNight-Femto UPX2 - 1M
  Monero - 1M
  KawPow - 1M
  Wownero - 1M
SVT-AV1:
  Preset 12 - Bosphorus 4K
  Preset 8 - Bosphorus 1080p
WebP2 Image Encode:
  Quality 100, Compression Effort 5
  Default
SVT-AV1:
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
Java SciMark:
  Jacobi Successive Over-Relaxation
  Dense LU Matrix Factorization
  Sparse Matrix Multiply
  Fast Fourier Transform
  Monte Carlo