Gigabyte G242-P36 Ampere Altra Max Server

Benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2401176-NE-GIGABYTEG67
Test categories represented in this result file:

BLAS (Basic Linear Algebra Sub-Routine) Tests 2 Tests
Chess Test Suite 2 Tests
Timed Code Compilation 2 Tests
C/C++ Compiler Tests 5 Tests
CPU Massive 8 Tests
Cryptography 2 Tests
HPC - High Performance Computing 8 Tests
Common Kernel Benchmarks 3 Tests
Linear Algebra 2 Tests
Machine Learning 4 Tests
Molecular Dynamics 2 Tests
MPI Benchmarks 2 Tests
Multi-Core 6 Tests
NVIDIA GPU Compute 2 Tests
OpenMPI Tests 3 Tests
Programmer / Developer System Benchmarks 4 Tests
Python Tests 3 Tests
Scientific Computing 4 Tests
Server 3 Tests
Server CPU Tests 6 Tests

Run Management

Result Identifier   Date Run     Test Duration
G242-P36            January 16   19 Hours, 11 Minutes
gig                 January 17   2 Hours, 33 Minutes
dd                  January 17   2 Hours, 24 Minutes


Gigabyte G242-P36 Ampere Altra Max Server Benchmarks
OpenBenchmarking.org - Phoronix Test Suite

Processor: ARMv8 Neoverse-N1 @ 3.00GHz (128 Cores)
Motherboard: GIGABYTE G242-P36-00 MP32-AR2-00 v01000100 (F31k SCP)
Chipset: Ampere Computing LLC Altra PCI Root Complex A
Memory: 16 x 32 GB DDR4-3200MT/s Samsung M393A4K40DB3-CWE
Disk: 800GB Micron_7450_MTFDKBA800TFS
Graphics: ASPEED
Monitor: VGA HDMI
Network: 2 x Intel I350
OS: Ubuntu 23.10
Kernel: 6.5.0-13-generic (aarch64)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Transparent Huge Pages: madvise
- Compiler configure flags: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
- Scaling Governor: cppc_cpufreq performance (Boost: Disabled)
- Python 3.11.6
- Security: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (G242-P36 vs. gig vs. dd, normalized, 100%-121% range): Stockfish, Llama.cpp, LeelaChessZero, Quicksilver, RocksDB, Timed Linux Kernel Compilation, Stress-NG, Timed LLVM Compilation, Speedb, Neural Magic DeepSparse, 7-Zip Compression, OpenSSL, CacheBench
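The overview above shows each run normalized against a 100% baseline; OpenBenchmarking.org result viewers typically condense such normalized scores into an overall geometric mean. A minimal sketch of that aggregation (the ratios below are hypothetical, not taken from this file):

```python
from math import prod

def geometric_mean(xs):
    """Geometric mean: the n-th root of the product of n positive values."""
    assert xs and all(x > 0 for x in xs)
    return prod(xs) ** (1.0 / len(xs))

# Hypothetical per-test ratios of a run vs. the baseline (baseline = 1.0).
ratios = [1.07, 1.14, 0.98, 1.21]
print(f"{geometric_mean(ratios) * 100:.1f}%")  # prints "109.7%"
```

The geometric mean is preferred over the arithmetic mean here because it is scale-invariant: a test whose scores are 10x larger does not dominate the overall figure.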

[Side-by-side result table for G242-P36, gig, and dd across all tests; the individual results are broken out per test below.]

PyTorch

This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Currently this test profile is catered to CPU-based testing. Learn more via the OpenBenchmarking.org test page.

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec, more is better):
  G242-P36: 0.30 (SE +/- 0.00, N = 3; MIN: 0.27 / MAX: 0.4)

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec, more is better):
  G242-P36: 0.67 (SE +/- 0.00, N = 2; MIN: 0.65 / MAX: 0.7)

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec, more is better):
  G242-P36: 1.83 (SE +/- 0.02, N = 5; MIN: 1.7 / MAX: 2.02)

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, more is better):
  G242-P36: 0.68 (SE +/- 0.00, N = 3; MIN: 0.65 / MAX: 0.7)

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, more is better):
  G242-P36: 1.91 (SE +/- 0.00, N = 3; MIN: 1.8 / MAX: 2.09)
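Each result in this file reports an average alongside "SE +/-" (standard error of the mean) over N runs, plus observed Min/Max values. The standard error is the sample standard deviation divided by the square root of N; a small sketch with made-up samples (not data from this file):

```python
from math import sqrt
from statistics import mean, stdev

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return stdev(samples) / sqrt(len(samples))

# Hypothetical batches/sec samples from three runs of one test.
runs = [1.80, 1.91, 2.02]
print(f"Avg: {mean(runs):.2f}, SE +/- {standard_error(runs):.2f}, "
      f"Min: {min(runs)} / Max: {max(runs)}")
```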

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21 - Variant: Wownero - Hash Count: 1M (H/s, more is better):
  G242-P36: 1935.2 (SE +/- 2.92, N = 3)
  1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Speedb

Speedb is a next-generation key value storage engine that is RocksDB compatible and aiming for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.

Speedb 2.7 - Test: Sequential Fill (Op/s, more is better):
  dd: 285766
  gig: 290059
  G242-P36: 295079 (SE +/- 3101.60, N = 5; Min: 287334 / Avg: 295079 / Max: 305328)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21 - Variant: Monero - Hash Count: 1M (H/s, more is better):
  G242-P36: 4201.7 (SE +/- 17.55, N = 3)
  1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  dd: 23.42
  gig: 24.01
  G242-P36: 23.50 (SE +/- 0.21, N = 3; Min: 23.24 / Avg: 23.5 / Max: 23.91)

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  dd: 2684.83
  gig: 2624.77
  G242-P36: 2677.07 (SE +/- 25.24, N = 3; Min: 2627.26 / Avg: 2677.07 / Max: 2709.13)

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 - Build System: Unix Makefiles (Seconds, fewer is better):
  dd: 407.19
  gig: 408.27
  G242-P36: 411.52 (SE +/- 1.15, N = 3; Min: 409.26 / Avg: 411.52 / Max: 413)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.30 - Backend: BLAS (Nodes Per Second, more is better):
  dd: 60
  gig: 59
  G242-P36: 62 (SE +/- 0.58, N = 3; Min: 61 / Avg: 62 / Max: 63)
  1. (CXX) g++ options: -flto -pthread

LeelaChessZero 0.30 - Backend: Eigen (Nodes Per Second, more is better):
  dd: 48
  gig: 47
  G242-P36: 48 (SE +/- 0.33, N = 3; Min: 47 / Avg: 47.67 / Max: 48)
  1. (CXX) g++ options: -flto -pthread

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  dd: 1336.39
  gig: 1334.54
  G242-P36: 1358.18 (SE +/- 8.37, N = 3; Min: 1342.89 / Avg: 1358.18 / Max: 1371.73)

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  dd: 46.50
  gig: 46.73
  G242-P36: 45.68 (SE +/- 0.32, N = 3; Min: 45.21 / Avg: 45.68 / Max: 46.3)

Quicksilver

Quicksilver is a proxy application that represents some elements of the Mercury workload by solving a simplified dynamic Monte Carlo particle transport problem. Quicksilver is developed by Lawrence Livermore National Laboratory (LLNL) and this test profile currently makes use of the OpenMP CPU threaded code path. Learn more via the OpenBenchmarking.org test page.

Quicksilver 20230818 - Input: CTS2 (Figure Of Merit, more is better):
  dd: 16430000
  gig: 16460000
  G242-P36: 16203333 (SE +/- 42557.15, N = 3; Min: 16120000 / Avg: 16203333.33 / Max: 16260000)
  1. (CXX) g++ options: -fopenmp -O3 -march=native

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, fewer is better):
  dd: 310.14
  gig: 309.48
  G242-P36: 308.30 (SE +/- 1.01, N = 3; Min: 307.09 / Avg: 308.3 / Max: 310.3)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04 - Test: Atomic (Bogo Ops/s, more is better):
  dd: 6.80
  gig: 5.64
  G242-P36: 7.29 (SE +/- 0.59, N = 15; Min: 5.24 / Avg: 7.29 / Max: 13.78)
  1. (CXX) g++ options: -O2 -std=gnu99 -lc

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 - Build System: Ninja (Seconds, fewer is better):
  dd: 264.74
  gig: 267.86
  G242-P36: 266.33 (SE +/- 0.67, N = 3; Min: 265.31 / Avg: 266.33 / Max: 267.59)

Llama.cpp

Llama.cpp is a port of Facebook's LLaMA model in C/C++ developed by Georgi Gerganov. Llama.cpp allows the inference of LLaMA and other supported models in C/C++. For CPU inference Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.

Llama.cpp b1808 - Model: llama-2-70b-chat.Q5_0.gguf (Tokens Per Second, more is better):
  dd: 3.14
  gig: 3.13
  G242-P36: 3.07 (SE +/- 0.03, N = 8; Min: 3 / Avg: 3.07 / Max: 3.17)
  1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -mcpu=native -lopenblas

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 15 - Total Time (Nodes Per Second, more is better):
  dd: 226859548
  gig: 177653916
  G242-P36: 188653177 (SE +/- 6857171.33, N = 15; Min: 162259049 / Avg: 188653177.27 / Max: 255248749)
  1. (CXX) g++ options: -lgcov -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -flto -flto=jobserver

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.1 - Algorithm: ChaCha20-Poly1305 (byte/s, more is better):
  gig: 112250396400
  G242-P36: 112213448840 (SE +/- 361309.16, N = 3; Min: 112212763440 / Avg: 112213448840 / Max: 112213989790)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: ChaCha20 (byte/s, more is better):
  gig: 161791663040
  G242-P36: 161732226070 (SE +/- 10001054.79, N = 3; Min: 161712333620 / Avg: 161732226070 / Max: 161743983680)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: AES-256-GCM (byte/s, more is better):
  gig: 306544534870
  G242-P36: 306487842680 (SE +/- 40660594.45, N = 3; Min: 306408867430 / Avg: 306487842680 / Max: 306544124180)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl
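The OpenSSL throughputs above are raw byte/s figures, which are easier to read once scaled to GB/s; a trivial helper for that conversion (dividing by 1e9 assumes decimal gigabytes):

```python
def to_gb_per_s(bytes_per_s: float) -> float:
    """Convert a bytes/second throughput figure to decimal GB/s."""
    return bytes_per_s / 1e9

# The G242-P36 AES-256-GCM result from the chart above.
print(f"{to_gb_per_s(306487842680):.1f} GB/s")  # prints "306.5 GB/s"
```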

Speedb

Speedb is a next-generation key value storage engine that is RocksDB compatible and aiming for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.

Speedb 2.7 - Test: Read While Writing (Op/s, more is better):
  dd: 13785530
  gig: 13255341
  G242-P36: 12905035 (SE +/- 201662.23, N = 15; Min: 12030199 / Avg: 12905034.6 / Max: 14359205)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Random Read (Op/s, more is better):
  dd: 404291813
  gig: 450500912
  G242-P36: 434052355 (SE +/- 4162622.50, N = 15; Min: 402509297 / Avg: 434052354.93 / Max: 449041475)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Quicksilver

Quicksilver is a proxy application that represents some elements of the Mercury workload by solving a simplified dynamic Monte Carlo particle transport problem. Quicksilver is developed by Lawrence Livermore National Laboratory (LLNL) and this test profile currently makes use of the OpenMP CPU threaded code path. Learn more via the OpenBenchmarking.org test page.

Quicksilver 20230818 - Input: CORAL2 P2 (Figure Of Merit, more is better):
  dd: 24460000
  gig: 25520000
  G242-P36: 25543333 (SE +/- 84129.53, N = 3; Min: 25440000 / Avg: 25543333.33 / Max: 25710000)
  1. (CXX) g++ options: -fopenmp -O3 -march=native

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.1 - Algorithm: AES-128-GCM (byte/s, more is better):
  dd: 382793028680
  gig: 382856328260
  G242-P36: 382688207300 (SE +/- 3586455.40, N = 3; Min: 382682894060 / Avg: 382688207300 / Max: 382695037060)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: SHA256 (byte/s, more is better):
  dd: 101321237450
  gig: 100039593750
  G242-P36: 101322961753 (SE +/- 64411674.99, N = 3; Min: 101194398110 / Avg: 101322961753.33 / Max: 101394324100)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: SHA512 (byte/s, more is better):
  dd: 34448701700
  gig: 34453399030
  G242-P36: 34478769590 (SE +/- 8688088.34, N = 3; Min: 34467879570 / Avg: 34478769590 / Max: 34495940820)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

Speedb

Speedb is a next-generation key value storage engine that is RocksDB compatible and aiming for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.

Speedb 2.7 - Test: Random Read (Op/s, more is better):
  dd: 420437471
  gig: 418448304
  G242-P36: 409571625 (SE +/- 2947408.87, N = 11; Min: 389566929 / Avg: 409571624.64 / Max: 419223778)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Read While Writing (Op/s, more is better):
  dd: 8636563
  gig: 8516060
  G242-P36: 8558845 (SE +/- 68677.29, N = 9; Min: 8101186 / Avg: 8558845.11 / Max: 8868132)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Llama.cpp

Llama.cpp is a port of Facebook's LLaMA model in C/C++ developed by Georgi Gerganov. Llama.cpp allows the inference of LLaMA and other supported models in C/C++. For CPU inference Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.

Llama.cpp b1808 - Model: llama-2-13b.Q4_0.gguf (Tokens Per Second, more is better):
  dd: 14.11
  gig: 14.02
  G242-P36: 13.90 (SE +/- 0.16, N = 15; Min: 13.45 / Avg: 13.9 / Max: 15.4)
  1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -mcpu=native -lopenblas

CacheBench

This is a performance test of CacheBench, which is part of LLCbench. CacheBench is designed to test the memory and cache bandwidth performance. Learn more via the OpenBenchmarking.org test page.

CacheBench - Test: Read (MB/s, more is better):
  dd: 11438.86 (MIN: 11438.05 / MAX: 11439.05)
  gig: 11438.67 (MIN: 11438.33 / MAX: 11438.85)
  G242-P36: 11438.28 (SE +/- 0.01, N = 3; MIN: 11437.32 / MAX: 11438.59; Min: 11438.26 / Avg: 11438.28 / Max: 11438.29)
  1. (CC) gcc options: -O3 -lrt

CacheBench - Test: Read / Modify / Write (MB/s, more is better):
  dd: 45041.15 (MIN: 43693.38 / MAX: 45647.65)
  gig: 45027.47 (MIN: 43694.36 / MAX: 45640.07)
  G242-P36: 45034.98 (SE +/- 2.04, N = 3; MIN: 43692.22 / MAX: 45639.26; Min: 45031.08 / Avg: 45034.98 / Max: 45037.99)
  1. (CC) gcc options: -O3 -lrt

CacheBench - Test: Write (MB/s, more is better):
  dd: 38252.63 (MIN: 35291.37 / MAX: 41384.3)
  gig: 38251.59 (MIN: 35289.91 / MAX: 41383.99)
  G242-P36: 38239.97 (SE +/- 1.22, N = 3; MIN: 35288.52 / MAX: 41382; Min: 38238.37 / Avg: 38239.97 / Max: 38242.38)
  1. (CC) gcc options: -O3 -lrt

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Read Random Write Random (Op/s, more is better):
  dd: 3537322
  gig: 3449038
  G242-P36: 3320337 (SE +/- 30568.75, N = 7; Min: 3183451 / Avg: 3320337.29 / Max: 3428272)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04 - Test: Futex (Bogo Ops/s, more is better):
  dd: 318037.93
  gig: 323012.96
  G242-P36: 343012.75 (SE +/- 7072.24, N = 15; Min: 300684.89 / Avg: 343012.75 / Max: 382750.87)
  1. (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG 0.16.04 - Test: Context Switching (Bogo Ops/s, more is better):
  dd: 20708288.98
  gig: 19654874.85
  G242-P36: 20365273.28 (SE +/- 174052.70, N = 15; Min: 19577394.24 / Avg: 20365273.28 / Max: 21329006.49)
  1. (CXX) g++ options: -O2 -std=gnu99 -lc

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  dd: 1328.00
  gig: 1326.96
  G242-P36: 1320.14 (SE +/- 0.85, N = 3; Min: 1318.5 / Avg: 1320.14 / Max: 1321.32)

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  dd: 46.42
  gig: 46.55
  G242-P36: 47.03 (SE +/- 0.03, N = 3; Min: 46.97 / Avg: 47.03 / Max: 47.08)

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better)
  gig 1060136000 | G242-P36 1057064333 (SE +/- 47484.50, N = 3; Min 1056970000 / Avg 1057064333.33 / Max 1057121000)
  1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build that enables all possible kernel modules. Learn more via the OpenBenchmarking.org test page.
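The defconfig timing corresponds roughly to the following, assuming a kernel source tree is already unpacked in the current directory:

```shell
# Configure for the architecture's defaults, then time a full parallel
# build across all available cores (as this test profile does).
make defconfig
time make -j"$(nproc)"
```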

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, Fewer Is Better)
  dd 80.24 | gig 80.08 | G242-P36 78.70 (SE +/- 0.82, N = 3; Min 77.85 / Avg 78.7 / Max 80.35)

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
  ms/batch (Fewer Is Better): dd 145.28 | gig 149.48 | G242-P36 146.75 (SE +/- 1.58, N = 3; Min 143.62 / Avg 146.75 / Max 148.68)
  items/sec (More Is Better): dd 433.86 | gig 421.35 | G242-P36 430.14 (SE +/- 4.70, N = 3; Min 424.47 / Avg 430.14 / Max 439.47)

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
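A water_GMX50-style run boils down to a grompp + mdrun pair; a sketch assuming the water_GMX50_bare topology files from the GROMACS benchmark set are present (shown with the thread-MPI `gmx` binary — an MPI build would use `mpirun gmx_mpi mdrun` instead):

```shell
# Pre-process the water benchmark input into a portable run file,
# then run a short CPU-only simulation reporting ns/day.
gmx grompp -f pme.mdp -c conf.gro -p topol.top -o bench.tpr
gmx mdrun -s bench.tpr -nsteps 1000 -ntomp "$(nproc)"
```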

GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  gig 4.688 | G242-P36 4.588 (SE +/- 0.002, N = 3; Min 4.59 / Avg 4.59 / Max 4.59)
  1. (CXX) g++ options: -O3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04 - Test: MEMFD (Bogo Ops/s, More Is Better)
  dd 569.36 | gig 576.53 | G242-P36 574.85 (SE +/- 4.82, N = 8; Min 560.39 / Avg 574.85 / Max 599.01)
  1. (CXX) g++ options: -O2 -std=gnu99 -lc

Speedb

Speedb is a next-generation key value storage engine that is RocksDB compatible and aiming for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.
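Because Speedb is RocksDB-compatible, it ships the same db_bench tool; a hedged sketch of the fill workloads measured below (key counts and thread counts are illustrative, not necessarily the test profile's settings):

```shell
# Random fill, then random fill with a sync on every write.
./db_bench --benchmarks=fillrandom --num=1000000 --threads=64
./db_bench --benchmarks=fillsync   --num=1000000 --threads=64
```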

Speedb 2.7 - Op/s, More Is Better (dd | gig | G242-P36; G242-P36 SE and Min/Avg/Max in parentheses)
All results: 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Test: Random Fill - 285316 | 278264 | 284987 (SE +/- 1985.22, N = 3; Min 281023 / Avg 284987.33 / Max 287160)
Test: Random Fill Sync - 207891 | 204410 | 207376 (SE +/- 1986.97, N = 3; Min 204462 / Avg 207376 / Max 211173)
Test: Update Random - 264748 | 264998 | 272275 (SE +/- 1573.56, N = 3; Min 269748 / Avg 272275 / Max 275163)
Test: Read Random Write Random - 2473336 | 2518519 | 2419683 (SE +/- 21596.32, N = 3; Min 2379318 / Avg 2419683.33 / Max 2453177)

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
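The RocksDB figures map onto db_bench workload names; for example (parameters illustrative — the test profile's exact settings are not shown here):

```shell
# Mixed reader/writer workload, then random overwrites of existing keys.
./db_bench --benchmarks=readrandomwriterandom --num=1000000 --threads=64
./db_bench --benchmarks=updaterandom          --num=1000000 --threads=64
```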

RocksDB 8.0 - Test: Update Random (Op/s, More Is Better)
  dd 443804 | gig 427908 | G242-P36 431406 (SE +/- 4409.44, N = 3; Min 423836 / Avg 431406 / Max 439109)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
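The RSA4096 sign/verify rates come from the built-in speed benchmark; parallel throughput is obtained with the -multi flag:

```shell
# Single-process RSA 4096-bit sign/verify rates:
openssl speed rsa4096

# Scaled across all cores, as throughput-oriented results report:
openssl speed -multi "$(nproc)" rsa4096
```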

OpenSSL 3.1 - Algorithm: RSA4096
  verify/s (More Is Better): dd 518085.7 | gig 518115.9 | G242-P36 517886.0 (SE +/- 27.21, N = 3; Min 517846.4 / Avg 517885.97 / Max 517938.1)
  sign/s (More Is Better): dd 6345.3 | gig 6345.6 | G242-P36 6342.8 (SE +/- 0.10, N = 3; Min 6342.6 / Avg 6342.8 / Max 6342.9)
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Scenario: Asynchronous Multi-Stream (dd | gig | G242-P36; G242-P36 SE and Min/Avg/Max in parentheses)

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8
  ms/batch (Fewer Is Better): 55.72 | 55.42 | 55.57 (SE +/- 0.09, N = 3; Min 55.46 / Avg 55.57 / Max 55.76)
  items/sec (More Is Better): 1135.44 | 1141.45 | 1137.78 (SE +/- 1.48, N = 3; Min 1135.76 / Avg 1137.78 / Max 1140.67)

Model: NLP Document Classification, oBERT base uncased on IMDB
  ms/batch (Fewer Is Better): 1843.24 | 1830.72 | 1830.58 (SE +/- 0.45, N = 3; Min 1829.9 / Avg 1830.58 / Max 1831.41)
  items/sec (More Is Better): 33.16 | 33.87 | 33.75 (SE +/- 0.08, N = 3; Min 33.64 / Avg 33.75 / Max 33.9)

Model: NLP Token Classification, BERT base uncased conll2003
  ms/batch (Fewer Is Better): 1850.23 | 1832.12 | 1834.58 (SE +/- 1.29, N = 3; Min 1832 / Avg 1834.58 / Max 1836.06)
  items/sec (More Is Better): 33.24 | 33.58 | 33.62 (SE +/- 0.04, N = 3; Min 33.55 / Avg 33.62 / Max 33.7)

Model: CV Detection, YOLOv5s COCO, Sparse INT8
  ms/batch (Fewer Is Better): 311.53 | 310.64 | 310.84 (SE +/- 0.86, N = 3; Min 309.34 / Avg 310.84 / Max 312.33)
  items/sec (More Is Better): 201.76 | 202.63 | 202.23 (SE +/- 0.53, N = 3; Min 201.26 / Avg 202.23 / Max 203.08)

Quicksilver

Quicksilver is a proxy application that represents some elements of the Mercury workload by solving a simplified dynamic Monte Carlo particle transport problem. Quicksilver is developed by Lawrence Livermore National Laboratory (LLNL) and this test profile currently makes use of the OpenMP CPU threaded code path. Learn more via the OpenBenchmarking.org test page.
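Quicksilver is driven by an input deck; a sketch assuming a built qs binary and the CORAL2 P1 input file from the Quicksilver repository (the input file name and path are assumptions):

```shell
# OpenMP-threaded run of the CORAL2 P1 problem; the figure of merit
# is printed at the end of the run.
OMP_NUM_THREADS="$(nproc)" ./qs -i Coral2_P1.inp
```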

Quicksilver 20230818 - Input: CORAL2 P1 (Figure Of Merit, More Is Better)
  dd 25510000 | gig 25810000 | G242-P36 25273333 (SE +/- 81103.50, N = 3; Min 25140000 / Avg 25273333.33 / Max 25420000)
  1. (CXX) g++ options: -fopenmp -O3 -march=native

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6 - Scenario: Asynchronous Multi-Stream (dd | gig | G242-P36; G242-P36 SE and Min/Avg/Max in parentheses)

Model: CV Detection, YOLOv5s COCO
  ms/batch (Fewer Is Better): 316.91 | 315.90 | 314.15 (SE +/- 0.64, N = 3; Min 312.97 / Avg 314.15 / Max 315.16)
  items/sec (More Is Better): 198.31 | 198.91 | 200.03 (SE +/- 0.52, N = 3; Min 199.21 / Avg 200.03 / Max 200.99)

Model: NLP Text Classification, DistilBERT mnli
  ms/batch (Fewer Is Better): 183.64 | 185.87 | 185.36 (SE +/- 0.10, N = 3; Min 185.16 / Avg 185.36 / Max 185.47)
  items/sec (More Is Better): 343.76 | 339.52 | 339.98 (SE +/- 0.19, N = 3; Min 339.74 / Avg 339.98 / Max 340.36)

Model: ResNet-50, Baseline
  ms/batch (Fewer Is Better): 132.26 | 132.32 | 132.30 (SE +/- 0.18, N = 3; Min 131.96 / Avg 132.3 / Max 132.59)
  items/sec (More Is Better): 477.69 | 477.10 | 476.38 (SE +/- 0.42, N = 3; Min 475.88 / Avg 476.38 / Max 477.21)

Model: CV Classification, ResNet-50 ImageNet
  ms/batch (Fewer Is Better): 130.66 | 133.45 | 132.10 (SE +/- 0.15, N = 3; Min 131.84 / Avg 132.1 / Max 132.36)
  items/sec (More Is Better): 483.31 | 472.07 | 477.81 (SE +/- 0.36, N = 3; Min 477.16 / Avg 477.81 / Max 478.4)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
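The integrated benchmark behind these MIPS ratings is 7-Zip's b command:

```shell
# Run 7-Zip's built-in LZMA benchmark across all available threads;
# it reports separate compression and decompression MIPS ratings.
7z b -mmt="$(nproc)"
```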

7-Zip Compression 22.01 - MIPS, More Is Better (dd | gig | G242-P36; G242-P36 SE and Min/Avg/Max in parentheses)
All results: 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Test: Decompression Rating - 541552 | 541204 | 537647 (SE +/- 396.38, N = 3; Min 536956 / Avg 537647 / Max 538329)
Test: Compression Rating - 331579 | 333057 | 333316 (SE +/- 991.66, N = 3; Min 331684 / Avg 333316 / Max 335108)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04 - Bogo Ops/s, More Is Better (dd | gig | G242-P36; G242-P36 SE and Min/Avg/Max in parentheses)
All results: 1. (CXX) g++ options: -O2 -std=gnu99 -lc

Test: IO_uring - 583751.83 | 612149.93 | 604943.76 (SE +/- 5192.48, N = 3; Min 594698.55 / Avg 604943.76 / Max 611536.88)
Test: MMAP - 1092.25 | 1104.19 | 1088.77 (SE +/- 5.43, N = 3; Min 1083.05 / Avg 1088.77 / Max 1099.62)
Test: Cloning - 6918.49 | 7312.78 | 7795.96 (SE +/- 29.21, N = 3; Min 7754.53 / Avg 7795.96 / Max 7852.35)
Test: Malloc - 164592319.96 | 164067515.18 | 164364343.39 (SE +/- 296218.44, N = 3; Min 163808930.62 / Avg 164364343.39 / Max 164820581.54)
Test: CPU Cache - 882225.34 | 882510.28 | 879814.35 (SE +/- 1033.74, N = 3; Min 877751.75 / Avg 879814.35 / Max 880968.6)
Test: Pthread - 113379.28 | 112993.15 | 113551.87 (SE +/- 65.20, N = 3; Min 113466.83 / Avg 113551.87 / Max 113680.01)
Test: Zlib - 5985.69 | 5993.74 | 5987.88 (SE +/- 0.87, N = 3; Min 5986.5 / Avg 5987.88 / Max 5989.48)
Test: Vector Shuffle - 86257.77 | 86375.79 | 86218.95 (SE +/- 3.20, N = 3; Min 86212.6 / Avg 86218.95 / Max 86222.84)
Test: Vector Math - 399042.09 | 398993.46 | 398869.87 (SE +/- 4.53, N = 3; Min 398860.81 / Avg 398869.87 / Max 398874.54)
Test: Wide Vector Math - 2354926.97 | 2355564.94 | 2346519.63 (SE +/- 6960.54, N = 3; Min 2332639.78 / Avg 2346519.63 / Max 2354386.87)
Test: Matrix Math - 682554.33 | 682490.75 | 681885.30 (SE +/- 404.39, N = 3; Min 681079.38 / Avg 681885.3 / Max 682347.17)
Test: Function Call - 72290.81 | 72298.23 | 72283.18 (SE +/- 1.53, N = 3; Min 72280.25 / Avg 72283.18 / Max 72285.41)
Test: Matrix 3D Math - 5089.19 | 5082.65 | 5099.81 (SE +/- 3.74, N = 3; Min 5092.71 / Avg 5099.81 / Max 5105.4)
Test: CPU Stress - 33559.87 | 33765.26 | 33761.08 (SE +/- 1.60, N = 3; Min 33758.11 / Avg 33761.08 / Max 33763.61)
Test: AVL Tree - 299.99 | 299.10 | 299.50 (SE +/- 0.16, N = 3; Min 299.23 / Avg 299.5 / Max 299.77)
Test: Crypto - 251996.36 | 251986.12 | 252315.26 (SE +/- 928.63, N = 3; Min 250934.81 / Avg 252315.26 / Max 254081.52)
Test: Fused Multiply-Add - 151037296.46 | 151387869.76 | 151220570.51 (SE +/- 110268.18, N = 3; Min 151000628.28 / Avg 151220570.51 / Max 151344551.49)
Test: Hash - 15654282.58 | 15654462.92 | 15671801.48 (SE +/- 9429.94, N = 3; Min 15653575.76 / Avg 15671801.48 / Max 15685114.18)
Test: SENDFILE - 1624702.09 | 1624969.46 | 1624492.92 (SE +/- 18.53, N = 3; Min 1624456.72 / Avg 1624492.92 / Max 1624517.88)
Test: AVX-512 VNNI - 4692452.80 | 4691697.85 | 4690386.64 (SE +/- 401.84, N = 3; Min 4689806.23 / Avg 4690386.64 / Max 4691158.27)
Test: Glibc Qsort Data Sorting - 2020.30 | 2022.01 | 2020.18 (SE +/- 0.78, N = 3; Min 2019.3 / Avg 2020.18 / Max 2021.73)
Test: Vector Floating Point - 102604.74 | 102553.11 | 102535.35 (SE +/- 25.89, N = 3; Min 102494.95 / Avg 102535.35 / Max 102583.61)
Test: Floating Point - 22220.70 | 22219.80 | 22213.54 (SE +/- 0.42, N = 3; Min 22213.02 / Avg 22213.54 / Max 22214.37)
Test: Poll - 7395099.64 | 7392099.82 | 7330369.96 (SE +/- 12697.25, N = 3; Min 7306348.18 / Avg 7330369.96 / Max 7349513.57)
Test: Glibc C String Functions - 62845443.53 | 62867317.16 | 62783286.48 (SE +/- 17918.08, N = 3; Min 62762580.28 / Avg 62783286.48 / Max 62818969.66)
Test: System V Message Passing - 21119614.31 | 21054213.79 | 21143237.72 (SE +/- 32907.24, N = 3; Min 21090290.69 / Avg 21143237.72 / Max 21203565.61)
Test: Forking - 50686.58 | 50130.97 | 52250.53 (SE +/- 410.62, N = 3; Min 51785.13 / Avg 52250.53 / Max 53069.21)
Test: Memory Copying - 27159.07 | 27162.14 | 27153.74 (SE +/- 1.16, N = 3; Min 27152.01 / Avg 27153.74 / Max 27155.94)
Test: Semaphores - 166379337.67 | 167850957.68 | 167637763.59 (SE +/- 217685.76, N = 3; Min 167223838.82 / Avg 167637763.59 / Max 167961606.17)
Test: Mutex - 37267646.91 | 37215286.04 | 37172432.66 (SE +/- 9463.26, N = 3; Min 37153666.97 / Avg 37172432.66 / Max 37183947.8)
Test: Mixed Scheduler - 36361.29 | 36309.29 | 36794.33 (SE +/- 141.59, N = 3; Min 36630.18 / Avg 36794.33 / Max 37076.23)
Test: NUMA - 1426.45 | 1416.03 | 1419.06 (SE +/- 2.47, N = 3; Min 1414.83 / Avg 1419.06 / Max 1423.39)
Test: Pipe - 30776841.73 | 29805509.12 | 30330081.18 (SE +/- 95784.06, N = 3; Min 30175649.97 / Avg 30330081.18 / Max 30505465.05)
Test: Socket Activity - 27536.79 | 27959.85 | 28009.07 (SE +/- 159.43, N = 3; Min 27773.84 / Avg 28009.07 / Max 28313.1)

Llama.cpp

Llama.cpp is a C/C++ port of Facebook's LLaMA model developed by Georgi Gerganov, allowing inference of LLaMA and other supported models. For CPU inference Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.
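A llama.cpp text-generation benchmark of this era is a single CLI call; a sketch assuming the b1808 build's main binary and a local copy of the quantized model (prompt and token count are illustrative):

```shell
# Generate 128 tokens from a short prompt; per-phase timings including
# tokens/second are printed at the end of the run.
./main -m llama-2-7b.Q4_0.gguf -p "Hello" -n 128 -t "$(nproc)"
```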

Llama.cpp b1808 - Model: llama-2-7b.Q4_0.gguf (Tokens Per Second, More Is Better)
  dd 26.64 | gig 21.90 | G242-P36 21.58 (SE +/- 0.21, N = 6; Min 21 / Avg 21.58 / Max 22.23)
  1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -mcpu=native -lopenblas

miniFE

MiniFE is a finite element mini-application serving as a proxy for unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.

miniFE 2.2 - Problem Size: Small (CG Mflops, More Is Better)
  gig 24150.7 | G242-P36 23996.0 (SE +/- 14.30, N = 4; Min 23973.6 / Avg 23996.03 / Max 24034.4)
  1. (CXX) g++ options: -O3 -fopenmp -lmpi_cxx -lmpi

ACES DGEMM

This is a multi-threaded DGEMM (double-precision general matrix multiply) benchmark. Learn more via the OpenBenchmarking.org test page.
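The GFLOP/s figure comes from counting 2·n³ floating-point operations (one multiply plus one add per inner step) for an n×n matrix multiply and dividing by elapsed time. A tiny pure-Python illustration of that accounting (orders of magnitude slower than the optimized multi-threaded kernel ACES DGEMM actually times):

```python
import time

# Illustrates how a DGEMM benchmark arrives at its GFLOP/s number:
# an n x n x n matrix multiply performs 2*n^3 floating-point ops.
n = 64
A = [[(i + j) % 7 * 0.5 for j in range(n)] for i in range(n)]
B = [[(i * j) % 5 * 0.25 for j in range(n)] for i in range(n)]

start = time.perf_counter()
C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]
elapsed = time.perf_counter() - start

flops = 2.0 * n ** 3          # one multiply + one add per inner step
gflops = flops / elapsed / 1e9
print(f"{n}x{n} matmul: {elapsed * 1e3:.1f} ms, {gflops:.4f} GFLOP/s")
```

An optimized DGEMM reaching the ~18 GFLOP/s shown below does the same arithmetic but with blocked, vectorized, multi-threaded kernels.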

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, more is better)
  gig: 18.27 | G242-P36: 17.78 (SE +/- 0.09, N = 4)
  G242-P36: Min: 17.56 / Avg: 17.78 / Max: 18
  1. (CC) gcc options: -O3 -march=native -fopenmp

110 Results Shown

PyTorch:
  CPU - 1 - Efficientnet_v2_l
  CPU - 16 - ResNet-152
  CPU - 16 - ResNet-50
  CPU - 1 - ResNet-152
  CPU - 1 - ResNet-50
Xmrig
Speedb
Xmrig
Neural Magic DeepSparse:
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Timed LLVM Compilation
LeelaChessZero:
  BLAS
  Eigen
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Quicksilver
Timed Linux Kernel Compilation
Stress-NG
Timed LLVM Compilation
Llama.cpp
Stockfish
OpenSSL:
  ChaCha20-Poly1305
  ChaCha20
  AES-256-GCM
Speedb
RocksDB
Quicksilver
OpenSSL:
  AES-128-GCM
  SHA256
  SHA512
Speedb
RocksDB
Llama.cpp
CacheBench:
  Read
  Read / Modify / Write
  Write
RocksDB
Stress-NG:
  Futex
  Context Switching
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Algebraic Multi-Grid Benchmark
Timed Linux Kernel Compilation
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
GROMACS
Stress-NG
Speedb:
  Rand Fill
  Rand Fill Sync
  Update Rand
  Read Rand Write Rand
RocksDB
OpenSSL:
  RSA4096:
    verify/s
    sign/s
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Quicksilver
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
7-Zip Compression:
  Decompression Rating
  Compression Rating
Stress-NG:
  IO_uring
  MMAP
  Cloning
  Malloc
  CPU Cache
  Pthread
  Zlib
  Vector Shuffle
  Vector Math
  Wide Vector Math
  Matrix Math
  Function Call
  Matrix 3D Math
  CPU Stress
  AVL Tree
  Crypto
  Fused Multiply-Add
  Hash
  SENDFILE
  AVX-512 VNNI
  Glibc Qsort Data Sorting
  Vector Floating Point
  Floating Point
  Poll
  Glibc C String Functions
  System V Message Passing
  Forking
  Memory Copying
  Semaphores
  Mutex
  Mixed Scheduler
  NUMA
  Pipe
  Socket Activity
Llama.cpp
miniFE
ACES DGEMM