Gigabyte G242-P36 Ampere Altra Max Server

Benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2401176-NE-GIGABYTEG67
Result identifiers:
  G242-P36 - run January 16 - test duration 19 Hours, 11 Minutes
  gig - run January 17 - test duration 2 Hours, 33 Minutes
  dd - run January 17 - test duration 2 Hours, 24 Minutes
  Average test duration: 8 Hours, 3 Minutes



Gigabyte G242-P36 Ampere Altra Max Server (OpenBenchmarking.org / Phoronix Test Suite)

Processor: ARMv8 Neoverse-N1 @ 3.00GHz (128 Cores)
Motherboard: GIGABYTE G242-P36-00 MP32-AR2-00 v01000100 (F31k SCP)
Chipset: Ampere Computing LLC Altra PCI Root Complex A
Memory: 16 x 32 GB DDR4-3200MT/s Samsung M393A4K40DB3-CWE
Disk: 800GB Micron_7450_MTFDKBA800TFS
Graphics: ASPEED
Monitor: VGA HDMI
Network: 2 x Intel I350
OS: Ubuntu 23.10
Kernel: 6.5.0-13-generic (aarch64)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
- Scaling Governor: cppc_cpufreq performance (Boost: Disabled)
- Python 3.11.6
- Security mitigations: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of __user pointer sanitization; spectre_v2: Mitigation of CSV2 BHB; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (G242-P36 vs. gig vs. dd; normalized geometric means per test suite, 100% to 121% scale): Stockfish, Llama.cpp, LeelaChessZero, Quicksilver, RocksDB, Timed Linux Kernel Compilation, Stress-NG, Timed LLVM Compilation, Speedb, Neural Magic DeepSparse, 7-Zip Compression, OpenSSL, CacheBench.
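The overview chart condenses many individual tests into one relative figure per system. Phoronix Test Suite-style overviews are typically produced by normalizing each test against a baseline and taking a geometric mean of the ratios; a minimal sketch of that aggregation (the ratio values below are hypothetical, not taken from this result file):

```python
from math import prod

def geometric_mean(values):
    """Geometric mean of positive per-test ratios."""
    return prod(values) ** (1.0 / len(values))

# Hypothetical per-test ratios of one system vs. the baseline
# (> 1.0 means faster on a higher-is-better metric).
ratios = [1.07, 1.14, 0.98, 1.21]
print(f"overall: {geometric_mean(ratios) * 100:.1f}%")
```

The geometric mean is used rather than the arithmetic mean so that a single test with a huge spread cannot dominate the overall figure.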

Detailed result table: per-test values for the G242-P36, gig, and dd runs covering Stress-NG, Quicksilver, CacheBench, Xmrig, LeelaChessZero, Neural Magic DeepSparse, PyTorch, Llama.cpp, GROMACS, ACES DGEMM, AMG, miniFE, Stockfish, 7-Zip Compression, Timed LLVM Compilation, Timed Linux Kernel Compilation, Speedb, OpenSSL, and RocksDB; the individual per-test results are presented below.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
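Each stressor reports a throughput in "bogo ops" per second: the number of iterations of an arbitrary work loop completed per second of wall-clock time. A toy sketch of that measurement style (not stress-ng's actual implementation, just an illustration of the metric):

```python
import time

def cpu_stress(seconds=0.5):
    """Run a busy arithmetic loop for a fixed wall-clock window and
    report completed iterations per second (a 'bogo ops' style rate).
    Purely illustrative; stress-ng's stressors are far more varied."""
    deadline = time.perf_counter() + seconds
    ops = 0
    x = 1.0001
    while time.perf_counter() < deadline:
        x = x * x % 10.0  # arbitrary arithmetic work
        ops += 1
    return ops / seconds

print(f"{cpu_stress(0.2):.0f} ops/s")
```

Because the work unit is arbitrary, bogo-ops rates are only comparable between systems running the same stress-ng version and stressor, as is the case in these results.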

Stress-NG 0.16.04, Test: CPU Stress (Bogo Ops/s, More Is Better)
  G242-P36: 33761.08 (SE +/- 1.60, N = 3; Min: 33758.11 / Max: 33763.61)
  dd: 33559.87
  gig: 33765.26
  Compiled with: (CXX) g++ options: -O2 -std=gnu99 -lc
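Each result above reports the average of N runs together with the standard error of the mean (SE). A sketch of how such an SE figure is computed (the three sample values are hypothetical run results, not this file's raw data):

```python
from math import sqrt
from statistics import stdev

def standard_error(samples):
    """Standard error of the mean: sample stddev / sqrt(N)."""
    return stdev(samples) / sqrt(len(samples))

# Hypothetical three-run sample in Bogo Ops/s.
runs = [33758.11, 33761.52, 33763.61]
print(f"SE +/- {standard_error(runs):.2f}, N = {len(runs)}")
```

A small SE relative to the mean (as in most entries here) indicates the run-to-run spread is negligible compared to the differences between configurations.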

Stress-NG 0.16.04, Test: Crypto (Bogo Ops/s, More Is Better)
  G242-P36: 252315.26 (SE +/- 928.63, N = 3; Min: 250934.81 / Max: 254081.52)
  dd: 251996.36
  gig: 251986.12

Stress-NG 0.16.04, Test: Memory Copying (Bogo Ops/s, More Is Better)
  G242-P36: 27153.74 (SE +/- 1.16, N = 3; Min: 27152.01 / Max: 27155.94)
  dd: 27159.07
  gig: 27162.14

Stress-NG 0.16.04, Test: Glibc Qsort Data Sorting (Bogo Ops/s, More Is Better)
  G242-P36: 2020.18 (SE +/- 0.78, N = 3; Min: 2019.3 / Max: 2021.73)
  dd: 2020.30
  gig: 2022.01

Stress-NG 0.16.04, Test: Glibc C String Functions (Bogo Ops/s, More Is Better)
  G242-P36: 62783286.48 (SE +/- 17918.08, N = 3; Min: 62762580.28 / Max: 62818969.66)
  dd: 62845443.53
  gig: 62867317.16

Stress-NG 0.16.04, Test: Vector Math (Bogo Ops/s, More Is Better)
  G242-P36: 398869.87 (SE +/- 4.53, N = 3; Min: 398860.81 / Max: 398874.54)
  dd: 399042.09
  gig: 398993.46

Stress-NG 0.16.04, Test: Matrix Math (Bogo Ops/s, More Is Better)
  G242-P36: 681885.30 (SE +/- 404.39, N = 3; Min: 681079.38 / Max: 682347.17)
  dd: 682554.33
  gig: 682490.75

Stress-NG 0.16.04, Test: Forking (Bogo Ops/s, More Is Better)
  G242-P36: 52250.53 (SE +/- 410.62, N = 3; Min: 51785.13 / Max: 53069.21)
  dd: 50686.58
  gig: 50130.97

Stress-NG 0.16.04, Test: System V Message Passing (Bogo Ops/s, More Is Better)
  G242-P36: 21143237.72 (SE +/- 32907.24, N = 3; Min: 21090290.69 / Max: 21203565.61)
  dd: 21119614.31
  gig: 21054213.79

Stress-NG 0.16.04, Test: Semaphores (Bogo Ops/s, More Is Better)
  G242-P36: 167637763.59 (SE +/- 217685.76, N = 3; Min: 167223838.82 / Max: 167961606.17)
  dd: 166379337.67
  gig: 167850957.68

Stress-NG 0.16.04, Test: Socket Activity (Bogo Ops/s, More Is Better)
  G242-P36: 28009.07 (SE +/- 159.43, N = 3; Min: 27773.84 / Max: 28313.1)
  dd: 27536.79
  gig: 27959.85

Stress-NG 0.16.04, Test: Context Switching (Bogo Ops/s, More Is Better)
  G242-P36: 20365273.28 (SE +/- 174052.70, N = 15; Min: 19577394.24 / Max: 21329006.49)
  dd: 20708288.98
  gig: 19654874.85

Stress-NG 0.16.04, Test: Atomic (Bogo Ops/s, More Is Better)
  G242-P36: 7.29 (SE +/- 0.59, N = 15; Min: 5.24 / Max: 13.78)
  dd: 6.80
  gig: 5.64

Stress-NG 0.16.04, Test: CPU Cache (Bogo Ops/s, More Is Better)
  G242-P36: 879814.35 (SE +/- 1033.74, N = 3; Min: 877751.75 / Max: 880968.6)
  dd: 882225.34
  gig: 882510.28

Stress-NG 0.16.04, Test: Malloc (Bogo Ops/s, More Is Better)
  G242-P36: 164364343.39 (SE +/- 296218.44, N = 3; Min: 163808930.62 / Max: 164820581.54)
  dd: 164592319.96
  gig: 164067515.18

Stress-NG 0.16.04, Test: MEMFD (Bogo Ops/s, More Is Better)
  G242-P36: 574.85 (SE +/- 4.82, N = 8; Min: 560.39 / Max: 599.01)
  dd: 569.36
  gig: 576.53

Stress-NG 0.16.04, Test: MMAP (Bogo Ops/s, More Is Better)
  G242-P36: 1088.77 (SE +/- 5.43, N = 3; Min: 1083.05 / Max: 1099.62)
  dd: 1092.25
  gig: 1104.19

Stress-NG 0.16.04, Test: NUMA (Bogo Ops/s, More Is Better)
  G242-P36: 1419.06 (SE +/- 2.47, N = 3; Min: 1414.83 / Max: 1423.39)
  dd: 1426.45
  gig: 1416.03

Stress-NG 0.16.04, Test: SENDFILE (Bogo Ops/s, More Is Better)
  G242-P36: 1624492.92 (SE +/- 18.53, N = 3; Min: 1624456.72 / Max: 1624517.88)
  dd: 1624702.09
  gig: 1624969.46

Stress-NG 0.16.04, Test: IO_uring (Bogo Ops/s, More Is Better)
  G242-P36: 604943.76 (SE +/- 5192.48, N = 3; Min: 594698.55 / Max: 611536.88)
  dd: 583751.83
  gig: 612149.93

Stress-NG 0.16.04, Test: Futex (Bogo Ops/s, More Is Better)
  G242-P36: 343012.75 (SE +/- 7072.24, N = 15; Min: 300684.89 / Max: 382750.87)
  dd: 318037.93
  gig: 323012.96

Stress-NG 0.16.04, Test: Mutex (Bogo Ops/s, More Is Better)
  G242-P36: 37172432.66 (SE +/- 9463.26, N = 3; Min: 37153666.97 / Max: 37183947.8)
  dd: 37267646.91
  gig: 37215286.04

Stress-NG 0.16.04, Test: Function Call (Bogo Ops/s, More Is Better)
  G242-P36: 72283.18 (SE +/- 1.53, N = 3; Min: 72280.25 / Max: 72285.41)
  dd: 72290.81
  gig: 72298.23

Stress-NG 0.16.04, Test: Poll (Bogo Ops/s, More Is Better)
  G242-P36: 7330369.96 (SE +/- 12697.25, N = 3; Min: 7306348.18 / Max: 7349513.57)
  dd: 7395099.64
  gig: 7392099.82

Stress-NG 0.16.04, Test: Hash (Bogo Ops/s, More Is Better)
  G242-P36: 15671801.48 (SE +/- 9429.94, N = 3; Min: 15653575.76 / Max: 15685114.18)
  dd: 15654282.58
  gig: 15654462.92

Stress-NG 0.16.04, Test: Pthread (Bogo Ops/s, More Is Better)
  G242-P36: 113551.87 (SE +/- 65.20, N = 3; Min: 113466.83 / Max: 113680.01)
  dd: 113379.28
  gig: 112993.15

Stress-NG 0.16.04, Test: Zlib (Bogo Ops/s, More Is Better)
  G242-P36: 5987.88 (SE +/- 0.87, N = 3; Min: 5986.5 / Max: 5989.48)
  dd: 5985.69
  gig: 5993.74

Stress-NG 0.16.04, Test: Floating Point (Bogo Ops/s, More Is Better)
  G242-P36: 22213.54 (SE +/- 0.42, N = 3; Min: 22213.02 / Max: 22214.37)
  dd: 22220.70
  gig: 22219.80

Stress-NG 0.16.04, Test: Fused Multiply-Add (Bogo Ops/s, More Is Better)
  G242-P36: 151220570.51 (SE +/- 110268.18, N = 3; Min: 151000628.28 / Max: 151344551.49)
  dd: 151037296.46
  gig: 151387869.76

Stress-NG 0.16.04, Test: Pipe (Bogo Ops/s, More Is Better)
  G242-P36: 30330081.18 (SE +/- 95784.06, N = 3; Min: 30175649.97 / Max: 30505465.05)
  dd: 30776841.73
  gig: 29805509.12

Stress-NG 0.16.04, Test: Matrix 3D Math (Bogo Ops/s, More Is Better)
  G242-P36: 5099.81 (SE +/- 3.74, N = 3; Min: 5092.71 / Max: 5105.4)
  dd: 5089.19
  gig: 5082.65

Stress-NG 0.16.04, Test: AVL Tree (Bogo Ops/s, More Is Better)
  G242-P36: 299.50 (SE +/- 0.16, N = 3; Min: 299.23 / Max: 299.77)
  dd: 299.99
  gig: 299.10

Stress-NG 0.16.04, Test: Vector Floating Point (Bogo Ops/s, More Is Better)
  G242-P36: 102535.35 (SE +/- 25.89, N = 3; Min: 102494.95 / Max: 102583.61)
  dd: 102604.74
  gig: 102553.11

Stress-NG 0.16.04, Test: Vector Shuffle (Bogo Ops/s, More Is Better)
  G242-P36: 86218.95 (SE +/- 3.20, N = 3; Min: 86212.6 / Max: 86222.84)
  dd: 86257.77
  gig: 86375.79

Quicksilver

Quicksilver is a proxy application that represents some elements of the Mercury workload by solving a simplified dynamic Monte Carlo particle transport problem. It is developed by Lawrence Livermore National Laboratory (LLNL), and this test profile currently makes use of the OpenMP CPU threaded code path. Learn more via the OpenBenchmarking.org test page.
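Quicksilver's "figure of merit" is a throughput-style number: particle-tracking events processed per unit of wall-clock time. A toy analogue of such a Monte Carlo transport kernel, assuming a simple 1-D slab attenuation problem (this is not Quicksilver's algorithm, only an illustration of the workload shape):

```python
import random
import time

def transport(num_particles, sigma_t=1.0, thickness=2.0, seed=1):
    """Toy Monte Carlo transport: particles stream through a 1-D slab of
    the given optical thickness, each sampling an exponential free path.
    Returns the transmitted fraction and a throughput figure
    (particles/sec), loosely analogous to a 'figure of merit'."""
    rng = random.Random(seed)
    start = time.perf_counter()
    transmitted = 0
    for _ in range(num_particles):
        # Distance to first collision, sampled from an exponential
        # distribution with total cross-section sigma_t.
        if rng.expovariate(sigma_t) > thickness:
            transmitted += 1
    elapsed = time.perf_counter() - start
    return transmitted / num_particles, num_particles / elapsed

frac, fom = transport(100_000)
print(f"transmitted fraction ~ {frac:.3f}, throughput {fom:.0f} particles/s")
```

For this slab the expected transmitted fraction is e^(-sigma_t * thickness) ~ 0.135; the throughput term is what scales with core count in the OpenMP code path.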

Quicksilver 20230818, Input: CORAL2 P2 (Figure Of Merit, More Is Better)
  G242-P36: 25543333 (SE +/- 84129.53, N = 3; Min: 25440000 / Max: 25710000)
  dd: 24460000
  gig: 25520000
  Compiled with: (CXX) g++ options: -fopenmp -O3 -march=native

Quicksilver 20230818, Input: CTS2 (Figure Of Merit, More Is Better)
  G242-P36: 16203333 (SE +/- 42557.15, N = 3; Min: 16120000 / Max: 16260000)
  dd: 16430000
  gig: 16460000

Quicksilver 20230818, Input: CORAL2 P1 (Figure Of Merit, More Is Better)
  G242-P36: 25273333 (SE +/- 81103.50, N = 3; Min: 25140000 / Max: 25420000)
  dd: 25510000
  gig: 25810000

Stress-NG


Stress-NG 0.16.04, Test: Wide Vector Math (Bogo Ops/s, More Is Better)
  G242-P36: 2346519.63 (SE +/- 6960.54, N = 3; Min: 2332639.78 / Max: 2354386.87)
  dd: 2354926.97
  gig: 2355564.94

Stress-NG 0.16.04, Test: Cloning (Bogo Ops/s, More Is Better)
  G242-P36: 7795.96 (SE +/- 29.21, N = 3; Min: 7754.53 / Max: 7852.35)
  dd: 6918.49
  gig: 7312.78

Stress-NG 0.16.04, Test: AVX-512 VNNI (Bogo Ops/s, More Is Better)
  G242-P36: 4690386.64 (SE +/- 401.84, N = 3; Min: 4689806.23 / Max: 4691158.27)
  dd: 4692452.80
  gig: 4691697.85

Stress-NG 0.16.04, Test: Mixed Scheduler (Bogo Ops/s, More Is Better)
  G242-P36: 36794.33 (SE +/- 141.59, N = 3; Min: 36630.18 / Max: 37076.23)
  dd: 36361.29
  gig: 36309.29

CacheBench

This is a performance test of CacheBench, which is part of LLCbench. CacheBench is designed to test the memory and cache bandwidth performance. Learn more via the OpenBenchmarking.org test page.
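The MB/s figures below come from timing repeated memory accesses over buffers of varying size. A very rough sketch of the timing arithmetic behind such a measurement (CacheBench itself is C code that sweeps working-set sizes to expose each cache level; this Python version is illustrative only and its absolute numbers are dominated by interpreter overhead):

```python
import array
import time

def read_bandwidth(size_mb=64):
    """Crude sequential-read bandwidth estimate: time a full pass over a
    large buffer of doubles and convert bytes touched to MB/s."""
    n = size_mb * 1024 * 1024 // 8  # number of 8-byte doubles
    buf = array.array('d', [1.0]) * n
    start = time.perf_counter()
    total = sum(buf)  # forces a read of every element
    elapsed = time.perf_counter() - start
    mb_read = (n * 8) / (1024 * 1024)
    return mb_read / elapsed, total

mbps, _ = read_bandwidth(8)
print(f"~{mbps:.0f} MB/s (illustrative only)")
```

A real cache benchmark repeats this for working sets from a few KB up to many MB, so the bandwidth curve steps down as the data spills out of L1, L2, and the system-level cache.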

CacheBench, Test: Read (MB/s, More Is Better)
  G242-P36: 11438.28 (SE +/- 0.01, N = 3; MIN: 11437.32 / MAX: 11438.59; run spread Min: 11438.26 / Max: 11438.29)
  dd: 11438.86 (MIN: 11438.05 / MAX: 11439.05)
  gig: 11438.67 (MIN: 11438.33 / MAX: 11438.85)
  Compiled with: (CC) gcc options: -O3 -lrt

CacheBench, Test: Write (MB/s, More Is Better)
  G242-P36: 38239.97 (SE +/- 1.22, N = 3; MIN: 35288.52 / MAX: 41382; run spread Min: 38238.37 / Max: 38242.38)
  dd: 38252.63 (MIN: 35291.37 / MAX: 41384.3)
  gig: 38251.59 (MIN: 35289.91 / MAX: 41383.99)

CacheBench, Test: Read / Modify / Write (MB/s, More Is Better)
  G242-P36: 45034.98 (SE +/- 2.04, N = 3; MIN: 43692.22 / MAX: 45639.26; run spread Min: 45031.08 / Max: 45037.99)
  dd: 45041.15 (MIN: 43693.38 / MAX: 45647.65)
  gig: 45027.47 (MIN: 43694.36 / MAX: 45640.07)

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21, Variant: Monero, Hash Count: 1M (H/s, More Is Better)
  G242-P36: 4201.7 (SE +/- 17.55, N = 3)
  Compiled with: (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig 6.21, Variant: Wownero, Hash Count: 1M (H/s, More Is Better)
  G242-P36: 1935.2 (SE +/- 2.92, N = 3)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.30, Backend: BLAS (Nodes Per Second, More Is Better)
  G242-P36: 62 (SE +/- 0.58, N = 3; Min: 61 / Max: 63)
  dd: 60
  gig: 59
  Compiled with: (CXX) g++ options: -flto -pthread

LeelaChessZero 0.30, Backend: Eigen (Nodes Per Second, More Is Better)
  G242-P36: 48 (SE +/- 0.33, N = 3; Min: 47 / Avg: 47.67 / Max: 48)
  dd: 48
  gig: 47

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
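DeepSparse reports each model both as throughput (items/sec, more is better) and latency (ms/batch, fewer is better). Under an assumed batch size and concurrent-stream count the two views are roughly reciprocal; a hedged sketch of that relationship (the batch size and stream count below are hypothetical illustration values, not taken from this run's configuration):

```python
def ms_per_batch(items_per_sec, batch_size, num_streams):
    """Approximate per-batch latency implied by aggregate throughput,
    assuming num_streams concurrent streams each submitting batches of
    batch_size items (both parameters hypothetical here)."""
    batches_per_sec = items_per_sec / batch_size
    return 1000.0 * num_streams / batches_per_sec

# Illustrative only: with these assumed parameters, 1137.78 items/sec
# would imply a latency in the mid-50s of milliseconds per batch,
# which is the right order of magnitude for the tables below.
print(round(ms_per_batch(1137.78, 1, 64), 2))
```

This is why, in the results below, each model's items/sec and ms/batch figures move in opposite directions between configurations.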

Neural Magic DeepSparse 1.6, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8; Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  G242-P36: 1137.78 (SE +/- 1.48, N = 3; Min: 1135.76 / Max: 1140.67)
  dd: 1135.44
  gig: 1141.45

Neural Magic DeepSparse 1.6, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8; Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  G242-P36: 55.57 (SE +/- 0.09, N = 3; Min: 55.46 / Max: 55.76)
  dd: 55.72
  gig: 55.42

Neural Magic DeepSparse 1.6, Model: NLP Text Classification, DistilBERT mnli; Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  G242-P36: 339.98 (SE +/- 0.19, N = 3; Min: 339.74 / Max: 340.36)
  dd: 343.76
  gig: 339.52

Neural Magic DeepSparse 1.6 - NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  G242-P36: 185.36 (SE +/- 0.10, N = 3; Min: 185.16 / Avg: 185.36 / Max: 185.47)
  dd: 183.64
  gig: 185.87

Neural Magic DeepSparse 1.6 - CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream (items/sec, More Is Better)
  G242-P36: 477.81 (SE +/- 0.36, N = 3; Min: 477.16 / Avg: 477.81 / Max: 478.4)
  dd: 483.31
  gig: 472.07

Neural Magic DeepSparse 1.6 - CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  G242-P36: 132.10 (SE +/- 0.15, N = 3; Min: 131.84 / Avg: 132.1 / Max: 132.36)
  dd: 130.66
  gig: 133.45

Neural Magic DeepSparse 1.6 - NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream (items/sec, More Is Better)
  G242-P36: 33.62 (SE +/- 0.04, N = 3; Min: 33.55 / Avg: 33.62 / Max: 33.7)
  dd: 33.24
  gig: 33.58

Neural Magic DeepSparse 1.6 - NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  G242-P36: 1834.58 (SE +/- 1.29, N = 3; Min: 1832 / Avg: 1834.58 / Max: 1836.06)
  dd: 1850.23
  gig: 1832.12

Neural Magic DeepSparse 1.6 - CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream (items/sec, More Is Better)
  G242-P36: 200.03 (SE +/- 0.52, N = 3; Min: 199.21 / Avg: 200.03 / Max: 200.99)
  dd: 198.31
  gig: 198.91

Neural Magic DeepSparse 1.6 - CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  G242-P36: 314.15 (SE +/- 0.64, N = 3; Min: 312.97 / Avg: 314.15 / Max: 315.16)
  dd: 316.91
  gig: 315.90

Neural Magic DeepSparse 1.6 - CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream (items/sec, More Is Better)
  G242-P36: 202.23 (SE +/- 0.53, N = 3; Min: 201.26 / Avg: 202.23 / Max: 203.08)
  dd: 201.76
  gig: 202.63

Neural Magic DeepSparse 1.6 - CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  G242-P36: 310.84 (SE +/- 0.86, N = 3; Min: 309.34 / Avg: 310.84 / Max: 312.33)
  dd: 311.53
  gig: 310.64

Neural Magic DeepSparse 1.6 - NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (items/sec, More Is Better)
  G242-P36: 33.75 (SE +/- 0.08, N = 3; Min: 33.64 / Avg: 33.75 / Max: 33.9)
  dd: 33.16
  gig: 33.87

Neural Magic DeepSparse 1.6 - NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  G242-P36: 1830.58 (SE +/- 0.45, N = 3; Min: 1829.9 / Avg: 1830.58 / Max: 1831.41)
  dd: 1843.24
  gig: 1830.72

Neural Magic DeepSparse 1.6 - CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream (items/sec, More Is Better)
  G242-P36: 45.68 (SE +/- 0.32, N = 3; Min: 45.21 / Avg: 45.68 / Max: 46.3)
  dd: 46.50
  gig: 46.73

Neural Magic DeepSparse 1.6 - CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  G242-P36: 1358.18 (SE +/- 8.37, N = 3; Min: 1342.89 / Avg: 1358.18 / Max: 1371.73)
  dd: 1336.39
  gig: 1334.54

Neural Magic DeepSparse 1.6 - ResNet-50, Baseline - Asynchronous Multi-Stream (items/sec, More Is Better)
  G242-P36: 476.38 (SE +/- 0.42, N = 3; Min: 475.88 / Avg: 476.38 / Max: 477.21)
  dd: 477.69
  gig: 477.10

Neural Magic DeepSparse 1.6 - ResNet-50, Baseline - Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  G242-P36: 132.30 (SE +/- 0.18, N = 3; Min: 131.96 / Avg: 132.3 / Max: 132.59)
  dd: 132.26
  gig: 132.32

Neural Magic DeepSparse 1.6 - ResNet-50, Sparse INT8 - Asynchronous Multi-Stream (items/sec, More Is Better)
  G242-P36: 2677.07 (SE +/- 25.24, N = 3; Min: 2627.26 / Avg: 2677.07 / Max: 2709.13)
  dd: 2684.83
  gig: 2624.77

Neural Magic DeepSparse 1.6 - ResNet-50, Sparse INT8 - Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  G242-P36: 23.50 (SE +/- 0.21, N = 3; Min: 23.24 / Avg: 23.5 / Max: 23.91)
  dd: 23.42
  gig: 24.01

Neural Magic DeepSparse 1.6 - BERT-Large, NLP Question Answering - Asynchronous Multi-Stream (items/sec, More Is Better)
  G242-P36: 47.03 (SE +/- 0.03, N = 3; Min: 46.97 / Avg: 47.03 / Max: 47.08)
  dd: 46.42
  gig: 46.55

Neural Magic DeepSparse 1.6 - BERT-Large, NLP Question Answering - Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  G242-P36: 1320.14 (SE +/- 0.85, N = 3; Min: 1318.5 / Avg: 1320.14 / Max: 1321.32)
  dd: 1328.00
  gig: 1326.96

Neural Magic DeepSparse 1.6 - BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream (items/sec, More Is Better)
  G242-P36: 430.14 (SE +/- 4.70, N = 3; Min: 424.47 / Avg: 430.14 / Max: 439.47)
  dd: 433.86
  gig: 421.35

Neural Magic DeepSparse 1.6 - BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  G242-P36: 146.75 (SE +/- 1.58, N = 3; Min: 143.62 / Avg: 146.75 / Max: 148.68)
  dd: 145.28
  gig: 149.48

PyTorch

This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. This test profile is currently geared toward CPU-based testing. Learn more via the OpenBenchmarking.org test page.
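Since the PyTorch results are reported in batches per second, effective images per second scales with the batch size. A small worked example using the G242-P36 ResNet-50 values from this file:

```python
# batches/sec results for ResNet-50 on G242-P36 (from this file)
bs1_batches_per_sec = 1.91   # batch size 1
bs16_batches_per_sec = 1.83  # batch size 16

# images/sec = batches/sec * batch size
bs1_images_per_sec = bs1_batches_per_sec * 1
bs16_images_per_sec = bs16_batches_per_sec * 16

# Batch 16 yields ~29.3 images/sec versus ~1.9 at batch 1.
assert round(bs16_images_per_sec, 2) == 29.28
assert bs16_images_per_sec > bs1_images_per_sec
```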

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, More Is Better)
  G242-P36: 1.91 (SE +/- 0.00, N = 3; Min: 1.8 / Max: 2.09)

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, More Is Better)
  G242-P36: 0.68 (SE +/- 0.00, N = 3; Min: 0.65 / Max: 0.7)

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
  G242-P36: 0.30 (SE +/- 0.00, N = 3; Min: 0.27 / Max: 0.4)

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec, More Is Better)
  G242-P36: 1.83 (SE +/- 0.02, N = 5; Min: 1.7 / Max: 2.02)

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec, More Is Better)
  G242-P36: 0.67 (SE +/- 0.00, N = 2; Min: 0.65 / Max: 0.7)

Llama.cpp

Llama.cpp is a C/C++ port of Facebook's LLaMA model developed by Georgi Gerganov that enables inference of LLaMA and other supported models. For CPU inference, Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs, along with features such as OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.
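The tokens-per-second figures below translate directly into wall-clock response time. A rough back-of-the-envelope sketch using the G242-P36 rates from this file (the 256-token completion length is an assumption, and prompt-processing time is ignored):

```python
# Text-generation rates on G242-P36 (tokens/sec, from this file)
rates = {
    "llama-2-7b.Q4_0": 21.58,
    "llama-2-13b.Q4_0": 13.90,
    "llama-2-70b-chat.Q5_0": 3.07,
}

completion_tokens = 256  # assumed response length

# seconds ~= tokens / (tokens per second)
seconds = {model: completion_tokens / rate for model, rate in rates.items()}

# Roughly 11.9 s for the 7B model versus ~83 s for the 70B chat model.
assert round(seconds["llama-2-7b.Q4_0"], 1) == 11.9
assert round(seconds["llama-2-70b-chat.Q5_0"]) == 83
```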

Llama.cpp b1808 - Model: llama-2-7b.Q4_0.gguf (Tokens Per Second, More Is Better)
  G242-P36: 21.58 (SE +/- 0.21, N = 6; Min: 21 / Avg: 21.58 / Max: 22.23)
  dd: 26.64
  gig: 21.90
  1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -mcpu=native -lopenblas

Llama.cpp b1808 - Model: llama-2-13b.Q4_0.gguf (Tokens Per Second, More Is Better)
  G242-P36: 13.90 (SE +/- 0.16, N = 15; Min: 13.45 / Avg: 13.9 / Max: 15.4)
  dd: 14.11
  gig: 14.02
  1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -mcpu=native -lopenblas

Llama.cpp b1808 - Model: llama-2-70b-chat.Q5_0.gguf (Tokens Per Second, More Is Better)
  G242-P36: 3.07 (SE +/- 0.03, N = 8; Min: 3 / Avg: 3.07 / Max: 3.17)
  dd: 3.14
  gig: 3.13
  1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -mcpu=native -lopenblas

GROMACS

This test profile benchmarks the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package with the water_GMX50 data and allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  G242-P36: 4.588 (SE +/- 0.002, N = 3; Min: 4.59 / Avg: 4.59 / Max: 4.59)
  gig: 4.688
  1. (CXX) g++ options: -O3

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.
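A DGEMM benchmark derives GFLOP/s from the 2·n³ floating-point operations of an n×n matrix multiply divided by wall time. A minimal NumPy sketch of that calculation (illustrative only — ACES DGEMM uses its own tuned OpenMP kernel):

```python
import time
import numpy as np

n = 512
a = np.random.rand(n, n)  # double precision by default
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b  # double-precision GEMM dispatched to BLAS
elapsed = time.perf_counter() - start

# Each of the n*n output elements needs n multiplies + n adds.
flops = 2 * n**3
gflops = flops / elapsed / 1e9
print(f"{gflops:.2f} GFLOP/s sustained")
assert c.shape == (n, n) and gflops > 0
```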

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, More Is Better)
  G242-P36: 17.78 (SE +/- 0.09, N = 4; Min: 17.56 / Avg: 17.78 / Max: 18)
  gig: 18.27
  1. (CC) gcc options: -O3 -march=native -fopenmp

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better)
  G242-P36: 1057064333 (SE +/- 47484.50, N = 3; Min: 1056970000 / Avg: 1057064333.33 / Max: 1057121000)
  gig: 1060136000
  1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

miniFE

MiniFE is a finite element mini-application that serves as a proxy for unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.

miniFE 2.2 - Problem Size: Small (CG Mflops, More Is Better)
  G242-P36: 23996.0 (SE +/- 14.30, N = 4; Min: 23973.6 / Avg: 23996.03 / Max: 24034.4)
  gig: 24150.7
  1. (CXX) g++ options: -O3 -fopenmp -lmpi_cxx -lmpi

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine whose built-in benchmark can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 15 - Total Time (Nodes Per Second, More Is Better)
  G242-P36: 188653177 (SE +/- 6857171.33, N = 15; Min: 162259049 / Avg: 188653177.27 / Max: 255248749)
  dd: 226859548
  gig: 177653916
  1. (CXX) g++ options: -lgcov -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -flto -flto=jobserver

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
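7-Zip's integrated benchmark derives a MIPS rating from LZMA compression and decompression passes. As a loose illustration of the same round-trip workload (not the MIPS metric itself), Python's standard-library lzma module can time a compress/decompress cycle:

```python
import lzma
import time

data = b"phoronix " * 200_000  # ~1.8 MB of highly compressible input

start = time.perf_counter()
compressed = lzma.compress(data)      # LZMA compression pass
restored = lzma.decompress(compressed)  # decompression pass
elapsed = time.perf_counter() - start

# The round trip must be lossless, and repetitive input shrinks a lot.
assert restored == data
assert len(compressed) < len(data)
print(f"{len(data) / elapsed / 1e6:.1f} MB/s round-trip")
```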

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better)
  G242-P36: 333316 (SE +/- 991.66, N = 3; Min: 331684 / Avg: 333316 / Max: 335108)
  dd: 331579
  gig: 333057
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
  G242-P36: 537647 (SE +/- 396.38, N = 3; Min: 536956 / Avg: 537647 / Max: 538329)
  dd: 541552
  gig: 541204
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 - Build System: Ninja (Seconds, Fewer Is Better)
  G242-P36: 266.33 (SE +/- 0.67, N = 3; Min: 265.31 / Avg: 266.33 / Max: 267.59)
  dd: 264.74
  gig: 267.86

Timed LLVM Compilation 16.0 - Build System: Unix Makefiles (Seconds, Fewer Is Better)
  G242-P36: 411.52 (SE +/- 1.15, N = 3; Min: 409.26 / Avg: 411.52 / Max: 413)
  dd: 407.19
  gig: 408.27

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build of all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, Fewer Is Better)
  G242-P36: 78.70 (SE +/- 0.82, N = 3; Min: 77.85 / Avg: 78.7 / Max: 80.35)
  dd: 80.24
  gig: 80.08

Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, Fewer Is Better)
  G242-P36: 308.30 (SE +/- 1.01, N = 3; Min: 307.09 / Avg: 308.3 / Max: 310.3)
  dd: 310.14
  gig: 309.48

Speedb

Speedb is a next-generation key-value storage engine that is RocksDB-compatible and aims for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.

Speedb 2.7 - Test: Sequential Fill (Op/s, More Is Better)
  G242-P36: 295079 (SE +/- 3101.60, N = 5; Min: 287334 / Avg: 295079 / Max: 305328)
  dd: 285766
  gig: 290059
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb 2.7 - Test: Random Fill (Op/s, More Is Better)
  G242-P36: 284987 (SE +/- 1985.22, N = 3; Min: 281023 / Avg: 284987.33 / Max: 287160)
  dd: 285316
  gig: 278264
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb 2.7 - Test: Random Fill Sync (Op/s, More Is Better)
  G242-P36: 207376 (SE +/- 1986.97, N = 3; Min: 204462 / Avg: 207376 / Max: 211173)
  dd: 207891
  gig: 204410
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb 2.7 - Test: Random Read (Op/s, More Is Better)
  G242-P36: 409571625 (SE +/- 2947408.87, N = 11; Min: 389566929 / Avg: 409571624.64 / Max: 419223778)
  dd: 420437471
  gig: 418448304
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb 2.7 - Test: Read While Writing (Op/s, More Is Better)
  G242-P36: 12905035 (SE +/- 201662.23, N = 15; Min: 12030199 / Avg: 12905034.6 / Max: 14359205)
  dd: 13785530
  gig: 13255341
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb 2.7 - Test: Read Random Write Random (Op/s, More Is Better)
  G242-P36: 2419683 (SE +/- 21596.32, N = 3; Min: 2379318 / Avg: 2419683.33 / Max: 2453177)
  dd: 2473336
  gig: 2518519
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb 2.7 - Test: Update Random (Op/s, More Is Better)
  G242-P36: 272275 (SE +/- 1573.56, N = 3; Min: 269748 / Avg: 272275 / Max: 275163)
  dd: 264748
  gig: 264998
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
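openssl speed reports how many bytes per second each algorithm can process. A rough standard-library analogue of the single-threaded SHA256 pass (illustrative; openssl speed sweeps multiple block sizes and uses its own timing loop):

```python
import hashlib
import time

block = b"\x00" * 65536  # one 64 KiB buffer, hashed repeatedly
iterations = 2000

start = time.perf_counter()
h = hashlib.sha256()
for _ in range(iterations):
    h.update(block)
digest = h.hexdigest()
elapsed = time.perf_counter() - start

throughput = len(block) * iterations / elapsed  # byte/s, as reported below
assert len(digest) == 64  # SHA256 digest is 32 bytes = 64 hex characters
print(f"{throughput / 1e6:.0f} MB/s SHA256 (single thread)")
```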

OpenSSL 3.1 - Algorithm: RSA4096 (sign/s, More Is Better)
  G242-P36: 6342.8 (SE +/- 0.10, N = 3; Min: 6342.6 / Avg: 6342.8 / Max: 6342.9)
  dd: 6345.3
  gig: 6345.6
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: RSA4096 (verify/s, More Is Better)
  G242-P36: 517886.0 (SE +/- 27.21, N = 3; Min: 517846.4 / Avg: 517885.97 / Max: 517938.1)
  dd: 518085.7
  gig: 518115.9
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: SHA256 (byte/s, More Is Better)
  G242-P36: 101322961753 (SE +/- 64411674.99, N = 3; Min: 101194398110 / Avg: 101322961753.33 / Max: 101394324100)
  dd: 101321237450
  gig: 100039593750
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: SHA512 (byte/s, More Is Better)
  G242-P36: 34478769590 (SE +/- 8688088.34, N = 3; Min: 34467879570 / Avg: 34478769590 / Max: 34495940820)
  dd: 34448701700
  gig: 34453399030
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: AES-128-GCM (byte/s, More Is Better)
  G242-P36: 382688207300 (SE +/- 3586455.40, N = 3; Min: 382682894060 / Avg: 382688207300 / Max: 382695037060)
  dd: 382793028680
  gig: 382856328260
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: AES-256-GCM (byte/s, More Is Better)
  G242-P36: 306487842680 (SE +/- 40660594.45, N = 3; Min: 306408867430 / Avg: 306487842680 / Max: 306544124180)
  gig: 306544534870
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: ChaCha20 (byte/s, More Is Better)
  G242-P36: 161732226070 (SE +/- 10001054.79, N = 3; Min: 161712333620 / Avg: 161732226070 / Max: 161743983680)
  gig: 161791663040
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenSSL 3.1 - Algorithm: ChaCha20-Poly1305 (byte/s, More Is Better)
  G242-P36: 112213448840 (SE +/- 361309.16, N = 3; Min: 112212763440 / Avg: 112213448840 / Max: 112213989790)
  gig: 112250396400
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
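The read-random-write-random test mixes point lookups with overwrites. A toy in-memory sketch of that access pattern (illustrative of the workload shape only — these results come from RocksDB's own benchmarking tool):

```python
import random

random.seed(42)
store = {i: i for i in range(10_000)}  # pre-filled key-value store

reads = writes = 0
for _ in range(50_000):
    key = random.randrange(10_000)
    if random.random() < 0.5:    # read path: point lookup
        _ = store[key]
        reads += 1
    else:                        # write path: overwrite the value
        store[key] = key + 1
        writes += 1

assert reads + writes == 50_000
assert len(store) == 10_000  # overwrites never add new keys
```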

RocksDB 8.0 - Test: Random Read (Op/s, More Is Better)
  G242-P36: 434052355 (SE +/- 4162622.50, N = 15; Min: 402509297 / Avg: 434052354.93 / Max: 449041475)
  dd: 404291813
  gig: 450500912
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB 8.0 - Test: Read While Writing (Op/s, More Is Better)
  G242-P36: 8558845 (SE +/- 68677.29, N = 9; Min: 8101186 / Avg: 8558845.11 / Max: 8868132)
  dd: 8636563
  gig: 8516060
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB 8.0 - Test: Read Random Write Random (Op/s, More Is Better)
  G242-P36: 3320337 (SE +/- 30568.75, N = 7; Min: 3183451 / Avg: 3320337.29 / Max: 3428272)
  dd: 3537322
  gig: 3449038
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB 8.0 - Test: Update Random (Op/s, More Is Better)
  G242-P36: 431406 (SE +/- 4409.44, N = 3; Min: 423836 / Avg: 431406 / Max: 439109)
  dd: 443804
  gig: 427908
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

110 Results Shown

Stress-NG:
  CPU Stress
  Crypto
  Memory Copying
  Glibc Qsort Data Sorting
  Glibc C String Functions
  Vector Math
  Matrix Math
  Forking
  System V Message Passing
  Semaphores
  Socket Activity
  Context Switching
  Atomic
  CPU Cache
  Malloc
  MEMFD
  MMAP
  NUMA
  SENDFILE
  IO_uring
  Futex
  Mutex
  Function Call
  Poll
  Hash
  Pthread
  Zlib
  Floating Point
  Fused Multiply-Add
  Pipe
  Matrix 3D Math
  AVL Tree
  Vector Floating Point
  Vector Shuffle
Quicksilver:
  CORAL2 P2
  CTS2
  CORAL2 P1
Stress-NG:
  Wide Vector Math
  Cloning
  AVX-512 VNNI
  Mixed Scheduler
CacheBench:
  Read
  Write
  Read / Modify / Write
Xmrig:
  Monero - 1M
  Wownero - 1M
LeelaChessZero:
  BLAS
  Eigen
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
PyTorch:
  CPU - 1 - ResNet-50
  CPU - 1 - ResNet-152
  CPU - 1 - Efficientnet_v2_l
  CPU - 16 - ResNet-50
  CPU - 16 - ResNet-152
Llama.cpp:
  llama-2-7b.Q4_0.gguf
  llama-2-13b.Q4_0.gguf
  llama-2-70b-chat.Q5_0.gguf
GROMACS
ACES DGEMM
Algebraic Multi-Grid Benchmark
miniFE
Stockfish
7-Zip Compression:
  Compression Rating
  Decompression Rating
Timed LLVM Compilation:
  Ninja
  Unix Makefiles
Timed Linux Kernel Compilation:
  defconfig
  allmodconfig
Speedb:
  Seq Fill
  Rand Fill
  Rand Fill Sync
  Rand Read
  Read While Writing
  Read Rand Write Rand
  Update Rand
OpenSSL:
  RSA4096:
    sign/s
    verify/s
  SHA256:
    byte/s
  SHA512:
    byte/s
  AES-128-GCM:
    byte/s
  AES-256-GCM:
    byte/s
  ChaCha20:
    byte/s
  ChaCha20-Poly1305:
    byte/s
RocksDB:
  Rand Read
  Read While Writing
  Read Rand Write Rand
  Update Rand