Gigabyte G242-P36 Ampere Altra Max Server

Benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2401176-NE-GIGABYTEG67
Jump To Table - Results

View

Do Not Show Noisy Results
Do Not Show Results With Incomplete Data
Do Not Show Results With Little Change/Spread
List Notable Results
Show Result Confidence Charts
Allow Limiting Results To Certain Suite(s)

Statistics

Show Overall Harmonic Mean(s)
Show Overall Geometric Mean
Show Wins / Losses Counts (Pie Chart)
Normalize Results
Remove Outliers Before Calculating Averages

Graph Settings

Force Line Graphs Where Applicable
Convert To Scalar Where Applicable
Prefer Vertical Bar Graphs

Multi-Way Comparison

Condense Multi-Option Tests Into Single Result Graphs

Table

Show Detailed System Result Table

Run Management

Highlight
Result
Toggle/Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
G242-P36
January 16
  19 Hours, 11 Minutes
gig
January 17
  2 Hours, 33 Minutes
dd
January 17
  2 Hours, 24 Minutes
Invert Behavior (Only Show Selected Data)
  8 Hours, 3 Minutes

Only show results where is faster than
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


Gigabyte G242-P36 Ampere Altra Max ServerOpenBenchmarking.orgPhoronix Test SuiteARMv8 Neoverse-N1 @ 3.00GHz (128 Cores)GIGABYTE G242-P36-00 MP32-AR2-00 v01000100 (F31k SCPAmpere Computing LLC Altra PCI Root Complex A16 x 32 GB DDR4-3200MT/s Samsung M393A4K40DB3-CWE800GB Micron_7450_MTFDKBA800TFSASPEEDVGA HDMI2 x Intel I350Ubuntu 23.106.5.0-13-generic (aarch64)GCC 13.2.0ext41920x1080ProcessorMotherboardChipsetMemoryDiskGraphicsMonitorNetworkOSKernelCompilerFile-SystemScreen ResolutionGigabyte G242-P36 Ampere Altra Max Server BenchmarksSystem Logs- Transparent Huge Pages: madvise- --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v - Scaling Governor: cppc_cpufreq performance (Boost: Disabled)- Python 3.11.6- gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected

G242-P36gigddResult OverviewPhoronix Test Suite100%107%114%121%StockfishLlama.cppLeelaChessZeroQuicksilverRocksDBTimed Linux Kernel CompilationStress-NGTimed LLVM CompilationSpeedbNeural Magic DeepSparse7-Zip CompressionOpenSSLCacheBench

Gigabyte G242-P36 Ampere Altra Max Serverpytorch: CPU - 1 - ResNet-50pytorch: CPU - 1 - ResNet-152pytorch: CPU - 1 - Efficientnet_v2_lpytorch: CPU - 16 - ResNet-50pytorch: CPU - 16 - ResNet-152stress-ng: CPU Stressstress-ng: Cryptostress-ng: Memory Copyingstress-ng: Glibc Qsort Data Sortingstress-ng: Glibc C String Functionsstress-ng: Vector Mathstress-ng: Matrix Mathstress-ng: Forkingstress-ng: System V Message Passingstress-ng: Semaphoresstress-ng: Socket Activitystress-ng: Context Switchingstress-ng: Atomicstress-ng: CPU Cachestress-ng: Mallocstress-ng: MEMFDstress-ng: MMAPstress-ng: NUMAstress-ng: SENDFILEstress-ng: IO_uringstress-ng: Futexstress-ng: Mutexstress-ng: Function Callstress-ng: Pollstress-ng: Hashstress-ng: Pthreadstress-ng: Zlibstress-ng: Floating Pointstress-ng: Fused Multiply-Addstress-ng: Pipestress-ng: Matrix 3D Mathstress-ng: AVL Treestress-ng: Vector Floating Pointstress-ng: Vector Shufflestress-ng: Wide Vector Mathstress-ng: Cloningstress-ng: AVX-512 VNNIstress-ng: Mixed Scheduleropenssl: SHA256openssl: SHA512openssl: AES-128-GCMopenssl: AES-256-GCMopenssl: ChaCha20openssl: ChaCha20-Poly1305minife: Smallquicksilver: CORAL2 P1quicksilver: CORAL2 P2quicksilver: CTS2amg: mt-dgemm: Sustained Floating-Point Ratexmrig: Monero - 1Mxmrig: Wownero - 1Mdeepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamdeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamdeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamdeepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamdeepsparse: ResNet-50, Baseline - Asynchronous Multi-Streamdeepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Streamcachebench: Readcachebench: Writecachebench: Read / Modify / Writecompress-7zip: Compression Ratingcompress-7zip: Decompression Ratinglczero: BLASlczero: Eigenstockfish: Total Timegromacs: MPI CPU - water_GMX50_barespeedb: Seq Fillspeedb: Rand Fillspeedb: Rand Fill Syncspeedb: Rand Readspeedb: Read While Writingspeedb: Read Rand Write Randspeedb: Update Randrocksdb: Rand Readrocksdb: Read While Writingrocksdb: Read Rand Write Randrocksdb: Update Randopenssl: RSA4096llama-cpp: llama-2-7b.Q4_0.ggufllama-cpp: llama-2-13b.Q4_0.ggufllama-cpp: llama-2-70b-chat.Q5_0.ggufopenssl: RSA4096deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamdeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamdeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamdeepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamdeepsparse: ResNet-50, Baseline - Asynchronous Multi-Streamdeepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Streambuild-linux-kernel: defconfigbuild-linux-kernel: allmodconfigbuild-llvm: Ninjabuild-llvm: Unix MakefilesG242-P36gigdd1.910.680.301.830.6733761.08252315.2627153.742020.1862783286.48398869.87681885.3052250.5321143237.72167637763.5928009.0720365273.287.29879814.35164364343.39574.851088.771419.061624492.92604943.76343012.7537172432.6672283.187330369.9615671801.48113551.875987.8822213.54151220570.5130330081.185099.81299.50102535.3586218.952346519.637795.964690386.6436794.331013229617533447876959038268820730030648784268016173222607011221344884023996.0252733332554333316203333105706433317.7849834201.71935.21137.781339.9765477.814133.6229200.0280202.227933.747345.6788476.37812677.070847.0250430.137511438.27651638239.97073045034.97615633331653764762481886531774.588295079284987207376409571625129050352419683272275434052355855884533203374314066342.821.5813.903.07517886.055.5703185.3571132.10321834.5799314.1452310.84031830.57601358.1773132.304723.50041320.1354146.746278.703308.297266.333411.52133765.26251986.1227162.142022.0162867317.16398993.46682490.7550130.9721054213.79167850957.6827959.8519654874.855.64882510.28164067515.18576.531104.191416.031624969.46612149.93323012.9637215286.0472298.237392099.8215654462.92112993.155993.7422219.8151387869.7629805509.125082.65299.1102553.1186375.792355564.947312.784691697.8536309.291000395937503445339903038285632826030654453487016179166304011225039640024150.7258100002552000016460000106013600018.272751141.451339.5239472.069933.5823198.9064202.633233.86946.7273477.09642624.771946.5531421.34511438.66616138251.59192445027.47270133305754120459471776539164.688290059278264204410418448304132553412518519264998450500912851606034490384279086345.621.914.023.13518115.955.4233185.8675133.44841832.1154315.8962310.64221830.71741334.5433132.322824.01251326.9581149.477480.078309.477267.86408.27133559.87251996.3627159.072020.362845443.53399042.09682554.3350686.5821119614.31166379337.6727536.7920708288.986.8882225.34164592319.96569.361092.251426.451624702.09583751.83318037.9337267646.9172290.817395099.6415654282.58113379.285985.6922220.7151037296.4630776841.735089.19299.99102604.7486257.772354926.976918.494692452.836361.29101321237450344487017003827930286802551000024460000164300001135.4365343.7639483.30833.2422198.3147201.755433.159246.4998477.68992684.834146.4194433.859311438.86384738252.6284445041.1548533315795415526048226859548285766285316207891420437471137855302473336264748404291813863656335373224438046345.326.6414.113.14518085.755.7222183.6437130.65751850.2264316.9118311.53251843.23961336.3924132.262723.42091327.9962145.283780.243310.137264.744407.19OpenBenchmarking.org

PyTorch

This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Currently this test profile is catered to CPU-based testing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 1 - Model: ResNet-50G242-P360.42980.85961.28941.71922.149SE +/- 0.00, N = 31.91MIN: 1.8 / MAX: 2.09

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 1 - Model: ResNet-152G242-P360.1530.3060.4590.6120.765SE +/- 0.00, N = 30.68MIN: 0.65 / MAX: 0.7

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_lG242-P360.06750.1350.20250.270.3375SE +/- 0.00, N = 30.30MIN: 0.27 / MAX: 0.4

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: ResNet-50G242-P360.41180.82361.23541.64722.059SE +/- 0.02, N = 51.83MIN: 1.7 / MAX: 2.02

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: ResNet-152G242-P360.15080.30160.45240.60320.754SE +/- 0.00, N = 20.67MIN: 0.65 / MAX: 0.7

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: CPU StressG242-P36gigdd7K14K21K28K35KSE +/- 1.60, N = 333761.0833765.2633559.871. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: CryptoG242-P36gigdd50K100K150K200K250KSE +/- 928.63, N = 3252315.26251986.12251996.361. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Memory CopyingG242-P36gigdd6K12K18K24K30KSE +/- 1.16, N = 327153.7427162.1427159.071. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Glibc Qsort Data SortingG242-P36gigdd400800120016002000SE +/- 0.78, N = 32020.182022.012020.301. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Glibc C String FunctionsG242-P36gigdd13M26M39M52M65MSE +/- 17918.08, N = 362783286.4862867317.1662845443.531. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Vector MathG242-P36gigdd90K180K270K360K450KSE +/- 4.53, N = 3398869.87398993.46399042.091. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Matrix MathG242-P36gigdd150K300K450K600K750KSE +/- 404.39, N = 3681885.30682490.75682554.331. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: ForkingG242-P36gigdd11K22K33K44K55KSE +/- 410.62, N = 352250.5350130.9750686.581. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: System V Message PassingG242-P36gigdd5M10M15M20M25MSE +/- 32907.24, N = 321143237.7221054213.7921119614.311. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: SemaphoresG242-P36gigdd40M80M120M160M200MSE +/- 217685.76, N = 3167637763.59167850957.68166379337.671. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Socket ActivityG242-P36gigdd6K12K18K24K30KSE +/- 159.43, N = 328009.0727959.8527536.791. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Context SwitchingG242-P36gigdd4M8M12M16M20MSE +/- 174052.70, N = 1520365273.2819654874.8520708288.981. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: AtomicG242-P36gigdd246810SE +/- 0.59, N = 157.295.646.801. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: CPU CacheG242-P36gigdd200K400K600K800K1000KSE +/- 1033.74, N = 3879814.35882510.28882225.341. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: MallocG242-P36gigdd40M80M120M160M200MSE +/- 296218.44, N = 3164364343.39164067515.18164592319.961. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: MEMFDG242-P36gigdd120240360480600SE +/- 4.82, N = 8574.85576.53569.361. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: MMAPG242-P36gigdd2004006008001000SE +/- 5.43, N = 31088.771104.191092.251. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: NUMAG242-P36gigdd30060090012001500SE +/- 2.47, N = 31419.061416.031426.451. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: SENDFILEG242-P36gigdd300K600K900K1200K1500KSE +/- 18.53, N = 31624492.921624969.461624702.091. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: IO_uringG242-P36gigdd130K260K390K520K650KSE +/- 5192.48, N = 3604943.76612149.93583751.831. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: FutexG242-P36gigdd70K140K210K280K350KSE +/- 7072.24, N = 15343012.75323012.96318037.931. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: MutexG242-P36gigdd8M16M24M32M40MSE +/- 9463.26, N = 337172432.6637215286.0437267646.911. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Function CallG242-P36gigdd15K30K45K60K75KSE +/- 1.53, N = 372283.1872298.2372290.811. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: PollG242-P36gigdd1.6M3.2M4.8M6.4M8MSE +/- 12697.25, N = 37330369.967392099.827395099.641. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: HashG242-P36gigdd3M6M9M12M15MSE +/- 9429.94, N = 315671801.4815654462.9215654282.581. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: PthreadG242-P36gigdd20K40K60K80K100KSE +/- 65.20, N = 3113551.87112993.15113379.281. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: ZlibG242-P36gigdd13002600390052006500SE +/- 0.87, N = 35987.885993.745985.691. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Floating PointG242-P36gigdd5K10K15K20K25KSE +/- 0.42, N = 322213.5422219.8022220.701. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Fused Multiply-AddG242-P36gigdd30M60M90M120M150MSE +/- 110268.18, N = 3151220570.51151387869.76151037296.461. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: PipeG242-P36gigdd7M14M21M28M35MSE +/- 95784.06, N = 330330081.1829805509.1230776841.731. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Matrix 3D MathG242-P36gigdd11002200330044005500SE +/- 3.74, N = 35099.815082.655089.191. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: AVL TreeG242-P36gigdd70140210280350SE +/- 0.16, N = 3299.50299.10299.991. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Vector Floating PointG242-P36gigdd20K40K60K80K100KSE +/- 25.89, N = 3102535.35102553.11102604.741. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Vector ShuffleG242-P36gigdd20K40K60K80K100KSE +/- 3.20, N = 386218.9586375.7986257.771. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Wide Vector MathG242-P36gigdd500K1000K1500K2000K2500KSE +/- 6960.54, N = 32346519.632355564.942354926.971. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: CloningG242-P36gigdd2K4K6K8K10KSE +/- 29.21, N = 37795.967312.786918.491. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: AVX-512 VNNIG242-P36gigdd1000K2000K3000K4000K5000KSE +/- 401.84, N = 34690386.644691697.854692452.801. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Mixed SchedulerG242-P36gigdd8K16K24K32K40KSE +/- 141.59, N = 336794.3336309.2936361.291. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: SHA256G242-P36gigdd20000M40000M60000M80000M100000MSE +/- 64411674.99, N = 31013229617531000395937501013212374501. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: SHA512G242-P36gigdd7000M14000M21000M28000M35000MSE +/- 8688088.34, N = 33447876959034453399030344487017001. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: AES-128-GCMG242-P36gigdd80000M160000M240000M320000M400000MSE +/- 3586455.40, N = 33826882073003828563282603827930286801. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: AES-256-GCMG242-P36gig70000M140000M210000M280000M350000MSE +/- 40660594.45, N = 33064878426803065445348701. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20G242-P36gig30000M60000M90000M120000M150000MSE +/- 10001054.79, N = 31617322260701617916630401. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20-Poly1305G242-P36gig20000M40000M60000M80000M100000MSE +/- 361309.16, N = 31122134488401122503964001. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

miniFE

MiniFE Finite Element is an application for unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgCG Mflops, More Is BetterminiFE 2.2Problem Size: SmallG242-P36gig5K10K15K20K25KSE +/- 14.30, N = 423996.024150.71. (CXX) g++ options: -O3 -fopenmp -lmpi_cxx -lmpi

Quicksilver

Quicksilver is a proxy application that represents some elements of the Mercury workload by solving a simplified dynamic Monte Carlo particle transport problem. Quicksilver is developed by Lawrence Livermore National Laboratory (LLNL) and this test profile currently makes use of the OpenMP CPU threaded code path. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFigure Of Merit, More Is BetterQuicksilver 20230818Input: CORAL2 P1G242-P36gigdd6M12M18M24M30MSE +/- 81103.50, N = 32527333325810000255100001. (CXX) g++ options: -fopenmp -O3 -march=native

OpenBenchmarking.orgFigure Of Merit, More Is BetterQuicksilver 20230818Input: CORAL2 P2G242-P36gigdd5M10M15M20M25MSE +/- 84129.53, N = 32554333325520000244600001. (CXX) g++ options: -fopenmp -O3 -march=native

OpenBenchmarking.orgFigure Of Merit, More Is BetterQuicksilver 20230818Input: CTS2G242-P36gigdd4M8M12M16M20MSE +/- 42557.15, N = 31620333316460000164300001. (CXX) g++ options: -fopenmp -O3 -march=native

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.2G242-P36gig200M400M600M800M1000MSE +/- 47484.50, N = 3105706433310601360001. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGFLOP/s, More Is BetterACES DGEMM 1.0Sustained Floating-Point RateG242-P36gig48121620SE +/- 0.09, N = 417.7818.271. (CC) gcc options: -O3 -march=native -fopenmp

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.21Variant: Monero - Hash Count: 1MG242-P369001800270036004500SE +/- 17.55, N = 34201.71. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.21Variant: Wownero - Hash Count: 1MG242-P36400800120016002000SE +/- 2.92, N = 31935.21. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-StreamG242-P36gigdd2004006008001000SE +/- 1.48, N = 31137.781141.451135.44

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamG242-P36gigdd70140210280350SE +/- 0.19, N = 3339.98339.52343.76

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamG242-P36gigdd100200300400500SE +/- 0.36, N = 3477.81472.07483.31

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamG242-P36gigdd816243240SE +/- 0.04, N = 333.6233.5833.24

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-StreamG242-P36gigdd4080120160200SE +/- 0.52, N = 3200.03198.91198.31

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-StreamG242-P36gigdd4080120160200SE +/- 0.53, N = 3202.23202.63201.76

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamG242-P36gigdd816243240SE +/- 0.08, N = 333.7533.8733.16

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-StreamG242-P36gigdd1122334455SE +/- 0.32, N = 345.6846.7346.50

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-StreamG242-P36gigdd100200300400500SE +/- 0.42, N = 3476.38477.10477.69

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-StreamG242-P36gigdd6001200180024003000SE +/- 25.24, N = 32677.072624.772684.83

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-StreamG242-P36gigdd1122334455SE +/- 0.03, N = 347.0346.5546.42

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-StreamG242-P36gigdd90180270360450SE +/- 4.70, N = 3430.14421.35433.86

CacheBench

This is a performance test of CacheBench, which is part of LLCbench. CacheBench is designed to test the memory and cache bandwidth performance Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterCacheBenchTest: ReadG242-P36gigdd2K4K6K8K10KSE +/- 0.01, N = 311438.2811438.6711438.86MIN: 11437.32 / MAX: 11438.59MIN: 11438.33 / MAX: 11438.85MIN: 11438.05 / MAX: 11439.051. (CC) gcc options: -O3 -lrt

OpenBenchmarking.orgMB/s, More Is BetterCacheBenchTest: WriteG242-P36gigdd8K16K24K32K40KSE +/- 1.22, N = 338239.9738251.5938252.63MIN: 35288.52 / MAX: 41382MIN: 35289.91 / MAX: 41383.99MIN: 35291.37 / MAX: 41384.31. (CC) gcc options: -O3 -lrt

OpenBenchmarking.orgMB/s, More Is BetterCacheBenchTest: Read / Modify / WriteG242-P36gigdd10K20K30K40K50KSE +/- 2.04, N = 345034.9845027.4745041.15MIN: 43692.22 / MAX: 45639.26MIN: 43694.36 / MAX: 45640.07MIN: 43693.38 / MAX: 45647.651. (CC) gcc options: -O3 -lrt

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression RatingG242-P36gigdd70K140K210K280K350KSE +/- 991.66, N = 33333163330573315791. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression RatingG242-P36gigdd120K240K360K480K600KSE +/- 396.38, N = 35376475412045415521. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated vian neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.30Backend: BLASG242-P36gigdd1428425670SE +/- 0.58, N = 36259601. (CXX) g++ options: -flto -pthread

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.30Backend: EigenG242-P36gigdd1122334455SE +/- 0.33, N = 34847481. (CXX) g++ options: -flto -pthread

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterStockfish 15Total TimeG242-P36gigdd50M100M150M200M250MSE +/- 6857171.33, N = 151886531771776539162268595481. (CXX) g++ options: -lgcov -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -flto -flto=jobserver

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2023Implementation: MPI CPU - Input: water_GMX50_bareG242-P36gig1.05482.10963.16444.21925.274SE +/- 0.002, N = 34.5884.6881. (CXX) g++ options: -O3

Speedb

Speedb is a next-generation key value storage engine that is RocksDB compatible and aiming for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterSpeedb 2.7Test: Sequential FillG242-P36gigdd60K120K180K240K300KSE +/- 3101.60, N = 52950792900592857661. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterSpeedb 2.7Test: Random FillG242-P36gigdd60K120K180K240K300KSE +/- 1985.22, N = 32849872782642853161. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterSpeedb 2.7Test: Random Fill SyncG242-P36gigdd40K80K120K160K200KSE +/- 1986.97, N = 32073762044102078911. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterSpeedb 2.7Test: Random ReadG242-P36gigdd90M180M270M360M450MSE +/- 2947408.87, N = 114095716254184483044204374711. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterSpeedb 2.7Test: Read While WritingG242-P36gigdd3M6M9M12M15MSE +/- 201662.23, N = 151290503513255341137855301. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterSpeedb 2.7Test: Read Random Write RandomG242-P36gigdd500K1000K1500K2000K2500KSE +/- 21596.32, N = 32419683251851924733361. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterSpeedb 2.7Test: Update RandomG242-P36gigdd60K120K180K240K300KSE +/- 1573.56, N = 32722752649982647481. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random ReadG242-P36gigdd100M200M300M400M500MSE +/- 4162622.50, N = 154340523554505009124042918131. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read While WritingG242-P36gigdd2M4M6M8M10MSE +/- 68677.29, N = 98558845851606086365631. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read Random Write RandomG242-P36gigdd800K1600K2400K3200K4000KSE +/- 30568.75, N = 73320337344903835373221. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Update RandomG242-P36gigdd100K200K300K400K500KSE +/- 4409.44, N = 34314064279084438041. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsign/s, More Is BetterOpenSSL 3.1Algorithm: RSA4096G242-P36gigdd14002800420056007000SE +/- 0.10, N = 36342.86345.66345.31. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

Llama.cpp

Llama.cpp is a port of Facebook's LLaMA model in C/C++ developed by Georgi Gerganov. Llama.cpp allows the inference of LLaMA and other supported models in C/C++. For CPU inference Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTokens Per Second, More Is BetterLlama.cpp b1808Model: llama-2-7b.Q4_0.ggufG242-P36gigdd612182430SE +/- 0.21, N = 621.5821.9026.641. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -mcpu=native -lopenblas

OpenBenchmarking.orgTokens Per Second, More Is BetterLlama.cpp b1808Model: llama-2-13b.Q4_0.ggufG242-P36gigdd48121620SE +/- 0.16, N = 1513.9014.0214.111. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -mcpu=native -lopenblas

OpenBenchmarking.orgTokens Per Second, More Is BetterLlama.cpp b1808Model: llama-2-70b-chat.Q5_0.ggufG242-P36gigdd0.70651.4132.11952.8263.5325SE +/- 0.03, N = 83.073.133.141. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -mcpu=native -lopenblas

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgverify/s, More Is BetterOpenSSL 3.1Algorithm: RSA4096G242-P36gigdd110K220K330K440K550KSE +/- 27.21, N = 3517886.0518115.9518085.71. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-StreamG242-P36gigdd1326395265SE +/- 0.09, N = 355.5755.4255.72

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamG242-P36gigdd4080120160200SE +/- 0.10, N = 3185.36185.87183.64

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamG242-P36gigdd306090120150SE +/- 0.15, N = 3132.10133.45130.66

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamG242-P36gigdd400800120016002000SE +/- 1.29, N = 31834.581832.121850.23

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-StreamG242-P36gigdd70140210280350SE +/- 0.64, N = 3314.15315.90316.91

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-StreamG242-P36gigdd70140210280350SE +/- 0.86, N = 3310.84310.64311.53

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamG242-P36gigdd400800120016002000SE +/- 0.45, N = 31830.581830.721843.24

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-StreamG242-P36gigdd30060090012001500SE +/- 8.37, N = 31358.181334.541336.39

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-StreamG242-P36gigdd306090120150SE +/- 0.18, N = 3132.30132.32132.26

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-StreamG242-P36gigdd612182430SE +/- 0.21, N = 323.5024.0123.42

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-StreamG242-P36gigdd30060090012001500SE +/- 0.85, N = 31320.141326.961328.00

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-StreamG242-P36gigdd306090120150SE +/- 1.58, N = 3146.75149.48145.28

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfigG242-P36gigdd20406080100SE +/- 0.82, N = 378.7080.0880.24

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: allmodconfigG242-P36gigdd70140210280350SE +/- 1.01, N = 3308.30309.48310.14

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: NinjaG242-P36gigdd60120180240300SE +/- 0.67, N = 3266.33267.86264.74

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: Unix MakefilesG242-P36gigdd90180270360450SE +/- 1.15, N = 3411.52408.27407.19

110 Results Shown

PyTorch:
  CPU - 1 - ResNet-50
  CPU - 1 - ResNet-152
  CPU - 1 - Efficientnet_v2_l
  CPU - 16 - ResNet-50
  CPU - 16 - ResNet-152
Stress-NG:
  CPU Stress
  Crypto
  Memory Copying
  Glibc Qsort Data Sorting
  Glibc C String Functions
  Vector Math
  Matrix Math
  Forking
  System V Message Passing
  Semaphores
  Socket Activity
  Context Switching
  Atomic
  CPU Cache
  Malloc
  MEMFD
  MMAP
  NUMA
  SENDFILE
  IO_uring
  Futex
  Mutex
  Function Call
  Poll
  Hash
  Pthread
  Zlib
  Floating Point
  Fused Multiply-Add
  Pipe
  Matrix 3D Math
  AVL Tree
  Vector Floating Point
  Vector Shuffle
  Wide Vector Math
  Cloning
  AVX-512 VNNI
  Mixed Scheduler
OpenSSL:
  SHA256
  SHA512
  AES-128-GCM
  AES-256-GCM
  ChaCha20
  ChaCha20-Poly1305
miniFE
Quicksilver:
  CORAL2 P1
  CORAL2 P2
  CTS2
Algebraic Multi-Grid Benchmark
ACES DGEMM
Xmrig:
  Monero - 1M
  Wownero - 1M
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  ResNet-50, Baseline - Asynchronous Multi-Stream
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream
CacheBench:
  Read
  Write
  Read / Modify / Write
7-Zip Compression:
  Compression Rating
  Decompression Rating
LeelaChessZero:
  BLAS
  Eigen
Stockfish
GROMACS
Speedb:
  Seq Fill
  Rand Fill
  Rand Fill Sync
  Rand Read
  Read While Writing
  Read Rand Write Rand
  Update Rand
RocksDB:
  Rand Read
  Read While Writing
  Read Rand Write Rand
  Update Rand
OpenSSL
Llama.cpp:
  llama-2-7b.Q4_0.gguf
  llama-2-13b.Q4_0.gguf
  llama-2-70b-chat.Q5_0.gguf
OpenSSL
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  ResNet-50, Baseline - Asynchronous Multi-Stream
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream
Timed Linux Kernel Compilation:
  defconfig
  allmodconfig
Timed LLVM Compilation:
  Ninja
  Unix Makefiles