Tau T2A 16 vCPUs

Amazon testing on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2208198-NE-2208123NE86
Test categories covered by this comparison:

C++ Boost Tests: 2 tests
Timed Code Compilation: 3 tests
C/C++ Compiler Tests: 9 tests
CPU Massive: 13 tests
Creator Workloads: 3 tests
Cryptography: 2 tests
Database Test Suite: 5 tests
Encoding: 2 tests
Fortran Tests: 3 tests
HPC - High Performance Computing: 11 tests
Java: 2 tests
Common Kernel Benchmarks: 4 tests
Machine Learning: 2 tests
Molecular Dynamics: 3 tests
MPI Benchmarks: 6 tests
Multi-Core: 15 tests
OpenMPI Tests: 8 tests
Programmer / Developer System Benchmarks: 3 tests
Python Tests: 4 tests
Scientific Computing: 4 tests
Server: 7 tests
Server CPU Tests: 8 tests
Single-Threaded: 2 tests
Video Encoding: 2 tests

Run details:
  Tau T2A: 32 vCPUs: tested August 11 2022; test duration 1 Day, 11 Hours, 24 Minutes
  m6g.8xlarge: tested August 18 2022; test duration 1 Day, 4 Hours, 29 Minutes


Tau T2A: 32 vCPUs
  Processor: ARMv8 Neoverse-N1 (32 Cores)
  Motherboard: KVM Google Compute Engine
  Memory: 128GB
  Disk: 215GB nvme_card-pd
  Network: Google Compute Engine Virtual
  OS: Ubuntu 22.04
  Kernel: 5.15.0-1016-gcp (aarch64)
  Compiler: GCC 12.0.1 20220319
  File-System: ext4
  System Layer: KVM

m6g.8xlarge
  Processor: ARMv8 Neoverse-N1 (32 Cores)
  Motherboard: Amazon EC2 m6g.8xlarge (1.0 BIOS)
  Chipset: Amazon Device 0200
  Memory: 128GB
  Disk: 215GB Amazon Elastic Block Store
  Network: Amazon Elastic
  OS: Ubuntu 22.04
  Kernel: 5.15.0-1009-aws (aarch64)
  Compiler: GCC 12.0.1 20220319
  File-System: ext4
  System Layer: amazon

Kernel Details: Transparent Huge Pages: madvise
Environment Details: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Details: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details:
  Tau T2A: 32 vCPUs: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
  m6g.8xlarge: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected

Tau T2A: 32 vCPUs vs. m6g.8xlarge Comparison (Phoronix Test Suite)

[Overview graphic: per-test percentage differences between the two systems across all benchmarks, ranging from roughly 2.2% up to 1132.9% (Stress-NG: CPU Cache); the individual results are detailed below.]
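Overview graphics like this are typically condensed into a single overall figure with a geometric mean of the per-test ratios, which is also how the Phoronix Test Suite computes its overall means. A minimal sketch of that calculation, using made-up ratios rather than values from this result file:

```python
import math

def geometric_mean(ratios):
    # Overall score from per-test ratios; unlike an arithmetic mean,
    # a single outlier test cannot dominate the summary.
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical per-test ratios (m6g.8xlarge / Tau T2A), NOT taken
# from the comparison above.
ratios = [1.12, 0.95, 1.30, 1.02]
print(round(geometric_mean(ratios), 4))
```

A combined ratio above 1.0 would mean the second system is faster on balance, under the assumption that every per-test ratio is oriented so that higher is better.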

Tau T2A 16 vCPUs: condensed results index

[Flattened index of every test in this comparison (SPECjbb 2015, Renaissance, Apache Spark, Graph500, LAMMPS, OpenFOAM, PostgreSQL pgbench, and many others) with the raw result values for both systems run together; the same data is broken out test-by-test below.]

SPECjbb 2015

This is a benchmark of SPECjbb 2015. For this test profile to work, you must have a valid license/copy of the SPECjbb 2015 ISO (SPECjbb2015-1.02.iso) in your Phoronix Test Suite download cache. Learn more via the OpenBenchmarking.org test page.

SPECjbb 2015, SPECjbb2015-Composite critical-jOPS (jOPS, more is better):
  Tau T2A: 32 vCPUs: 22955
  m6g.8xlarge: 26638

SPECjbb 2015, SPECjbb2015-Composite max-jOPS (jOPS, more is better):
  Tau T2A: 32 vCPUs: 35075
  m6g.8xlarge: 41157

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other workloads. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better):
  Tau T2A: 32 vCPUs: 29296.7 (SE +/- 344.06, N = 4; Min 28772.63 / Avg 29296.69 / Max 30225.51; iteration MIN 20859.52 / MAX 30225.51)
  m6g.8xlarge: 30764.8 (SE +/- 1085.96, N = 6; Min 29177.17 / Avg 30764.77 / Max 36054.32; iteration MIN 23349.61 / MAX 36054.32)
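The "SE +/- x, N = y" annotations attached to these results are the standard error of the mean across the N recorded runs. A minimal sketch of how such a value is computed, with illustrative samples rather than the actual run data:

```python
import statistics

def standard_error(samples):
    # Standard error of the mean: sample stddev divided by sqrt(N).
    return statistics.stdev(samples) / (len(samples) ** 0.5)

runs = [28772.63, 28950.0, 29240.0, 30225.51]  # illustrative, N = 4
avg = statistics.mean(runs)
print(f"Avg: {avg:.2f}, SE +/- {standard_error(runs):.2f}, N = {len(runs)}")
```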

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 26.55 (SE +/- 0.17, N = 12; Min 25.77 / Avg 26.55 / Max 27.69)
  m6g.8xlarge: 27.21 (SE +/- 0.10, N = 12; Min 26.65 / Avg 27.21 / Max 27.68)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 28.66 (SE +/- 0.19, N = 12; Min 27.96 / Avg 28.66 / Max 29.92)
  m6g.8xlarge: 29.52 (SE +/- 0.13, N = 12; Min 28.67 / Avg 29.52 / Max 30.31)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Repartition Test Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 22.22 (SE +/- 0.24, N = 12; Min 21.07 / Avg 22.22 / Max 24.03)
  m6g.8xlarge: 23.05 (SE +/- 0.09, N = 12; Min 22.28 / Avg 23.05 / Max 23.32)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Group By Test Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 22.84 (SE +/- 0.32, N = 12; Min 21.59 / Avg 22.84 / Max 25)
  m6g.8xlarge: 21.52 (SE +/- 0.18, N = 12; Min 20.89 / Avg 21.52 / Max 23.23)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 4.78 (SE +/- 0.02, N = 12; Min 4.7 / Avg 4.78 / Max 4.93)
  m6g.8xlarge: 5.26 (SE +/- 0.01, N = 12; Min 5.2 / Avg 5.26 / Max 5.33)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 69.79 (SE +/- 0.11, N = 12; Min 69.21 / Avg 69.79 / Max 70.3)
  m6g.8xlarge: 82.32 (SE +/- 0.07, N = 12; Min 81.89 / Avg 82.32 / Max 82.78)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 39.22 (SE +/- 0.55, N = 12; Min 37.02 / Avg 39.22 / Max 43.53)
  m6g.8xlarge: 34.77 (SE +/- 0.37, N = 12; Min 33.04 / Avg 34.77 / Max 37.13)

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data intensive loads and commonly tested on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.
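Graph500's TEPS metric (traversed edges per second) is the edge count of the searched graph divided by the time of one search, with the max_ and median_ variants taken over the repeated BFS/SSSP iterations the benchmark runs. A minimal sketch with hypothetical timings (the edge count assumes the usual edgefactor of 16 for a scale-26 graph; none of these numbers come from the measured results):

```python
import statistics

def teps(edge_count, seconds):
    # Traversed edges per second for a single search.
    return edge_count / seconds

edge_count = (2 ** 26) * 16        # assumed scale-26 graph, edgefactor 16
times = [2.1, 2.3, 2.2, 2.6, 2.0]  # hypothetical seconds per BFS
rates = [teps(edge_count, t) for t in times]
print(f"max_TEPS:    {max(rates):.3e}")
print(f"median_TEPS: {statistics.median(rates):.3e}")
```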

Graph500 3.0, Scale: 26, sssp max_TEPS (more is better):
  Tau T2A: 32 vCPUs: 169542000
  m6g.8xlarge: 183940000
  1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Graph500 3.0, Scale: 26, sssp median_TEPS (more is better):
  Tau T2A: 32 vCPUs: 124702000
  m6g.8xlarge: 138918000
  1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Graph500 3.0, Scale: 26, bfs max_TEPS (more is better):
  Tau T2A: 32 vCPUs: 508372000
  m6g.8xlarge: 519646000
  1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Graph500 3.0, Scale: 26, bfs median_TEPS (more is better):
  Tau T2A: 32 vCPUs: 477377000
  m6g.8xlarge: 510761000
  1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022, Model: 20k Atoms (ns/day, more is better):
  Tau T2A: 32 vCPUs: 16.55 (SE +/- 0.00, N = 3; Min 16.55 / Avg 16.55 / Max 16.56)
  m6g.8xlarge: 14.64 (SE +/- 0.01, N = 3; Min 14.62 / Avg 14.64 / Max 14.65)
  1. (CXX) g++ options: -O3 -march=native -ldl

Apache Spark


Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 31.98 (SE +/- 0.26, N = 9; Min 30.81 / Avg 31.98 / Max 33.46)
  m6g.8xlarge: 32.92 (SE +/- 0.39, N = 3; Min 32.14 / Avg 32.92 / Max 33.33)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Inner Join Test Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 30.32 (SE +/- 0.44, N = 9; Min 28.69 / Avg 30.32 / Max 33.24)
  m6g.8xlarge: 31.44 (SE +/- 0.34, N = 3; Min 30.92 / Avg 31.44 / Max 32.08)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Repartition Test Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 24.36 (SE +/- 0.12, N = 9; Min 23.9 / Avg 24.36 / Max 25.05)
  m6g.8xlarge: 25.28 (SE +/- 0.11, N = 3; Min 25.06 / Avg 25.28 / Max 25.46)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Group By Test Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 27.64 (SE +/- 0.16, N = 9; Min 26.97 / Avg 27.64 / Max 28.3)
  m6g.8xlarge: 26.83 (SE +/- 0.31, N = 3; Min 26.2 / Avg 26.83 / Max 27.16)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 4.76 (SE +/- 0.01, N = 9; Min 4.7 / Avg 4.76 / Max 4.81)
  m6g.8xlarge: 5.24 (SE +/- 0.02, N = 3; Min 5.22 / Avg 5.24 / Max 5.28)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 69.57 (SE +/- 0.08, N = 9; Min 69.23 / Avg 69.57 / Max 69.99)
  m6g.8xlarge: 82.34 (SE +/- 0.23, N = 3; Min 81.99 / Avg 82.34 / Max 82.78)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 46.30 (SE +/- 0.45, N = 9; Min 45.06 / Avg 46.3 / Max 49.46)
  m6g.8xlarge: 41.76 (SE +/- 0.11, N = 3; Min 41.59 / Avg 41.76 / Max 41.96)

Renaissance


Renaissance 0.14, Test: Scala Dotty (ms, fewer is better):
  Tau T2A: 32 vCPUs: 1871.7 (SE +/- 32.38, N = 11; Min 1755.54 / Avg 1871.74 / Max 2085.76; iteration MIN 1370.64 / MAX 3034.49)
  m6g.8xlarge: 1821.7 (SE +/- 25.25, N = 12; Min 1665.52 / Avg 1821.67 / Max 1906.88; iteration MIN 1387.66 / MAX 2713.36)

Apache Spark


Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 2.12 (SE +/- 0.02, N = 15; Min 1.94 / Avg 2.12 / Max 2.28)
  m6g.8xlarge: 2.03 (SE +/- 0.05, N = 11; Min 1.8 / Avg 2.03 / Max 2.29)

Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 2.87 (SE +/- 0.04, N = 15; Min 2.65 / Avg 2.87 / Max 3.14)
  m6g.8xlarge: 2.63 (SE +/- 0.03, N = 11; Min 2.5 / Avg 2.63 / Max 2.83)

Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Repartition Test Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 2.60 (SE +/- 0.03, N = 15; Min 2.41 / Avg 2.6 / Max 2.87)
  m6g.8xlarge: 2.37 (SE +/- 0.03, N = 11; Min 2.24 / Avg 2.37 / Max 2.57)

Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Group By Test Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 6.72 (SE +/- 0.05, N = 15; Min 6.22 / Avg 6.72 / Max 7)
  m6g.8xlarge: 6.11 (SE +/- 0.03, N = 11; Min 5.89 / Avg 6.11 / Max 6.26)

Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 4.80 (SE +/- 0.01, N = 15; Min 4.72 / Avg 4.8 / Max 4.87)
  m6g.8xlarge: 5.29 (SE +/- 0.01, N = 11; Min 5.24 / Avg 5.29 / Max 5.35)

Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 69.92 (SE +/- 0.06, N = 15; Min 69.49 / Avg 69.92 / Max 70.28)
  m6g.8xlarge: 82.62 (SE +/- 0.05, N = 11; Min 82.17 / Avg 82.62 / Max 82.76)

Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 4.96 (SE +/- 0.04, N = 15; Min 4.73 / Avg 4.96 / Max 5.21)
  m6g.8xlarge: 4.45 (SE +/- 0.03, N = 11; Min 4.25 / Avg 4.45 / Max 4.65)

Renaissance


Renaissance 0.14, Test: Apache Spark PageRank (ms, fewer is better):
  Tau T2A: 32 vCPUs: 5174.3 (SE +/- 77.61, N = 12; Min 4649.9 / Avg 5174.3 / Max 5678.23; iteration MIN 4316.47 / MAX 6446.52)
  m6g.8xlarge: 5297.4 (SE +/- 49.78, N = 3; Min 5213.2 / Avg 5297.37 / Max 5385.51; iteration MIN 4909.93 / MAX 5385.51)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
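pgbench's two headline figures are tied together: with a fixed number of clients each issuing transactions back-to-back, average latency is approximately clients / TPS (Little's law). A quick sanity check against the read-only result reported below for Tau T2A: 32 vCPUs (250 clients, ~312239 TPS, 0.803 ms reported average latency):

```python
def expected_latency_ms(clients, tps):
    # Little's law: average latency ~= concurrency / throughput.
    return clients / tps * 1000.0

print(round(expected_latency_ms(250, 312239), 3))  # ~0.801, close to the reported 0.803
```

The small gap between the estimate and the reported figure is expected, since pgbench also spends time outside transactions (connection handling, reporting).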

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, fewer is better):
  Tau T2A: 32 vCPUs: 0.803 (SE +/- 0.012, N = 12; Min 0.76 / Avg 0.8 / Max 0.86)
  m6g.8xlarge: 0.742 (SE +/- 0.008, N = 4; Min 0.72 / Avg 0.74 / Max 0.76)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, more is better):
  Tau T2A: 32 vCPUs: 312239 (SE +/- 4561.68, N = 12; Min 291136.68 / Avg 312239.35 / Max 329206.29)
  m6g.8xlarge: 337416 (SE +/- 3802.23, N = 4; Min 328170.61 / Avg 337415.97 / Max 346468.75)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 9, Input: drivaerFastback, Medium Mesh Size - Execution Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 994.53
  m6g.8xlarge: 977.47
  1. (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM 9, Input: drivaerFastback, Medium Mesh Size - Mesh Time (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 206.4
  m6g.8xlarge: 208.2
  1. (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, fewer is better):
  Tau T2A 32 vCPUs: 114.80 (SE +/- 7.17, N = 12; min 73.6 / max 144.41)
  m6g.8xlarge: 45.26 (SE +/- 0.05, N = 3; min 45.15 / max 45.33)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, more is better):
  Tau T2A 32 vCPUs: 2282 (SE +/- 154.25, N = 12; min 1731.22 / max 3396.54)
  m6g.8xlarge: 5524 (SE +/- 6.46, N = 3; min 5515.3 / max 5536.55)
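In these fixed-client pgbench runs, throughput and average latency are two views of the same thing: with C clients each issuing transactions back-to-back, TPS is approximately C divided by the average latency (Little's law for a closed-loop benchmark). A quick sanity check against the read-write numbers above:

```python
def expected_tps(clients, avg_latency_ms):
    """Closed-loop throughput estimate: concurrency / per-transaction latency."""
    return clients / (avg_latency_ms / 1000.0)

# m6g.8xlarge read-write result: 45.26 ms average latency at 250 clients.
tps = expected_tps(250, 45.26)  # ~5524, matching the measured 5524 TPS
```

The same relation holds for the read-only results (250 / 0.00074 s is roughly 337,000 TPS), so the latency and TPS tables are mutually consistent.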

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, fewer is better):
  Tau T2A 32 vCPUs: 17606.6 (SE +/- 57.21, N = 3; min 17544.26 / max 17720.91; overall MIN 17544.26 / MAX 19037.24)
  m6g.8xlarge: 16918.7 (SE +/- 233.21, N = 3; min 16601.1 / max 17373.28; overall MIN 16601.1 / MAX 18787.37)

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 1.68 (SE +/- 0.03, N = 15; min 1.49 / max 1.94)
  m6g.8xlarge: 1.63 (SE +/- 0.08, N = 3; min 1.48 / max 1.75)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Inner Join Test Time (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 2.13 (SE +/- 0.02, N = 15; min 1.93 / max 2.3)
  m6g.8xlarge: 1.83 (SE +/- 0.01, N = 3; min 1.8 / max 1.84)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Repartition Test Time (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 2.01 (SE +/- 0.03, N = 15; min 1.85 / max 2.17)
  m6g.8xlarge: 1.89 (SE +/- 0.03, N = 3; min 1.84 / max 1.95)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Group By Test Time (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 6.72 (SE +/- 0.23, N = 15; min 6.07 / max 9.41)
  m6g.8xlarge: 5.61 (SE +/- 0.06, N = 3; min 5.5 / max 5.71)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 4.79 (SE +/- 0.01, N = 15; min 4.72 / max 4.87)
  m6g.8xlarge: 5.24 (SE +/- 0.04, N = 3; min 5.18 / max 5.3)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 69.77 (SE +/- 0.06, N = 15; min 69.44 / max 70.23)
  m6g.8xlarge: 82.60 (SE +/- 0.15, N = 3; min 82.39 / max 82.89)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 4.79 (SE +/- 0.11, N = 15; min 4.41 / max 5.83)
  m6g.8xlarge: 4.14 (SE +/- 0.02, N = 3; min 4.12 / max 4.18)

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research, widely used across industry and academia. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2 - Time To Compile (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 312.12 (SE +/- 1.96, N = 3; min 309.1 / max 315.78)
  m6g.8xlarge: 316.28 (SE +/- 0.15, N = 3; min 316.03 / max 316.56)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Read Random Write Random (Op/s, more is better):
  Tau T2A 32 vCPUs: 1321827 (SE +/- 9643.50, N = 15; min 1278014 / max 1419584)
  m6g.8xlarge: 1737021 (SE +/- 17104.02, N = 15; min 1639556 / max 1833220)
  (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, more is better):
  Tau T2A 32 vCPUs: 2.13 (SE +/- 0.00, N = 3)
  m6g.8xlarge: 1.93 (SE +/- 0.00, N = 3; min 1.93 / max 1.94)
  (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 0 (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 266.34 (SE +/- 0.65, N = 3; min 265.62 / max 267.64)
  m6g.8xlarge: 313.27 (SE +/- 0.28, N = 3; min 312.72 / max 313.59)
  (CXX) g++ options: -O3 -fPIC -march=native -lm

Renaissance


Renaissance 0.14 - Test: Apache Spark ALS (ms, fewer is better):
  Tau T2A 32 vCPUs: 4118.3 (SE +/- 32.04, N = 3; min 4055.57 / max 4160.9; overall MIN 3925.84 / MAX 4358.22)
  m6g.8xlarge: 4229.8 (SE +/- 34.35, N = 9; min 4118.87 / max 4427.24; overall MIN 4008.75 / MAX 4594.43)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender - Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 249.89 (SE +/- 0.07, N = 3; min 249.81 / max 250.02)
  m6g.8xlarge: 274.81 (SE +/- 0.46, N = 3; min 274.12 / max 275.68)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: SqueezeNet (Microseconds, fewer is better):
  Tau T2A 32 vCPUs: 3853.90 (SE +/- 31.57, N = 8; min 3683.15 / max 3940.72)
  m6g.8xlarge: 3806.31 (SE +/- 54.35, N = 15; min 3496.53 / max 4148.32)

Blender


Blender - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 214.41 (SE +/- 0.42, N = 3; min 213.67 / max 215.13)
  m6g.8xlarge: 233.99 (SE +/- 0.24, N = 3; min 233.63 / max 234.45)

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better):
  Tau T2A 32 vCPUs: 3056.90 (SE +/- 6.90, N = 3; min 3043.51 / max 3066.48; overall MIN 2928.19 / MAX 3237.58)
  m6g.8xlarge: 3406.54 (SE +/- 5.54, N = 3; min 3395.73 / max 3414.09; overall MIN 3340.13 / MAX 3491.43)
  (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MT - Degridding (Million Grid Points Per Second, more is better):
  Tau T2A 32 vCPUs: 5522.07 (SE +/- 80.56, N = 15; min 5023.7 / max 5974.89)
  m6g.8xlarge: 6515.57 (SE +/- 4.40, N = 3; min 6508.93 / max 6523.88)
  (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP 1.0 - Test: tConvolve MT - Gridding (Million Grid Points Per Second, more is better):
  Tau T2A 32 vCPUs: 4456.55 (SE +/- 35.89, N = 15; min 4180.66 / max 4653.3)
  m6g.8xlarge: 4785.75 (SE +/- 7.65, N = 3; min 4770.54 / max 4794.71)

Facebook RocksDB


Facebook RocksDB 7.0.1 - Test: Read While Writing (Op/s, more is better):
  Tau T2A 32 vCPUs: 2610992 (SE +/- 32390.32, N = 12; min 2501493 / max 2841216)
  m6g.8xlarge: 2839853 (SE +/- 25190.27, N = 7; min 2732513 / max 2898940)

libavif avifenc


libavif avifenc 0.10 - Encoder Speed: 2 (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 169.64 (SE +/- 0.13, N = 3; min 169.44 / max 169.9)
  m6g.8xlarge: 199.97 (SE +/- 0.14, N = 3; min 199.82 / max 200.26)

TensorFlow Lite


TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds, fewer is better):
  Tau T2A 32 vCPUs: 31657.3 (SE +/- 149.01, N = 3; min 31403.2 / max 31919.2)
  m6g.8xlarge: 32600.7 (SE +/- 296.19, N = 15; min 30979.2 / max 34565.2)

TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2 (Microseconds, fewer is better):
  Tau T2A 32 vCPUs: 33994.9 (SE +/- 379.42, N = 3; min 33299 / max 34604.9)
  m6g.8xlarge: 35336.4 (SE +/- 396.60, N = 15; min 33280.5 / max 37467.8)

TensorFlow Lite 2022-05-18 - Model: NASNet Mobile (Microseconds, fewer is better):
  Tau T2A 32 vCPUs: 28372.8 (SE +/- 355.48, N = 3; min 27971.5 / max 29081.7)
  m6g.8xlarge: 29037.2 (SE +/- 527.69, N = 15; min 26357.6 / max 32849.2)

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, fewer is better):
  Tau T2A 32 vCPUs: 2093.25 (SE +/- 17.55, N = 3; min 2075.66 / max 2128.36)
  m6g.8xlarge: 2230.11 (SE +/- 55.66, N = 15; min 1994.52 / max 2717.5)

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, fewer is better):
  Tau T2A 32 vCPUs: 3550.65 (SE +/- 14.04, N = 3; min 3528.98 / max 3576.94)
  m6g.8xlarge: 4200.71 (SE +/- 72.68, N = 15; min 3878.62 / max 5121.42)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: SHA256 (byte/s, more is better):
  Tau T2A 32 vCPUs: 25788919913 (SE +/- 119493320.18, N = 3; min 25549965560 / max 25911799070)
  m6g.8xlarge: 21748728233 (SE +/- 4842815.19, N = 3; min 21739103990 / max 21754483240)
  (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl
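The "openssl speed" figures above are bytes hashed per second across all vCPUs. A rough single-threaded analogue using Python's hashlib, for illustration only — pure-Python dispatch will land far below OpenSSL's hand-tuned assembly, but the measurement shape (hash a fixed buffer in a timed loop, divide bytes by elapsed time) is the same:

```python
import hashlib
import time

def sha256_throughput(block_size=64 * 1024, duration=0.5):
    """Hash a fixed buffer repeatedly and return approximate bytes per second."""
    buf = b"\x00" * block_size
    hashed = 0
    start = time.perf_counter()
    deadline = start + duration
    while time.perf_counter() < deadline:
        hashlib.sha256(buf).digest()
        hashed += block_size
    return hashed / (time.perf_counter() - start)

rate = sha256_throughput()  # bytes per second on one core
```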

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, fewer is better):
  Tau T2A 32 vCPUs: 10705.9 (SE +/- 131.70, N = 4; min 10505.49 / max 11080.73; overall MIN 10505.49 / MAX 14847.21)
  m6g.8xlarge: 12067.9 (SE +/- 101.80, N = 3; min 11869.31 / max 12206.08; overall MIN 11869.31 / MAX 19424.22)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 3.723 (SE +/- 0.016, N = 3; min 3.71 / max 3.76)
  m6g.8xlarge: 3.660 (SE +/- 0.021, N = 3; min 3.64 / max 3.7)

ASKAP


ASKAP 1.0 - Test: tConvolve MPI - Gridding (Mpix/sec, more is better):
  Tau T2A 32 vCPUs: 3899.28 (SE +/- 42.99, N = 15; min 3430.01 / max 4036.86)
  m6g.8xlarge: 4550.23 (SE +/- 6.58, N = 3; min 4543.65 / max 4563.4)

ASKAP 1.0 - Test: tConvolve MPI - Degridding (Mpix/sec, more is better):
  Tau T2A 32 vCPUs: 3962.08 (SE +/- 54.84, N = 15; min 3452.57 / max 4181.6)
  m6g.8xlarge: 4453.69 (SE +/- 6.31, N = 3; min 4447.38 / max 4466.31)

Facebook RocksDB


Facebook RocksDB 7.0.1 - Test: Random Read (Op/s, more is better):
  Tau T2A 32 vCPUs: 124704201 (SE +/- 376574.31, N = 3; min 123951463 / max 125102097)
  m6g.8xlarge: 104825086 (SE +/- 726664.50, N = 13; min 100761806 / max 107089619)

Renaissance


Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms, fewer is better):
  Tau T2A 32 vCPUs: 3084.2 (SE +/- 8.81, N = 3; min 3067.59 / max 3097.64; overall MIN 2993.8 / MAX 3192.9)
  m6g.8xlarge: 3167.9 (SE +/- 14.17, N = 3; min 3140.01 / max 3186.02; overall MIN 3061.76 / MAX 3232.32)

PostgreSQL pgbench


PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better):
  Tau T2A 32 vCPUs: 0.304 (SE +/- 0.002, N = 3; min 0.3 / max 0.31)
  m6g.8xlarge: 0.274 (SE +/- 0.002, N = 3; min 0.27 / max 0.28)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, more is better):
  Tau T2A 32 vCPUs: 329539 (SE +/- 1811.74, N = 3; min 325915.71 / max 331400.84)
  m6g.8xlarge: 364193 (SE +/- 3146.40, N = 3; min 360399.19 / max 370437.59)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, fewer is better):
  Tau T2A 32 vCPUs: 29.56 (SE +/- 0.06, N = 3; min 29.44 / max 29.64)
  m6g.8xlarge: 18.92 (SE +/- 0.04, N = 3; min 18.86 / max 18.99)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, more is better):
  Tau T2A 32 vCPUs: 3383 (SE +/- 6.74, N = 3; min 3374.15 / max 3396.28)
  m6g.8xlarge: 5285 (SE +/- 10.72, N = 3; min 5264.81 / max 5301.46)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Laboratories aimed at supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better):
  Tau T2A 32 vCPUs: 22.09 (SE +/- 0.01, N = 3; min 22.08 / max 22.1)
  m6g.8xlarge: 20.84 (SE +/- 0.02, N = 3; min 20.8 / max 20.87)
  (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi
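HPCG's core kernel is a preconditioned conjugate gradient iteration over a sparse system. A minimal unpreconditioned CG sketch in pure Python, dense and tiny for clarity (HPCG itself runs a 27-point sparse stencil with multigrid preconditioning; this only shows the iteration being timed):

```python
def conjugate_gradient(A, b, iters=50, tol=1e-12):
    """Solve Ax = b for a symmetric positive-definite dense matrix A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                  # residual; equals b since x0 = 0
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# 2x2 SPD system [[4, 1], [1, 3]] x = [1, 2] has exact solution [1/11, 7/11].
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

Because CG is dominated by sparse matrix-vector products and reductions, HPCG stresses memory bandwidth rather than peak FLOPs, which is why its GFLOP/s numbers sit far below LINPACK-style results.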

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1 - Input: Carbon Nanotube (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 130.35 (SE +/- 0.30, N = 3; min 130.04 / max 130.95)
  m6g.8xlarge: 132.68 (SE +/- 0.03, N = 3; min 132.63 / max 132.73)
  (CC) gcc options: -shared -fwrapv -O2 -O3 -march=native -lxc -lblas -lmpi

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0 - Test: Writes (Op/s, more is better):
  Tau T2A 32 vCPUs: 87819 (SE +/- 777.36, N = 3; min 86347 / max 88988)
  m6g.8xlarge: 109525 (SE +/- 507.25, N = 3; min 108528 / max 110186)

VP9 libvpx Encoding


VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Tau T2A 32 vCPUs: 4.99 (SE +/- 0.01, N = 3; min 4.97 / max 5.01)
  m6g.8xlarge: 4.50 (SE +/- 0.00, N = 3; min 4.5 / max 4.51)

Blender


Blender - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better):
  Tau T2A 32 vCPUs: 112.47 (SE +/- 0.10, N = 3; min 112.34 / max 112.67)
  m6g.8xlarge: 126.12 (SE +/- 0.21, N = 3; min 125.89 / max 126.54)

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better):
  Tau T2A 32 vCPUs: 1.718 (SE +/- 0.010, N = 3; min 1.7 / max 1.73)
  m6g.8xlarge: 1.554 (SE +/- 0.001, N = 3; min 1.55 / max 1.56)
  (CXX) g++ options: -O3 -march=native

Renaissance


Renaissance 0.14 - Test: In-Memory Database Shootout (ms, fewer is better):
  Tau T2A 32 vCPUs: 6566.5 (SE +/- 37.00, N = 3; min 6527.84 / max 6640.45; overall MIN 5609.26 / MAX 13128.6)
  m6g.8xlarge: 5783.7 (SE +/- 39.12, N = 3; min 5706.21 / max 5832.01; overall MIN 5350.98 / MAX 6150.47)

Renaissance 0.14 - Test: Apache Spark Bayes (ms, fewer is better):
  Tau T2A 32 vCPUs: 766.4 (SE +/- 9.73, N = 3; min 754.77 / max 785.77; overall MIN 495.95 / MAX 1178.88)
  m6g.8xlarge: 828.8 (SE +/- 6.49, N = 15; min 795.88 / max 880.5; overall MIN 538.38 / MAX 1090.62)

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, fewer is better):
  Tau T2A 32 vCPUs: 9430.8 (SE +/- 122.37, N = 3; min 9216.45 / max 9640.26; overall MIN 8793.75 / MAX 9955.78)
  m6g.8xlarge: 6674.3 (SE +/- 15.10, N = 3; min 6644.91 / max 6694.96; overall MIN 6412.79 / MAX 6795.18)

VP9 libvpx Encoding


VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second, more is better):
  Tau T2A 32 vCPUs: 6.99 (SE +/- 0.02, N = 3; min 6.97 / max 7.03)
  m6g.8xlarge: 6.34 (SE +/- 0.00, N = 3; min 6.34 / max 6.35)

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.7 (k/s, more is better):
  Tau T2A: 32 vCPUs: 33647.55 (SE +/- 287.54, N = 15; Min: 30818.67 / Avg: 33647.55 / Max: 34100.55)
  m6g.8xlarge: 28780.71 (SE +/- 4.44, N = 3; Min: 28773.33 / Avg: 28780.71 / Max: 28788.67)
  1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1 - Concurrent Requests: 500 (Requests Per Second, more is better):
  Tau T2A: 32 vCPUs: 235749.36 (SE +/- 245.82, N = 3; Min: 235285.7 / Avg: 235749.36 / Max: 236122.79)
  m6g.8xlarge: 286801.67 (SE +/- 1454.63, N = 3; Min: 283938.79 / Avg: 286801.67 / Max: 288681.22)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native
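From the raw requests-per-second figures above, the relative gap between the two instances is straightforward to compute:

```python
# nginx 500-connection averages, copied from the result file.
t2a = 235749.36   # Tau T2A: 32 vCPUs
m6g = 286801.67   # m6g.8xlarge

# Percent advantage of the faster system over the slower one.
advantage_pct = (m6g / t2a - 1) * 100
print(f"m6g.8xlarge ahead by {advantage_pct:.1f}%")
```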

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Futex (Bogo Ops/s, more is better):
  Tau T2A: 32 vCPUs: 1437660.62 (SE +/- 15026.23, N = 3; Min: 1416471.01 / Avg: 1437660.62 / Max: 1466711.14)
  m6g.8xlarge: 1503894.12 (SE +/- 17854.23, N = 15; Min: 1394812.71 / Avg: 1503894.12 / Max: 1603314.5)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

nginx

nginx 1.21.1 - Concurrent Requests: 1000 (Requests Per Second, more is better):
  Tau T2A: 32 vCPUs: 233484.08 (SE +/- 479.91, N = 3; Min: 232524.74 / Avg: 233484.08 / Max: 233990)
  m6g.8xlarge: 282425.09 (SE +/- 2142.75, N = 3; Min: 278865.35 / Avg: 282425.09 / Max: 286271.45)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: CPU (Events Per Second, more is better):
  Tau T2A: 32 vCPUs: 108241.61 (SE +/- 23.77, N = 3; Min: 108194.99 / Avg: 108241.61 / Max: 108272.97)
  m6g.8xlarge: 90962.22 (SE +/- 123.73, N = 3; Min: 90796.26 / Avg: 90962.22 / Max: 91204.16)
  1. (CC) gcc options: -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm
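The result viewer's "Performance Per Dollar" column simply divides a result by the instance's hourly price. A sketch of that calculation for the Sysbench CPU figures above; the prices here are placeholders for illustration, not values taken from this result file:

```python
# Hypothetical hourly prices -- placeholders only; the actual on-demand
# rates for these instance types are not part of this result file.
PRICE_PER_HOUR = {"Tau T2A: 32 vCPUs": 1.35, "m6g.8xlarge": 1.23}

# Sysbench CPU events/s, copied from the result file.
SYSBENCH_CPU = {"Tau T2A: 32 vCPUs": 108241.61, "m6g.8xlarge": 90962.22}

perf_per_dollar = {
    name: events / PRICE_PER_HOUR[name] for name, events in SYSBENCH_CPU.items()
}
for name, ppd in perf_per_dollar.items():
    print(f"{name}: {ppd:.0f} events/s per $/hour")
```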

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Exhaustive (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 68.66 (SE +/- 0.08, N = 3; Min: 68.57 / Avg: 68.66 / Max: 68.81)
  m6g.8xlarge: 79.62 (SE +/- 0.02, N = 3; Min: 79.58 / Avg: 79.62 / Max: 79.63)
  1. (CXX) g++ options: -O3 -march=native -flto -pthread

Stress-NG

Stress-NG 0.14 - Test: CPU Cache (Bogo Ops/s, more is better):
  Tau T2A: 32 vCPUs: 566.91 (SE +/- 0.28, N = 3; Min: 566.61 / Avg: 566.91 / Max: 567.47)
  m6g.8xlarge: 45.98 (SE +/- 2.14, N = 12; Min: 36.5 / Avg: 45.98 / Max: 57.45)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 2.055 (SE +/- 0.002, N = 3; Min: 2.05 / Avg: 2.05 / Max: 2.06)
  m6g.8xlarge: 1.862 (SE +/- 0.001, N = 3; Min: 1.86 / Avg: 1.86 / Max: 1.86)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 0.014 (SE +/- 0.000, N = 15; Min: 0.01 / Avg: 0.01 / Max: 0.02)
  m6g.8xlarge: 0.016 (SE +/- 0.000, N = 3; Min: 0.02 / Avg: 0.02 / Max: 0.02)
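For these "Fewer Is Better" timings, a relative comparison inverts the ratio of times rather than dividing the raw values. Using the Equation of State (4194304) figures above:

```python
# Equation of State (4194304) averages, copied from the result file.
t2a_seconds = 2.055   # Tau T2A: 32 vCPUs
m6g_seconds = 1.862   # m6g.8xlarge

# Lower is better, so the speedup of the faster system is the
# slower time divided by the faster time.
speedup = t2a_seconds / m6g_seconds
print(f"m6g.8xlarge is {speedup:.3f}x faster on this test")
```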

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Update Random (Op/s, more is better):
  Tau T2A: 32 vCPUs: 211735 (SE +/- 705.83, N = 3; Min: 210492 / Avg: 211735 / Max: 212936)
  m6g.8xlarge: 421971 (SE +/- 1212.60, N = 3; Min: 420032 / Avg: 421971 / Max: 424202)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: RSA4096 (verify/s, more is better):
  Tau T2A: 32 vCPUs: 128273.1 (SE +/- 29.86, N = 3; Min: 128219.9 / Avg: 128273.1 / Max: 128323.2)
  m6g.8xlarge: 107872.3 (SE +/- 2.96, N = 3; Min: 107867.8 / Avg: 107872.33 / Max: 107877.9)
  1. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl

OpenSSL 3.0 - Algorithm: RSA4096 (sign/s, more is better):
  Tau T2A: 32 vCPUs: 1570.2 (SE +/- 0.06, N = 3; Min: 1570.1 / Avg: 1570.2 / Max: 1570.3)
  m6g.8xlarge: 1320.9 (SE +/- 0.12, N = 3; Min: 1320.7 / Avg: 1320.9 / Max: 1321.1)
  1. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl
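The two RSA4096 sub-results can be folded into one relative score with a geometric mean of per-test ratios, the same style of aggregate as the result viewer's "Show Overall Geometric Mean" option. A sketch using the figures above:

```python
import math

# Per-test ratios of Tau T2A over m6g.8xlarge, values from the result file.
ratios = [
    128273.1 / 107872.3,  # verify/s
    1570.2 / 1320.9,      # sign/s
]

# Geometric mean of ratios: nth root of the product.
geo_mean = math.prod(ratios) ** (1 / len(ratios))
print(f"Tau T2A leads by ~{(geo_mean - 1) * 100:.1f}% across both sub-tests")
```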

Renaissance

Renaissance 0.14 - Test: Random Forest (ms, fewer is better):
  Tau T2A: 32 vCPUs: 1047.3 (SE +/- 12.92, N = 3; Min: 1026.01 / Avg: 1047.32 / Max: 1070.63; MIN: 904.64 / MAX: 1280.13)
  m6g.8xlarge: 1084.5 (SE +/- 2.98, N = 3; Min: 1078.6 / Avg: 1084.54 / Max: 1087.84; MIN: 958.08 / MAX: 1325.97)

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. It allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s, more is better):
  Tau T2A: 32 vCPUs: 26843.58 (SE +/- 31.60, N = 3; Min: 26799.99 / Avg: 26843.58 / Max: 26905.01)
  m6g.8xlarge: 27767.10 (SE +/- 32.63, N = 3; Min: 27708.85 / Avg: 27767.1 / Max: 27821.7)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

VP9 libvpx Encoding

VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Tau T2A: 32 vCPUs: 12.11 (SE +/- 0.01, N = 3; Min: 12.09 / Avg: 12.11 / Max: 12.13)
  m6g.8xlarge: 10.65 (SE +/- 0.02, N = 3; Min: 10.62 / Avg: 10.65 / Max: 10.67)
  1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, fewer is better):
  Tau T2A: 32 vCPUs: 5015 (SE +/- 95.95, N = 20; Min: 4595 / Avg: 5015.35 / Max: 6201)
  m6g.8xlarge: 3584 (SE +/- 11.76, N = 4; Min: 3563 / Avg: 3583.5 / Max: 3617)

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, more is better):
  Tau T2A: 32 vCPUs: 3265.68 (SE +/- 2.04, N = 3; Min: 3263.14 / Avg: 3265.68 / Max: 3269.72)
  m6g.8xlarge: 2746.81 (SE +/- 1.15, N = 3; Min: 2745.63 / Avg: 2746.81 / Max: 2749.1)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

DaCapo Benchmark

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, fewer is better):
  Tau T2A: 32 vCPUs: 5175 (SE +/- 83.04, N = 20; Min: 4867 / Avg: 5174.6 / Max: 6440)
  m6g.8xlarge: 4272 (SE +/- 39.63, N = 4; Min: 4197 / Avg: 4272.25 / Max: 4348)

PyHPC Benchmarks

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 0.915 (SE +/- 0.005, N = 3; Min: 0.91 / Avg: 0.91 / Max: 0.92)
  m6g.8xlarge: 0.915 (SE +/- 0.006, N = 3; Min: 0.9 / Avg: 0.91 / Max: 0.92)

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, more is better):
  Tau T2A: 32 vCPUs: 69530.64 (SE +/- 272.46, N = 3; Min: 68985.78 / Avg: 69530.64 / Max: 69809.68)
  m6g.8xlarge: 67111.72 (SE +/- 44.47, N = 3; Min: 67055.84 / Avg: 67111.72 / Max: 67199.59)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.4 - Time To Compile (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 38.96 (SE +/- 0.16, N = 3; Min: 38.75 / Avg: 38.96 / Max: 39.28)
  m6g.8xlarge: 40.61 (SE +/- 0.09, N = 3; Min: 40.5 / Avg: 40.61 / Max: 40.8)

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 28.93 (SE +/- 0.35, N = 4; Min: 28.46 / Avg: 28.93 / Max: 29.97)
  m6g.8xlarge: 30.84 (SE +/- 0.13, N = 3; Min: 30.61 / Avg: 30.84 / Max: 31.05)

Stress-NG

Stress-NG 0.14 - Test: CPU Stress (Bogo Ops/s, more is better):
  Tau T2A: 32 vCPUs: 8209.47 (SE +/- 4.23, N = 3; Min: 8203.26 / Avg: 8209.47 / Max: 8217.54)
  m6g.8xlarge: 6927.34 (SE +/- 0.61, N = 3; Min: 6926.13 / Avg: 6927.34 / Max: 6927.98)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 - Test: NUMA (Bogo Ops/s, more is better):
  Tau T2A: 32 vCPUs: 549.13 (SE +/- 1.69, N = 3; Min: 545.77 / Avg: 549.13 / Max: 551.12)
  m6g.8xlarge: 603.25 (SE +/- 0.53, N = 3; Min: 602.41 / Avg: 603.25 / Max: 604.24)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 - Test: Vector Math (Bogo Ops/s, more is better):
  Tau T2A: 32 vCPUs: 97749.08 (SE +/- 190.70, N = 3; Min: 97367.81 / Avg: 97749.08 / Max: 97948.21)
  m6g.8xlarge: 82451.97 (SE +/- 0.29, N = 3; Min: 82451.49 / Avg: 82451.97 / Max: 82452.48)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 - Test: Matrix Math (Bogo Ops/s, more is better):
  Tau T2A: 32 vCPUs: 151792.83 (SE +/- 9.80, N = 3; Min: 151776.48 / Avg: 151792.83 / Max: 151810.36)
  m6g.8xlarge: 128190.24 (SE +/- 2.06, N = 3; Min: 128186.24 / Avg: 128190.24 / Max: 128193.13)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 - Test: System V Message Passing (Bogo Ops/s, more is better):
  Tau T2A: 32 vCPUs: 6128517.10 (SE +/- 7551.56, N = 3; Min: 6118435.12 / Avg: 6128517.1 / Max: 6143296.86)
  m6g.8xlarge: 6581912.04 (SE +/- 1469.54, N = 3; Min: 6579622.7 / Avg: 6581912.04 / Max: 6584652.91)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, more is better):
  Tau T2A: 32 vCPUs: 87702.30 (SE +/- 137.48, N = 3; Min: 87427.73 / Avg: 87702.3 / Max: 87852.23)
  m6g.8xlarge: 80791.93 (SE +/- 115.81, N = 3; Min: 80595.21 / Avg: 80791.93 / Max: 80996.18)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better):
  Tau T2A: 32 vCPUs: 322.77 (SE +/- 0.05, N = 3; Min: 322.67 / Avg: 322.77 / Max: 322.86; MIN: 319.63 / MAX: 326.43)
  m6g.8xlarge: 378.55 (SE +/- 0.20, N = 3; Min: 378.15 / Avg: 378.55 / Max: 378.78; MIN: 377.31 / MAX: 380.26)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better):
  Tau T2A: 32 vCPUs: 301.15 (SE +/- 0.07, N = 3; Min: 301.02 / Avg: 301.15 / Max: 301.26; MIN: 299.13 / MAX: 307.2)
  m6g.8xlarge: 358.43 (SE +/- 0.08, N = 3; Min: 358.27 / Avg: 358.42 / Max: 358.51; MIN: 357.7 / MAX: 359.41)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D (Total Mop/s, more is better):
  Tau T2A: 32 vCPUs: 1822.77 (SE +/- 0.86, N = 3; Min: 1821.35 / Avg: 1822.77 / Max: 1824.31)
  m6g.8xlarge: 1937.35 (SE +/- 5.23, N = 3; Min: 1926.89 / Avg: 1937.35 / Max: 1942.84)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some previous ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: Hogbom Clean OpenMP (Iterations Per Second, more is better):
  Tau T2A: 32 vCPUs: 996.70 (SE +/- 3.30, N = 3; Min: 990.1 / Avg: 996.7 / Max: 1000)
  m6g.8xlarge: 1020.48 (SE +/- 6.01, N = 3; Min: 1010.1 / Avg: 1020.48 / Max: 1030.93)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better):
  Tau T2A: 32 vCPUs: 700917.94 (SE +/- 385.56, N = 3; Min: 700218.82 / Avg: 700917.94 / Max: 701549.25)
  m6g.8xlarge: 587539.28 (SE +/- 43.22, N = 3; Min: 587479.35 / Avg: 587539.28 / Max: 587623.19)
  1. (CC) gcc options: -O2 -O3 -march=native -lrt" -lrt

DaCapo Benchmark

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, fewer is better):
  Tau T2A: 32 vCPUs: 5954 (SE +/- 28.01, N = 4; Min: 5886 / Avg: 5954.25 / Max: 6023)
  m6g.8xlarge: 4673 (SE +/- 46.20, N = 4; Min: 4543 / Avg: 4673 / Max: 4753)

PyHPC Benchmarks

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 0.005 (SE +/- 0.000, N = 14; Min: 0.01 / Avg: 0.01 / Max: 0.01)
  m6g.8xlarge: 0.006 (SE +/- 0.000, N = 3; Min: 0.01 / Avg: 0.01 / Max: 0.01)

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET (Requests Per Second, more is better):
  Tau T2A: 32 vCPUs: 1411234.92 (SE +/- 9294.72, N = 3; Min: 1395868.25 / Avg: 1411234.92 / Max: 1427977.75)
  m6g.8xlarge: 1254760.84 (SE +/- 6854.78, N = 3; Min: 1243963.75 / Avg: 1254760.84 / Max: 1267475.88)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

Redis 6.0.9 - Test: GET (Requests Per Second, more is better):
  Tau T2A: 32 vCPUs: 1926297.79 (SE +/- 10764.67, N = 3; Min: 1914987.38 / Avg: 1926297.79 / Max: 1947817.75)
  m6g.8xlarge: 1761869.13 (SE +/- 5705.98, N = 3; Min: 1754386 / Avg: 1761869.13 / Max: 1773072.38)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

ASKAP

ASKAP 1.0 - Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, more is better):
  Tau T2A: 32 vCPUs: 9181.24 (SE +/- 0.00, N = 3; Min: 9181.24 / Avg: 9181.24 / Max: 9181.24)
  m6g.8xlarge: 9626.54 (SE +/- 117.40, N = 3; Min: 9509.14 / Avg: 9626.54 / Max: 9861.33)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP 1.0 - Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, more is better):
  Tau T2A: 32 vCPUs: 7262.74 (SE +/- 66.63, N = 3; Min: 7196.11 / Avg: 7262.74 / Max: 7396)
  m6g.8xlarge: 8068.36 (SE +/- 0.00, N = 3; Min: 8068.36 / Avg: 8068.36 / Max: 8068.36)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASTC Encoder

ASTC Encoder 3.2 - Preset: Thorough (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 7.1619 (SE +/- 0.0033, N = 3; Min: 7.16 / Avg: 7.16 / Max: 7.17)
  m6g.8xlarge: 8.2706 (SE +/- 0.0024, N = 3; Min: 8.27 / Avg: 8.27 / Max: 8.28)
  1. (CXX) g++ options: -O3 -march=native -flto -pthread

PyHPC Benchmarks

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 0.392 (SE +/- 0.001, N = 3; Min: 0.39 / Avg: 0.39 / Max: 0.39)
  m6g.8xlarge: 0.347 (SE +/- 0.001, N = 3; Min: 0.35 / Avg: 0.35 / Max: 0.35)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds, fewer is better):
  Tau T2A: 32 vCPUs: 10.34 (SE +/- 0.00, N = 3; Min: 10.33 / Avg: 10.34 / Max: 10.35)
  m6g.8xlarge: 11.73 (SE +/- 0.14, N = 3; Min: 11.48 / Avg: 11.73 / Max: 11.97)
  1. (CXX) g++ options: -O3 -fPIC -march=native -lm

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s, more is better):
  Tau T2A: 32 vCPUs: 34381.91 (SE +/- 38.20, N = 3; Min: 34332.78 / Avg: 34381.91 / Max: 34457.15)
  m6g.8xlarge: 34983.68 (SE +/- 44.23, N = 3; Min: 34899.62 / Avg: 34983.68 / Max: 35049.57)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

DaCapo Benchmark

This test runs the DaCapo Benchmarks, a suite written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better)
  Tau T2A: 32 vCPUs: 5079 (SE +/- 5.52, N = 4; Min: 5069 / Avg: 5079 / Max: 5090)
  m6g.8xlarge: 5604 (SE +/- 24.54, N = 4; Min: 5560 / Avg: 5603.5 / Max: 5670)

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, More Is Better)
  Tau T2A: 32 vCPUs: 52309.81 (SE +/- 41.18, N = 3; Min: 52227.66 / Avg: 52309.81 / Max: 52356.1)
  m6g.8xlarge: 50732.78 (SE +/- 144.89, N = 3; Min: 50541.34 / Avg: 50732.78 / Max: 51016.89)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
  Tau T2A: 32 vCPUs: 95.47 (SE +/- 0.00, N = 3; Min: 95.47 / Avg: 95.47 / Max: 95.48; per-run MIN: 95.15 / MAX: 96.88)
  m6g.8xlarge: 114.95 (SE +/- 1.17, N = 3; Min: 113.76 / Avg: 114.95 / Max: 117.3; per-run MIN: 113.55 / MAX: 117.72)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, More Is Better)
  Tau T2A: 32 vCPUs: 21433.92 (SE +/- 35.67, N = 3; Min: 21363.89 / Avg: 21433.92 / Max: 21480.75)
  m6g.8xlarge: 20938.75 (SE +/- 28.45, N = 3; Min: 20881.99 / Avg: 20938.75 / Max: 20970.5)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

libavif avifenc


libavif avifenc 0.10 - Encoder Speed: 6 (Seconds, Fewer Is Better)
  Tau T2A: 32 vCPUs: 6.682 (SE +/- 0.020, N = 3; Min: 6.66 / Avg: 6.68 / Max: 6.72)
  m6g.8xlarge: 7.755 (SE +/- 0.039, N = 3; Min: 7.71 / Avg: 7.75 / Max: 7.83)
  1. (CXX) g++ options: -O3 -fPIC -march=native -lm

libavif avifenc 0.10 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better)
  Tau T2A: 32 vCPUs: 6.775 (SE +/- 0.072, N = 3; Min: 6.66 / Avg: 6.78 / Max: 6.91)
  m6g.8xlarge: 7.285 (SE +/- 0.019, N = 3; Min: 7.27 / Avg: 7.28 / Max: 7.32)
  1. (CXX) g++ options: -O3 -fPIC -march=native -lm
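For lower-is-better timings like these avifenc runs, the two instances can be compared with a simple ratio; taking the Encoder Speed 6 averages (6.682 s on the Tau T2A vs 7.755 s on the m6g.8xlarge), the m6g.8xlarge comes out roughly 16% slower. A small sketch of that arithmetic:

```python
def percent_slower(baseline, other):
    """How much slower `other` is than `baseline`, for lower-is-better results."""
    return (other / baseline - 1.0) * 100.0

tau_t2a = 6.682  # seconds, libavif avifenc Encoder Speed 6, Tau T2A: 32 vCPUs
m6g = 7.755      # seconds, libavif avifenc Encoder Speed 6, m6g.8xlarge
print(f"m6g.8xlarge is {percent_slower(tau_t2a, m6g):.1f}% slower")  # ~16.1%
```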

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile benchmarks both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Medium (Seconds, Fewer Is Better)
  Tau T2A: 32 vCPUs: 5.9825 (SE +/- 0.0035, N = 3; Min: 5.98 / Avg: 5.98 / Max: 5.99)
  m6g.8xlarge: 6.9189 (SE +/- 0.0111, N = 3; Min: 6.9 / Avg: 6.92 / Max: 6.94)
  1. (CXX) g++ options: -O3 -march=native -flto -pthread

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, More Is Better)
  Tau T2A: 32 vCPUs: 50939.05 (SE +/- 31.40, N = 3; Min: 50888.97 / Avg: 50939.05 / Max: 50996.9)
  m6g.8xlarge: 49445.81 (SE +/- 47.68, N = 3; Min: 49377.17 / Avg: 49445.81 / Max: 49537.46)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, More Is Better)
  Tau T2A: 32 vCPUs: 16.60 (SE +/- 0.01, N = 3; Min: 16.57 / Avg: 16.6 / Max: 16.61)
  m6g.8xlarge: 14.82 (SE +/- 0.02, N = 3; Min: 14.78 / Avg: 14.82 / Max: 14.86)
  1. (CXX) g++ options: -O3 -march=native -ldl
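Result files like this one can also be condensed into a single figure per system: the viewer's "Show Overall Geometric Mean" option normalizes each test and takes the geometric mean of the normalized scores, which damps the influence of any single outlier test. A minimal sketch, using hypothetical normalized scores rather than values from this file:

```python
import math

def geometric_mean(values):
    """Geometric mean, as used for overall cross-test scores."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-test scores, each normalized against a baseline of 1.0.
normalized = [1.12, 0.95, 1.08, 1.21]
print(f"overall: {geometric_mean(normalized):.3f}")
```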

136 Results Shown

SPECjbb 2015:
  SPECjbb2015-Composite critical-jOPS
  SPECjbb2015-Composite max-jOPS
Renaissance
Apache Spark:
  40000000 - 2000 - Broadcast Inner Join Test Time
  40000000 - 2000 - Inner Join Test Time
  40000000 - 2000 - Repartition Test Time
  40000000 - 2000 - Group By Test Time
  40000000 - 2000 - Calculate Pi Benchmark Using Dataframe
  40000000 - 2000 - Calculate Pi Benchmark
  40000000 - 2000 - SHA-512 Benchmark Time
Graph500:
  26:
    sssp max_TEPS
    sssp median_TEPS
    bfs max_TEPS
    bfs median_TEPS
LAMMPS Molecular Dynamics Simulator
Apache Spark:
  40000000 - 100 - Broadcast Inner Join Test Time
  40000000 - 100 - Inner Join Test Time
  40000000 - 100 - Repartition Test Time
  40000000 - 100 - Group By Test Time
  40000000 - 100 - Calculate Pi Benchmark Using Dataframe
  40000000 - 100 - Calculate Pi Benchmark
  40000000 - 100 - SHA-512 Benchmark Time
Renaissance
Apache Spark:
  1000000 - 2000 - Broadcast Inner Join Test Time
  1000000 - 2000 - Inner Join Test Time
  1000000 - 2000 - Repartition Test Time
  1000000 - 2000 - Group By Test Time
  1000000 - 2000 - Calculate Pi Benchmark Using Dataframe
  1000000 - 2000 - Calculate Pi Benchmark
  1000000 - 2000 - SHA-512 Benchmark Time
Renaissance
PostgreSQL pgbench:
  100 - 250 - Read Only - Average Latency
  100 - 250 - Read Only
OpenFOAM:
  drivaerFastback, Medium Mesh Size - Execution Time
  drivaerFastback, Medium Mesh Size - Mesh Time
PostgreSQL pgbench:
  100 - 250 - Read Write - Average Latency
  100 - 250 - Read Write
Renaissance
Apache Spark:
  1000000 - 100 - Broadcast Inner Join Test Time
  1000000 - 100 - Inner Join Test Time
  1000000 - 100 - Repartition Test Time
  1000000 - 100 - Group By Test Time
  1000000 - 100 - Calculate Pi Benchmark Using Dataframe
  1000000 - 100 - Calculate Pi Benchmark
  1000000 - 100 - SHA-512 Benchmark Time
Timed Gem5 Compilation
Facebook RocksDB
VP9 libvpx Encoding
libavif avifenc
Renaissance
Blender
TensorFlow Lite
Blender
TNN
ASKAP:
  tConvolve MT - Degridding
  tConvolve MT - Gridding
Facebook RocksDB
libavif avifenc
TensorFlow Lite:
  Inception V4
  Inception ResNet V2
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
OpenSSL
Renaissance
PyHPC Benchmarks
ASKAP:
  tConvolve MPI - Gridding
  tConvolve MPI - Degridding
Facebook RocksDB
Renaissance
PostgreSQL pgbench:
  100 - 100 - Read Only - Average Latency
  100 - 100 - Read Only
  100 - 100 - Read Write - Average Latency
  100 - 100 - Read Write
High Performance Conjugate Gradient
GPAW
Apache Cassandra
VP9 libvpx Encoding
Blender
GROMACS
Renaissance:
  In-Memory Database Shootout
  Apache Spark Bayes
  Finagle HTTP Requests
VP9 libvpx Encoding
Aircrack-ng
nginx
Stress-NG
nginx
Sysbench
ASTC Encoder
Stress-NG
PyHPC Benchmarks:
  CPU - Numpy - 4194304 - Equation of State
  CPU - Numpy - 16384 - Isoneutral Mixing
Facebook RocksDB
OpenSSL:
  RSA4096:
    verify/s
    sign/s
Renaissance
NAS Parallel Benchmarks
VP9 libvpx Encoding
DaCapo Benchmark
NAS Parallel Benchmarks
DaCapo Benchmark
PyHPC Benchmarks
NAS Parallel Benchmarks
Timed FFmpeg Compilation
Timed MPlayer Compilation
Stress-NG:
  CPU Stress
  NUMA
  Vector Math
  Matrix Math
  System V Message Passing
NAS Parallel Benchmarks
TNN:
  CPU - MobileNet v2
  CPU - SqueezeNet v1.1
NAS Parallel Benchmarks
ASKAP
Coremark
DaCapo Benchmark
PyHPC Benchmarks
Redis:
  SET
  GET
ASKAP:
  tConvolve OpenMP - Degridding
  tConvolve OpenMP - Gridding
ASTC Encoder
PyHPC Benchmarks
libavif avifenc
NAS Parallel Benchmarks
DaCapo Benchmark
NAS Parallel Benchmarks
TNN
NAS Parallel Benchmarks
libavif avifenc:
  6
  10, Lossless
ASTC Encoder
NAS Parallel Benchmarks
LAMMPS Molecular Dynamics Simulator