Tau T2A 16 vCPUs

Benchmarking by amazon on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2208198-NE-2208123NE86&grs&sro.

Tau T2A: 32 vCPUs
  Processor: ARMv8 Neoverse-N1 (32 Cores)
  Motherboard: KVM Google Compute Engine
  Memory: 128GB
  Disk: 215GB nvme_card-pd
  Network: Google Compute Engine Virtual
  OS: Ubuntu 22.04
  Kernel: 5.15.0-1016-gcp (aarch64)
  Compiler: GCC 12.0.1 20220319
  File-System: ext4
  System Layer: KVM

m6g.8xlarge
  Processor: ARMv8 Neoverse-N1 (32 Cores)
  Motherboard: Amazon EC2 m6g.8xlarge (1.0 BIOS)
  Chipset: Amazon Device 0200
  Memory: 128GB
  Disk: 215GB Amazon Elastic Block Store
  Network: Amazon Elastic
  OS: Ubuntu 22.04
  Kernel: 5.15.0-1009-aws (aarch64)
  Compiler: GCC 12.0.1 20220319
  File-System: ext4
  System Layer: amazon

Kernel Details: Transparent Huge Pages: madvise
Environment Details: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Details: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details:
  Tau T2A: 32 vCPUs: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
  m6g.8xlarge: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected

Result overview: side-by-side values for every test, Tau T2A: 32 vCPUs vs. m6g.8xlarge; the full per-test results with standard errors and build options are listed in the individual sections below.

Facebook RocksDB

Test: Update Random

Facebook RocksDB 7.0.1 - Op/s, More Is Better
Tau T2A: 32 vCPUs: 211735 (SE +/- 705.83, N = 3) | m6g.8xlarge: 421971 (SE +/- 1212.60, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
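A note added for readability: the "SE +/- x, N = y" annotations used throughout this report are the standard error of the mean over the y runs of a test. Assuming the usual definition, with s the sample standard deviation across runs:

\mathrm{SE} = \frac{s}{\sqrt{N}}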

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 100 - Mode: Read Write

PostgreSQL pgbench 14.0 - TPS, More Is Better
Tau T2A: 32 vCPUs: 3383 (SE +/- 6.74, N = 3) | m6g.8xlarge: 5285 (SE +/- 10.72, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency

PostgreSQL pgbench 14.0 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 29.56 (SE +/- 0.06, N = 3) | m6g.8xlarge: 18.92 (SE +/- 0.04, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm
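A quick cross-check added here (not part of the original export): the reported average latencies are consistent with the client count divided by the measured throughput:

\text{latency} \approx \frac{\text{clients}}{\text{TPS}}, \qquad \frac{100}{3383} \approx 29.6\ \text{ms}, \qquad \frac{100}{5285} \approx 18.9\ \text{ms}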

Renaissance

Test: Finagle HTTP Requests

Renaissance 0.14 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 9430.8 (SE +/- 122.37, N = 3, MIN: 8793.75 / MAX: 9955.78) | m6g.8xlarge: 6674.3 (SE +/- 15.10, N = 3, MIN: 6412.79 / MAX: 6795.18)

Facebook RocksDB

Test: Read Random Write Random

Facebook RocksDB 7.0.1 - Op/s, More Is Better
Tau T2A: 32 vCPUs: 1321827 (SE +/- 9643.50, N = 15) | m6g.8xlarge: 1737021 (SE +/- 17104.02, N = 15)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

DaCapo Benchmark

Java Test: Tradebeans

DaCapo Benchmark 9.12-MR1 - msec, Fewer Is Better
Tau T2A: 32 vCPUs: 5954 (SE +/- 28.01, N = 4) | m6g.8xlarge: 4673 (SE +/- 46.20, N = 4)

Apache Cassandra

Test: Writes

Apache Cassandra 4.0 - Op/s, More Is Better
Tau T2A: 32 vCPUs: 87819 (SE +/- 777.36, N = 3) | m6g.8xlarge: 109525 (SE +/- 507.25, N = 3)

nginx

Concurrent Requests: 500

nginx 1.21.1 - Requests Per Second, More Is Better
Tau T2A: 32 vCPUs: 235749.36 (SE +/- 245.82, N = 3) | m6g.8xlarge: 286801.67 (SE +/- 1454.63, N = 3)
1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx

Concurrent Requests: 1000

nginx 1.21.1 - Requests Per Second, More Is Better
Tau T2A: 32 vCPUs: 233484.08 (SE +/- 479.91, N = 3) | m6g.8xlarge: 282425.09 (SE +/- 2142.75, N = 3)
1. (CC) gcc options: -lcrypt -lz -O3 -march=native

TNN

Target: CPU - Model: SqueezeNet v2

TNN 0.3 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 95.47 (SE +/- 0.00, N = 3, MIN: 95.15 / MAX: 96.88) | m6g.8xlarge: 114.95 (SE +/- 1.17, N = 3, MIN: 113.55 / MAX: 117.72)
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 0.005 (SE +/- 0.000, N = 14) | m6g.8xlarge: 0.006 (SE +/- 0.000, N = 3)

Coremark

CoreMark Size 666 - Iterations Per Second

Coremark 1.0 - Iterations/Sec, More Is Better
Tau T2A: 32 vCPUs: 700917.94 (SE +/- 385.56, N = 3) | m6g.8xlarge: 587539.28 (SE +/- 43.22, N = 3)
1. (CC) gcc options: -O2 -O3 -march=native -lrt" -lrt

TNN

Target: CPU - Model: SqueezeNet v1.1

TNN 0.3 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 301.15 (SE +/- 0.07, N = 3, MIN: 299.13 / MAX: 307.2) | m6g.8xlarge: 358.43 (SE +/- 0.08, N = 3, MIN: 357.7 / MAX: 359.41)
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

Sysbench

Test: CPU

Sysbench 1.0.20 - Events Per Second, More Is Better
Tau T2A: 32 vCPUs: 108241.61 (SE +/- 23.77, N = 3) | m6g.8xlarge: 90962.22 (SE +/- 123.73, N = 3)
1. (CC) gcc options: -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm

Facebook RocksDB

Test: Random Read

Facebook RocksDB 7.0.1 - Op/s, More Is Better
Tau T2A: 32 vCPUs: 124704201 (SE +/- 376574.31, N = 3) | m6g.8xlarge: 104825086 (SE +/- 726664.50, N = 13)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenSSL

Algorithm: RSA4096

OpenSSL 3.0 - verify/s, More Is Better
Tau T2A: 32 vCPUs: 128273.1 (SE +/- 29.86, N = 3) | m6g.8xlarge: 107872.3 (SE +/- 2.96, N = 3)
1. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl

NAS Parallel Benchmarks

Test / Class: EP.D

NAS Parallel Benchmarks 3.4 - Total Mop/s, More Is Better
Tau T2A: 32 vCPUs: 3265.68 (SE +/- 2.04, N = 3) | m6g.8xlarge: 2746.81 (SE +/- 1.15, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenSSL

Algorithm: RSA4096

OpenSSL 3.0 - sign/s, More Is Better
Tau T2A: 32 vCPUs: 1570.2 (SE +/- 0.06, N = 3) | m6g.8xlarge: 1320.9 (SE +/- 0.12, N = 3)
1. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl

OpenSSL

Algorithm: SHA256

OpenSSL 3.0 - byte/s, More Is Better
Tau T2A: 32 vCPUs: 25788919913 (SE +/- 119493320.18, N = 3) | m6g.8xlarge: 21748728233 (SE +/- 4842815.19, N = 3)
1. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl
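For readability, a conversion added here (not part of the original export), taking 1 GB = 10^9 bytes:

25{,}788{,}919{,}913\ \text{byte/s} \approx 25.8\ \text{GB/s}, \qquad 21{,}748{,}728{,}233\ \text{byte/s} \approx 21.7\ \text{GB/s}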

Stress-NG

Test: Vector Math

Stress-NG 0.14 - Bogo Ops/s, More Is Better
Tau T2A: 32 vCPUs: 97749.08 (SE +/- 190.70, N = 3) | m6g.8xlarge: 82451.97 (SE +/- 0.29, N = 3)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: CPU Stress

Stress-NG 0.14 - Bogo Ops/s, More Is Better
Tau T2A: 32 vCPUs: 8209.47 (SE +/- 4.23, N = 3) | m6g.8xlarge: 6927.34 (SE +/- 0.61, N = 3)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: Matrix Math

Stress-NG 0.14 - Bogo Ops/s, More Is Better
Tau T2A: 32 vCPUs: 151792.83 (SE +/- 9.80, N = 3) | m6g.8xlarge: 128190.24 (SE +/- 2.06, N = 3)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 69.77 (SE +/- 0.06, N = 15) | m6g.8xlarge: 82.60 (SE +/- 0.15, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 69.57 (SE +/- 0.08, N = 9) | m6g.8xlarge: 82.34 (SE +/- 0.23, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 69.92 (SE +/- 0.06, N = 15) | m6g.8xlarge: 82.62 (SE +/- 0.05, N = 11)

ASKAP

Test: tConvolve MT - Degridding

ASKAP 1.0 - Million Grid Points Per Second, More Is Better
Tau T2A: 32 vCPUs: 5522.07 (SE +/- 80.56, N = 15) | m6g.8xlarge: 6515.57 (SE +/- 4.40, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 69.79 (SE +/- 0.11, N = 12) | m6g.8xlarge: 82.32 (SE +/- 0.07, N = 12)

libavif avifenc

Encoder Speed: 2

libavif avifenc 0.10 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 169.64 (SE +/- 0.13, N = 3) | m6g.8xlarge: 199.97 (SE +/- 0.14, N = 3)
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

libavif avifenc

Encoder Speed: 0

libavif avifenc 0.10 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 266.34 (SE +/- 0.65, N = 3) | m6g.8xlarge: 313.27 (SE +/- 0.28, N = 3)
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

SPECjbb 2015

SPECjbb2015-Composite max-jOPS

SPECjbb 2015 - jOPS, More Is Better
Tau T2A: 32 vCPUs: 35075 | m6g.8xlarge: 41157

TNN

Target: CPU - Model: MobileNet v2

TNN 0.3 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 322.77 (SE +/- 0.05, N = 3, MIN: 319.63 / MAX: 326.43) | m6g.8xlarge: 378.55 (SE +/- 0.20, N = 3, MIN: 377.31 / MAX: 380.26)
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

Aircrack-ng

Aircrack-ng 1.7 - k/s, More Is Better
Tau T2A: 32 vCPUs: 33647.55 (SE +/- 287.54, N = 15) | m6g.8xlarge: 28780.71 (SE +/- 4.44, N = 3)
1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

ASKAP

Test: tConvolve MPI - Gridding

ASKAP 1.0 - Mpix/sec, More Is Better
Tau T2A: 32 vCPUs: 3899.28 (SE +/- 42.99, N = 15) | m6g.8xlarge: 4550.23 (SE +/- 6.58, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Apache Spark

Row Count: 1000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 2.13 (SE +/- 0.02, N = 15) | m6g.8xlarge: 1.83 (SE +/- 0.01, N = 3)

libavif avifenc

Encoder Speed: 6

libavif avifenc 0.10 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 6.682 (SE +/- 0.020, N = 3) | m6g.8xlarge: 7.755 (SE +/- 0.039, N = 3)
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

SPECjbb 2015

SPECjbb2015-Composite critical-jOPS

SPECjbb 2015 - jOPS, More Is Better
Tau T2A: 32 vCPUs: 22955 | m6g.8xlarge: 26638

ASTC Encoder

Preset: Exhaustive

ASTC Encoder 3.2 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 68.66 (SE +/- 0.08, N = 3) | m6g.8xlarge: 79.62 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -march=native -flto -pthread

ASTC Encoder

Preset: Medium

ASTC Encoder 3.2 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 5.9825 (SE +/- 0.0035, N = 3) | m6g.8xlarge: 6.9189 (SE +/- 0.0111, N = 3)
1. (CXX) g++ options: -O3 -march=native -flto -pthread

ASTC Encoder

Preset: Thorough

ASTC Encoder 3.2 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 7.1619 (SE +/- 0.0033, N = 3) | m6g.8xlarge: 8.2706 (SE +/- 0.0024, N = 3)
1. (CXX) g++ options: -O3 -march=native -flto -pthread

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 0.014 (SE +/- 0.000, N = 15) | m6g.8xlarge: 0.016 (SE +/- 0.000, N = 3)

VP9 libvpx Encoding

Speed: Speed 5 - Input: Bosphorus 1080p

VP9 libvpx Encoding 1.10.0 - Frames Per Second, More Is Better
Tau T2A: 32 vCPUs: 12.11 (SE +/- 0.01, N = 3) | m6g.8xlarge: 10.65 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Renaissance

Test: In-Memory Database Shootout

Renaissance 0.14 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 6566.5 (SE +/- 37.00, N = 3, MIN: 5609.26 / MAX: 13128.6) | m6g.8xlarge: 5783.7 (SE +/- 39.12, N = 3, MIN: 5350.98 / MAX: 6150.47)

libavif avifenc

Encoder Speed: 6, Lossless

libavif avifenc 0.10 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 10.34 (SE +/- 0.00, N = 3) | m6g.8xlarge: 11.73 (SE +/- 0.14, N = 3)
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

LAMMPS Molecular Dynamics Simulator

Model: 20k Atoms

LAMMPS Molecular Dynamics Simulator 23Jun2022 - ns/day, More Is Better
Tau T2A: 32 vCPUs: 16.55 (SE +/- 0.00, N = 3) | m6g.8xlarge: 14.64 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -march=native -ldl

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 0.392 (SE +/- 0.001, N = 3) | m6g.8xlarge: 0.347 (SE +/- 0.001, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 39.22 (SE +/- 0.55, N = 12) | m6g.8xlarge: 34.77 (SE +/- 0.37, N = 12)

Renaissance

Test: Savina Reactors.IO

Renaissance 0.14 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 10705.9 (SE +/- 131.70, N = 4, MIN: 10505.49 / MAX: 14847.21) | m6g.8xlarge: 12067.9 (SE +/- 101.80, N = 3, MIN: 11869.31 / MAX: 19424.22)

Redis

Test: SET

Redis 6.0.9 - Requests Per Second, More Is Better
Tau T2A: 32 vCPUs: 1411234.92 (SE +/- 9294.72, N = 3) | m6g.8xlarge: 1254760.84 (SE +/- 6854.78, N = 3)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

ASKAP

Test: tConvolve MPI - Degridding

ASKAP 1.0 - Mpix/sec, More Is Better
Tau T2A: 32 vCPUs: 3962.08 (SE +/- 54.84, N = 15) | m6g.8xlarge: 4453.69 (SE +/- 6.31, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 112.47 (SE +/- 0.10, N = 3) | m6g.8xlarge: 126.12 (SE +/- 0.21, N = 3)

LAMMPS Molecular Dynamics Simulator

Model: Rhodopsin Protein

LAMMPS Molecular Dynamics Simulator 23Jun2022 - ns/day, More Is Better
Tau T2A: 32 vCPUs: 16.60 (SE +/- 0.01, N = 3) | m6g.8xlarge: 14.82 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -march=native -ldl

Apache Spark

Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 4.96 (SE +/- 0.04, N = 15) | m6g.8xlarge: 4.45 (SE +/- 0.03, N = 11)

TNN

Target: CPU - Model: DenseNet

TNN 0.3 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 3056.90 (SE +/- 6.90, N = 3, MIN: 2928.19 / MAX: 3237.58) | m6g.8xlarge: 3406.54 (SE +/- 5.54, N = 3, MIN: 3340.13 / MAX: 3491.43)
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

Graph500

Scale: 26

Graph500 3.0 - sssp median_TEPS, More Is Better
Tau T2A: 32 vCPUs: 124702000 | m6g.8xlarge: 138918000
1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi
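A note added for context: Graph500 results are reported in TEPS (traversed edges per second). Assuming the standard Graph500 definition, for a search that traverses m edges in t seconds:

\mathrm{TEPS} = \frac{m}{t}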

ASKAP

Test: tConvolve OpenMP - Gridding

ASKAP 1.0 - Million Grid Points Per Second, More Is Better
Tau T2A: 32 vCPUs: 7262.74 (SE +/- 66.63, N = 3) | m6g.8xlarge: 8068.36 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency

PostgreSQL pgbench 14.0 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 0.304 (SE +/- 0.002, N = 3) | m6g.8xlarge: 0.274 (SE +/- 0.002, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

VP9 libvpx Encoding

Speed: Speed 0 - Input: Bosphorus 1080p

VP9 libvpx Encoding 1.10.0 - Frames Per Second, More Is Better
Tau T2A: 32 vCPUs: 4.99 (SE +/- 0.01, N = 3) | m6g.8xlarge: 4.50 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Apache Spark

Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 46.30 (SE +/- 0.45, N = 9) | m6g.8xlarge: 41.76 (SE +/- 0.11, N = 3)

GROMACS

Implementation: MPI CPU - Input: water_GMX50_bare

GROMACS 2022.1 - Ns Per Day, More Is Better
Tau T2A: 32 vCPUs: 1.718 (SE +/- 0.010, N = 3) | m6g.8xlarge: 1.554 (SE +/- 0.001, N = 3)
1. (CXX) g++ options: -O3 -march=native

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 100 - Mode: Read Only

PostgreSQL pgbench 14.0 - TPS, More Is Better
Tau T2A: 32 vCPUs: 329539 (SE +/- 1811.74, N = 3) | m6g.8xlarge: 364193 (SE +/- 3146.40, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 2.055 (SE +/- 0.002, N = 3) | m6g.8xlarge: 1.862 (SE +/- 0.001, N = 3)

VP9 libvpx Encoding

Speed: Speed 0 - Input: Bosphorus 4K

VP9 libvpx Encoding 1.10.0 - Frames Per Second, More Is Better
Tau T2A: 32 vCPUs: 2.13 (SE +/- 0.00, N = 3) | m6g.8xlarge: 1.93 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

DaCapo Benchmark

Java Test: Jython

DaCapo Benchmark 9.12-MR1 - msec, Fewer Is Better
Tau T2A: 32 vCPUs: 5079 (SE +/- 5.52, N = 4) | m6g.8xlarge: 5604 (SE +/- 24.54, N = 4)

VP9 libvpx Encoding

Speed: Speed 5 - Input: Bosphorus 4K

VP9 libvpx Encoding 1.10.0 - Frames Per Second, More Is Better
Tau T2A: 32 vCPUs: 6.99 (SE +/- 0.02, N = 3) | m6g.8xlarge: 6.34 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 4.80 (SE +/- 0.01, N = 15) | m6g.8xlarge: 5.29 (SE +/- 0.01, N = 11)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 4.76 (SE +/- 0.01, N = 9) | m6g.8xlarge: 5.24 (SE +/- 0.02, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 4.78 (SE +/- 0.02, N = 12) | m6g.8xlarge: 5.26 (SE +/- 0.01, N = 12)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 6.72 (SE +/- 0.05, N = 15) | m6g.8xlarge: 6.11 (SE +/- 0.03, N = 11)

Blender

Blend File: Classroom - Compute: CPU-Only

Blender - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 249.89 (SE +/- 0.07, N = 3) | m6g.8xlarge: 274.81 (SE +/- 0.46, N = 3)

Stress-NG

Test: NUMA

Stress-NG 0.14 - Bogo Ops/s, More Is Better
Tau T2A: 32 vCPUs: 549.13 (SE +/- 1.69, N = 3) | m6g.8xlarge: 603.25 (SE +/- 0.53, N = 3)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 2.60 (SE +/- 0.03, N = 15) | m6g.8xlarge: 2.37 (SE +/- 0.03, N = 11)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 4.79 (SE +/- 0.01, N = 15) | m6g.8xlarge: 5.24 (SE +/- 0.04, N = 3)

Redis

Test: GET

Redis 6.0.9 - Requests Per Second, More Is Better
Tau T2A: 32 vCPUs: 1926297.79 (SE +/- 10764.67, N = 3) | m6g.8xlarge: 1761869.13 (SE +/- 5705.98, N = 3)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 214.41 (SE +/- 0.42, N = 3) | m6g.8xlarge: 233.99 (SE +/- 0.24, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 2.87 (SE +/- 0.04, N = 15) | m6g.8xlarge: 2.63 (SE +/- 0.03, N = 11)

Facebook RocksDB

Test: Read While Writing

Facebook RocksDB 7.0.1 - Op/s, More Is Better
Tau T2A: 32 vCPUs: 2610992 (SE +/- 32390.32, N = 12) | m6g.8xlarge: 2839853 (SE +/- 25190.27, N = 7)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

NAS Parallel Benchmarks

Test / Class: LU.C

NAS Parallel Benchmarks 3.4 - Total Mop/s, More Is Better
Tau T2A: 32 vCPUs: 87702.30 (SE +/- 137.48, N = 3) | m6g.8xlarge: 80791.93 (SE +/- 115.81, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Graph500

Scale: 26

Graph500 3.0 - sssp max_TEPS, More Is Better
Tau T2A: 32 vCPUs: 169542000 | m6g.8xlarge: 183940000
1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency

PostgreSQL pgbench 14.0 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 0.803 (SE +/- 0.012, N = 12) | m6g.8xlarge: 0.742 (SE +/- 0.008, N = 4)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

Renaissance

Test: Apache Spark Bayes

Renaissance 0.14 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 766.4 (SE +/- 9.73, N = 3, MIN: 495.95 / MAX: 1178.88) | m6g.8xlarge: 828.8 (SE +/- 6.49, N = 15, MIN: 538.38 / MAX: 1090.62)

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 250 - Mode: Read Only

PostgreSQL pgbench 14.0 - TPS, More Is Better
Tau T2A: 32 vCPUs: 312239 (SE +/- 4561.68, N = 12) | m6g.8xlarge: 337416 (SE +/- 3802.23, N = 4)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

libavif avifenc

Encoder Speed: 10, Lossless

libavif avifenc 0.10 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 6.775 (SE +/- 0.072, N = 3) | m6g.8xlarge: 7.285 (SE +/- 0.019, N = 3)
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

Stress-NG

Test: System V Message Passing

Stress-NG 0.14 - Bogo Ops/s, More Is Better
Tau T2A: 32 vCPUs: 6128517.10 (SE +/- 7551.56, N = 3) | m6g.8xlarge: 6581912.04 (SE +/- 1469.54, N = 3)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

ASKAP

Test: tConvolve MT - Gridding

ASKAP 1.0 - Million Grid Points Per Second, More Is Better
Tau T2A: 32 vCPUs: 4456.55 (SE +/- 35.89, N = 15) | m6g.8xlarge: 4785.75 (SE +/- 7.65, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Graph500

Scale: 26

Graph500 3.0 - bfs median_TEPS, More Is Better
Tau T2A: 32 vCPUs: 477377000 | m6g.8xlarge: 510761000
1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Timed MPlayer Compilation

Time To Compile

Timed MPlayer Compilation 1.5 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 28.93 (SE +/- 0.35, N = 4) | m6g.8xlarge: 30.84 (SE +/- 0.13, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 2.01 (SE +/- 0.03, N = 15) | m6g.8xlarge: 1.89 (SE +/- 0.03, N = 3)

NAS Parallel Benchmarks

Test / Class: IS.D

NAS Parallel Benchmarks 3.4 - Total Mop/s, More Is Better
Tau T2A: 32 vCPUs: 1822.77 (SE +/- 0.86, N = 3) | m6g.8xlarge: 1937.35 (SE +/- 5.23, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 22.84 (SE +/- 0.32, N = 12) | m6g.8xlarge: 21.52 (SE +/- 0.18, N = 12)

High Performance Conjugate Gradient

High Performance Conjugate Gradient 3.1 - GFLOP/s, More Is Better
Tau T2A: 32 vCPUs: 22.09 (SE +/- 0.01, N = 3) | m6g.8xlarge: 20.84 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

ASKAP

Test: tConvolve OpenMP - Degridding

ASKAP 1.0 - Million Grid Points Per Second, More Is Better
Tau T2A: 32 vCPUs: 9181.24 (SE +/- 0.00, N = 3) | m6g.8xlarge: 9626.54 (SE +/- 117.40, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Stress-NG

Test: Futex

Stress-NG 0.14 - Bogo Ops/s, More Is Better
Tau T2A: 32 vCPUs: 1437660.62 (SE +/- 15026.23, N = 3) | m6g.8xlarge: 1503894.12 (SE +/- 17854.23, N = 15)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Timed FFmpeg Compilation

Time To Compile

Timed FFmpeg Compilation 4.4 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 38.96 (SE +/- 0.16, N = 3) | m6g.8xlarge: 40.61 (SE +/- 0.09, N = 3)

Renaissance

Test: ALS Movie Lens

Renaissance 0.14 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 17606.6 (SE +/- 57.21, N = 3, MIN: 17544.26 / MAX: 19037.24) | m6g.8xlarge: 16918.7 (SE +/- 233.21, N = 3, MIN: 16601.1 / MAX: 18787.37)

TensorFlow Lite

Model: Inception ResNet V2

TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better
Tau T2A: 32 vCPUs: 33994.9 (SE +/- 379.42, N = 3) | m6g.8xlarge: 35336.4 (SE +/- 396.60, N = 15)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 24.36 (SE +/- 0.12, N = 9) | m6g.8xlarge: 25.28 (SE +/- 0.11, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 22.22 (SE +/- 0.24, N = 12) | m6g.8xlarge: 23.05 (SE +/- 0.09, N = 12)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 30.32 (SE +/- 0.44, N = 9) | m6g.8xlarge: 31.44 (SE +/- 0.34, N = 3)

NAS Parallel Benchmarks

Test / Class: BT.C

NAS Parallel Benchmarks 3.4 - Total Mop/s, More Is Better
Tau T2A: 32 vCPUs: 69530.64 (SE +/- 272.46, N = 3) | m6g.8xlarge: 67111.72 (SE +/- 44.47, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Renaissance

Test: Random Forest

Renaissance 0.14 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 1047.3 (SE +/- 12.92, N = 3, MIN: 904.64 / MAX: 1280.13) | m6g.8xlarge: 1084.5 (SE +/- 2.98, N = 3, MIN: 958.08 / MAX: 1325.97)

NAS Parallel Benchmarks

Test / Class: SP.C

NAS Parallel Benchmarks 3.4 - Total Mop/s, More Is Better
Tau T2A: 32 vCPUs: 26843.58 (SE +/- 31.60, N = 3) | m6g.8xlarge: 27767.10 (SE +/- 32.63, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

Test / Class: FT.C

NAS Parallel Benchmarks 3.4 - Total Mop/s, More Is Better
Tau T2A: 32 vCPUs: 52309.81 (SE +/- 41.18, N = 3) | m6g.8xlarge: 50732.78 (SE +/- 144.89, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

Test / Class: MG.C

NAS Parallel Benchmarks 3.4 - Total Mop/s, More Is Better
Tau T2A: 32 vCPUs: 50939.05 (SE +/- 31.40, N = 3) | m6g.8xlarge: 49445.81 (SE +/- 47.68, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Apache Spark

Row Count: 40000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 27.64 (SE +/- 0.16, N = 9) | m6g.8xlarge: 26.83 (SE +/- 0.31, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 28.66 (SE +/- 0.19, N = 12) | m6g.8xlarge: 29.52 (SE +/- 0.13, N = 12)

TensorFlow Lite

Model: Inception V4

TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better
Tau T2A: 32 vCPUs: 31657.3 (SE +/- 149.01, N = 3) | m6g.8xlarge: 32600.7 (SE +/- 296.19, N = 15)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 31.98 (SE +/- 0.26, N = 9) | m6g.8xlarge: 32.92 (SE +/- 0.39, N = 3)

Renaissance

Test: Scala Dotty

Renaissance 0.14 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 1871.7 (SE +/- 32.38, N = 11, MIN: 1370.64 / MAX: 3034.49) | m6g.8xlarge: 1821.7 (SE +/- 25.25, N = 12, MIN: 1387.66 / MAX: 2713.36)

Renaissance

Test: Genetic Algorithm Using Jenetics + Futures

Renaissance 0.14 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 3084.2 (SE +/- 8.81, N = 3, MIN: 2993.8 / MAX: 3192.9) | m6g.8xlarge: 3167.9 (SE +/- 14.17, N = 3, MIN: 3061.76 / MAX: 3232.32)

Renaissance

Test: Apache Spark ALS

Renaissance 0.14 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 4118.3 (SE +/- 32.04, N = 3, MIN: 3925.84 / MAX: 4358.22) | m6g.8xlarge: 4229.8 (SE +/- 34.35, N = 9, MIN: 4008.75 / MAX: 4594.43)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 26.55 (SE +/- 0.17, N = 12) | m6g.8xlarge: 27.21 (SE +/- 0.10, N = 12)

ASKAP

Test: Hogbom Clean OpenMP

ASKAP 1.0 - Iterations Per Second, More Is Better
Tau T2A: 32 vCPUs: 996.70 (SE +/- 3.30, N = 3) | m6g.8xlarge: 1020.48 (SE +/- 6.01, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Renaissance

Test: Apache Spark PageRank

Renaissance 0.14 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 5174.3 (SE +/- 77.61, N = 12, MIN: 4316.47 / MAX: 6446.52) | m6g.8xlarge: 5297.4 (SE +/- 49.78, N = 3, MIN: 4909.93 / MAX: 5385.51)

NAS Parallel Benchmarks

Test / Class: CG.C

NAS Parallel Benchmarks 3.4 - Total Mop/s, More Is Better
Tau T2A: 32 vCPUs: 21433.92 (SE +/- 35.67, N = 3) | m6g.8xlarge: 20938.75 (SE +/- 28.45, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Graph500

Scale: 26

Graph500 3.0 - bfs max_TEPS, More Is Better
Tau T2A: 32 vCPUs: 508372000 | m6g.8xlarge: 519646000
1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

GPAW

Input: Carbon Nanotube

GPAW 22.1 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 130.35 (SE +/- 0.30, N = 3) | m6g.8xlarge: 132.68 (SE +/- 0.03, N = 3)
1. (CC) gcc options: -shared -fwrapv -O2 -O3 -march=native -lxc -lblas -lmpi

NAS Parallel Benchmarks

Test / Class: SP.B

NAS Parallel Benchmarks 3.4 - Total Mop/s, More Is Better
Tau T2A: 32 vCPUs: 34381.91 (SE +/- 38.20, N = 3) | m6g.8xlarge: 34983.68 (SE +/- 44.23, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Execution Time

OpenFOAM 9 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 994.53 | m6g.8xlarge: 977.47
1. (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 3.723 (SE +/- 0.016, N = 3) | m6g.8xlarge: 3.660 (SE +/- 0.021, N = 3)

Timed Gem5 Compilation

Time To Compile

Timed Gem5 Compilation 21.2 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 312.12 (SE +/- 1.96, N = 3) | m6g.8xlarge: 316.28 (SE +/- 0.15, N = 3)

TensorFlow Lite

Model: SqueezeNet

TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better
Tau T2A: 32 vCPUs: 3853.90 (SE +/- 31.57, N = 8) | m6g.8xlarge: 3806.31 (SE +/- 54.35, N = 15)

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Mesh Time

OpenFOAM 9 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 206.4 | m6g.8xlarge: 208.2
1. (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 0.915 (SE +/- 0.005, N = 3) | m6g.8xlarge: 0.915 (SE +/- 0.006, N = 3)

Stress-NG

Test: CPU Cache

Stress-NG 0.14 - Bogo Ops/s, More Is Better
Tau T2A: 32 vCPUs: 566.91 (SE +/- 0.28, N = 3) | m6g.8xlarge: 45.98 (SE +/- 2.14, N = 12)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency

PostgreSQL pgbench 14.0 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 114.80 (SE +/- 7.17, N = 12) | m6g.8xlarge: 45.26 (SE +/- 0.05, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 250 - Mode: Read Write

PostgreSQL pgbench 14.0 - TPS, More Is Better
Tau T2A: 32 vCPUs: 2282 (SE +/- 154.25, N = 12) | m6g.8xlarge: 5524 (SE +/- 6.46, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

TensorFlow Lite

Model: Mobilenet Quant

TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better
Tau T2A: 32 vCPUs: 3550.65 (SE +/- 14.04, N = 3) | m6g.8xlarge: 4200.71 (SE +/- 72.68, N = 15)

TensorFlow Lite

Model: Mobilenet Float

TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better
Tau T2A: 32 vCPUs: 2093.25 (SE +/- 17.55, N = 3) | m6g.8xlarge: 2230.11 (SE +/- 55.66, N = 15)

TensorFlow Lite

Model: NASNet Mobile

TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better
Tau T2A: 32 vCPUs: 28372.8 (SE +/- 355.48, N = 3) | m6g.8xlarge: 29037.2 (SE +/- 527.69, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 2.12 (SE +/- 0.02, N = 15) | m6g.8xlarge: 2.03 (SE +/- 0.05, N = 11)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 1.68 (SE +/- 0.03, N = 15) | m6g.8xlarge: 1.63 (SE +/- 0.08, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 6.72 (SE +/- 0.23, N = 15) | m6g.8xlarge: 5.61 (SE +/- 0.06, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better
Tau T2A: 32 vCPUs: 4.79 (SE +/- 0.11, N = 15) | m6g.8xlarge: 4.14 (SE +/- 0.02, N = 3)

Renaissance

Test: Akka Unbalanced Cobwebbed Tree

Renaissance 0.14 - ms, Fewer Is Better
Tau T2A: 32 vCPUs: 29296.7 (SE +/- 344.06, N = 4, MIN: 20859.52 / MAX: 30225.51) | m6g.8xlarge: 30764.8 (SE +/- 1085.96, N = 6, MIN: 23349.61 / MAX: 36054.32)

DaCapo Benchmark

Java Test: Tradesoap

DaCapo Benchmark 9.12-MR1 - msec, Fewer Is Better
Tau T2A: 32 vCPUs: 5015 (SE +/- 95.95, N = 20) | m6g.8xlarge: 3584 (SE +/- 11.76, N = 4)

DaCapo Benchmark

Java Test: H2

DaCapo Benchmark 9.12-MR1 - msec, Fewer Is Better
Tau T2A: 32 vCPUs: 5175 (SE +/- 83.04, N = 20) | m6g.8xlarge: 4272 (SE +/- 39.63, N = 4)


Phoronix Test Suite v10.8.5