Tau T2A 16 vCPUs

Google Compute Engine Tau T2A instances (ARMv8 Neoverse-N1) with 8, 16, and 32 vCPUs. KVM testing on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2208123-NE-2208114NE01&grr&sor.
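The same comparison can be re-run locally by passing the OpenBenchmarking.org result ID to the Phoronix Test Suite. The snippet below is a minimal sketch, assuming phoronix-test-suite is installed and on the PATH; it simply shells out to the suite from Python and lets it fetch and execute the tests from the public result.

    import subprocess

    # Result ID taken from the OpenBenchmarking.org URL above.
    RESULT_ID = "2208123-NE-2208114NE01"

    # "phoronix-test-suite benchmark <result ID>" runs the tests from that
    # public result and merges the local numbers into the comparison.
    # Assumes the Phoronix Test Suite is already installed and on PATH.
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)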

Tau T2A 16 vCPUs: Processor: ARMv8 Neoverse-N1 (16 Cores), Memory: 64GB, Kernel: 5.15.0-1013-gcp (aarch64)
Tau T2A 8 vCPUs: Processor: ARMv8 Neoverse-N1 (8 Cores), Memory: 32GB, Kernel: 5.15.0-1013-gcp (aarch64)
Tau T2A 32 vCPUs: Processor: ARMv8 Neoverse-N1 (32 Cores), Memory: 128GB, Kernel: 5.15.0-1016-gcp (aarch64)

Common to all configurations: Motherboard: KVM Google Compute Engine, Disk: 215GB nvme_card-pd, Network: Google Compute Engine Virtual, OS: Ubuntu 22.04, Compiler: GCC 12.0.1 20220319, File-System: ext4, System Layer: KVM

Kernel Details: Transparent Huge Pages: madvise
Environment Details: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Details: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details:
Tau T2A 16 vCPUs: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Tau T2A 8 vCPUs: same as 16 vCPUs
Tau T2A 32 vCPUs: same as 16 vCPUs, plus retbleed: Not affected

Result Overview: the composite table of per-test results for the 8, 16, and 32 vCPU configurations is reproduced test by test in the sections below.

SPECjbb 2015

SPECjbb2015-Composite critical-jOPS

SPECjbb 2015 (jOPS, More Is Better): 32 vCPUs: 22955; 16 vCPUs: 9207; 8 vCPUs: 3921

SPECjbb 2015

SPECjbb2015-Composite max-jOPS

SPECjbb 2015 (jOPS, More Is Better): 32 vCPUs: 35075; 16 vCPUs: 18092; 8 vCPUs: 9158

LAMMPS Molecular Dynamics Simulator

Model: 20k Atoms

LAMMPS Molecular Dynamics Simulator 23Jun2022 (ns/day, More Is Better): 32 vCPUs: 16.550 (SE +/- 0.004, N = 3); 16 vCPUs: 8.499 (SE +/- 0.112, N = 3); 8 vCPUs: 4.662 (SE +/- 0.025, N = 3)
1. (CXX) g++ options: -O3 -march=native -ldl

Renaissance

Test: Akka Unbalanced Cobwebbed Tree

Renaissance 0.14 (ms, Fewer Is Better): 8 vCPUs: 16322.4 (SE +/- 267.82, N = 9, MIN: 10733.44 / MAX: 19126.93); 16 vCPUs: 24004.7 (SE +/- 164.12, N = 3, MIN: 18739.09 / MAX: 24332.49); 32 vCPUs: 29296.7 (SE +/- 344.06, N = 4, MIN: 20859.52 / MAX: 30225.51)

Renaissance

Test: Savina Reactors.IO

Renaissance 0.14 (ms, Fewer Is Better): 32 vCPUs: 10705.9 (SE +/- 131.70, N = 4, MIN: 10505.49 / MAX: 14847.21); 16 vCPUs: 15981.5 (SE +/- 583.25, N = 12, MIN: 12776.53 / MAX: 36273.51); 8 vCPUs: 26456.4 (SE +/- 435.60, N = 9, MIN: 13667.16 / MAX: 42318.14)

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Execution Time

OpenFOAM 9 (Seconds, Fewer Is Better): 32 vCPUs: 994.53; 16 vCPUs: 1534.72; 8 vCPUs: 2426.16
Per-configuration linker notes: -lfoamToVTK -ldynamicMesh -llagrangian -lfileFormats (two configurations); -ltransportModels -lspecie -lfiniteVolume -lfvModels -lmeshTools -lsampling (one configuration)
1. (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Mesh Time

OpenFOAM 9 (Seconds, Fewer Is Better): 32 vCPUs: 206.40; 16 vCPUs: 303.71; 8 vCPUs: 425.95
Per-configuration linker notes: -lfoamToVTK -ldynamicMesh -llagrangian -lfileFormats (two configurations); -ltransportModels -lspecie -lfiniteVolume -lfvModels -lmeshTools -lsampling (one configuration)
1. (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lgenericPatchFields -lOpenFOAM -ldl -lm

Graph500

Scale: 26

Graph500 3.0 (bfs max_TEPS, More Is Better): 32 vCPUs: 508372000; 16 vCPUs: 262563000
1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Graph500

Scale: 26

Graph500 3.0 (sssp max_TEPS, More Is Better): 32 vCPUs: 169542000; 16 vCPUs: 95265500
1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Graph500

Scale: 26

Graph500 3.0 (bfs median_TEPS, More Is Better): 32 vCPUs: 477377000; 16 vCPUs: 257478000
1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Graph500

Scale: 26

Graph500 3.0 (sssp median_TEPS, More Is Better): 32 vCPUs: 124702000; 16 vCPUs: 70750200
1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 26.55 (SE +/- 0.17, N = 12); 16 vCPUs: 42.75 (SE +/- 0.40, N = 3); 8 vCPUs: 74.71 (SE +/- 0.33, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 28.66 (SE +/- 0.19, N = 12); 16 vCPUs: 44.39 (SE +/- 0.73, N = 3); 8 vCPUs: 78.00 (SE +/- 1.11, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 22.22 (SE +/- 0.24, N = 12); 16 vCPUs: 35.45 (SE +/- 0.20, N = 3); 8 vCPUs: 66.27 (SE +/- 0.36, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 22.84 (SE +/- 0.32, N = 12); 16 vCPUs: 30.70 (SE +/- 0.23, N = 3); 8 vCPUs: 45.61 (SE +/- 0.57, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 4.78 (SE +/- 0.02, N = 12); 16 vCPUs: 8.34 (SE +/- 0.03, N = 3); 8 vCPUs: 15.71 (SE +/- 0.00, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 69.79 (SE +/- 0.11, N = 12); 16 vCPUs: 137.04 (SE +/- 0.17, N = 3); 8 vCPUs: 277.83 (SE +/- 0.13, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 39.22 (SE +/- 0.55, N = 12); 16 vCPUs: 51.55 (SE +/- 0.08, N = 3); 8 vCPUs: 89.23 (SE +/- 0.52, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 31.98 (SE +/- 0.26, N = 9); 16 vCPUs: 45.29 (SE +/- 0.44, N = 3); 8 vCPUs: 80.26 (SE +/- 0.22, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 30.32 (SE +/- 0.44, N = 9); 16 vCPUs: 44.62 (SE +/- 0.72, N = 3); 8 vCPUs: 80.02 (SE +/- 0.19, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 24.36 (SE +/- 0.12, N = 9); 16 vCPUs: 37.22 (SE +/- 0.92, N = 3); 8 vCPUs: 68.68 (SE +/- 0.25, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 27.64 (SE +/- 0.16, N = 9); 16 vCPUs: 35.64 (SE +/- 0.24, N = 3); 8 vCPUs: 50.87 (SE +/- 0.53, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 4.76 (SE +/- 0.01, N = 9); 16 vCPUs: 8.40 (SE +/- 0.01, N = 3); 8 vCPUs: 15.65 (SE +/- 0.05, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 69.57 (SE +/- 0.08, N = 9); 16 vCPUs: 136.85 (SE +/- 0.06, N = 3); 8 vCPUs: 278.37 (SE +/- 0.14, N = 3)

Apache Spark

Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 46.30 (SE +/- 0.45, N = 9); 16 vCPUs: 51.40 (SE +/- 0.22, N = 3); 8 vCPUs: 93.67 (SE +/- 0.85, N = 3)

Blender

Blend File: Classroom - Compute: CPU-Only

Blender (Seconds, Fewer Is Better): 32 vCPUs: 249.89 (SE +/- 0.07, N = 3); 16 vCPUs: 506.10 (SE +/- 0.22, N = 3); 8 vCPUs: 1016.66 (SE +/- 1.99, N = 3)

Timed Gem5 Compilation

Time To Compile

Timed Gem5 Compilation 21.2 (Seconds, Fewer Is Better): 32 vCPUs: 312.12 (SE +/- 1.96, N = 3); 16 vCPUs: 495.54 (SE +/- 1.16, N = 3); 8 vCPUs: 917.39 (SE +/- 0.66, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 1.68 (SE +/- 0.03, N = 15); 16 vCPUs: 1.79 (SE +/- 0.04, N = 12); 8 vCPUs: 2.86 (SE +/- 0.03, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 2.13 (SE +/- 0.02, N = 15); 16 vCPUs: 2.22 (SE +/- 0.03, N = 12); 8 vCPUs: 3.48 (SE +/- 0.04, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 2.01 (SE +/- 0.03, N = 15); 16 vCPUs: 2.55 (SE +/- 0.03, N = 12); 8 vCPUs: 4.58 (SE +/- 0.02, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 16 vCPUs: 6.00 (SE +/- 0.05, N = 12); 32 vCPUs: 6.72 (SE +/- 0.23, N = 15); 8 vCPUs: 6.90 (SE +/- 0.17, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 4.79 (SE +/- 0.01, N = 15); 16 vCPUs: 8.39 (SE +/- 0.02, N = 12); 8 vCPUs: 15.92 (SE +/- 0.02, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 69.77 (SE +/- 0.06, N = 15); 16 vCPUs: 137.76 (SE +/- 0.11, N = 12); 8 vCPUs: 277.89 (SE +/- 0.17, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 4.79 (SE +/- 0.11, N = 15); 16 vCPUs: 4.93 (SE +/- 0.05, N = 12); 8 vCPUs: 6.28 (SE +/- 0.03, N = 3)

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender (Seconds, Fewer Is Better): 32 vCPUs: 214.41 (SE +/- 0.42, N = 3); 16 vCPUs: 426.04 (SE +/- 0.85, N = 3); 8 vCPUs: 841.18 (SE +/- 1.74, N = 3)

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency

PostgreSQL pgbench 14.0 (ms, Fewer Is Better): 32 vCPUs: 114.80 (SE +/- 7.17, N = 12); 16 vCPUs: 204.92 (SE +/- 2.67, N = 3); 8 vCPUs: 215.10 (SE +/- 2.96, N = 12)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 250 - Mode: Read Write

PostgreSQL pgbench 14.0 (TPS, More Is Better): 32 vCPUs: 2282 (SE +/- 154.25, N = 12); 16 vCPUs: 1220 (SE +/- 15.69, N = 3); 8 vCPUs: 1165 (SE +/- 15.88, N = 12)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

Renaissance

Test: Apache Spark PageRank

Renaissance 0.14 (ms, Fewer Is Better): 8 vCPUs: 5020.8 (SE +/- 37.35, N = 11, MIN: 4537.14 / MAX: 5676.41); 32 vCPUs: 5174.3 (SE +/- 77.61, N = 12, MIN: 4316.47 / MAX: 6446.52); 16 vCPUs: 5197.0 (SE +/- 49.42, N = 3, MIN: 4732.96 / MAX: 5397.5)

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency

PostgreSQL pgbench 14.0 (ms, Fewer Is Better): 32 vCPUs: 0.803 (SE +/- 0.012, N = 12); 16 vCPUs: 1.900 (SE +/- 0.021, N = 3); 8 vCPUs: 5.045 (SE +/- 0.060, N = 12)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 250 - Mode: Read Only

PostgreSQL pgbench 14.0 (TPS, More Is Better): 32 vCPUs: 312239 (SE +/- 4561.68, N = 12); 16 vCPUs: 131607 (SE +/- 1418.81, N = 3); 8 vCPUs: 49628 (SE +/- 588.20, N = 12)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

VP9 libvpx Encoding

Speed: Speed 0 - Input: Bosphorus 4K

VP9 libvpx Encoding 1.10.0 (Frames Per Second, More Is Better): 32 vCPUs: 2.13 (SE +/- 0.00, N = 3); 16 vCPUs: 2.04 (SE +/- 0.00, N = 3); 8 vCPUs: 1.92 (SE +/- 0.02, N = 6)
1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Renaissance

Test: ALS Movie Lens

Renaissance 0.14 (ms, Fewer Is Better): 8 vCPUs: 16183.5 (SE +/- 101.59, N = 3, MIN: 16078.67 / MAX: 18111.73); 16 vCPUs: 16797.5 (SE +/- 79.58, N = 3, MIN: 16713.33 / MAX: 18436.45); 32 vCPUs: 17606.6 (SE +/- 57.21, N = 3, MIN: 17544.26 / MAX: 19037.24)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 2.12 (SE +/- 0.02, N = 15); 16 vCPUs: 2.65 (SE +/- 0.05, N = 3); 8 vCPUs: 5.18 (SE +/- 0.07, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 2.87 (SE +/- 0.04, N = 15); 16 vCPUs: 3.65 (SE +/- 0.11, N = 3); 8 vCPUs: 5.98 (SE +/- 0.09, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 2.60 (SE +/- 0.03, N = 15); 16 vCPUs: 3.36 (SE +/- 0.01, N = 3); 8 vCPUs: 5.67 (SE +/- 0.01, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 6.72 (SE +/- 0.05, N = 15); 16 vCPUs: 7.43 (SE +/- 0.10, N = 3); 8 vCPUs: 8.85 (SE +/- 0.13, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 4.80 (SE +/- 0.01, N = 15); 16 vCPUs: 8.29 (SE +/- 0.03, N = 3); 8 vCPUs: 15.81 (SE +/- 0.07, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 69.92 (SE +/- 0.06, N = 15); 16 vCPUs: 137.17 (SE +/- 0.20, N = 3); 8 vCPUs: 278.47 (SE +/- 0.35, N = 3)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): 32 vCPUs: 4.96 (SE +/- 0.04, N = 15); 16 vCPUs: 5.91 (SE +/- 0.05, N = 3); 8 vCPUs: 8.09 (SE +/- 0.08, N = 3)

libavif avifenc

Encoder Speed: 0

libavif avifenc 0.10 (Seconds, Fewer Is Better): 32 vCPUs: 266.34 (SE +/- 0.65, N = 3); 16 vCPUs: 328.97 (SE +/- 0.80, N = 3); 8 vCPUs: 456.24 (SE +/- 0.90, N = 3)
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

Facebook RocksDB

Test: Read While Writing

Facebook RocksDB 7.0.1 (Op/s, More Is Better): 32 vCPUs: 2610992 (SE +/- 32390.32, N = 12); 16 vCPUs: 1264826 (SE +/- 20446.66, N = 15); 8 vCPUs: 594702 (SE +/- 9067.95, N = 15)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Renaissance

Test: Scala Dotty

Renaissance 0.14 (ms, Fewer Is Better): 8 vCPUs: 1800.2 (SE +/- 18.70, N = 5, MIN: 1442.18 / MAX: 3536.53); 32 vCPUs: 1871.7 (SE +/- 32.38, N = 11, MIN: 1370.64 / MAX: 3034.49); 16 vCPUs: 1979.2 (SE +/- 12.38, N = 3, MIN: 1396.31 / MAX: 2843.97)

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender (Seconds, Fewer Is Better): 32 vCPUs: 112.47 (SE +/- 0.10, N = 3); 16 vCPUs: 226.26 (SE +/- 0.50, N = 3); 8 vCPUs: 447.71 (SE +/- 0.04, N = 3)

Renaissance

Test: In-Memory Database Shootout

Renaissance 0.14 (ms, Fewer Is Better): 8 vCPUs: 5537.1 (SE +/- 66.45, N = 4, MIN: 5069.67 / MAX: 6244.83); 16 vCPUs: 6261.4 (SE +/- 41.18, N = 15, MIN: 5664.61 / MAX: 10501.86); 32 vCPUs: 6566.5 (SE +/- 37.00, N = 3, MIN: 5609.26 / MAX: 13128.6)

GPAW

Input: Carbon Nanotube

GPAW 22.1 (Seconds, Fewer Is Better): 32 vCPUs: 130.35 (SE +/- 0.30, N = 3); 16 vCPUs: 208.97 (SE +/- 0.03, N = 3); 8 vCPUs: 381.20 (SE +/- 0.63, N = 3)
1. (CC) gcc options: -shared -fwrapv -O2 -O3 -march=native -lxc -lblas -lmpi

TNN

Target: CPU - Model: DenseNet

TNN 0.3 (ms, Fewer Is Better): 32 vCPUs: 3056.90 (SE +/- 6.90, N = 3, MIN: 2928.19 / MAX: 3237.58); 16 vCPUs: 3358.73 (SE +/- 12.43, N = 3, MIN: 3163.2 / MAX: 3575.85); 8 vCPUs: 3842.12 (SE +/- 9.75, N = 3, MIN: 3619.38 / MAX: 4060.16)
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

GROMACS

Implementation: MPI CPU - Input: water_GMX50_bare

GROMACS 2022.1 (Ns Per Day, More Is Better): 32 vCPUs: 1.718 (SE +/- 0.010, N = 3); 16 vCPUs: 0.880 (SE +/- 0.001, N = 3); 8 vCPUs: 0.450 (SE +/- 0.000, N = 3)
1. (CXX) g++ options: -O3 -march=native

Apache Cassandra

Test: Writes

Apache Cassandra 4.0 (Op/s, More Is Better): 32 vCPUs: 87819 (SE +/- 777.36, N = 3); 16 vCPUs: 39296 (SE +/- 256.55, N = 3); 8 vCPUs: 17862 (SE +/- 136.95, N = 10)

Facebook RocksDB

Test: Read Random Write Random

Facebook RocksDB 7.0.1 (Op/s, More Is Better): 32 vCPUs: 1321827 (SE +/- 9643.50, N = 15); 16 vCPUs: 884700 (SE +/- 1701.74, N = 3); 8 vCPUs: 548353 (SE +/- 4976.00, N = 15)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

libavif avifenc

Encoder Speed: 2

libavif avifenc 0.10 (Seconds, Fewer Is Better): 32 vCPUs: 169.64 (SE +/- 0.13, N = 3); 16 vCPUs: 194.77 (SE +/- 0.47, N = 3); 8 vCPUs: 245.60 (SE +/- 0.32, N = 3)
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

OpenSSL

Algorithm: SHA256

OpenSSL 3.0 (byte/s, More Is Better): 32 vCPUs: 25788919913 (SE +/- 119493320.18, N = 3); 16 vCPUs: 12926411527 (SE +/- 19283388.31, N = 3); 8 vCPUs: 6456083507 (SE +/- 19026629.44, N = 3)
1. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl

ASKAP

Test: tConvolve MPI - Gridding

ASKAP 1.0 (Mpix/sec, More Is Better): 32 vCPUs: 3899.28 (SE +/- 42.99, N = 15); 16 vCPUs: 3343.25 (SE +/- 32.26, N = 3); 8 vCPUs: 1977.89 (SE +/- 23.42, N = 15)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP

Test: tConvolve MPI - Degridding

ASKAP 1.0 (Mpix/sec, More Is Better): 32 vCPUs: 3962.08 (SE +/- 54.84, N = 15); 16 vCPUs: 2585.31 (SE +/- 12.80, N = 3); 8 vCPUs: 1325.06 (SE +/- 24.91, N = 15)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 (Seconds, Fewer Is Better): 8 vCPUs: 3.640 (SE +/- 0.020, N = 3); 32 vCPUs: 3.723 (SE +/- 0.016, N = 3); 16 vCPUs: 3.801 (SE +/- 0.018, N = 3)

ASTC Encoder

Preset: Exhaustive

ASTC Encoder 3.2 (Seconds, Fewer Is Better): 32 vCPUs: 68.66 (SE +/- 0.08, N = 3); 16 vCPUs: 137.62 (SE +/- 0.04, N = 3); 8 vCPUs: 276.88 (SE +/- 3.08, N = 3)
1. (CXX) g++ options: -O3 -march=native -flto -pthread

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency

PostgreSQL pgbench 14.0 (ms, Fewer Is Better): 32 vCPUs: 0.304 (SE +/- 0.002, N = 3); 16 vCPUs: 0.633 (SE +/- 0.003, N = 3); 8 vCPUs: 1.844 (SE +/- 0.023, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 100 - Mode: Read Only

PostgreSQL pgbench 14.0 (TPS, More Is Better): 32 vCPUs: 329539 (SE +/- 1811.74, N = 3); 16 vCPUs: 157894 (SE +/- 697.61, N = 3); 8 vCPUs: 54237 (SE +/- 663.61, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency

PostgreSQL pgbench 14.0 (ms, Fewer Is Better): 32 vCPUs: 29.56 (SE +/- 0.06, N = 3); 8 vCPUs: 39.77 (SE +/- 0.15, N = 3); 16 vCPUs: 39.87 (SE +/- 0.27, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 100 - Mode: Read Write

PostgreSQL pgbench 14.0 (TPS, More Is Better): 32 vCPUs: 3383 (SE +/- 6.74, N = 3); 8 vCPUs: 2515 (SE +/- 9.80, N = 3); 16 vCPUs: 2508 (SE +/- 17.11, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

Renaissance

Test: Genetic Algorithm Using Jenetics + Futures

Renaissance 0.14 (ms, Fewer Is Better): 8 vCPUs: 3001.6 (SE +/- 33.55, N = 3, MIN: 2862.8 / MAX: 3206.85); 16 vCPUs: 3067.3 (SE +/- 14.58, N = 3, MIN: 3011.77 / MAX: 3250.64); 32 vCPUs: 3084.2 (SE +/- 8.81, N = 3, MIN: 2993.8 / MAX: 3192.9)

Renaissance

Test: Apache Spark ALS

Renaissance 0.14 (ms, Fewer Is Better): 16 vCPUs: 3906.7 (SE +/- 27.36, N = 3, MIN: 3730.93 / MAX: 4142.3); 32 vCPUs: 4118.3 (SE +/- 32.04, N = 3, MIN: 3925.84 / MAX: 4358.22); 8 vCPUs: 5725.2 (SE +/- 7.85, N = 3, MIN: 5528.56 / MAX: 5954.26)

Aircrack-ng

Aircrack-ng 1.7 (k/s, More Is Better): 32 vCPUs: 33647.55 (SE +/- 287.54, N = 15); 16 vCPUs: 16697.92 (SE +/- 192.97, N = 15); 8 vCPUs: 8308.58 (SE +/- 103.85, N = 15)
Per-configuration note: -lpcre (two configurations)
1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

Renaissance

Test: Apache Spark Bayes

Renaissance 0.14 (ms, Fewer Is Better): 32 vCPUs: 766.4 (SE +/- 9.73, N = 3, MIN: 495.95 / MAX: 1178.88); 16 vCPUs: 1262.0 (SE +/- 7.45, N = 3, MIN: 877.37 / MAX: 1398.23); 8 vCPUs: 2249.8 (SE +/- 42.77, N = 15, MIN: 1478.18 / MAX: 2434.18)

ASKAP

Test: tConvolve MT - Degridding

ASKAP 1.0 (Million Grid Points Per Second, More Is Better): 32 vCPUs: 5522.07 (SE +/- 80.56, N = 15); 16 vCPUs: 4083.16 (SE +/- 2.61, N = 3); 8 vCPUs: 2196.74 (SE +/- 7.87, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP

Test: tConvolve MT - Gridding

ASKAP 1.0 (Million Grid Points Per Second, More Is Better): 32 vCPUs: 4456.55 (SE +/- 35.89, N = 15); 16 vCPUs: 3789.01 (SE +/- 5.95, N = 3); 8 vCPUs: 2360.63 (SE +/- 5.73, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

VP9 libvpx Encoding

Speed: Speed 0 - Input: Bosphorus 1080p

VP9 libvpx Encoding 1.10.0 (Frames Per Second, More Is Better): 32 vCPUs: 4.99 (SE +/- 0.01, N = 3); 16 vCPUs: 4.84 (SE +/- 0.01, N = 3); 8 vCPUs: 4.65 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

High Performance Conjugate Gradient

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better): 32 vCPUs: 22.09 (SE +/- 0.01, N = 3); 16 vCPUs: 17.10 (SE +/- 0.03, N = 3); 8 vCPUs: 11.10 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

Renaissance

Test: Finagle HTTP Requests

Renaissance 0.14 (ms, Fewer Is Better): 16 vCPUs: 8932.5 (SE +/- 62.21, N = 3, MIN: 8338.21 / MAX: 10055.02); 8 vCPUs: 9234.8 (SE +/- 40.11, N = 3, MIN: 8668.63 / MAX: 9879.54); 32 vCPUs: 9430.8 (SE +/- 122.37, N = 3, MIN: 8793.75 / MAX: 9955.78)

NAS Parallel Benchmarks

Test / Class: SP.C

NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better): 32 vCPUs: 26843.58 (SE +/- 31.60, N = 3); 16 vCPUs: 19710.90 (SE +/- 112.56, N = 3); 8 vCPUs: 7115.28 (SE +/- 39.91, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Stress-NG

Test: Futex

Stress-NG 0.14 (Bogo Ops/s, More Is Better): 32 vCPUs: 1437660.62 (SE +/- 15026.23, N = 3); 16 vCPUs: 1198681.87 (SE +/- 30917.20, N = 15); 8 vCPUs: 937451.99 (SE +/- 36001.77, N = 15)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

NAS Parallel Benchmarks

Test / Class: BT.C

NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better): 32 vCPUs: 69530.64 (SE +/- 272.46, N = 3); 16 vCPUs: 49125.93 (SE +/- 18.18, N = 3); 8 vCPUs: 14368.29 (SE +/- 23.11, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

Test / Class: EP.D

NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better): 32 vCPUs: 3265.68 (SE +/- 2.04, N = 3); 16 vCPUs: 1634.99 (SE +/- 1.03, N = 3); 8 vCPUs: 820.94 (SE +/- 0.56, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

TensorFlow Lite

Model: SqueezeNet

TensorFlow Lite 2022-05-18 (Microseconds, Fewer Is Better): 32 vCPUs: 3853.90 (SE +/- 31.57, N = 8); 16 vCPUs: 3955.89 (SE +/- 11.05, N = 3); 8 vCPUs: 6618.32 (SE +/- 9.96, N = 3)

Facebook RocksDB

Test: Update Random

Facebook RocksDB 7.0.1 (Op/s, More Is Better): 16 vCPUs: 315972 (SE +/- 2705.38, N = 8); 32 vCPUs: 211735 (SE +/- 705.83, N = 3); 8 vCPUs: 204688 (SE +/- 1758.40, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

VP9 libvpx Encoding

Speed: Speed 5 - Input: Bosphorus 4K

VP9 libvpx Encoding 1.10.0 (Frames Per Second, More Is Better): 32 vCPUs: 6.99 (SE +/- 0.02, N = 3); 16 vCPUs: 6.68 (SE +/- 0.01, N = 3); 8 vCPUs: 6.11 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

nginx

Concurrent Requests: 500

nginx 1.21.1 (Requests Per Second, More Is Better): 16 vCPUs: 261253.72 (SE +/- 676.50, N = 3); 8 vCPUs: 245968.24 (SE +/- 840.72, N = 3); 32 vCPUs: 235749.36 (SE +/- 245.82, N = 3)
1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx

Concurrent Requests: 1000

nginx 1.21.1 (Requests Per Second, More Is Better): 16 vCPUs: 258672.88 (SE +/- 385.23, N = 3); 8 vCPUs: 240628.22 (SE +/- 107.45, N = 3); 32 vCPUs: 233484.08 (SE +/- 479.91, N = 3)
1. (CC) gcc options: -lcrypt -lz -O3 -march=native

Sysbench

Test: CPU

Sysbench 1.0.20 (Events Per Second, More Is Better): 32 vCPUs: 108241.61 (SE +/- 23.77, N = 3); 16 vCPUs: 54317.42 (SE +/- 12.70, N = 3); 8 vCPUs: 27237.28 (SE +/- 6.95, N = 3)
1. (CC) gcc options: -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 (Seconds, Fewer Is Better): 32 vCPUs: 0.014 (SE +/- 0.000, N = 15); 16 vCPUs: 0.015 (SE +/- 0.000, N = 3); 8 vCPUs: 0.015 (SE +/- 0.000, N = 15)

Timed FFmpeg Compilation

Time To Compile

Timed FFmpeg Compilation 4.4 (Seconds, Fewer Is Better): 32 vCPUs: 38.96 (SE +/- 0.16, N = 3); 16 vCPUs: 61.98 (SE +/- 0.09, N = 3); 8 vCPUs: 113.46 (SE +/- 0.18, N = 3)

DaCapo Benchmark

Java Test: H2

DaCapo Benchmark 9.12-MR1 (msec, Fewer Is Better): 8 vCPUs: 4346 (SE +/- 38.62, N = 20); 16 vCPUs: 4843 (SE +/- 39.59, N = 20); 32 vCPUs: 5175 (SE +/- 83.04, N = 20)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 (Seconds, Fewer Is Better): 8 vCPUs: 1.981 (SE +/- 0.000, N = 3); 32 vCPUs: 2.055 (SE +/- 0.002, N = 3); 16 vCPUs: 2.058 (SE +/- 0.002, N = 3)

TensorFlow Lite

Model: Inception V4

TensorFlow Lite 2022-05-18 (Microseconds, Fewer Is Better): 32 vCPUs: 31657.3 (SE +/- 149.01, N = 3); 16 vCPUs: 46113.4 (SE +/- 84.24, N = 3); 8 vCPUs: 97646.1 (SE +/- 49.87, N = 3)

TensorFlow Lite

Model: Inception ResNet V2

TensorFlow Lite 2022-05-18 (Microseconds, Fewer Is Better): 32 vCPUs: 33994.9 (SE +/- 379.42, N = 3); 16 vCPUs: 45445.9 (SE +/- 16.18, N = 3); 8 vCPUs: 91592.6 (SE +/- 70.02, N = 3)

TensorFlow Lite

Model: NASNet Mobile

TensorFlow Lite 2022-05-18 (Microseconds, Fewer Is Better): 16 vCPUs: 15855.1 (SE +/- 180.13, N = 3); 8 vCPUs: 16159.1 (SE +/- 27.10, N = 3); 32 vCPUs: 28372.8 (SE +/- 355.48, N = 3)

TensorFlow Lite

Model: Mobilenet Float

TensorFlow Lite 2022-05-18 (Microseconds, Fewer Is Better): 32 vCPUs: 2093.25 (SE +/- 17.55, N = 3); 16 vCPUs: 2481.96 (SE +/- 4.32, N = 3); 8 vCPUs: 4395.73 (SE +/- 3.06, N = 3)

TensorFlow Lite

Model: Mobilenet Quant

TensorFlow Lite 2022-05-18 (Microseconds, Fewer Is Better): 16 vCPUs: 1898.41 (SE +/- 6.49, N = 3); 8 vCPUs: 2482.37 (SE +/- 6.00, N = 3); 32 vCPUs: 3550.65 (SE +/- 14.04, N = 3)

OpenSSL

Algorithm: RSA4096

OpenSSL 3.0 (verify/s, More Is Better): 32 vCPUs: 128273.1 (SE +/- 29.86, N = 3); 16 vCPUs: 64247.0 (SE +/- 8.35, N = 3); 8 vCPUs: 32136.8 (SE +/- 10.26, N = 3)
1. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl

OpenSSL

Algorithm: RSA4096

OpenSSL 3.0 (sign/s, More Is Better): 32 vCPUs: 1570.2 (SE +/- 0.06, N = 3); 16 vCPUs: 786.7 (SE +/- 0.06, N = 3); 8 vCPUs: 393.7 (SE +/- 0.07, N = 3)
1. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl

Facebook RocksDB

Test: Random Read

Facebook RocksDB 7.0.1 (Op/s, More Is Better): 32 vCPUs: 124704201 (SE +/- 376574.31, N = 3); 16 vCPUs: 62048967 (SE +/- 735054.27, N = 3); 8 vCPUs: 31055689 (SE +/- 252880.06, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Timed MPlayer Compilation

Time To Compile

Timed MPlayer Compilation 1.5 (Seconds, Fewer Is Better): 32 vCPUs: 28.93 (SE +/- 0.35, N = 4); 16 vCPUs: 47.62 (SE +/- 0.26, N = 3); 8 vCPUs: 88.84 (SE +/- 0.03, N = 3)

Renaissance

Test: Random Forest

Renaissance 0.14 (ms, Fewer Is Better): 8 vCPUs: 985.9 (SE +/- 5.46, N = 3, MIN: 902.84 / MAX: 1247.45); 16 vCPUs: 997.7 (SE +/- 4.55, N = 3, MIN: 900.54 / MAX: 1197.51); 32 vCPUs: 1047.3 (SE +/- 12.92, N = 3, MIN: 904.64 / MAX: 1280.13)

VP9 libvpx Encoding

Speed: Speed 5 - Input: Bosphorus 1080p

VP9 libvpx Encoding 1.10.0 (Frames Per Second, More Is Better): 32 vCPUs: 12.11 (SE +/- 0.01, N = 3); 16 vCPUs: 11.78 (SE +/- 0.02, N = 3); 8 vCPUs: 11.27 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

DaCapo Benchmark

Java Test: Tradesoap

DaCapo Benchmark 9.12-MR1 (msec, Fewer Is Better): 32 vCPUs: 5015 (SE +/- 95.95, N = 20); 16 vCPUs: 5598 (SE +/- 52.52, N = 4); 8 vCPUs: 8040 (SE +/- 68.63, N = 4)

DaCapo Benchmark

Java Test: Tradebeans

DaCapo Benchmark 9.12-MR1 (msec, Fewer Is Better): 16 vCPUs: 5524 (SE +/- 59.44, N = 4); 8 vCPUs: 5538 (SE +/- 38.11, N = 20); 32 vCPUs: 5954 (SE +/- 28.01, N = 4)

NAS Parallel Benchmarks

Test / Class: CG.C

NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better): 32 vCPUs: 21433.92 (SE +/- 35.67, N = 3); 16 vCPUs: 12171.95 (SE +/- 171.15, N = 3); 8 vCPUs: 6855.81 (SE +/- 49.63, N = 15)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 (Seconds, Fewer Is Better): 8 vCPUs: 0.903 (SE +/- 0.004, N = 3); 32 vCPUs: 0.915 (SE +/- 0.005, N = 3); 16 vCPUs: 0.940 (SE +/- 0.004, N = 3)

NAS Parallel Benchmarks

Test / Class: LU.C

NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better): 32 vCPUs: 87702.30 (SE +/- 137.48, N = 3); 16 vCPUs: 55447.31 (SE +/- 701.24, N = 3); 8 vCPUs: 32029.14 (SE +/- 50.76, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

Test / Class: IS.D

NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better): 32 vCPUs: 1822.77 (SE +/- 0.86, N = 3); 16 vCPUs: 1498.45 (SE +/- 14.70, N = 3); 8 vCPUs: 1104.26 (SE +/- 1.14, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Stress-NG

Test: CPU Stress

Stress-NG 0.14 (Bogo Ops/s, More Is Better): 32 vCPUs: 8209.47 (SE +/- 4.23, N = 3); 16 vCPUs: 4116.96 (SE +/- 2.80, N = 3); 8 vCPUs: 2065.53 (SE +/- 1.28, N = 3)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: NUMA

Stress-NG 0.14 (Bogo Ops/s, More Is Better): 8 vCPUs: 1387.80 (SE +/- 1.98, N = 3); 16 vCPUs: 1113.53 (SE +/- 4.29, N = 3); 32 vCPUs: 549.13 (SE +/- 1.69, N = 3)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: CPU Cache

Stress-NG 0.14 (Bogo Ops/s, More Is Better): 32 vCPUs: 566.91 (SE +/- 0.28, N = 3); 16 vCPUs: 551.25 (SE +/- 2.05, N = 3); 8 vCPUs: 436.31 (SE +/- 2.30, N = 3)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: Vector Math

Stress-NG 0.14 (Bogo Ops/s, More Is Better): 32 vCPUs: 97749.08 (SE +/- 190.70, N = 3); 16 vCPUs: 49102.30 (SE +/- 27.43, N = 3); 8 vCPUs: 24633.99 (SE +/- 6.31, N = 3)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: Matrix Math

Stress-NG 0.14 (Bogo Ops/s, More Is Better): 32 vCPUs: 151792.83 (SE +/- 9.80, N = 3); 16 vCPUs: 76177.56 (SE +/- 10.44, N = 3); 8 vCPUs: 38215.95 (SE +/- 25.04, N = 3)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: System V Message Passing

Stress-NG 0.14 (Bogo Ops/s, More Is Better): 32 vCPUs: 6128517.10 (SE +/- 7551.56, N = 3); 16 vCPUs: 5475267.36 (SE +/- 15929.45, N = 3); 8 vCPUs: 4507844.17 (SE +/- 12538.93, N = 3)
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

NAS Parallel Benchmarks

Test / Class: SP.B

NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better): 32 vCPUs: 34381.91 (SE +/- 38.20, N = 3); 16 vCPUs: 19552.45 (SE +/- 244.58, N = 3); 8 vCPUs: 7338.98 (SE +/- 17.11, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

TNN

Target: CPU - Model: MobileNet v2

TNN 0.3 (ms, Fewer Is Better): 32 vCPUs: 322.77 (SE +/- 0.05, N = 3, MIN: 319.63 / MAX: 326.43); 16 vCPUs: 328.89 (SE +/- 1.36, N = 3, MIN: 322.15 / MAX: 373.8); 8 vCPUs: 331.34 (SE +/- 0.78, N = 3, MIN: 327.36 / MAX: 339.94)
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

ASTC Encoder

Preset: Thorough

ASTC Encoder 3.2 (Seconds, Fewer Is Better): 32 vCPUs: 7.1619 (SE +/- 0.0033, N = 3); 16 vCPUs: 14.2146 (SE +/- 0.0106, N = 3); 8 vCPUs: 29.0505 (SE +/- 0.0316, N = 3)
1. (CXX) g++ options: -O3 -march=native -flto -pthread

ASKAP

Test: Hogbom Clean OpenMP

ASKAP 1.0 (Iterations Per Second, More Is Better): 32 vCPUs: 996.70 (SE +/- 3.30, N = 3); 16 vCPUs: 645.16 (SE +/- 0.00, N = 3); 8 vCPUs: 371.30 (SE +/- 1.21, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 (Seconds, Fewer Is Better): 8 vCPUs: 0.005 (SE +/- 0.000, N = 3); 32 vCPUs: 0.005 (SE +/- 0.000, N = 14); 16 vCPUs: 0.006 (SE +/- 0.000, N = 15)

TNN

Target: CPU - Model: SqueezeNet v1.1

TNN 0.3 (ms, Fewer Is Better): 32 vCPUs: 301.15 (SE +/- 0.07, N = 3, MIN: 299.13 / MAX: 307.2); 8 vCPUs: 303.80 (SE +/- 0.64, N = 3, MIN: 300.11 / MAX: 314.72); 16 vCPUs: 305.23 (SE +/- 0.80, N = 3, MIN: 298.53 / MAX: 370.85)
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

Coremark

CoreMark Size 666 - Iterations Per Second

Coremark 1.0 (Iterations/Sec, More Is Better): 32 vCPUs: 700917.94 (SE +/- 385.56, N = 3); 16 vCPUs: 351562.54 (SE +/- 87.09, N = 3); 8 vCPUs: 175037.77 (SE +/- 85.46, N = 3)
1. (CC) gcc options: -O2 -O3 -march=native -lrt" -lrt

Redis

Test: GET

Redis 6.0.9 (Requests Per Second, More Is Better): 8 vCPUs: 1980967.48 (SE +/- 19964.38, N = 5); 32 vCPUs: 1926297.79 (SE +/- 10764.67, N = 3); 16 vCPUs: 1798635.37 (SE +/- 17762.42, N = 3)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

Redis

Test: SET

Redis 6.0.9 (Requests Per Second, More Is Better): 8 vCPUs: 1440521.55 (SE +/- 15381.83, N = 3); 32 vCPUs: 1411234.92 (SE +/- 9294.72, N = 3); 16 vCPUs: 1305143.58 (SE +/- 10176.81, N = 3)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

libavif avifenc

Encoder Speed: 6, Lossless

libavif avifenc 0.10 (Seconds, Fewer Is Better): 32 vCPUs: 10.34 (SE +/- 0.00, N = 3); 16 vCPUs: 14.70 (SE +/- 0.19, N = 3); 8 vCPUs: 23.31 (SE +/- 0.11, N = 3)
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

ASKAP

Test: tConvolve OpenMP - Degridding

ASKAP 1.0 (Million Grid Points Per Second, More Is Better): 32 vCPUs: 9181.24 (SE +/- 0.00, N = 3); 16 vCPUs: 5023.70 (SE +/- 0.00, N = 3); 8 vCPUs: 2421.43 (SE +/- 33.24, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP

Test: tConvolve OpenMP - Gridding

ASKAP 1.0 (Million Grid Points Per Second, More Is Better): 32 vCPUs: 7262.74 (SE +/- 66.63, N = 3); 16 vCPUs: 3631.81 (SE +/- 43.40, N = 3); 8 vCPUs: 2296.10 (SE +/- 29.91, N = 3)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

NAS Parallel Benchmarks

Test / Class: FT.C

NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better): 32 vCPUs: 52309.81 (SE +/- 41.18, N = 3); 16 vCPUs: 32644.85 (SE +/- 300.01, N = 3); 8 vCPUs: 18574.23 (SE +/- 15.96, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 (Seconds, Fewer Is Better): 8 vCPUs: 0.381 (SE +/- 0.000, N = 3); 32 vCPUs: 0.392 (SE +/- 0.001, N = 3); 16 vCPUs: 0.400 (SE +/- 0.003, N = 3)

libavif avifenc

Encoder Speed: 6

libavif avifenc 0.10 (Seconds, Fewer Is Better): 32 vCPUs: 6.682 (SE +/- 0.020, N = 3); 16 vCPUs: 11.168 (SE +/- 0.037, N = 3); 8 vCPUs: 20.273 (SE +/- 0.130, N = 3)
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

libavif avifenc

Encoder Speed: 10, Lossless

libavif avifenc 0.10 (Seconds, Fewer Is Better): 32 vCPUs: 6.775 (SE +/- 0.072, N = 3); 16 vCPUs: 7.658 (SE +/- 0.065, N = 8); 8 vCPUs: 9.997 (SE +/- 0.096, N = 3)
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

DaCapo Benchmark

Java Test: Jython

DaCapo Benchmark 9.12-MR1 (msec, Fewer Is Better): 16 vCPUs: 5010 (SE +/- 36.80, N = 4); 32 vCPUs: 5079 (SE +/- 5.52, N = 4); 8 vCPUs: 5188 (SE +/- 30.34, N = 4)

ASTC Encoder

Preset: Medium

ASTC Encoder 3.2 (Seconds, Fewer Is Better): 32 vCPUs: 5.9825 (SE +/- 0.0035, N = 3); 16 vCPUs: 6.9449 (SE +/- 0.0194, N = 3); 8 vCPUs: 9.0505 (SE +/- 0.0253, N = 3)
1. (CXX) g++ options: -O3 -march=native -flto -pthread

TNN

Target: CPU - Model: SqueezeNet v2

TNN 0.3 (ms, Fewer Is Better): 32 vCPUs: 95.47 (SE +/- 0.00, N = 3, MIN: 95.15 / MAX: 96.88); 8 vCPUs: 95.73 (SE +/- 0.10, N = 3, MIN: 95.23 / MAX: 97.46); 16 vCPUs: 96.21 (SE +/- 0.34, N = 3, MIN: 95.08 / MAX: 100.72)
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

NAS Parallel Benchmarks

Test / Class: MG.C

NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better): 32 vCPUs: 50939.05 (SE +/- 31.40, N = 3); 16 vCPUs: 33309.76 (SE +/- 102.49, N = 3); 8 vCPUs: 27703.33 (SE +/- 46.23, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

LAMMPS Molecular Dynamics Simulator

Model: Rhodopsin Protein

LAMMPS Molecular Dynamics Simulator 23Jun2022 (ns/day, More Is Better): 32 vCPUs: 16.596 (SE +/- 0.012, N = 3); 16 vCPUs: 8.861 (SE +/- 0.037, N = 3); 8 vCPUs: 4.812 (SE +/- 0.011, N = 3)
1. (CXX) g++ options: -O3 -march=native -ldl


Phoronix Test Suite v10.8.4