Tau T2A 16 vCPUs

KVM testing on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2208123-NE-2208114NE01.
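For readers who want to reproduce individual results, the Phoronix Test Suite can install and run any of the test profiles shown below. A minimal sketch, assuming the Ubuntu package name and the openbenchmarking.org test-profile naming (e.g. pts/hpcg) that this result file appears to use:

  # Install the Phoronix Test Suite, then install and run one test profile (HPCG here)
  sudo apt install phoronix-test-suite
  phoronix-test-suite benchmark pts/hpcg
  # Or run side-by-side against this published result via its OpenBenchmarking.org ID
  phoronix-test-suite benchmark 2208123-NE-2208114NE01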

Tau T2A 16 vCPUs:
  Processor: ARMv8 Neoverse-N1 (16 Cores)
  Motherboard: KVM Google Compute Engine
  Memory: 64GB
  Disk: 215GB nvme_card-pd
  Network: Google Compute Engine Virtual
  OS: Ubuntu 22.04
  Kernel: 5.15.0-1013-gcp (aarch64)
  Compiler: GCC 12.0.1 20220319
  File-System: ext4
  System Layer: KVM

8 vCPUs (differs from the 16 vCPU configuration only as noted):
  Processor: ARMv8 Neoverse-N1 (8 Cores)
  Memory: 32GB

32 vCPUs (differs from the 16 vCPU configuration only as noted):
  Processor: ARMv8 Neoverse-N1 (32 Cores)
  Memory: 128GB
  Kernel: 5.15.0-1016-gcp (aarch64)

Kernel Details: Transparent Huge Pages: madvise

Environment Details: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"

Compiler Details: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v

Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)

Python Details: Python 3.10.4

Security Details:
  Tau T2A 16 vCPUs: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
  Tau T2A 8 vCPUs: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
  Tau T2A 32 vCPUs: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
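The Environment Details above show that the source-based tests in this result file were built with aggressive native optimization. A sketch of how those flags would be set in the shell before launching the test suite (the flag values themselves are taken from the environment details; everything else about the launch is left to the suite):

  # Build flags recorded for this test environment
  export CFLAGS="-O3 -march=native"
  export CXXFLAGS="-O3 -march=native"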

Benchmarks covered in this comparison: HPCG, NAS Parallel Benchmarks (BT.C, CG.C, EP.D, FT.C, IS.D, LU.C, MG.C, SP.B, SP.C), OpenFOAM, LAMMPS, DaCapo, Renaissance, VP9 libvpx encoding, CoreMark, libavif avifenc, timed FFmpeg / Gem5 / MPlayer compilation, Aircrack-ng, OpenSSL, Apache Spark, ASKAP, Graph500, GROMACS, TensorFlow Lite, PostgreSQL pgbench, ASTC Encoder, Redis, Stress-NG, GPAW, TNN, Sysbench, Apache Cassandra, Facebook RocksDB, Blender, nginx, PyHPC Benchmarks, and SPECjbb 2015. Per-test results for the three instance sizes follow below.

High Performance Conjugate Gradient

GFLOP/s, More Is Better - High Performance Conjugate Gradient 3.1
  16 vCPUs: 17.10 (SE +/- 0.03, N = 3)
  8 vCPUs: 11.10 (SE +/- 0.00, N = 3)
  32 vCPUs: 22.09 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

NAS Parallel Benchmarks

Test / Class: BT.C

Total Mop/s, More Is Better - NAS Parallel Benchmarks 3.4 - Test / Class: BT.C
  16 vCPUs: 49125.93 (SE +/- 18.18, N = 3)
  8 vCPUs: 14368.29 (SE +/- 23.11, N = 3)
  32 vCPUs: 69530.64 (SE +/- 272.46, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

Test / Class: CG.C

Total Mop/s, More Is Better - NAS Parallel Benchmarks 3.4 - Test / Class: CG.C
  16 vCPUs: 12171.95 (SE +/- 171.15, N = 3)
  8 vCPUs: 6855.81 (SE +/- 49.63, N = 15)
  32 vCPUs: 21433.92 (SE +/- 35.67, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

Test / Class: EP.D

Total Mop/s, More Is Better - NAS Parallel Benchmarks 3.4 - Test / Class: EP.D
  16 vCPUs: 1634.99 (SE +/- 1.03, N = 3)
  8 vCPUs: 820.94 (SE +/- 0.56, N = 3)
  32 vCPUs: 3265.68 (SE +/- 2.04, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

Test / Class: FT.C

Total Mop/s, More Is Better - NAS Parallel Benchmarks 3.4 - Test / Class: FT.C
  16 vCPUs: 32644.85 (SE +/- 300.01, N = 3)
  8 vCPUs: 18574.23 (SE +/- 15.96, N = 3)
  32 vCPUs: 52309.81 (SE +/- 41.18, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

Test / Class: IS.D

Total Mop/s, More Is Better - NAS Parallel Benchmarks 3.4 - Test / Class: IS.D
  16 vCPUs: 1498.45 (SE +/- 14.70, N = 3)
  8 vCPUs: 1104.26 (SE +/- 1.14, N = 3)
  32 vCPUs: 1822.77 (SE +/- 0.86, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

Test / Class: LU.C

Total Mop/s, More Is Better - NAS Parallel Benchmarks 3.4 - Test / Class: LU.C
  16 vCPUs: 55447.31 (SE +/- 701.24, N = 3)
  8 vCPUs: 32029.14 (SE +/- 50.76, N = 3)
  32 vCPUs: 87702.30 (SE +/- 137.48, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

Test / Class: MG.C

Total Mop/s, More Is Better - NAS Parallel Benchmarks 3.4 - Test / Class: MG.C
  16 vCPUs: 33309.76 (SE +/- 102.49, N = 3)
  8 vCPUs: 27703.33 (SE +/- 46.23, N = 3)
  32 vCPUs: 50939.05 (SE +/- 31.40, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

Test / Class: SP.B

Total Mop/s, More Is Better - NAS Parallel Benchmarks 3.4 - Test / Class: SP.B
  16 vCPUs: 19552.45 (SE +/- 244.58, N = 3)
  8 vCPUs: 7338.98 (SE +/- 17.11, N = 3)
  32 vCPUs: 34381.91 (SE +/- 38.20, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks

Test / Class: SP.C

Total Mop/s, More Is Better - NAS Parallel Benchmarks 3.4 - Test / Class: SP.C
  16 vCPUs: 19710.90 (SE +/- 112.56, N = 3)
  8 vCPUs: 7115.28 (SE +/- 39.91, N = 3)
  32 vCPUs: 26843.58 (SE +/- 31.60, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Mesh Time

Seconds, Fewer Is Better - OpenFOAM 9 - Input: drivaerFastback, Medium Mesh Size - Mesh Time
  16 vCPUs: 303.71
  8 vCPUs: 425.95
  32 vCPUs: 206.40
  Additional per-run link options as reported: -lfoamToVTK -ldynamicMesh -llagrangian -lfileFormats; -ltransportModels -lspecie -lfiniteVolume -lfvModels -lmeshTools -lsampling; -lfoamToVTK -ldynamicMesh -llagrangian -lfileFormats
  1. (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Execution Time

Seconds, Fewer Is Better - OpenFOAM 9 - Input: drivaerFastback, Medium Mesh Size - Execution Time
  16 vCPUs: 1534.72
  8 vCPUs: 2426.16
  32 vCPUs: 994.53
  Additional per-run link options as reported: -lfoamToVTK -ldynamicMesh -llagrangian -lfileFormats; -ltransportModels -lspecie -lfiniteVolume -lfvModels -lmeshTools -lsampling; -lfoamToVTK -ldynamicMesh -llagrangian -lfileFormats
  1. (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lgenericPatchFields -lOpenFOAM -ldl -lm

LAMMPS Molecular Dynamics Simulator

Model: 20k Atoms

ns/day, More Is Better - LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms
  16 vCPUs: 8.499 (SE +/- 0.112, N = 3)
  8 vCPUs: 4.662 (SE +/- 0.025, N = 3)
  32 vCPUs: 16.550 (SE +/- 0.004, N = 3)
  1. (CXX) g++ options: -O3 -march=native -ldl

LAMMPS Molecular Dynamics Simulator

Model: Rhodopsin Protein

ns/day, More Is Better - LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein
  16 vCPUs: 8.861 (SE +/- 0.037, N = 3)
  8 vCPUs: 4.812 (SE +/- 0.011, N = 3)
  32 vCPUs: 16.596 (SE +/- 0.012, N = 3)
  1. (CXX) g++ options: -O3 -march=native -ldl

DaCapo Benchmark

Java Test: H2

msec, Fewer Is Better - DaCapo Benchmark 9.12-MR1 - Java Test: H2
  16 vCPUs: 4843 (SE +/- 39.59, N = 20)
  8 vCPUs: 4346 (SE +/- 38.62, N = 20)
  32 vCPUs: 5175 (SE +/- 83.04, N = 20)

DaCapo Benchmark

Java Test: Jython

msec, Fewer Is Better - DaCapo Benchmark 9.12-MR1 - Java Test: Jython
  16 vCPUs: 5010 (SE +/- 36.80, N = 4)
  8 vCPUs: 5188 (SE +/- 30.34, N = 4)
  32 vCPUs: 5079 (SE +/- 5.52, N = 4)

DaCapo Benchmark

Java Test: Tradesoap

msec, Fewer Is Better - DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap
  16 vCPUs: 5598 (SE +/- 52.52, N = 4)
  8 vCPUs: 8040 (SE +/- 68.63, N = 4)
  32 vCPUs: 5015 (SE +/- 95.95, N = 20)

DaCapo Benchmark

Java Test: Tradebeans

msec, Fewer Is Better - DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans
  16 vCPUs: 5524 (SE +/- 59.44, N = 4)
  8 vCPUs: 5538 (SE +/- 38.11, N = 20)
  32 vCPUs: 5954 (SE +/- 28.01, N = 4)
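DaCapo is distributed as a single benchmark JAR, so individual workloads from this comparison can be re-run by hand. A hedged sketch (the JAR file name matching the 9.12-MR1 "bach" release is an assumption, as is the JVM used):

  # Run the H2 workload; DaCapo prints the wall time in msec
  java -jar dacapo-9.12-MR1-bach.jar h2
  # Other workloads compared above: jython, tradesoap, tradebeans
  java -jar dacapo-9.12-MR1-bach.jar jython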

Renaissance

Test: Scala Dotty

ms, Fewer Is Better - Renaissance 0.14 - Test: Scala Dotty
  16 vCPUs: 1979.2 (SE +/- 12.38, N = 3; MIN: 1396.31 / MAX: 2843.97)
  8 vCPUs: 1800.2 (SE +/- 18.70, N = 5; MIN: 1442.18 / MAX: 3536.53)
  32 vCPUs: 1871.7 (SE +/- 32.38, N = 11; MIN: 1370.64 / MAX: 3034.49)

Renaissance

Test: Random Forest

ms, Fewer Is Better - Renaissance 0.14 - Test: Random Forest
  16 vCPUs: 997.7 (SE +/- 4.55, N = 3; MIN: 900.54 / MAX: 1197.51)
  8 vCPUs: 985.9 (SE +/- 5.46, N = 3; MIN: 902.84 / MAX: 1247.45)
  32 vCPUs: 1047.3 (SE +/- 12.92, N = 3; MIN: 904.64 / MAX: 1280.13)

Renaissance

Test: ALS Movie Lens

ms, Fewer Is Better - Renaissance 0.14 - Test: ALS Movie Lens
  16 vCPUs: 16797.5 (SE +/- 79.58, N = 3; MIN: 16713.33 / MAX: 18436.45)
  8 vCPUs: 16183.5 (SE +/- 101.59, N = 3; MIN: 16078.67 / MAX: 18111.73)
  32 vCPUs: 17606.6 (SE +/- 57.21, N = 3; MIN: 17544.26 / MAX: 19037.24)

Renaissance

Test: Apache Spark ALS

ms, Fewer Is Better - Renaissance 0.14 - Test: Apache Spark ALS
  16 vCPUs: 3906.7 (SE +/- 27.36, N = 3; MIN: 3730.93 / MAX: 4142.3)
  8 vCPUs: 5725.2 (SE +/- 7.85, N = 3; MIN: 5528.56 / MAX: 5954.26)
  32 vCPUs: 4118.3 (SE +/- 32.04, N = 3; MIN: 3925.84 / MAX: 4358.22)

Renaissance

Test: Apache Spark Bayes

ms, Fewer Is Better - Renaissance 0.14 - Test: Apache Spark Bayes
  16 vCPUs: 1262.0 (SE +/- 7.45, N = 3; MIN: 877.37 / MAX: 1398.23)
  8 vCPUs: 2249.8 (SE +/- 42.77, N = 15; MIN: 1478.18 / MAX: 2434.18)
  32 vCPUs: 766.4 (SE +/- 9.73, N = 3; MIN: 495.95 / MAX: 1178.88)

Renaissance

Test: Savina Reactors.IO

ms, Fewer Is Better - Renaissance 0.14 - Test: Savina Reactors.IO
  16 vCPUs: 15981.5 (SE +/- 583.25, N = 12; MIN: 12776.53 / MAX: 36273.51)
  8 vCPUs: 26456.4 (SE +/- 435.60, N = 9; MIN: 13667.16 / MAX: 42318.14)
  32 vCPUs: 10705.9 (SE +/- 131.70, N = 4; MIN: 10505.49 / MAX: 14847.21)

Renaissance

Test: Apache Spark PageRank

ms, Fewer Is Better - Renaissance 0.14 - Test: Apache Spark PageRank
  16 vCPUs: 5197.0 (SE +/- 49.42, N = 3; MIN: 4732.96 / MAX: 5397.5)
  8 vCPUs: 5020.8 (SE +/- 37.35, N = 11; MIN: 4537.14 / MAX: 5676.41)
  32 vCPUs: 5174.3 (SE +/- 77.61, N = 12; MIN: 4316.47 / MAX: 6446.52)

Renaissance

Test: Finagle HTTP Requests

ms, Fewer Is Better - Renaissance 0.14 - Test: Finagle HTTP Requests
  16 vCPUs: 8932.5 (SE +/- 62.21, N = 3; MIN: 8338.21 / MAX: 10055.02)
  8 vCPUs: 9234.8 (SE +/- 40.11, N = 3; MIN: 8668.63 / MAX: 9879.54)
  32 vCPUs: 9430.8 (SE +/- 122.37, N = 3; MIN: 8793.75 / MAX: 9955.78)

Renaissance

Test: In-Memory Database Shootout

ms, Fewer Is Better - Renaissance 0.14 - Test: In-Memory Database Shootout
  16 vCPUs: 6261.4 (SE +/- 41.18, N = 15; MIN: 5664.61 / MAX: 10501.86)
  8 vCPUs: 5537.1 (SE +/- 66.45, N = 4; MIN: 5069.67 / MAX: 6244.83)
  32 vCPUs: 6566.5 (SE +/- 37.00, N = 3; MIN: 5609.26 / MAX: 13128.6)

Renaissance

Test: Akka Unbalanced Cobwebbed Tree

ms, Fewer Is Better - Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree
  16 vCPUs: 24004.7 (SE +/- 164.12, N = 3; MIN: 18739.09 / MAX: 24332.49)
  8 vCPUs: 16322.4 (SE +/- 267.82, N = 9; MIN: 10733.44 / MAX: 19126.93)
  32 vCPUs: 29296.7 (SE +/- 344.06, N = 4; MIN: 20859.52 / MAX: 30225.51)

Renaissance

Test: Genetic Algorithm Using Jenetics + Futures

ms, Fewer Is Better - Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures
  16 vCPUs: 3067.3 (SE +/- 14.58, N = 3; MIN: 3011.77 / MAX: 3250.64)
  8 vCPUs: 3001.6 (SE +/- 33.55, N = 3; MIN: 2862.8 / MAX: 3206.85)
  32 vCPUs: 3084.2 (SE +/- 8.81, N = 3; MIN: 2993.8 / MAX: 3192.9)

VP9 libvpx Encoding

Speed: Speed 0 - Input: Bosphorus 4K

Frames Per Second, More Is Better - VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 4K
  16 vCPUs: 2.04 (SE +/- 0.00, N = 3)
  8 vCPUs: 1.92 (SE +/- 0.02, N = 6)
  32 vCPUs: 2.13 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

VP9 libvpx Encoding

Speed: Speed 5 - Input: Bosphorus 4K

Frames Per Second, More Is Better - VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 4K
  16 vCPUs: 6.68 (SE +/- 0.01, N = 3)
  8 vCPUs: 6.11 (SE +/- 0.01, N = 3)
  32 vCPUs: 6.99 (SE +/- 0.02, N = 3)
  1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

VP9 libvpx Encoding

Speed: Speed 0 - Input: Bosphorus 1080p

Frames Per Second, More Is Better - VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 1080p
  16 vCPUs: 4.84 (SE +/- 0.01, N = 3)
  8 vCPUs: 4.65 (SE +/- 0.01, N = 3)
  32 vCPUs: 4.99 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

VP9 libvpx Encoding

Speed: Speed 5 - Input: Bosphorus 1080p

Frames Per Second, More Is Better - VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 1080p
  16 vCPUs: 11.78 (SE +/- 0.02, N = 3)
  8 vCPUs: 11.27 (SE +/- 0.01, N = 3)
  32 vCPUs: 12.11 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11
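The libvpx "Speed N" settings correspond to the encoder's --cpu-used control. A rough hand-run equivalent of the Speed 5 / 1080p case (input file name and thread count are assumptions; the test harness may pass additional flags):

  # VP9 encode of a 1080p Y4M clip at speed 5
  vpxenc --codec=vp9 --good --cpu-used=5 --threads=16 \
    -o bosphorus_1080p.webm Bosphorus_1920x1080.y4m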

Coremark

CoreMark Size 666 - Iterations Per Second

Iterations/Sec, More Is Better - Coremark 1.0 - CoreMark Size 666 - Iterations Per Second
  16 vCPUs: 351562.54 (SE +/- 87.09, N = 3)
  8 vCPUs: 175037.77 (SE +/- 85.46, N = 3)
  32 vCPUs: 700917.94 (SE +/- 385.56, N = 3)
  1. (CC) gcc options: -O2 -O3 -march=native -lrt" -lrt

libavif avifenc

Encoder Speed: 0

Seconds, Fewer Is Better - libavif avifenc 0.10 - Encoder Speed: 0
  16 vCPUs: 328.97 (SE +/- 0.80, N = 3)
  8 vCPUs: 456.24 (SE +/- 0.90, N = 3)
  32 vCPUs: 266.34 (SE +/- 0.65, N = 3)
  1. (CXX) g++ options: -O3 -fPIC -march=native -lm

libavif avifenc

Encoder Speed: 2

Seconds, Fewer Is Better - libavif avifenc 0.10 - Encoder Speed: 2
  16 vCPUs: 194.77 (SE +/- 0.47, N = 3)
  8 vCPUs: 245.60 (SE +/- 0.32, N = 3)
  32 vCPUs: 169.64 (SE +/- 0.13, N = 3)
  1. (CXX) g++ options: -O3 -fPIC -march=native -lm

libavif avifenc

Encoder Speed: 6

Seconds, Fewer Is Better - libavif avifenc 0.10 - Encoder Speed: 6
  16 vCPUs: 11.168 (SE +/- 0.037, N = 3)
  8 vCPUs: 20.273 (SE +/- 0.130, N = 3)
  32 vCPUs: 6.682 (SE +/- 0.020, N = 3)
  1. (CXX) g++ options: -O3 -fPIC -march=native -lm

libavif avifenc

Encoder Speed: 6, Lossless

Seconds, Fewer Is Better - libavif avifenc 0.10 - Encoder Speed: 6, Lossless
  16 vCPUs: 14.70 (SE +/- 0.19, N = 3)
  8 vCPUs: 23.31 (SE +/- 0.11, N = 3)
  32 vCPUs: 10.34 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -fPIC -march=native -lm

libavif avifenc

Encoder Speed: 10, Lossless

Seconds, Fewer Is Better - libavif avifenc 0.10 - Encoder Speed: 10, Lossless
  16 vCPUs: 7.658 (SE +/- 0.065, N = 8)
  8 vCPUs: 9.997 (SE +/- 0.096, N = 3)
  32 vCPUs: 6.775 (SE +/- 0.072, N = 3)
  1. (CXX) g++ options: -O3 -fPIC -march=native -lm

Timed FFmpeg Compilation

Time To Compile

Seconds, Fewer Is Better - Timed FFmpeg Compilation 4.4 - Time To Compile
  16 vCPUs: 61.98 (SE +/- 0.09, N = 3)
  8 vCPUs: 113.46 (SE +/- 0.18, N = 3)
  32 vCPUs: 38.96 (SE +/- 0.16, N = 3)

Timed Gem5 Compilation

Time To Compile

Seconds, Fewer Is Better - Timed Gem5 Compilation 21.2 - Time To Compile
  16 vCPUs: 495.54 (SE +/- 1.16, N = 3)
  8 vCPUs: 917.39 (SE +/- 0.66, N = 3)
  32 vCPUs: 312.12 (SE +/- 1.96, N = 3)

Timed MPlayer Compilation

Time To Compile

Seconds, Fewer Is Better - Timed MPlayer Compilation 1.5 - Time To Compile
  16 vCPUs: 47.62 (SE +/- 0.26, N = 3)
  8 vCPUs: 88.84 (SE +/- 0.03, N = 3)
  32 vCPUs: 28.93 (SE +/- 0.35, N = 4)

Aircrack-ng

k/s, More Is Better - Aircrack-ng 1.7
  16 vCPUs: 16697.92 (SE +/- 192.97, N = 15)
  8 vCPUs: 8308.58 (SE +/- 103.85, N = 15)
  32 vCPUs: 33647.55 (SE +/- 287.54, N = 15)
  Additional per-run link options as reported: -lpcre; -lpcre
  1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

OpenSSL

Algorithm: SHA256

byte/s, More Is Better - OpenSSL 3.0 - Algorithm: SHA256
  16 vCPUs: 12926411527 (SE +/- 19283388.31, N = 3)
  8 vCPUs: 6456083507 (SE +/- 19026629.44, N = 3)
  32 vCPUs: 25788919913 (SE +/- 119493320.18, N = 3)
  1. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl

OpenSSL

Algorithm: RSA4096

sign/s, More Is Better - OpenSSL 3.0 - Algorithm: RSA4096
  16 vCPUs: 786.7 (SE +/- 0.06, N = 3)
  8 vCPUs: 393.7 (SE +/- 0.07, N = 3)
  32 vCPUs: 1570.2 (SE +/- 0.06, N = 3)
  1. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl

OpenSSL

Algorithm: RSA4096

verify/s, More Is Better - OpenSSL 3.0 - Algorithm: RSA4096
  16 vCPUs: 64247.0 (SE +/- 8.35, N = 3)
  8 vCPUs: 32136.8 (SE +/- 10.26, N = 3)
  32 vCPUs: 128273.1 (SE +/- 29.86, N = 3)
  1. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl
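The OpenSSL figures come from its built-in speed benchmark. Roughly equivalent manual runs look like the following (using -multi to match the vCPU count of each instance is an assumption about the harness; the algorithms are the ones tested above):

  # Multi-process SHA256 and RSA4096 throughput on a 16 vCPU instance
  openssl speed -multi 16 sha256
  openssl speed -multi 16 rsa4096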

Apache Spark

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time
  16 vCPUs: 4.93 (SE +/- 0.05, N = 12)
  8 vCPUs: 6.28 (SE +/- 0.03, N = 3)
  32 vCPUs: 4.79 (SE +/- 0.11, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark
  16 vCPUs: 137.76 (SE +/- 0.11, N = 12)
  8 vCPUs: 277.89 (SE +/- 0.17, N = 3)
  32 vCPUs: 69.77 (SE +/- 0.06, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe
  16 vCPUs: 8.39 (SE +/- 0.02, N = 12)
  8 vCPUs: 15.92 (SE +/- 0.02, N = 3)
  32 vCPUs: 4.79 (SE +/- 0.01, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Group By Test Time
  16 vCPUs: 6.00 (SE +/- 0.05, N = 12)
  8 vCPUs: 6.90 (SE +/- 0.17, N = 3)
  32 vCPUs: 6.72 (SE +/- 0.23, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Repartition Test Time
  16 vCPUs: 2.55 (SE +/- 0.03, N = 12)
  8 vCPUs: 4.58 (SE +/- 0.02, N = 3)
  32 vCPUs: 2.01 (SE +/- 0.03, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Inner Join Test Time
  16 vCPUs: 2.22 (SE +/- 0.03, N = 12)
  8 vCPUs: 3.48 (SE +/- 0.04, N = 3)
  32 vCPUs: 2.13 (SE +/- 0.02, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time
  16 vCPUs: 1.79 (SE +/- 0.04, N = 12)
  8 vCPUs: 2.86 (SE +/- 0.03, N = 3)
  32 vCPUs: 1.68 (SE +/- 0.03, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time
  16 vCPUs: 5.91 (SE +/- 0.05, N = 3)
  8 vCPUs: 8.09 (SE +/- 0.08, N = 3)
  32 vCPUs: 4.96 (SE +/- 0.04, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark
  16 vCPUs: 137.17 (SE +/- 0.20, N = 3)
  8 vCPUs: 278.47 (SE +/- 0.35, N = 3)
  32 vCPUs: 69.92 (SE +/- 0.06, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe
  16 vCPUs: 8.29 (SE +/- 0.03, N = 3)
  8 vCPUs: 15.81 (SE +/- 0.07, N = 3)
  32 vCPUs: 4.80 (SE +/- 0.01, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Group By Test Time
  16 vCPUs: 7.43 (SE +/- 0.10, N = 3)
  8 vCPUs: 8.85 (SE +/- 0.13, N = 3)
  32 vCPUs: 6.72 (SE +/- 0.05, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Repartition Test Time
  16 vCPUs: 3.36 (SE +/- 0.01, N = 3)
  8 vCPUs: 5.67 (SE +/- 0.01, N = 3)
  32 vCPUs: 2.60 (SE +/- 0.03, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time
  16 vCPUs: 3.65 (SE +/- 0.11, N = 3)
  8 vCPUs: 5.98 (SE +/- 0.09, N = 3)
  32 vCPUs: 2.87 (SE +/- 0.04, N = 15)

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time
  16 vCPUs: 2.65 (SE +/- 0.05, N = 3)
  8 vCPUs: 5.18 (SE +/- 0.07, N = 3)
  32 vCPUs: 2.12 (SE +/- 0.02, N = 15)

Apache Spark

Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time
  16 vCPUs: 51.40 (SE +/- 0.22, N = 3)
  8 vCPUs: 93.67 (SE +/- 0.85, N = 3)
  32 vCPUs: 46.30 (SE +/- 0.45, N = 9)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark
  16 vCPUs: 136.85 (SE +/- 0.06, N = 3)
  8 vCPUs: 278.37 (SE +/- 0.14, N = 3)
  32 vCPUs: 69.57 (SE +/- 0.08, N = 9)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe
  16 vCPUs: 8.40 (SE +/- 0.01, N = 3)
  8 vCPUs: 15.65 (SE +/- 0.05, N = 3)
  32 vCPUs: 4.76 (SE +/- 0.01, N = 9)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Group By Test Time
  16 vCPUs: 35.64 (SE +/- 0.24, N = 3)
  8 vCPUs: 50.87 (SE +/- 0.53, N = 3)
  32 vCPUs: 27.64 (SE +/- 0.16, N = 9)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Repartition Test Time
  16 vCPUs: 37.22 (SE +/- 0.92, N = 3)
  8 vCPUs: 68.68 (SE +/- 0.25, N = 3)
  32 vCPUs: 24.36 (SE +/- 0.12, N = 9)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Inner Join Test Time
  16 vCPUs: 44.62 (SE +/- 0.72, N = 3)
  8 vCPUs: 80.02 (SE +/- 0.19, N = 3)
  32 vCPUs: 30.32 (SE +/- 0.44, N = 9)

Apache Spark

Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time
  16 vCPUs: 45.29 (SE +/- 0.44, N = 3)
  8 vCPUs: 80.26 (SE +/- 0.22, N = 3)
  32 vCPUs: 31.98 (SE +/- 0.26, N = 9)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time
  16 vCPUs: 51.55 (SE +/- 0.08, N = 3)
  8 vCPUs: 89.23 (SE +/- 0.52, N = 3)
  32 vCPUs: 39.22 (SE +/- 0.55, N = 12)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark
  16 vCPUs: 137.04 (SE +/- 0.17, N = 3)
  8 vCPUs: 277.83 (SE +/- 0.13, N = 3)
  32 vCPUs: 69.79 (SE +/- 0.11, N = 12)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe
  16 vCPUs: 8.34 (SE +/- 0.03, N = 3)
  8 vCPUs: 15.71 (SE +/- 0.00, N = 3)
  32 vCPUs: 4.78 (SE +/- 0.02, N = 12)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Group By Test Time
  16 vCPUs: 30.70 (SE +/- 0.23, N = 3)
  8 vCPUs: 45.61 (SE +/- 0.57, N = 3)
  32 vCPUs: 22.84 (SE +/- 0.32, N = 12)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Repartition Test Time
  16 vCPUs: 35.45 (SE +/- 0.20, N = 3)
  8 vCPUs: 66.27 (SE +/- 0.36, N = 3)
  32 vCPUs: 22.22 (SE +/- 0.24, N = 12)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time
  16 vCPUs: 44.39 (SE +/- 0.73, N = 3)
  8 vCPUs: 78.00 (SE +/- 1.11, N = 3)
  32 vCPUs: 28.66 (SE +/- 0.19, N = 12)

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time
  16 vCPUs: 42.75 (SE +/- 0.40, N = 3)
  8 vCPUs: 74.71 (SE +/- 0.33, N = 3)
  32 vCPUs: 26.55 (SE +/- 0.17, N = 12)

ASKAP

Test: tConvolve MT - Gridding

Million Grid Points Per Second, More Is Better - ASKAP 1.0 - Test: tConvolve MT - Gridding
  16 vCPUs: 3789.01 (SE +/- 5.95, N = 3)
  8 vCPUs: 2360.63 (SE +/- 5.73, N = 3)
  32 vCPUs: 4456.55 (SE +/- 35.89, N = 15)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP

Test: tConvolve MT - Degridding

Million Grid Points Per Second, More Is Better - ASKAP 1.0 - Test: tConvolve MT - Degridding
  16 vCPUs: 4083.16 (SE +/- 2.61, N = 3)
  8 vCPUs: 2196.74 (SE +/- 7.87, N = 3)
  32 vCPUs: 5522.07 (SE +/- 80.56, N = 15)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP

Test: tConvolve MPI - Degridding

Mpix/sec, More Is Better - ASKAP 1.0 - Test: tConvolve MPI - Degridding
  16 vCPUs: 2585.31 (SE +/- 12.80, N = 3)
  8 vCPUs: 1325.06 (SE +/- 24.91, N = 15)
  32 vCPUs: 3962.08 (SE +/- 54.84, N = 15)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP

Test: tConvolve MPI - Gridding

Mpix/sec, More Is Better - ASKAP 1.0 - Test: tConvolve MPI - Gridding
  16 vCPUs: 3343.25 (SE +/- 32.26, N = 3)
  8 vCPUs: 1977.89 (SE +/- 23.42, N = 15)
  32 vCPUs: 3899.28 (SE +/- 42.99, N = 15)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP

Test: tConvolve OpenMP - Gridding

Million Grid Points Per Second, More Is Better - ASKAP 1.0 - Test: tConvolve OpenMP - Gridding
  16 vCPUs: 3631.81 (SE +/- 43.40, N = 3)
  8 vCPUs: 2296.10 (SE +/- 29.91, N = 3)
  32 vCPUs: 7262.74 (SE +/- 66.63, N = 3)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP

Test: tConvolve OpenMP - Degridding

Million Grid Points Per Second, More Is Better - ASKAP 1.0 - Test: tConvolve OpenMP - Degridding
  16 vCPUs: 5023.70 (SE +/- 0.00, N = 3)
  8 vCPUs: 2421.43 (SE +/- 33.24, N = 3)
  32 vCPUs: 9181.24 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP

Test: Hogbom Clean OpenMP

Iterations Per Second, More Is Better - ASKAP 1.0 - Test: Hogbom Clean OpenMP
  16 vCPUs: 645.16 (SE +/- 0.00, N = 3)
  8 vCPUs: 371.30 (SE +/- 1.21, N = 3)
  32 vCPUs: 996.70 (SE +/- 3.30, N = 3)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Graph500

Scale: 26

bfs median_TEPS, More Is Better - Graph500 3.0 - Scale: 26
  16 vCPUs: 257478000
  32 vCPUs: 477377000
  1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Graph500

Scale: 26

bfs max_TEPS, More Is Better - Graph500 3.0 - Scale: 26
  16 vCPUs: 262563000
  32 vCPUs: 508372000
  1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Graph500

Scale: 26

sssp median_TEPS, More Is Better - Graph500 3.0 - Scale: 26
  16 vCPUs: 70750200
  32 vCPUs: 124702000
  1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Graph500

Scale: 26

sssp max_TEPS, More Is Better - Graph500 3.0 - Scale: 26
  16 vCPUs: 95265500
  32 vCPUs: 169542000
  1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

GROMACS

Implementation: MPI CPU - Input: water_GMX50_bare

Ns Per Day, More Is Better - GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare
  16 vCPUs: 0.880 (SE +/- 0.001, N = 3)
  8 vCPUs: 0.450 (SE +/- 0.000, N = 3)
  32 vCPUs: 1.718 (SE +/- 0.010, N = 3)
  1. (CXX) g++ options: -O3 -march=native

TensorFlow Lite

Model: SqueezeNet

Microseconds, Fewer Is Better - TensorFlow Lite 2022-05-18 - Model: SqueezeNet
  16 vCPUs: 3955.89 (SE +/- 11.05, N = 3)
  8 vCPUs: 6618.32 (SE +/- 9.96, N = 3)
  32 vCPUs: 3853.90 (SE +/- 31.57, N = 8)

TensorFlow Lite

Model: Inception V4

Microseconds, Fewer Is Better - TensorFlow Lite 2022-05-18 - Model: Inception V4
  16 vCPUs: 46113.4 (SE +/- 84.24, N = 3)
  8 vCPUs: 97646.1 (SE +/- 49.87, N = 3)
  32 vCPUs: 31657.3 (SE +/- 149.01, N = 3)

TensorFlow Lite

Model: NASNet Mobile

Microseconds, Fewer Is Better - TensorFlow Lite 2022-05-18 - Model: NASNet Mobile
  16 vCPUs: 15855.1 (SE +/- 180.13, N = 3)
  8 vCPUs: 16159.1 (SE +/- 27.10, N = 3)
  32 vCPUs: 28372.8 (SE +/- 355.48, N = 3)

TensorFlow Lite

Model: Mobilenet Float

Microseconds, Fewer Is Better - TensorFlow Lite 2022-05-18 - Model: Mobilenet Float
  16 vCPUs: 2481.96 (SE +/- 4.32, N = 3)
  8 vCPUs: 4395.73 (SE +/- 3.06, N = 3)
  32 vCPUs: 2093.25 (SE +/- 17.55, N = 3)

TensorFlow Lite

Model: Mobilenet Quant

Microseconds, Fewer Is Better - TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant
  16 vCPUs: 1898.41 (SE +/- 6.49, N = 3)
  8 vCPUs: 2482.37 (SE +/- 6.00, N = 3)
  32 vCPUs: 3550.65 (SE +/- 14.04, N = 3)

TensorFlow Lite

Model: Inception ResNet V2

Microseconds, Fewer Is Better - TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2
  16 vCPUs: 45445.9 (SE +/- 16.18, N = 3)
  8 vCPUs: 91592.6 (SE +/- 70.02, N = 3)
  32 vCPUs: 33994.9 (SE +/- 379.42, N = 3)

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 100 - Mode: Read Only

TPS, More Is Better - PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only
  16 vCPUs: 157894 (SE +/- 697.61, N = 3)
  8 vCPUs: 54237 (SE +/- 663.61, N = 3)
  32 vCPUs: 329539 (SE +/- 1811.74, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency

ms, Fewer Is Better - PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency
  16 vCPUs: 0.633 (SE +/- 0.003, N = 3)
  8 vCPUs: 1.844 (SE +/- 0.023, N = 3)
  32 vCPUs: 0.304 (SE +/- 0.002, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 250 - Mode: Read Only

TPS, More Is Better - PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only
  16 vCPUs: 131607 (SE +/- 1418.81, N = 3)
  8 vCPUs: 49628 (SE +/- 588.20, N = 12)
  32 vCPUs: 312239 (SE +/- 4561.68, N = 12)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency

ms, Fewer Is Better - PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency
  16 vCPUs: 1.900 (SE +/- 0.021, N = 3)
  8 vCPUs: 5.045 (SE +/- 0.060, N = 12)
  32 vCPUs: 0.803 (SE +/- 0.012, N = 12)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 100 - Mode: Read Write

TPS, More Is Better - PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write
  16 vCPUs: 2508 (SE +/- 17.11, N = 3)
  8 vCPUs: 2515 (SE +/- 9.80, N = 3)
  32 vCPUs: 3383 (SE +/- 6.74, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency

ms, Fewer Is Better - PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency
  16 vCPUs: 39.87 (SE +/- 0.27, N = 3)
  8 vCPUs: 39.77 (SE +/- 0.15, N = 3)
  32 vCPUs: 29.56 (SE +/- 0.06, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 250 - Mode: Read Write

TPS, More Is Better - PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write
  16 vCPUs: 1220 (SE +/- 15.69, N = 3)
  8 vCPUs: 1165 (SE +/- 15.88, N = 12)
  32 vCPUs: 2282 (SE +/- 154.25, N = 12)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench

Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency

ms, Fewer Is Better - PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency
  16 vCPUs: 204.92 (SE +/- 2.67, N = 3)
  8 vCPUs: 215.10 (SE +/- 2.96, N = 12)
  32 vCPUs: 114.80 (SE +/- 7.17, N = 12)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm
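Each pgbench configuration above is identified by its scaling factor, client count, and mode. A hedged sketch of corresponding manual invocations (database name, thread count, and run length are assumptions; the test profile controls the real values):

  # Initialize a database at scaling factor 100, then run the
  # read-only (-S) and read/write cases with 100 and 250 clients
  pgbench -i -s 100 pgtest
  pgbench -c 100 -j 16 -S -T 60 pgtest   # 100 clients, read only
  pgbench -c 250 -j 16 -T 60 pgtest      # 250 clients, read/write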

ASTC Encoder

Preset: Medium

Seconds, Fewer Is Better - ASTC Encoder 3.2 - Preset: Medium
  16 vCPUs: 6.9449 (SE +/- 0.0194, N = 3)
  8 vCPUs: 9.0505 (SE +/- 0.0253, N = 3)
  32 vCPUs: 5.9825 (SE +/- 0.0035, N = 3)
  1. (CXX) g++ options: -O3 -march=native -flto -pthread

ASTC Encoder

Preset: Thorough

Seconds, Fewer Is Better - ASTC Encoder 3.2 - Preset: Thorough
  16 vCPUs: 14.2146 (SE +/- 0.0106, N = 3)
  8 vCPUs: 29.0505 (SE +/- 0.0316, N = 3)
  32 vCPUs: 7.1619 (SE +/- 0.0033, N = 3)
  1. (CXX) g++ options: -O3 -march=native -flto -pthread

ASTC Encoder

Preset: Exhaustive

Seconds, Fewer Is Better - ASTC Encoder 3.2 - Preset: Exhaustive
  16 vCPUs: 137.62 (SE +/- 0.04, N = 3)
  8 vCPUs: 276.88 (SE +/- 3.08, N = 3)
  32 vCPUs: 68.66 (SE +/- 0.08, N = 3)
  1. (CXX) g++ options: -O3 -march=native -flto -pthread

Redis

Test: GET

Requests Per Second, More Is Better - Redis 6.0.9 - Test: GET
  16 vCPUs: 1798635.37 (SE +/- 17762.42, N = 3)
  8 vCPUs: 1980967.48 (SE +/- 19964.38, N = 5)
  32 vCPUs: 1926297.79 (SE +/- 10764.67, N = 3)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

Redis

Test: SET

Requests Per Second, More Is Better - Redis 6.0.9 - Test: SET
  16 vCPUs: 1305143.58 (SE +/- 10176.81, N = 3)
  8 vCPUs: 1440521.55 (SE +/- 15381.83, N = 3)
  32 vCPUs: 1411234.92 (SE +/- 9294.72, N = 3)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native
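The Redis GET/SET figures are requests-per-second numbers of the kind redis-benchmark reports. A manual approximation (request count, client count, and the local server target are assumptions, since the test profile's exact parameters are not shown here):

  # Measure GET and SET throughput against a local redis-server
  redis-benchmark -t get,set -n 1000000 -c 50 -q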

Stress-NG

Test: NUMA

Bogo Ops/s, More Is Better - Stress-NG 0.14 - Test: NUMA
  16 vCPUs: 1113.53 (SE +/- 4.29, N = 3)
  8 vCPUs: 1387.80 (SE +/- 1.98, N = 3)
  32 vCPUs: 549.13 (SE +/- 1.69, N = 3)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: Futex

Bogo Ops/s, More Is Better - Stress-NG 0.14 - Test: Futex
  16 vCPUs: 1198681.87 (SE +/- 30917.20, N = 15)
  8 vCPUs: 937451.99 (SE +/- 36001.77, N = 15)
  32 vCPUs: 1437660.62 (SE +/- 15026.23, N = 3)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: CPU Cache

Bogo Ops/s, More Is Better - Stress-NG 0.14 - Test: CPU Cache
  16 vCPUs: 551.25 (SE +/- 2.05, N = 3)
  8 vCPUs: 436.31 (SE +/- 2.30, N = 3)
  32 vCPUs: 566.91 (SE +/- 0.28, N = 3)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: CPU Stress

Bogo Ops/s, More Is Better - Stress-NG 0.14 - Test: CPU Stress
  16 vCPUs: 4116.96 (SE +/- 2.80, N = 3)
  8 vCPUs: 2065.53 (SE +/- 1.28, N = 3)
  32 vCPUs: 8209.47 (SE +/- 4.23, N = 3)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: Matrix Math

Bogo Ops/s, More Is Better - Stress-NG 0.14 - Test: Matrix Math
  16 vCPUs: 76177.56 (SE +/- 10.44, N = 3)
  8 vCPUs: 38215.95 (SE +/- 25.04, N = 3)
  32 vCPUs: 151792.83 (SE +/- 9.80, N = 3)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: Vector Math

Bogo Ops/s, More Is Better - Stress-NG 0.14 - Test: Vector Math
  16 vCPUs: 49102.30 (SE +/- 27.43, N = 3)
  8 vCPUs: 24633.99 (SE +/- 6.31, N = 3)
  32 vCPUs: 97749.08 (SE +/- 190.70, N = 3)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG

Test: System V Message Passing

Bogo Ops/s, More Is Better - Stress-NG 0.14 - Test: System V Message Passing
  16 vCPUs: 5475267.36 (SE +/- 15929.45, N = 3)
  8 vCPUs: 4507844.17 (SE +/- 12538.93, N = 3)
  32 vCPUs: 6128517.10 (SE +/- 7551.56, N = 3)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
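Stress-NG reports bogo-ops/s per stressor, so individual stressors from the list above can be re-run by hand, for example (the 60-second duration is an assumption; a worker count of 0 tells stress-ng to start one worker per online CPU):

  # Run the matrix and futex stressors for 60 seconds each and
  # print the bogo-ops/s summary
  stress-ng --matrix 0 --timeout 60s --metrics-brief
  stress-ng --futex 0 --timeout 60s --metrics-brief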

GPAW

Input: Carbon Nanotube

Seconds, Fewer Is Better - GPAW 22.1 - Input: Carbon Nanotube
  16 vCPUs: 208.97 (SE +/- 0.03, N = 3)
  8 vCPUs: 381.20 (SE +/- 0.63, N = 3)
  32 vCPUs: 130.35 (SE +/- 0.30, N = 3)
  1. (CC) gcc options: -shared -fwrapv -O2 -O3 -march=native -lxc -lblas -lmpi

TNN

Target: CPU - Model: DenseNet

ms, Fewer Is Better - TNN 0.3 - Target: CPU - Model: DenseNet
  16 vCPUs: 3358.73 (SE +/- 12.43, N = 3; MIN: 3163.2 / MAX: 3575.85)
  8 vCPUs: 3842.12 (SE +/- 9.75, N = 3; MIN: 3619.38 / MAX: 4060.16)
  32 vCPUs: 3056.90 (SE +/- 6.90, N = 3; MIN: 2928.19 / MAX: 3237.58)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

TNN

Target: CPU - Model: MobileNet v2

ms, Fewer Is Better - TNN 0.3 - Target: CPU - Model: MobileNet v2
  16 vCPUs: 328.89 (SE +/- 1.36, N = 3; MIN: 322.15 / MAX: 373.8)
  8 vCPUs: 331.34 (SE +/- 0.78, N = 3; MIN: 327.36 / MAX: 339.94)
  32 vCPUs: 322.77 (SE +/- 0.05, N = 3; MIN: 319.63 / MAX: 326.43)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

TNN

Target: CPU - Model: SqueezeNet v2

ms, Fewer Is Better - TNN 0.3 - Target: CPU - Model: SqueezeNet v2
  16 vCPUs: 96.21 (SE +/- 0.34, N = 3; MIN: 95.08 / MAX: 100.72)
  8 vCPUs: 95.73 (SE +/- 0.10, N = 3; MIN: 95.23 / MAX: 97.46)
  32 vCPUs: 95.47 (SE +/- 0.00, N = 3; MIN: 95.15 / MAX: 96.88)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

TNN

Target: CPU - Model: SqueezeNet v1.1

ms, Fewer Is Better - TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1
  16 vCPUs: 305.23 (SE +/- 0.80, N = 3; MIN: 298.53 / MAX: 370.85)
  8 vCPUs: 303.80 (SE +/- 0.64, N = 3; MIN: 300.11 / MAX: 314.72)
  32 vCPUs: 301.15 (SE +/- 0.07, N = 3; MIN: 299.13 / MAX: 307.2)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

Sysbench

Test: CPU

Events Per Second, More Is Better - Sysbench 1.0.20 - Test: CPU
  16 vCPUs: 54317.42 (SE +/- 12.70, N = 3)
  8 vCPUs: 27237.28 (SE +/- 6.95, N = 3)
  32 vCPUs: 108241.61 (SE +/- 23.77, N = 3)
  1. (CC) gcc options: -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm
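The Sysbench CPU score is the events-per-second figure from its built-in prime-number CPU test; a direct manual equivalent (choosing the thread count to match the instance size is the only assumption):

  # Prime-number CPU workload across 16 threads
  sysbench cpu --threads=16 run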

Apache Cassandra

Test: Writes

Op/s, More Is Better - Apache Cassandra 4.0 - Test: Writes
  16 vCPUs: 39296 (SE +/- 256.55, N = 3)
  8 vCPUs: 17862 (SE +/- 136.95, N = 10)
  32 vCPUs: 87819 (SE +/- 777.36, N = 3)

Facebook RocksDB

Test: Random Read

Op/s, More Is Better - Facebook RocksDB 7.0.1 - Test: Random Read
  16 vCPUs: 62048967 (SE +/- 735054.27, N = 3)
  8 vCPUs: 31055689 (SE +/- 252880.06, N = 3)
  32 vCPUs: 124704201 (SE +/- 376574.31, N = 3)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB

Test: Update Random

Op/s, More Is Better - Facebook RocksDB 7.0.1 - Test: Update Random
  16 vCPUs: 315972 (SE +/- 2705.38, N = 8)
  8 vCPUs: 204688 (SE +/- 1758.40, N = 3)
  32 vCPUs: 211735 (SE +/- 705.83, N = 3)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB

Test: Read While Writing

Op/s, More Is Better - Facebook RocksDB 7.0.1 - Test: Read While Writing
  16 vCPUs: 1264826 (SE +/- 20446.66, N = 15)
  8 vCPUs: 594702 (SE +/- 9067.95, N = 15)
  32 vCPUs: 2610992 (SE +/- 32390.32, N = 12)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB

Test: Read Random Write Random

Op/s, More Is Better - Facebook RocksDB 7.0.1 - Test: Read Random Write Random
  16 vCPUs: 884700 (SE +/- 1701.74, N = 3)
  8 vCPUs: 548353 (SE +/- 4976.00, N = 15)
  32 vCPUs: 1321827 (SE +/- 9643.50, N = 15)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Blender

Blend File: BMW27 - Compute: CPU-Only

Seconds, Fewer Is Better - Blender - Blend File: BMW27 - Compute: CPU-Only
  16 vCPUs: 226.26 (SE +/- 0.50, N = 3)
  8 vCPUs: 447.71 (SE +/- 0.04, N = 3)
  32 vCPUs: 112.47 (SE +/- 0.10, N = 3)

Blender

Blend File: Classroom - Compute: CPU-Only

Seconds, Fewer Is Better - Blender - Blend File: Classroom - Compute: CPU-Only
  16 vCPUs: 506.10 (SE +/- 0.22, N = 3)
  8 vCPUs: 1016.66 (SE +/- 1.99, N = 3)
  32 vCPUs: 249.89 (SE +/- 0.07, N = 3)

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Seconds, Fewer Is Better - Blender - Blend File: Fishy Cat - Compute: CPU-Only
  16 vCPUs: 426.04 (SE +/- 0.85, N = 3)
  8 vCPUs: 841.18 (SE +/- 1.74, N = 3)
  32 vCPUs: 214.41 (SE +/- 0.42, N = 3)

nginx

Concurrent Requests: 500

Requests Per Second, More Is Better - nginx 1.21.1 - Concurrent Requests: 500
  16 vCPUs: 261253.72 (SE +/- 676.50, N = 3)
  8 vCPUs: 245968.24 (SE +/- 840.72, N = 3)
  32 vCPUs: 235749.36 (SE +/- 245.82, N = 3)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx

Concurrent Requests: 1000

Requests Per Second, More Is Better - nginx 1.21.1 - Concurrent Requests: 1000
  16 vCPUs: 258672.88 (SE +/- 385.23, N = 3)
  8 vCPUs: 240628.22 (SE +/- 107.45, N = 3)
  32 vCPUs: 233484.08 (SE +/- 479.91, N = 3)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native
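The nginx results are requests-per-second at 500 and 1000 concurrent connections. A hedged illustration of generating that kind of load with wrk (the Phoronix profile's actual client tool, URL, port, and duration are not shown here, so all of these are assumptions):

  # Roughly approximate the 500-connection case against a local nginx
  wrk -t 16 -c 500 -d 30s http://localhost:8080/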

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State

Seconds, Fewer Is Better - PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State
  16 vCPUs: 0.006 (SE +/- 0.000, N = 15)
  8 vCPUs: 0.005 (SE +/- 0.000, N = 3)
  32 vCPUs: 0.005 (SE +/- 0.000, N = 14)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing

Seconds, Fewer Is Better - PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing
  16 vCPUs: 0.015 (SE +/- 0.000, N = 3)
  8 vCPUs: 0.015 (SE +/- 0.000, N = 15)
  32 vCPUs: 0.014 (SE +/- 0.000, N = 15)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State

Seconds, Fewer Is Better - PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State
  16 vCPUs: 0.400 (SE +/- 0.003, N = 3)
  8 vCPUs: 0.381 (SE +/- 0.000, N = 3)
  32 vCPUs: 0.392 (SE +/- 0.001, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing

Seconds, Fewer Is Better - PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing
  16 vCPUs: 0.940 (SE +/- 0.004, N = 3)
  8 vCPUs: 0.903 (SE +/- 0.004, N = 3)
  32 vCPUs: 0.915 (SE +/- 0.005, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State

Seconds, Fewer Is Better - PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State
  16 vCPUs: 2.058 (SE +/- 0.002, N = 3)
  8 vCPUs: 1.981 (SE +/- 0.000, N = 3)
  32 vCPUs: 2.055 (SE +/- 0.002, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing

Seconds, Fewer Is Better - PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing
  16 vCPUs: 3.801 (SE +/- 0.018, N = 3)
  8 vCPUs: 3.640 (SE +/- 0.020, N = 3)
  32 vCPUs: 3.723 (SE +/- 0.016, N = 3)

SPECjbb 2015

SPECjbb2015-Composite max-jOPS

jOPS, More Is Better - SPECjbb 2015 - SPECjbb2015-Composite max-jOPS
  16 vCPUs: 18092
  8 vCPUs: 9158
  32 vCPUs: 35075

SPECjbb 2015

SPECjbb2015-Composite critical-jOPS

jOPS, More Is Better - SPECjbb 2015 - SPECjbb2015-Composite critical-jOPS
  16 vCPUs: 9207
  8 vCPUs: 3921
  32 vCPUs: 22955


Phoronix Test Suite v10.8.4