Tau T2A 16 vCPUs

KVM testing on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2208123-NE-2208114NE01&grw&sro.

System configurations (all Google Compute Engine Tau T2A instances running under KVM):

Tau T2A 16 vCPUs:
  Processor: ARMv8 Neoverse-N1 (16 Cores)
  Motherboard: KVM Google Compute Engine
  Memory: 64GB
  Disk: 215GB nvme_card-pd
  Network: Google Compute Engine Virtual
  OS: Ubuntu 22.04
  Kernel: 5.15.0-1013-gcp (aarch64)
  Compiler: GCC 12.0.1 20220319
  File-System: ext4
  System Layer: KVM

Tau T2A 8 vCPUs:
  As above, except: Processor: ARMv8 Neoverse-N1 (8 Cores); Memory: 32GB

Tau T2A 32 vCPUs:
  As above, except: Processor: ARMv8 Neoverse-N1 (32 Cores); Memory: 128GB; Kernel: 5.15.0-1016-gcp (aarch64)

Kernel Details: Transparent Huge Pages: madvise
Environment Details: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Details: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details (16 vCPUs and 8 vCPUs): itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Security Details (32 vCPUs): as above, plus retbleed: Not affected

Tests run on each configuration (detailed results below): Stress-NG, SPECjbb 2015, DaCapo, Renaissance, ASTC Encoder, Graph500, TensorFlow Lite, TNN, GROMACS, LAMMPS, HPCG, NAS Parallel Benchmarks, ASKAP, PyHPC Benchmarks, OpenFOAM, GPAW, Coremark, Aircrack-ng, timed FFmpeg/MPlayer/Gem5 compilation, Sysbench, VP9 libvpx encoding, libavif avifenc, nginx, OpenSSL, Apache Spark, Redis, Blender, Facebook RocksDB, Apache Cassandra, and PostgreSQL pgbench. The full result-overview table is available at the OpenBenchmarking.org link above.

Stress-NG 0.14

All results in Bogo Ops/s, more is better. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Test: NUMA - 16 vCPUs: 1113.53 (SE +/- 4.29, N = 3); 32 vCPUs: 549.13 (SE +/- 1.69, N = 3); 8 vCPUs: 1387.80 (SE +/- 1.98, N = 3)
Test: Futex - 16 vCPUs: 1198681.87 (SE +/- 30917.20, N = 15); 32 vCPUs: 1437660.62 (SE +/- 15026.23, N = 3); 8 vCPUs: 937451.99 (SE +/- 36001.77, N = 15)
Test: CPU Cache - 16 vCPUs: 551.25 (SE +/- 2.05, N = 3); 32 vCPUs: 566.91 (SE +/- 0.28, N = 3); 8 vCPUs: 436.31 (SE +/- 2.30, N = 3)
Test: CPU Stress - 16 vCPUs: 4116.96 (SE +/- 2.80, N = 3); 32 vCPUs: 8209.47 (SE +/- 4.23, N = 3); 8 vCPUs: 2065.53 (SE +/- 1.28, N = 3)
Test: Matrix Math - 16 vCPUs: 76177.56 (SE +/- 10.44, N = 3); 32 vCPUs: 151792.83 (SE +/- 9.80, N = 3); 8 vCPUs: 38215.95 (SE +/- 25.04, N = 3)
Test: Vector Math - 16 vCPUs: 49102.30 (SE +/- 27.43, N = 3); 32 vCPUs: 97749.08 (SE +/- 190.70, N = 3); 8 vCPUs: 24633.99 (SE +/- 6.31, N = 3)
Test: System V Message Passing - 16 vCPUs: 5475267.36 (SE +/- 15929.45, N = 3); 32 vCPUs: 6128517.10 (SE +/- 7551.56, N = 3); 8 vCPUs: 4507844.17 (SE +/- 12538.93, N = 3)
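
The Stress-NG CPU Stress figures above scale almost linearly with vCPU count. As a rough illustration (this calculation is not part of the exported result), the per-vCPU throughput and the scaling efficiency relative to the 8 vCPU instance can be derived directly from the reported Bogo Ops/s values with a few lines of Python:

# Illustrative only: Bogo Ops/s values copied from the Stress-NG CPU Stress results above.
results = {8: 2065.53, 16: 4116.96, 32: 8209.47}  # vCPUs -> Bogo Ops/s
base_vcpus, base_ops = 8, results[8]
for vcpus, ops in sorted(results.items()):
    per_vcpu = ops / vcpus
    # Efficiency = measured speedup over the 8 vCPU instance / ideal (linear) speedup.
    efficiency = (ops / base_ops) / (vcpus / base_vcpus)
    print(f"{vcpus:2d} vCPUs: {per_vcpu:6.1f} Bogo Ops/s per vCPU, {efficiency:.0%} scaling efficiency")

For this workload the per-vCPU throughput stays essentially flat (roughly 256-258 Bogo Ops/s per vCPU), i.e. close to 100% scaling efficiency from 8 to 32 vCPUs.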

SPECjbb 2015

All results in jOPS, more is better.

SPECjbb2015-Composite max-jOPS - 16 vCPUs: 18092; 32 vCPUs: 35075; 8 vCPUs: 9158
SPECjbb2015-Composite critical-jOPS - 16 vCPUs: 9207; 32 vCPUs: 22955; 8 vCPUs: 3921

DaCapo Benchmark 9.12-MR1

All results in msec, fewer is better.

Java Test: H2 - 16 vCPUs: 4843 (SE +/- 39.59, N = 20); 32 vCPUs: 5175 (SE +/- 83.04, N = 20); 8 vCPUs: 4346 (SE +/- 38.62, N = 20)
Java Test: Jython - 16 vCPUs: 5010 (SE +/- 36.80, N = 4); 32 vCPUs: 5079 (SE +/- 5.52, N = 4); 8 vCPUs: 5188 (SE +/- 30.34, N = 4)
Java Test: Tradesoap - 16 vCPUs: 5598 (SE +/- 52.52, N = 4); 32 vCPUs: 5015 (SE +/- 95.95, N = 20); 8 vCPUs: 8040 (SE +/- 68.63, N = 4)
Java Test: Tradebeans - 16 vCPUs: 5524 (SE +/- 59.44, N = 4); 32 vCPUs: 5954 (SE +/- 28.01, N = 4); 8 vCPUs: 5538 (SE +/- 38.11, N = 20)

Renaissance 0.14

All results in ms, fewer is better.

Test: Scala Dotty - 16 vCPUs: 1979.2 (SE +/- 12.38, N = 3; min 1396.31, max 2843.97); 32 vCPUs: 1871.7 (SE +/- 32.38, N = 11; min 1370.64, max 3034.49); 8 vCPUs: 1800.2 (SE +/- 18.70, N = 5; min 1442.18, max 3536.53)
Test: Random Forest - 16 vCPUs: 997.7 (SE +/- 4.55, N = 3; min 900.54, max 1197.51); 32 vCPUs: 1047.3 (SE +/- 12.92, N = 3; min 904.64, max 1280.13); 8 vCPUs: 985.9 (SE +/- 5.46, N = 3; min 902.84, max 1247.45)
Test: ALS Movie Lens - 16 vCPUs: 16797.5 (SE +/- 79.58, N = 3; min 16713.33, max 18436.45); 32 vCPUs: 17606.6 (SE +/- 57.21, N = 3; min 17544.26, max 19037.24); 8 vCPUs: 16183.5 (SE +/- 101.59, N = 3; min 16078.67, max 18111.73)
Test: Apache Spark ALS - 16 vCPUs: 3906.7 (SE +/- 27.36, N = 3; min 3730.93, max 4142.3); 32 vCPUs: 4118.3 (SE +/- 32.04, N = 3; min 3925.84, max 4358.22); 8 vCPUs: 5725.2 (SE +/- 7.85, N = 3; min 5528.56, max 5954.26)
Test: Apache Spark Bayes - 16 vCPUs: 1262.0 (SE +/- 7.45, N = 3; min 877.37, max 1398.23); 32 vCPUs: 766.4 (SE +/- 9.73, N = 3; min 495.95, max 1178.88); 8 vCPUs: 2249.8 (SE +/- 42.77, N = 15; min 1478.18, max 2434.18)
Test: Savina Reactors.IO - 16 vCPUs: 15981.5 (SE +/- 583.25, N = 12; min 12776.53, max 36273.51); 32 vCPUs: 10705.9 (SE +/- 131.70, N = 4; min 10505.49, max 14847.21); 8 vCPUs: 26456.4 (SE +/- 435.60, N = 9; min 13667.16, max 42318.14)
Test: Apache Spark PageRank - 16 vCPUs: 5197.0 (SE +/- 49.42, N = 3; min 4732.96, max 5397.5); 32 vCPUs: 5174.3 (SE +/- 77.61, N = 12; min 4316.47, max 6446.52); 8 vCPUs: 5020.8 (SE +/- 37.35, N = 11; min 4537.14, max 5676.41)
Test: Finagle HTTP Requests - 16 vCPUs: 8932.5 (SE +/- 62.21, N = 3; min 8338.21, max 10055.02); 32 vCPUs: 9430.8 (SE +/- 122.37, N = 3; min 8793.75, max 9955.78); 8 vCPUs: 9234.8 (SE +/- 40.11, N = 3; min 8668.63, max 9879.54)
Test: In-Memory Database Shootout - 16 vCPUs: 6261.4 (SE +/- 41.18, N = 15; min 5664.61, max 10501.86); 32 vCPUs: 6566.5 (SE +/- 37.00, N = 3; min 5609.26, max 13128.6); 8 vCPUs: 5537.1 (SE +/- 66.45, N = 4; min 5069.67, max 6244.83)
Test: Akka Unbalanced Cobwebbed Tree - 16 vCPUs: 24004.7 (SE +/- 164.12, N = 3; min 18739.09, max 24332.49); 32 vCPUs: 29296.7 (SE +/- 344.06, N = 4; min 20859.52, max 30225.51); 8 vCPUs: 16322.4 (SE +/- 267.82, N = 9; min 10733.44, max 19126.93)
Test: Genetic Algorithm Using Jenetics + Futures - 16 vCPUs: 3067.3 (SE +/- 14.58, N = 3; min 3011.77, max 3250.64); 32 vCPUs: 3084.2 (SE +/- 8.81, N = 3; min 2993.8, max 3192.9); 8 vCPUs: 3001.6 (SE +/- 33.55, N = 3; min 2862.8, max 3206.85)

ASTC Encoder 3.2

All results in Seconds, fewer is better. (CXX) g++ options: -O3 -march=native -flto -pthread

Preset: Medium - 16 vCPUs: 6.9449 (SE +/- 0.0194, N = 3); 32 vCPUs: 5.9825 (SE +/- 0.0035, N = 3); 8 vCPUs: 9.0505 (SE +/- 0.0253, N = 3)
Preset: Thorough - 16 vCPUs: 14.2146 (SE +/- 0.0106, N = 3); 32 vCPUs: 7.1619 (SE +/- 0.0033, N = 3); 8 vCPUs: 29.0505 (SE +/- 0.0316, N = 3)
Preset: Exhaustive - 16 vCPUs: 137.62 (SE +/- 0.04, N = 3); 32 vCPUs: 68.66 (SE +/- 0.08, N = 3); 8 vCPUs: 276.88 (SE +/- 3.08, N = 3)

Graph500 3.0

Scale: 26. All results in TEPS, more is better; results were only reported for the 16 vCPU and 32 vCPU configurations. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

bfs median_TEPS - 16 vCPUs: 257478000; 32 vCPUs: 477377000
bfs max_TEPS - 16 vCPUs: 262563000; 32 vCPUs: 508372000
sssp median_TEPS - 16 vCPUs: 70750200; 32 vCPUs: 124702000
sssp max_TEPS - 16 vCPUs: 95265500; 32 vCPUs: 169542000

TensorFlow Lite 2022-05-18

All results in Microseconds, fewer is better.

Model: SqueezeNet - 16 vCPUs: 3955.89 (SE +/- 11.05, N = 3); 32 vCPUs: 3853.90 (SE +/- 31.57, N = 8); 8 vCPUs: 6618.32 (SE +/- 9.96, N = 3)
Model: Inception V4 - 16 vCPUs: 46113.4 (SE +/- 84.24, N = 3); 32 vCPUs: 31657.3 (SE +/- 149.01, N = 3); 8 vCPUs: 97646.1 (SE +/- 49.87, N = 3)
Model: NASNet Mobile - 16 vCPUs: 15855.1 (SE +/- 180.13, N = 3); 32 vCPUs: 28372.8 (SE +/- 355.48, N = 3); 8 vCPUs: 16159.1 (SE +/- 27.10, N = 3)
Model: Mobilenet Float - 16 vCPUs: 2481.96 (SE +/- 4.32, N = 3); 32 vCPUs: 2093.25 (SE +/- 17.55, N = 3); 8 vCPUs: 4395.73 (SE +/- 3.06, N = 3)
Model: Mobilenet Quant - 16 vCPUs: 1898.41 (SE +/- 6.49, N = 3); 32 vCPUs: 3550.65 (SE +/- 14.04, N = 3); 8 vCPUs: 2482.37 (SE +/- 6.00, N = 3)
Model: Inception ResNet V2 - 16 vCPUs: 45445.9 (SE +/- 16.18, N = 3); 32 vCPUs: 33994.9 (SE +/- 379.42, N = 3); 8 vCPUs: 91592.6 (SE +/- 70.02, N = 3)

TNN 0.3

All results in ms, fewer is better. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

Target: CPU - Model: DenseNet - 16 vCPUs: 3358.73 (SE +/- 12.43, N = 3; min 3163.2, max 3575.85); 32 vCPUs: 3056.90 (SE +/- 6.90, N = 3; min 2928.19, max 3237.58); 8 vCPUs: 3842.12 (SE +/- 9.75, N = 3; min 3619.38, max 4060.16)
Target: CPU - Model: MobileNet v2 - 16 vCPUs: 328.89 (SE +/- 1.36, N = 3; min 322.15, max 373.8); 32 vCPUs: 322.77 (SE +/- 0.05, N = 3; min 319.63, max 326.43); 8 vCPUs: 331.34 (SE +/- 0.78, N = 3; min 327.36, max 339.94)
Target: CPU - Model: SqueezeNet v2 - 16 vCPUs: 96.21 (SE +/- 0.34, N = 3; min 95.08, max 100.72); 32 vCPUs: 95.47 (SE +/- 0.00, N = 3; min 95.15, max 96.88); 8 vCPUs: 95.73 (SE +/- 0.10, N = 3; min 95.23, max 97.46)
Target: CPU - Model: SqueezeNet v1.1 - 16 vCPUs: 305.23 (SE +/- 0.80, N = 3; min 298.53, max 370.85); 32 vCPUs: 301.15 (SE +/- 0.07, N = 3; min 299.13, max 307.2); 8 vCPUs: 303.80 (SE +/- 0.64, N = 3; min 300.11, max 314.72)

GROMACS

Implementation: MPI CPU - Input: water_GMX50_bare

Ns Per Day, more is better (GROMACS 2022.1): 16 vCPUs: 0.880 (SE +/- 0.001, N = 3); 32 vCPUs: 1.718 (SE +/- 0.010, N = 3); 8 vCPUs: 0.450 (SE +/- 0.000, N = 3). (CXX) g++ options: -O3 -march=native

LAMMPS Molecular Dynamics Simulator 23Jun2022

All results in ns/day, more is better. (CXX) g++ options: -O3 -march=native -ldl

Model: 20k Atoms - 16 vCPUs: 8.499 (SE +/- 0.112, N = 3); 32 vCPUs: 16.550 (SE +/- 0.004, N = 3); 8 vCPUs: 4.662 (SE +/- 0.025, N = 3)
Model: Rhodopsin Protein - 16 vCPUs: 8.861 (SE +/- 0.037, N = 3); 32 vCPUs: 16.596 (SE +/- 0.012, N = 3); 8 vCPUs: 4.812 (SE +/- 0.011, N = 3)

High Performance Conjugate Gradient

GFLOP/s, more is better (High Performance Conjugate Gradient 3.1): 16 vCPUs: 17.10 (SE +/- 0.03, N = 3); 32 vCPUs: 22.09 (SE +/- 0.01, N = 3); 8 vCPUs: 11.10 (SE +/- 0.00, N = 3). (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

NAS Parallel Benchmarks 3.4

All results in Total Mop/s, more is better. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Test / Class: BT.C - 16 vCPUs: 49125.93 (SE +/- 18.18, N = 3); 32 vCPUs: 69530.64 (SE +/- 272.46, N = 3); 8 vCPUs: 14368.29 (SE +/- 23.11, N = 3)
Test / Class: CG.C - 16 vCPUs: 12171.95 (SE +/- 171.15, N = 3); 32 vCPUs: 21433.92 (SE +/- 35.67, N = 3); 8 vCPUs: 6855.81 (SE +/- 49.63, N = 15)
Test / Class: EP.D - 16 vCPUs: 1634.99 (SE +/- 1.03, N = 3); 32 vCPUs: 3265.68 (SE +/- 2.04, N = 3); 8 vCPUs: 820.94 (SE +/- 0.56, N = 3)
Test / Class: FT.C - 16 vCPUs: 32644.85 (SE +/- 300.01, N = 3); 32 vCPUs: 52309.81 (SE +/- 41.18, N = 3); 8 vCPUs: 18574.23 (SE +/- 15.96, N = 3)
Test / Class: IS.D - 16 vCPUs: 1498.45 (SE +/- 14.70, N = 3); 32 vCPUs: 1822.77 (SE +/- 0.86, N = 3); 8 vCPUs: 1104.26 (SE +/- 1.14, N = 3)
Test / Class: LU.C - 16 vCPUs: 55447.31 (SE +/- 701.24, N = 3); 32 vCPUs: 87702.30 (SE +/- 137.48, N = 3); 8 vCPUs: 32029.14 (SE +/- 50.76, N = 3)
Test / Class: MG.C - 16 vCPUs: 33309.76 (SE +/- 102.49, N = 3); 32 vCPUs: 50939.05 (SE +/- 31.40, N = 3); 8 vCPUs: 27703.33 (SE +/- 46.23, N = 3)
Test / Class: SP.B - 16 vCPUs: 19552.45 (SE +/- 244.58, N = 3); 32 vCPUs: 34381.91 (SE +/- 38.20, N = 3); 8 vCPUs: 7338.98 (SE +/- 17.11, N = 3)
Test / Class: SP.C - 16 vCPUs: 19710.90 (SE +/- 112.56, N = 3); 32 vCPUs: 26843.58 (SE +/- 31.60, N = 3); 8 vCPUs: 7115.28 (SE +/- 39.91, N = 3)

ASKAP 1.0

More is better for all tests; units noted per test. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Test: tConvolve MT - Gridding (Million Grid Points Per Second) - 16 vCPUs: 3789.01 (SE +/- 5.95, N = 3); 32 vCPUs: 4456.55 (SE +/- 35.89, N = 15); 8 vCPUs: 2360.63 (SE +/- 5.73, N = 3)
Test: tConvolve MT - Degridding (Million Grid Points Per Second) - 16 vCPUs: 4083.16 (SE +/- 2.61, N = 3); 32 vCPUs: 5522.07 (SE +/- 80.56, N = 15); 8 vCPUs: 2196.74 (SE +/- 7.87, N = 3)
Test: tConvolve MPI - Degridding (Mpix/sec) - 16 vCPUs: 2585.31 (SE +/- 12.80, N = 3); 32 vCPUs: 3962.08 (SE +/- 54.84, N = 15); 8 vCPUs: 1325.06 (SE +/- 24.91, N = 15)
Test: tConvolve MPI - Gridding (Mpix/sec) - 16 vCPUs: 3343.25 (SE +/- 32.26, N = 3); 32 vCPUs: 3899.28 (SE +/- 42.99, N = 15); 8 vCPUs: 1977.89 (SE +/- 23.42, N = 15)
Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second) - 16 vCPUs: 3631.81 (SE +/- 43.40, N = 3); 32 vCPUs: 7262.74 (SE +/- 66.63, N = 3); 8 vCPUs: 2296.10 (SE +/- 29.91, N = 3)
Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second) - 16 vCPUs: 5023.70 (SE +/- 0.00, N = 3); 32 vCPUs: 9181.24 (SE +/- 0.00, N = 3); 8 vCPUs: 2421.43 (SE +/- 33.24, N = 3)
Test: Hogbom Clean OpenMP (Iterations Per Second) - 16 vCPUs: 645.16 (SE +/- 0.00, N = 3); 32 vCPUs: 996.70 (SE +/- 3.30, N = 3); 8 vCPUs: 371.30 (SE +/- 1.21, N = 3)

PyHPC Benchmarks 3.0

Device: CPU, Backend: Numpy. All results in Seconds, fewer is better.

Project Size: 16384 - Equation of State - 16 vCPUs: 0.006 (SE +/- 0.000, N = 15); 32 vCPUs: 0.005 (SE +/- 0.000, N = 14); 8 vCPUs: 0.005 (SE +/- 0.000, N = 3)
Project Size: 16384 - Isoneutral Mixing - 16 vCPUs: 0.015 (SE +/- 0.000, N = 3); 32 vCPUs: 0.014 (SE +/- 0.000, N = 15); 8 vCPUs: 0.015 (SE +/- 0.000, N = 15)
Project Size: 1048576 - Equation of State - 16 vCPUs: 0.400 (SE +/- 0.003, N = 3); 32 vCPUs: 0.392 (SE +/- 0.001, N = 3); 8 vCPUs: 0.381 (SE +/- 0.000, N = 3)
Project Size: 1048576 - Isoneutral Mixing - 16 vCPUs: 0.940 (SE +/- 0.004, N = 3); 32 vCPUs: 0.915 (SE +/- 0.005, N = 3); 8 vCPUs: 0.903 (SE +/- 0.004, N = 3)
Project Size: 4194304 - Equation of State - 16 vCPUs: 2.058 (SE +/- 0.002, N = 3); 32 vCPUs: 2.055 (SE +/- 0.002, N = 3); 8 vCPUs: 1.981 (SE +/- 0.000, N = 3)
Project Size: 4194304 - Isoneutral Mixing - 16 vCPUs: 3.801 (SE +/- 0.018, N = 3); 32 vCPUs: 3.723 (SE +/- 0.016, N = 3); 8 vCPUs: 3.640 (SE +/- 0.020, N = 3)

OpenFOAM 9

All results in Seconds, fewer is better. (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lgenericPatchFields -lOpenFOAM -ldl -lm (additional libraries reported per run: -lfoamToVTK -ldynamicMesh -llagrangian -lfileFormats and -ltransportModels -lspecie -lfiniteVolume -lfvModels -lmeshTools -lsampling)

Input: drivaerFastback, Medium Mesh Size - Mesh Time - 16 vCPUs: 303.71; 32 vCPUs: 206.40; 8 vCPUs: 425.95
Input: drivaerFastback, Medium Mesh Size - Execution Time - 16 vCPUs: 1534.72; 32 vCPUs: 994.53; 8 vCPUs: 2426.16

GPAW

Input: Carbon Nanotube

Seconds, fewer is better (GPAW 22.1): 16 vCPUs: 208.97 (SE +/- 0.03, N = 3); 32 vCPUs: 130.35 (SE +/- 0.30, N = 3); 8 vCPUs: 381.20 (SE +/- 0.63, N = 3). (CC) gcc options: -shared -fwrapv -O2 -O3 -march=native -lxc -lblas -lmpi

Coremark

CoreMark Size 666 - Iterations Per Second

Iterations/Sec, more is better (Coremark 1.0): 16 vCPUs: 351562.54 (SE +/- 87.09, N = 3); 32 vCPUs: 700917.94 (SE +/- 385.56, N = 3); 8 vCPUs: 175037.77 (SE +/- 85.46, N = 3). (CC) gcc options: -O2 -O3 -march=native -lrt

Aircrack-ng

k/s, more is better (Aircrack-ng 1.7): 16 vCPUs: 16697.92 (SE +/- 192.97, N = 15); 32 vCPUs: 33647.55 (SE +/- 287.54, N = 15); 8 vCPUs: 8308.58 (SE +/- 103.85, N = 15). (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread (-lpcre additionally reported for two of the three configurations)

Timed FFmpeg Compilation

Time To Compile

Seconds, fewer is better (Timed FFmpeg Compilation 4.4): 16 vCPUs: 61.98 (SE +/- 0.09, N = 3); 32 vCPUs: 38.96 (SE +/- 0.16, N = 3); 8 vCPUs: 113.46 (SE +/- 0.18, N = 3)

Timed MPlayer Compilation

Time To Compile

Seconds, fewer is better (Timed MPlayer Compilation 1.5): 16 vCPUs: 47.62 (SE +/- 0.26, N = 3); 32 vCPUs: 28.93 (SE +/- 0.35, N = 4); 8 vCPUs: 88.84 (SE +/- 0.03, N = 3)

Sysbench

Test: CPU

Events Per Second, more is better (Sysbench 1.0.20): 16 vCPUs: 54317.42 (SE +/- 12.70, N = 3); 32 vCPUs: 108241.61 (SE +/- 23.77, N = 3); 8 vCPUs: 27237.28 (SE +/- 6.95, N = 3). (CC) gcc options: -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm

VP9 libvpx Encoding 1.10.0

All results in Frames Per Second, more is better. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Speed: Speed 0 - Input: Bosphorus 4K - 16 vCPUs: 2.04 (SE +/- 0.00, N = 3); 32 vCPUs: 2.13 (SE +/- 0.00, N = 3); 8 vCPUs: 1.92 (SE +/- 0.02, N = 6)
Speed: Speed 5 - Input: Bosphorus 4K - 16 vCPUs: 6.68 (SE +/- 0.01, N = 3); 32 vCPUs: 6.99 (SE +/- 0.02, N = 3); 8 vCPUs: 6.11 (SE +/- 0.01, N = 3)
Speed: Speed 0 - Input: Bosphorus 1080p - 16 vCPUs: 4.84 (SE +/- 0.01, N = 3); 32 vCPUs: 4.99 (SE +/- 0.01, N = 3); 8 vCPUs: 4.65 (SE +/- 0.01, N = 3)
Speed: Speed 5 - Input: Bosphorus 1080p - 16 vCPUs: 11.78 (SE +/- 0.02, N = 3); 32 vCPUs: 12.11 (SE +/- 0.01, N = 3); 8 vCPUs: 11.27 (SE +/- 0.01, N = 3)

libavif avifenc 0.10

All results in Seconds, fewer is better. (CXX) g++ options: -O3 -fPIC -march=native -lm

Encoder Speed: 0 - 16 vCPUs: 328.97 (SE +/- 0.80, N = 3); 32 vCPUs: 266.34 (SE +/- 0.65, N = 3); 8 vCPUs: 456.24 (SE +/- 0.90, N = 3)
Encoder Speed: 2 - 16 vCPUs: 194.77 (SE +/- 0.47, N = 3); 32 vCPUs: 169.64 (SE +/- 0.13, N = 3); 8 vCPUs: 245.60 (SE +/- 0.32, N = 3)
Encoder Speed: 6 - 16 vCPUs: 11.168 (SE +/- 0.037, N = 3); 32 vCPUs: 6.682 (SE +/- 0.020, N = 3); 8 vCPUs: 20.273 (SE +/- 0.130, N = 3)
Encoder Speed: 6, Lossless - 16 vCPUs: 14.70 (SE +/- 0.19, N = 3); 32 vCPUs: 10.34 (SE +/- 0.00, N = 3); 8 vCPUs: 23.31 (SE +/- 0.11, N = 3)
Encoder Speed: 10, Lossless - 16 vCPUs: 7.658 (SE +/- 0.065, N = 8); 32 vCPUs: 6.775 (SE +/- 0.072, N = 3); 8 vCPUs: 9.997 (SE +/- 0.096, N = 3)

Timed Gem5 Compilation

Time To Compile

Seconds, fewer is better (Timed Gem5 Compilation 21.2): 16 vCPUs: 495.54 (SE +/- 1.16, N = 3); 32 vCPUs: 312.12 (SE +/- 1.96, N = 3); 8 vCPUs: 917.39 (SE +/- 0.66, N = 3)

nginx 1.21.1

All results in Requests Per Second, more is better. (CC) gcc options: -lcrypt -lz -O3 -march=native

Concurrent Requests: 500 - 16 vCPUs: 261253.72 (SE +/- 676.50, N = 3); 32 vCPUs: 235749.36 (SE +/- 245.82, N = 3); 8 vCPUs: 245968.24 (SE +/- 840.72, N = 3)
Concurrent Requests: 1000 - 16 vCPUs: 258672.88 (SE +/- 385.23, N = 3); 32 vCPUs: 233484.08 (SE +/- 479.91, N = 3); 8 vCPUs: 240628.22 (SE +/- 107.45, N = 3)

OpenSSL 3.0

More is better for all tests; units noted per test. (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl

Algorithm: SHA256 (byte/s) - 16 vCPUs: 12926411527 (SE +/- 19283388.31, N = 3); 32 vCPUs: 25788919913 (SE +/- 119493320.18, N = 3); 8 vCPUs: 6456083507 (SE +/- 19026629.44, N = 3)
Algorithm: RSA4096 (sign/s) - 16 vCPUs: 786.7 (SE +/- 0.06, N = 3); 32 vCPUs: 1570.2 (SE +/- 0.06, N = 3); 8 vCPUs: 393.7 (SE +/- 0.07, N = 3)
Algorithm: RSA4096 (verify/s) - 16 vCPUs: 64247.0 (SE +/- 8.35, N = 3); 32 vCPUs: 128273.1 (SE +/- 29.86, N = 3); 8 vCPUs: 32136.8 (SE +/- 10.26, N = 3)
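
For readability, the SHA256 throughput above (reported in byte/s) can be restated in GB/s and per vCPU; a small sketch using only the figures from this result (illustrative only, decimal gigabytes assumed):

# Illustrative only: OpenSSL SHA256 throughput (byte/s) copied from the results above.
sha256_bytes_per_s = {8: 6456083507, 16: 12926411527, 32: 25788919913}
for vcpus, bps in sorted(sha256_bytes_per_s.items()):
    gbps = bps / 1e9  # decimal GB/s
    print(f"{vcpus:2d} vCPUs: {gbps:5.2f} GB/s total, {gbps / vcpus:.2f} GB/s per vCPU")

That works out to roughly 6.5, 12.9, and 25.8 GB/s, or about 0.8 GB/s per vCPU across all three instance sizes.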

Apache Spark 3.3

All results in Seconds, fewer is better.

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time - 16 vCPUs: 4.93 (SE +/- 0.05, N = 12); 32 vCPUs: 4.79 (SE +/- 0.11, N = 15); 8 vCPUs: 6.28 (SE +/- 0.03, N = 3)
Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark - 16 vCPUs: 137.76 (SE +/- 0.11, N = 12); 32 vCPUs: 69.77 (SE +/- 0.06, N = 15); 8 vCPUs: 277.89 (SE +/- 0.17, N = 3)
Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe - 16 vCPUs: 8.39 (SE +/- 0.02, N = 12); 32 vCPUs: 4.79 (SE +/- 0.01, N = 15); 8 vCPUs: 15.92 (SE +/- 0.02, N = 3)
Row Count: 1000000 - Partitions: 100 - Group By Test Time - 16 vCPUs: 6.00 (SE +/- 0.05, N = 12); 32 vCPUs: 6.72 (SE +/- 0.23, N = 15); 8 vCPUs: 6.90 (SE +/- 0.17, N = 3)
Row Count: 1000000 - Partitions: 100 - Repartition Test Time - 16 vCPUs: 2.55 (SE +/- 0.03, N = 12); 32 vCPUs: 2.01 (SE +/- 0.03, N = 15); 8 vCPUs: 4.58 (SE +/- 0.02, N = 3)
Row Count: 1000000 - Partitions: 100 - Inner Join Test Time - 16 vCPUs: 2.22 (SE +/- 0.03, N = 12); 32 vCPUs: 2.13 (SE +/- 0.02, N = 15); 8 vCPUs: 3.48 (SE +/- 0.04, N = 3)
Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time - 16 vCPUs: 1.79 (SE +/- 0.04, N = 12); 32 vCPUs: 1.68 (SE +/- 0.03, N = 15); 8 vCPUs: 2.86 (SE +/- 0.03, N = 3)
Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time - 16 vCPUs: 5.91 (SE +/- 0.05, N = 3); 32 vCPUs: 4.96 (SE +/- 0.04, N = 15); 8 vCPUs: 8.09 (SE +/- 0.08, N = 3)
Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark - 16 vCPUs: 137.17 (SE +/- 0.20, N = 3); 32 vCPUs: 69.92 (SE +/- 0.06, N = 15); 8 vCPUs: 278.47 (SE +/- 0.35, N = 3)
Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe - 16 vCPUs: 8.29 (SE +/- 0.03, N = 3); 32 vCPUs: 4.80 (SE +/- 0.01, N = 15); 8 vCPUs: 15.81 (SE +/- 0.07, N = 3)
Row Count: 1000000 - Partitions: 2000 - Group By Test Time - 16 vCPUs: 7.43 (SE +/- 0.10, N = 3); 32 vCPUs: 6.72 (SE +/- 0.05, N = 15); 8 vCPUs: 8.85 (SE +/- 0.13, N = 3)
Row Count: 1000000 - Partitions: 2000 - Repartition Test Time - 16 vCPUs: 3.36 (SE +/- 0.01, N = 3); 32 vCPUs: 2.60 (SE +/- 0.03, N = 15); 8 vCPUs: 5.67 (SE +/- 0.01, N = 3)
Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time - 16 vCPUs: 3.65 (SE +/- 0.11, N = 3); 32 vCPUs: 2.87 (SE +/- 0.04, N = 15); 8 vCPUs: 5.98 (SE +/- 0.09, N = 3)
Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time - 16 vCPUs: 2.65 (SE +/- 0.05, N = 3); 32 vCPUs: 2.12 (SE +/- 0.02, N = 15); 8 vCPUs: 5.18 (SE +/- 0.07, N = 3)
Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time - 16 vCPUs: 51.40 (SE +/- 0.22, N = 3); 32 vCPUs: 46.30 (SE +/- 0.45, N = 9); 8 vCPUs: 93.67 (SE +/- 0.85, N = 3)
Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark - 16 vCPUs: 136.85 (SE +/- 0.06, N = 3); 32 vCPUs: 69.57 (SE +/- 0.08, N = 9); 8 vCPUs: 278.37 (SE +/- 0.14, N = 3)
Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe - 16 vCPUs: 8.40 (SE +/- 0.01, N = 3); 32 vCPUs: 4.76 (SE +/- 0.01, N = 9); 8 vCPUs: 15.65 (SE +/- 0.05, N = 3)
Row Count: 40000000 - Partitions: 100 - Group By Test Time - 16 vCPUs: 35.64 (SE +/- 0.24, N = 3); 32 vCPUs: 27.64 (SE +/- 0.16, N = 9); 8 vCPUs: 50.87 (SE +/- 0.53, N = 3)
Row Count: 40000000 - Partitions: 100 - Repartition Test Time - 16 vCPUs: 37.22 (SE +/- 0.92, N = 3); 32 vCPUs: 24.36 (SE +/- 0.12, N = 9); 8 vCPUs: 68.68 (SE +/- 0.25, N = 3)
Row Count: 40000000 - Partitions: 100 - Inner Join Test Time - 16 vCPUs: 44.62 (SE +/- 0.72, N = 3); 32 vCPUs: 30.32 (SE +/- 0.44, N = 9); 8 vCPUs: 80.02 (SE +/- 0.19, N = 3)
Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time - 16 vCPUs: 45.29 (SE +/- 0.44, N = 3); 32 vCPUs: 31.98 (SE +/- 0.26, N = 9); 8 vCPUs: 80.26 (SE +/- 0.22, N = 3)
Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time - 16 vCPUs: 51.55 (SE +/- 0.08, N = 3); 32 vCPUs: 39.22 (SE +/- 0.55, N = 12); 8 vCPUs: 89.23 (SE +/- 0.52, N = 3)
Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark - 16 vCPUs: 137.04 (SE +/- 0.17, N = 3); 32 vCPUs: 69.79 (SE +/- 0.11, N = 12); 8 vCPUs: 277.83 (SE +/- 0.13, N = 3)
Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe - 16 vCPUs: 8.34 (SE +/- 0.03, N = 3); 32 vCPUs: 4.78 (SE +/- 0.02, N = 12); 8 vCPUs: 15.71 (SE +/- 0.00, N = 3)
Row Count: 40000000 - Partitions: 2000 - Group By Test Time - 16 vCPUs: 30.70 (SE +/- 0.23, N = 3); 32 vCPUs: 22.84 (SE +/- 0.32, N = 12); 8 vCPUs: 45.61 (SE +/- 0.57, N = 3)
Row Count: 40000000 - Partitions: 2000 - Repartition Test Time - 16 vCPUs: 35.45 (SE +/- 0.20, N = 3); 32 vCPUs: 22.22 (SE +/- 0.24, N = 12); 8 vCPUs: 66.27 (SE +/- 0.36, N = 3)
Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time - 16 vCPUs: 44.39 (SE +/- 0.73, N = 3); 32 vCPUs: 28.66 (SE +/- 0.19, N = 12); 8 vCPUs: 78.00 (SE +/- 1.11, N = 3)
Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time - 16 vCPUs: 42.75 (SE +/- 0.40, N = 3); 32 vCPUs: 26.55 (SE +/- 0.17, N = 12); 8 vCPUs: 74.71 (SE +/- 0.33, N = 3)

Redis

Test: GET

Requests Per Second, more is better (Redis 6.0.9): 16 vCPUs: 1798635.37 (SE +/- 17762.42, N = 3); 32 vCPUs: 1926297.79 (SE +/- 10764.67, N = 3); 8 vCPUs: 1980967.48 (SE +/- 19964.38, N = 5). (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Seconds, fewer is better (Blender): 16 vCPUs: 426.04 (SE +/- 0.85, N = 3); 32 vCPUs: 214.41 (SE +/- 0.42, N = 3); 8 vCPUs: 841.18 (SE +/- 1.74, N = 3)

Redis

Test: SET

Requests Per Second, more is better (Redis 6.0.9): 16 vCPUs: 1305143.58 (SE +/- 10176.81, N = 3); 32 vCPUs: 1411234.92 (SE +/- 9294.72, N = 3); 8 vCPUs: 1440521.55 (SE +/- 15381.83, N = 3). (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

Facebook RocksDB

Test: Random Read

Op/s, more is better (Facebook RocksDB 7.0.1): 16 vCPUs: 62048967 (SE +/- 735054.27, N = 3); 32 vCPUs: 124704201 (SE +/- 376574.31, N = 3); 8 vCPUs: 31055689 (SE +/- 252880.06, N = 3). (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB

Test: Update Random

Op/s, more is better (Facebook RocksDB 7.0.1): 16 vCPUs: 315972 (SE +/- 2705.38, N = 8); 32 vCPUs: 211735 (SE +/- 705.83, N = 3); 8 vCPUs: 204688 (SE +/- 1758.40, N = 3). (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Blender

Blend File: Classroom - Compute: CPU-Only

Seconds, fewer is better (Blender): 16 vCPUs: 506.10 (SE +/- 0.22, N = 3); 32 vCPUs: 249.89 (SE +/- 0.07, N = 3); 8 vCPUs: 1016.66 (SE +/- 1.99, N = 3)

Blender

Blend File: BMW27 - Compute: CPU-Only

Seconds, fewer is better (Blender): 16 vCPUs: 226.26 (SE +/- 0.50, N = 3); 32 vCPUs: 112.47 (SE +/- 0.10, N = 3); 8 vCPUs: 447.71 (SE +/- 0.04, N = 3)

Facebook RocksDB

Test: Read While Writing

Op/s, more is better (Facebook RocksDB 7.0.1): 16 vCPUs: 1264826 (SE +/- 20446.66, N = 15); 32 vCPUs: 2610992 (SE +/- 32390.32, N = 12); 8 vCPUs: 594702 (SE +/- 9067.95, N = 15). (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB

Test: Read Random Write Random

Op/s, more is better (Facebook RocksDB 7.0.1): 16 vCPUs: 884700 (SE +/- 1701.74, N = 3); 32 vCPUs: 1321827 (SE +/- 9643.50, N = 15); 8 vCPUs: 548353 (SE +/- 4976.00, N = 15). (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Apache Cassandra

Test: Writes

Op/s, more is better (Apache Cassandra 4.0): 16 vCPUs: 39296 (SE +/- 256.55, N = 3); 32 vCPUs: 87819 (SE +/- 777.36, N = 3); 8 vCPUs: 17862 (SE +/- 136.95, N = 10)

PostgreSQL pgbench 14.0

Scaling Factor: 100 for all tests. TPS: more is better; Average Latency (ms): fewer is better. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

Clients: 100 - Mode: Read Only (TPS) - 16 vCPUs: 157894 (SE +/- 697.61, N = 3); 32 vCPUs: 329539 (SE +/- 1811.74, N = 3); 8 vCPUs: 54237 (SE +/- 663.61, N = 3)
Clients: 100 - Mode: Read Only - Average Latency (ms) - 16 vCPUs: 0.633 (SE +/- 0.003, N = 3); 32 vCPUs: 0.304 (SE +/- 0.002, N = 3); 8 vCPUs: 1.844 (SE +/- 0.023, N = 3)
Clients: 250 - Mode: Read Only (TPS) - 16 vCPUs: 131607 (SE +/- 1418.81, N = 3); 32 vCPUs: 312239 (SE +/- 4561.68, N = 12); 8 vCPUs: 49628 (SE +/- 588.20, N = 12)
Clients: 250 - Mode: Read Only - Average Latency (ms) - 16 vCPUs: 1.900 (SE +/- 0.021, N = 3); 32 vCPUs: 0.803 (SE +/- 0.012, N = 12); 8 vCPUs: 5.045 (SE +/- 0.060, N = 12)
Clients: 100 - Mode: Read Write (TPS) - 16 vCPUs: 2508 (SE +/- 17.11, N = 3); 32 vCPUs: 3383 (SE +/- 6.74, N = 3); 8 vCPUs: 2515 (SE +/- 9.80, N = 3)
Clients: 100 - Mode: Read Write - Average Latency (ms) - 16 vCPUs: 39.87 (SE +/- 0.27, N = 3); 32 vCPUs: 29.56 (SE +/- 0.06, N = 3); 8 vCPUs: 39.77 (SE +/- 0.15, N = 3)
Clients: 250 - Mode: Read Write (TPS) - 16 vCPUs: 1220 (SE +/- 15.69, N = 3); 32 vCPUs: 2282 (SE +/- 154.25, N = 12); 8 vCPUs: 1165 (SE +/- 15.88, N = 12)
Clients: 250 - Mode: Read Write - Average Latency (ms) - 16 vCPUs: 204.92 (SE +/- 2.67, N = 3); 32 vCPUs: 114.80 (SE +/- 7.17, N = 12); 8 vCPUs: 215.10 (SE +/- 2.96, N = 12)
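
The pgbench "Average Latency" rows are the same measurements expressed per client: with a fixed number of closed-loop clients, average latency is approximately clients / TPS. A quick sanity check against the read-write numbers above (illustrative only; values averaged over several runs, such as the 32 vCPU 250-client case, will not match exactly):

# Illustrative only: pgbench read-write results copied from above (configuration, clients, TPS, reported average latency in ms).
runs = [
    ("16 vCPUs", 100, 2508, 39.87),
    ("32 vCPUs", 100, 3383, 29.56),
    ("8 vCPUs", 100, 2515, 39.77),
    ("16 vCPUs", 250, 1220, 204.92),
    ("32 vCPUs", 250, 2282, 114.80),
    ("8 vCPUs", 250, 1165, 215.10),
]
for config, clients, tps, reported_ms in runs:
    derived_ms = clients / tps * 1000.0  # closed-loop estimate: latency ~= clients / throughput
    print(f"{config:>8} @ {clients} clients: derived {derived_ms:6.2f} ms vs reported {reported_ms:6.2f} ms")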


Phoronix Test Suite v10.8.4