Tau T2A 16 vCPUs

KVM testing on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2208123-NE-2208114NE01
Test Runs
  Tau T2A: 16 vCPUs - August 10 2022 - Test Duration: 1 Day, 5 Hours, 46 Minutes
  Tau T2A: 8 vCPUs - August 10 2022 - Test Duration: 1 Day, 11 Hours, 20 Minutes
  Tau T2A: 32 vCPUs - August 11 2022 - Test Duration: 1 Day, 11 Hours, 24 Minutes
  Average Test Duration: 1 Day, 9 Hours, 30 Minutes



System Details

Tau T2A: 16 vCPUs
  Processor: ARMv8 Neoverse-N1 (16 Cores); Motherboard: KVM Google Compute Engine; Memory: 64GB; Disk: 215GB nvme_card-pd; Network: Google Compute Engine Virtual; OS: Ubuntu 22.04; Kernel: 5.15.0-1013-gcp (aarch64); Compiler: GCC 12.0.1 20220319; File-System: ext4; System Layer: KVM

Tau T2A: 8 vCPUs
  As above, except Processor: ARMv8 Neoverse-N1 (8 Cores); Memory: 32GB

Tau T2A: 32 vCPUs
  As above, except Processor: ARMv8 Neoverse-N1 (32 Cores); Memory: 128GB; Kernel: 5.15.0-1016-gcp (aarch64)

Kernel Details: Transparent Huge Pages: madvise
Environment Details: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Details: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details: All three configurations report itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected. The 32 vCPUs configuration additionally reports retbleed: Not affected.

[Results overview table: side-by-side values for every test in this comparison across the 16, 8, and 32 vCPU configurations. The individual per-test results follow below.]

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to run the database tests. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, more is better)
  16 vCPUs: 131607 (SE +/- 1418.81, N = 3; Min: 129226.11 / Avg: 131606.88 / Max: 134134.44)
  8 vCPUs: 49628 (SE +/- 588.20, N = 12; Min: 46298.09 / Avg: 49628.29 / Max: 53275.59)
  32 vCPUs: 312239 (SE +/- 4561.68, N = 12; Min: 291136.68 / Avg: 312239.35 / Max: 329206.29)
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, fewer is better)
  16 vCPUs: 1.900 (SE +/- 0.021, N = 3; Min: 1.86 / Avg: 1.90 / Max: 1.94)
  8 vCPUs: 5.045 (SE +/- 0.060, N = 12; Min: 4.69 / Avg: 5.05 / Max: 5.40)
  32 vCPUs: 0.803 (SE +/- 0.012, N = 12; Min: 0.76 / Avg: 0.80 / Max: 0.86)
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, more is better)
  16 vCPUs: 157894 (SE +/- 697.61, N = 3; Min: 156503.81 / Avg: 157893.65 / Max: 158694.62)
  8 vCPUs: 54237 (SE +/- 663.61, N = 3; Min: 52936.44 / Avg: 54236.52 / Max: 55117.81)
  32 vCPUs: 329539 (SE +/- 1811.74, N = 3; Min: 325915.71 / Avg: 329538.71 / Max: 331400.84)
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better)
  16 vCPUs: 0.633 (SE +/- 0.003, N = 3; Min: 0.63 / Avg: 0.63 / Max: 0.64)
  8 vCPUs: 1.844 (SE +/- 0.023, N = 3; Min: 1.81 / Avg: 1.84 / Max: 1.89)
  32 vCPUs: 0.304 (SE +/- 0.002, N = 3; Min: 0.30 / Avg: 0.30 / Max: 0.31)
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm
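One way to sanity-check these pgbench figures (a sketch, not part of the original results): pgbench is a closed-loop benchmark, so each client always has one transaction in flight and average latency should be roughly the client count divided by throughput. The numbers below are copied from the charts above; the check itself is ad hoc.

```python
# Verify that avg latency (ms) ~= clients / TPS * 1000 for the
# pgbench read-only results reported above.
results = [
    # (clients, TPS, reported avg latency in ms)
    (250, 131607, 1.900),  # 16 vCPUs
    (250, 49628, 5.045),   # 8 vCPUs
    (250, 312239, 0.803),  # 32 vCPUs
    (100, 157894, 0.633),  # 16 vCPUs
    (100, 54237, 1.844),   # 8 vCPUs
    (100, 329539, 0.304),  # 32 vCPUs
]

for clients, tps, reported_ms in results:
    predicted_ms = clients / tps * 1000
    # Agreement within a few percent confirms the TPS and latency
    # charts describe the same runs.
    assert abs(predicted_ms - reported_ms) / reported_ms < 0.05
    print(f"{clients} clients @ {tps} TPS -> "
          f"predicted {predicted_ms:.3f} ms, reported {reported_ms} ms")
```

All six configurations agree to well under 5 percent, which is expected for a closed-loop workload.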

SPECjbb 2015

This is a benchmark of SPECjbb 2015. For this test profile to work, you must have a valid license/copy of the SPECjbb 2015 ISO (SPECjbb2015-1.02.iso) in your Phoronix Test Suite download cache. Learn more via the OpenBenchmarking.org test page.

SPECjbb 2015 - SPECjbb2015-Composite critical-jOPS (jOPS, more is better)
  16 vCPUs: 9207
  8 vCPUs: 3921
  32 vCPUs: 22955

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0 - Test: Writes (Op/s, more is better)
  16 vCPUs: 39296 (SE +/- 256.55, N = 3; Min: 38783 / Avg: 39296 / Max: 39561)
  8 vCPUs: 17862 (SE +/- 136.95, N = 10; Min: 17522 / Avg: 17862.4 / Max: 19014)
  32 vCPUs: 87819 (SE +/- 777.36, N = 3; Min: 86347 / Avg: 87819.33 / Max: 88988)

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a choice of the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, more is better)
  16 vCPUs: 49125.93 (SE +/- 18.18, N = 3; Min: 49092.71 / Avg: 49125.93 / Max: 49155.36)
  8 vCPUs: 14368.29 (SE +/- 23.11, N = 3; Min: 14343.76 / Avg: 14368.29 / Max: 14414.49)
  32 vCPUs: 69530.64 (SE +/- 272.46, N = 3; Min: 68985.78 / Avg: 69530.64 / Max: 69809.68)
  (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s, more is better)
  16 vCPUs: 19552.45 (SE +/- 244.58, N = 3; Min: 19177.93 / Avg: 19552.45 / Max: 20012.21)
  8 vCPUs: 7338.98 (SE +/- 17.11, N = 3; Min: 7305.07 / Avg: 7338.98 / Max: 7359.88)
  32 vCPUs: 34381.91 (SE +/- 38.20, N = 3; Min: 34332.78 / Avg: 34381.91 / Max: 34457.15)
  (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if you wish to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender - Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better)
  16 vCPUs: 506.10 (SE +/- 0.22, N = 3; Min: 505.78 / Avg: 506.10 / Max: 506.52)
  8 vCPUs: 1016.66 (SE +/- 1.99, N = 3; Min: 1014.24 / Avg: 1016.66 / Max: 1020.61)
  32 vCPUs: 249.89 (SE +/- 0.07, N = 3; Min: 249.81 / Avg: 249.89 / Max: 250.02)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Thorough (Seconds, fewer is better)
  16 vCPUs: 14.2146 (SE +/- 0.0106, N = 3; Min: 14.20 / Avg: 14.21 / Max: 14.23)
  8 vCPUs: 29.0505 (SE +/- 0.0316, N = 3; Min: 28.99 / Avg: 29.05 / Max: 29.09)
  32 vCPUs: 7.1619 (SE +/- 0.0033, N = 3; Min: 7.16 / Avg: 7.16 / Max: 7.17)
  (CXX) g++ options: -O3 -march=native -flto -pthread

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.7 (k/s, more is better)
  16 vCPUs: 16697.92 (SE +/- 192.97, N = 15; Min: 15465.48 / Avg: 16697.92 / Max: 17149.11)
  8 vCPUs: 8308.58 (SE +/- 103.85, N = 15; Min: 7746.84 / Avg: 8308.58 / Max: 8591.45)
  32 vCPUs: 33647.55 (SE +/- 287.54, N = 15; Min: 30818.67 / Avg: 33647.55 / Max: 34100.55)
  (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lpcre -lnl-3 -lnl-genl-3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

ASTC Encoder


ASTC Encoder 3.2 - Preset: Exhaustive (Seconds, fewer is better)
  16 vCPUs: 137.62 (SE +/- 0.04, N = 3; Min: 137.54 / Avg: 137.62 / Max: 137.69)
  8 vCPUs: 276.88 (SE +/- 3.08, N = 3; Min: 271.57 / Avg: 276.88 / Max: 282.23)
  32 vCPUs: 68.66 (SE +/- 0.08, N = 3; Min: 68.57 / Avg: 68.66 / Max: 68.81)
  (CXX) g++ options: -O3 -march=native -flto -pthread

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Random Read (Op/s, more is better)
  16 vCPUs: 62048967 (SE +/- 735054.27, N = 3; Min: 60578972 / Avg: 62048967 / Max: 62799788)
  8 vCPUs: 31055689 (SE +/- 252880.06, N = 3; Min: 30615005 / Avg: 31055689.33 / Max: 31490957)
  32 vCPUs: 124704201 (SE +/- 376574.31, N = 3; Min: 123951463 / Avg: 124704201.33 / Max: 125102097)
  (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better)
  16 vCPUs: 351562.54 (SE +/- 87.09, N = 3; Min: 351390.92 / Avg: 351562.54 / Max: 351674.12)
  8 vCPUs: 175037.77 (SE +/- 85.46, N = 3; Min: 174939.86 / Avg: 175037.77 / Max: 175208.06)
  32 vCPUs: 700917.94 (SE +/- 385.56, N = 3; Min: 700218.82 / Avg: 700917.94 / Max: 701549.25)
  (CC) gcc options: -O2 -O3 -march=native -lrt
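CoreMark is embarrassingly parallel, and the averages above scale almost perfectly with vCPU count. A small sketch (the scores are copied from this result file; the helper itself is ad hoc) computes the speedup and scaling efficiency relative to the 8 vCPU instance:

```python
# Scaling analysis of the CoreMark averages reported above.
scores = {8: 175037.77, 16: 351562.54, 32: 700917.94}
base_vcpus = 8
base = scores[base_vcpus]

for vcpus, score in sorted(scores.items()):
    speedup = score / base
    # Efficiency = achieved speedup / ideal linear speedup.
    efficiency = speedup / (vcpus / base_vcpus)
    print(f"{vcpus:2d} vCPUs: {score:>10.2f} iters/s, "
          f"{speedup:.2f}x vs {base_vcpus} vCPUs, {efficiency:.1%} efficient")
```

Both the 16 and 32 vCPU instances land at essentially 100% scaling efficiency, consistent with the Tau T2A offering one physical Neoverse-N1 core per vCPU rather than SMT threads.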

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, fewer is better)
  16 vCPUs: 136.85 (SE +/- 0.06, N = 3; Min: 136.77 / Avg: 136.85 / Max: 136.97)
  8 vCPUs: 278.37 (SE +/- 0.14, N = 3; Min: 278.22 / Avg: 278.37 / Max: 278.66)
  32 vCPUs: 69.57 (SE +/- 0.08, N = 9; Min: 69.23 / Avg: 69.57 / Max: 69.99)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: SHA256 (byte/s, more is better)
  16 vCPUs: 12926411527 (SE +/- 19283388.31, N = 3; Min: 12887954220 / Avg: 12926411526.67 / Max: 12948154910)
  8 vCPUs: 6456083507 (SE +/- 19026629.44, N = 3; Min: 6418125410 / Avg: 6456083506.67 / Max: 6477391730)
  32 vCPUs: 25788919913 (SE +/- 119493320.18, N = 3; Min: 25549965560 / Avg: 25788919913.33 / Max: 25911799070)
  (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl

OpenSSL 3.0 - Algorithm: RSA4096 (verify/s, more is better)
  16 vCPUs: 64247.0 (SE +/- 8.35, N = 3; Min: 64230.3 / Avg: 64247.0 / Max: 64255.5)
  8 vCPUs: 32136.8 (SE +/- 10.26, N = 3; Min: 32124.0 / Avg: 32136.8 / Max: 32157.1)
  32 vCPUs: 128273.1 (SE +/- 29.86, N = 3; Min: 128219.9 / Avg: 128273.1 / Max: 128323.2)
  (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl

OpenSSL 3.0 - Algorithm: RSA4096 (sign/s, more is better)
  16 vCPUs: 786.7 (SE +/- 0.06, N = 3; Min: 786.6 / Avg: 786.7 / Max: 786.8)
  8 vCPUs: 393.7 (SE +/- 0.07, N = 3; Min: 393.6 / Avg: 393.67 / Max: 393.8)
  32 vCPUs: 1570.2 (SE +/- 0.06, N = 3; Min: 1570.1 / Avg: 1570.2 / Max: 1570.3)
  (CC) gcc options: -pthread -O3 -march=native -lssl -lcrypto -ldl

Apache Spark


Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, fewer is better)
  16 vCPUs: 137.76 (SE +/- 0.11, N = 12; Min: 137.23 / Avg: 137.76 / Max: 138.45)
  8 vCPUs: 277.89 (SE +/- 0.17, N = 3; Min: 277.56 / Avg: 277.89 / Max: 278.12)
  32 vCPUs: 69.77 (SE +/- 0.06, N = 15; Min: 69.44 / Avg: 69.77 / Max: 70.23)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, fewer is better)
  16 vCPUs: 137.17 (SE +/- 0.20, N = 3; Min: 136.80 / Avg: 137.17 / Max: 137.49)
  8 vCPUs: 278.47 (SE +/- 0.35, N = 3; Min: 277.93 / Avg: 278.47 / Max: 279.13)
  32 vCPUs: 69.92 (SE +/- 0.06, N = 15; Min: 69.49 / Avg: 69.92 / Max: 70.28)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, fewer is better)
  16 vCPUs: 137.04 (SE +/- 0.17, N = 3; Min: 136.81 / Avg: 137.04 / Max: 137.38)
  8 vCPUs: 277.83 (SE +/- 0.13, N = 3; Min: 277.61 / Avg: 277.83 / Max: 278.06)
  32 vCPUs: 69.79 (SE +/- 0.11, N = 12; Min: 69.21 / Avg: 69.79 / Max: 70.30)

Blender


Blender - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better)
  16 vCPUs: 226.26 (SE +/- 0.50, N = 3; Min: 225.26 / Avg: 226.26 / Max: 226.77)
  8 vCPUs: 447.71 (SE +/- 0.04, N = 3; Min: 447.63 / Avg: 447.71 / Max: 447.75)
  32 vCPUs: 112.47 (SE +/- 0.10, N = 3; Min: 112.34 / Avg: 112.47 / Max: 112.67)

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, more is better)
  16 vCPUs: 1634.99 (SE +/- 1.03, N = 3; Min: 1633.87 / Avg: 1634.99 / Max: 1637.05)
  8 vCPUs: 820.94 (SE +/- 0.56, N = 3; Min: 820.31 / Avg: 820.94 / Max: 822.06)
  32 vCPUs: 3265.68 (SE +/- 2.04, N = 3; Min: 3263.14 / Avg: 3265.68 / Max: 3269.72)
  (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: CPU Stress (Bogo Ops/s, more is better)
  16 vCPUs: 4116.96 (SE +/- 2.80, N = 3; Min: 4111.64 / Avg: 4116.96 / Max: 4121.14)
  8 vCPUs: 2065.53 (SE +/- 1.28, N = 3; Min: 2062.99 / Avg: 2065.53 / Max: 2067.08)
  32 vCPUs: 8209.47 (SE +/- 4.23, N = 3; Min: 8203.26 / Avg: 8209.47 / Max: 8217.54)
  (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: CPU (Events Per Second, more is better)
  16 vCPUs: 54317.42 (SE +/- 12.70, N = 3; Min: 54292.36 / Avg: 54317.42 / Max: 54333.57)
  8 vCPUs: 27237.28 (SE +/- 6.95, N = 3; Min: 27223.85 / Avg: 27237.28 / Max: 27247.09)
  32 vCPUs: 108241.61 (SE +/- 23.77, N = 3; Min: 108194.99 / Avg: 108241.61 / Max: 108272.97)
  (CC) gcc options: -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm

Stress-NG


Stress-NG 0.14 - Test: Matrix Math (Bogo Ops/s, more is better)
  16 vCPUs: 76177.56 (SE +/- 10.44, N = 3; Min: 76162.84 / Avg: 76177.56 / Max: 76197.75)
  8 vCPUs: 38215.95 (SE +/- 25.04, N = 3; Min: 38166.79 / Avg: 38215.95 / Max: 38248.77)
  32 vCPUs: 151792.83 (SE +/- 9.80, N = 3; Min: 151776.48 / Avg: 151792.83 / Max: 151810.36)
  (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 - Test: Vector Math (Bogo Ops/s, more is better)
  16 vCPUs: 49102.30 (SE +/- 27.43, N = 3; Min: 49049.12 / Avg: 49102.30 / Max: 49140.53)
  8 vCPUs: 24633.99 (SE +/- 6.31, N = 3; Min: 24627.11 / Avg: 24633.99 / Max: 24646.60)
  32 vCPUs: 97749.08 (SE +/- 190.70, N = 3; Min: 97367.81 / Avg: 97749.08 / Max: 97948.21)
  (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
  16 vCPUs: 426.04 (SE +/- 0.85, N = 3; Min: 424.71 / Max: 427.62)
  8 vCPUs: 841.18 (SE +/- 1.74, N = 3; Min: 838.96 / Max: 844.60)
  32 vCPUs: 214.41 (SE +/- 0.42, N = 3; Min: 213.67 / Max: 215.13)

SPECjbb 2015

This is a benchmark of SPECjbb 2015. For this test profile to work, you must have a valid license/copy of the SPECjbb 2015 ISO (SPECjbb2015-1.02.iso) in your Phoronix Test Suite download cache. Learn more via the OpenBenchmarking.org test page.

SPECjbb 2015 - SPECjbb2015-Composite max-jOPS (jOPS, More Is Better)
  16 vCPUs: 18092
  8 vCPUs: 9158
  32 vCPUs: 35075

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  16 vCPUs: 0.880 (SE +/- 0.001, N = 3; Min: 0.88 / Max: 0.88)
  8 vCPUs: 0.450 (SE +/- 0.000, N = 3; Min: 0.45 / Max: 0.45)
  32 vCPUs: 1.718 (SE +/- 0.010, N = 3; Min: 1.70 / Max: 1.73)
  1. (CXX) g++ options: -O3 -march=native
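The GROMACS throughput scales almost linearly from 8 to 32 vCPUs. One way to quantify this is parallel efficiency: speedup relative to the 8 vCPU run, divided by the increase in vCPU count. The helper below is illustrative only (not part of the test profile) and uses the Ns Per Day averages reported in this result file:

```python
# Parallel scaling efficiency from the GROMACS water_GMX50_bare averages above.
# efficiency = (throughput_n / throughput_base) / (n_vcpus / base_vcpus)
ns_per_day = {8: 0.450, 16: 0.880, 32: 1.718}

def efficiency(vcpus, base=8):
    speedup = ns_per_day[vcpus] / ns_per_day[base]
    return speedup / (vcpus / base)

print(f"16 vCPUs: {efficiency(16):.1%}")  # ~97.8% of ideal linear scaling
print(f"32 vCPUs: {efficiency(32):.1%}")  # ~95.4%
```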

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some earlier ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, More Is Better)
  16 vCPUs: 5023.70 (SE +/- 0.00, N = 3; Min: 5023.70 / Max: 5023.70)
  8 vCPUs: 2421.43 (SE +/- 33.24, N = 3; Min: 2356.25 / Max: 2465.33)
  32 vCPUs: 9181.24 (SE +/- 0.00, N = 3; Min: 9181.24 / Max: 9181.24)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s, More Is Better)
  16 vCPUs: 19710.90 (SE +/- 112.56, N = 3; Min: 19495.83 / Max: 19876.01)
  8 vCPUs: 7115.28 (SE +/- 39.91, N = 3; Min: 7074.13 / Max: 7195.08)
  32 vCPUs: 26843.58 (SE +/- 31.60, N = 3; Min: 26799.99 / Max: 26905.01)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, More Is Better)
  16 vCPUs: 8.499 (SE +/- 0.112, N = 3; Min: 8.36 / Max: 8.72)
  8 vCPUs: 4.662 (SE +/- 0.025, N = 3; Min: 4.61 / Max: 4.69)
  32 vCPUs: 16.550 (SE +/- 0.004, N = 3; Min: 16.55 / Max: 16.56)

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, More Is Better)
  16 vCPUs: 8.861 (SE +/- 0.037, N = 3; Min: 8.79 / Max: 8.91)
  8 vCPUs: 4.812 (SE +/- 0.011, N = 3; Min: 4.79 / Max: 4.83)
  32 vCPUs: 16.596 (SE +/- 0.012, N = 3; Min: 16.57 / Max: 16.61)

  1. (CXX) g++ options: -O3 -march=native -ldl

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
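The Calculate Pi workloads in pyspark-benchmark estimate Pi by Monte Carlo sampling over a Spark DataFrame, with the row count playing the role of the sample count. The snippet below is a plain-Python sketch of the same Monte Carlo idea, not the benchmark's actual Spark code:

```python
import random

def estimate_pi(samples, seed=42):
    """Monte Carlo Pi: the fraction of random points in the unit square
    that fall inside the quarter circle, times 4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(estimate_pi(100_000))  # close to 3.14159
```

In the Spark version this sampling is distributed across partitions, which is why the benchmark varies the partition count (100 vs 2000) alongside the row count.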

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better)
  16 vCPUs: 8.39 (SE +/- 0.02, N = 12; Min: 8.28 / Max: 8.48)
  8 vCPUs: 15.92 (SE +/- 0.02, N = 3; Min: 15.90 / Max: 15.95)
  32 vCPUs: 4.79 (SE +/- 0.01, N = 15; Min: 4.72 / Max: 4.87)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better)
  16 vCPUs: 8.29 (SE +/- 0.03, N = 3; Min: 8.26 / Max: 8.35)
  8 vCPUs: 15.81 (SE +/- 0.07, N = 3; Min: 15.71 / Max: 15.94)
  32 vCPUs: 4.80 (SE +/- 0.01, N = 15; Min: 4.72 / Max: 4.87)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better)
  16 vCPUs: 8.40 (SE +/- 0.01, N = 3; Min: 8.37 / Max: 8.42)
  8 vCPUs: 15.65 (SE +/- 0.05, N = 3; Min: 15.55 / Max: 15.72)
  32 vCPUs: 4.76 (SE +/- 0.01, N = 9; Min: 4.70 / Max: 4.81)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better)
  16 vCPUs: 8.34 (SE +/- 0.03, N = 3; Min: 8.31 / Max: 8.40)
  8 vCPUs: 15.71 (SE +/- 0.00, N = 3; Min: 15.70 / Max: 15.72)
  32 vCPUs: 4.78 (SE +/- 0.02, N = 12; Min: 4.70 / Max: 4.93)

ASKAP

ASKAP 1.0 - Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, More Is Better)
  16 vCPUs: 3631.81 (SE +/- 43.40, N = 3; Min: 3550.08 / Max: 3698.00)
  8 vCPUs: 2296.10 (SE +/- 29.91, N = 3; Min: 2237.45 / Max: 2335.58)
  32 vCPUs: 7262.74 (SE +/- 66.63, N = 3; Min: 7196.11 / Max: 7396.00)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, More Is Better)
  16 vCPUs: 12171.95 (SE +/- 171.15, N = 3; Min: 11903.82 / Max: 12490.30)
  8 vCPUs: 6855.81 (SE +/- 49.63, N = 15; Min: 6420.22 / Max: 7040.30)
  32 vCPUs: 21433.92 (SE +/- 35.67, N = 3; Min: 21363.89 / Max: 21480.75)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds, Fewer Is Better)
  16 vCPUs: 46113.4 (SE +/- 84.24, N = 3; Min: 45977.5 / Max: 46267.6)
  8 vCPUs: 97646.1 (SE +/- 49.87, N = 3; Min: 97585.0 / Max: 97744.9)
  32 vCPUs: 31657.3 (SE +/- 149.01, N = 3; Min: 31403.2 / Max: 31919.2)
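TensorFlow Lite results are reported as average inference latency in microseconds, so lower is better; dividing the latency into one second converts it to throughput. A small sketch using the Inception V4 averages from this result file:

```python
# Convert average inference latency (microseconds) to inferences per second,
# using the Inception V4 averages reported above.
avg_us = {"16 vCPUs": 46113.4, "8 vCPUs": 97646.1, "32 vCPUs": 31657.3}

for config, us in avg_us.items():
    print(f"{config}: {1e6 / us:.1f} inferences/sec")  # e.g. 16 vCPUs -> ~21.7
```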

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, Fewer Is Better)
  16 vCPUs: 47.62 (SE +/- 0.26, N = 3; Min: 47.23 / Max: 48.12)
  8 vCPUs: 88.84 (SE +/- 0.03, N = 3; Min: 88.78 / Max: 88.89)
  32 vCPUs: 28.93 (SE +/- 0.35, N = 4; Min: 28.46 / Max: 29.97)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6 (Seconds, Fewer Is Better)
  16 vCPUs: 11.168 (SE +/- 0.037, N = 3; Min: 11.10 / Max: 11.22)
  8 vCPUs: 20.273 (SE +/- 0.130, N = 3; Min: 20.12 / Max: 20.53)
  32 vCPUs: 6.682 (SE +/- 0.020, N = 3; Min: 6.66 / Max: 6.72)
  1. (CXX) g++ options: -O3 -fPIC -march=native -lm

Apache Spark

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Repartition Test Time (Seconds, Fewer Is Better)
  16 vCPUs: 35.45 (SE +/- 0.20, N = 3; Min: 35.07 / Max: 35.74)
  8 vCPUs: 66.27 (SE +/- 0.36, N = 3; Min: 65.55 / Max: 66.66)
  32 vCPUs: 22.22 (SE +/- 0.24, N = 12; Min: 21.07 / Max: 24.03)

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research. Gem5 is widely used for computer architecture research within the industry, academia, and more. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2 - Time To Compile (Seconds, Fewer Is Better)
  16 vCPUs: 495.54 (SE +/- 1.16, N = 3; Min: 493.65 / Max: 497.66)
  8 vCPUs: 917.39 (SE +/- 0.66, N = 3; Min: 916.09 / Max: 918.22)
  32 vCPUs: 312.12 (SE +/- 1.96, N = 3; Min: 309.10 / Max: 315.78)

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1 - Input: Carbon Nanotube (Seconds, Fewer Is Better)
  16 vCPUs: 208.97 (SE +/- 0.03, N = 3; Min: 208.91 / Max: 209.00)
  8 vCPUs: 381.20 (SE +/- 0.63, N = 3; Min: 380.15 / Max: 382.33)
  32 vCPUs: 130.35 (SE +/- 0.30, N = 3; Min: 130.04 / Max: 130.95)
  1. (CC) gcc options: -shared -fwrapv -O2 -O3 -march=native -lxc -lblas -lmpi

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.4 - Time To Compile (Seconds, Fewer Is Better)
  16 vCPUs: 61.98 (SE +/- 0.09, N = 3; Min: 61.81 / Max: 62.14)
  8 vCPUs: 113.46 (SE +/- 0.18, N = 3; Min: 113.23 / Max: 113.80)
  32 vCPUs: 38.96 (SE +/- 0.16, N = 3; Min: 38.75 / Max: 39.28)

Apache Spark

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Repartition Test Time (Seconds, Fewer Is Better)
  16 vCPUs: 37.22 (SE +/- 0.92, N = 3; Min: 36.27 / Max: 39.06)
  8 vCPUs: 68.68 (SE +/- 0.25, N = 3; Min: 68.43 / Max: 69.17)
  32 vCPUs: 24.36 (SE +/- 0.12, N = 9; Min: 23.90 / Max: 25.05)

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, More Is Better)
  16 vCPUs: 32644.85 (SE +/- 300.01, N = 3; Min: 32073.63 / Max: 33089.54)
  8 vCPUs: 18574.23 (SE +/- 15.96, N = 3; Min: 18556.67 / Max: 18606.10)
  32 vCPUs: 52309.81 (SE +/- 41.18, N = 3; Min: 52227.66 / Max: 52356.10)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Apache Spark

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better)
  16 vCPUs: 42.75 (SE +/- 0.40, N = 3; Min: 42.05 / Max: 43.43)
  8 vCPUs: 74.71 (SE +/- 0.33, N = 3; Min: 74.10 / Max: 75.22)
  32 vCPUs: 26.55 (SE +/- 0.17, N = 12; Min: 25.77 / Max: 27.69)

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better)
  16 vCPUs: 55447.31 (SE +/- 701.24, N = 3; Min: 54108.08 / Max: 56477.53)
  8 vCPUs: 32029.14 (SE +/- 50.76, N = 3; Min: 31931.06 / Max: 32100.86)
  32 vCPUs: 87702.30 (SE +/- 137.48, N = 3; Min: 87427.73 / Max: 87852.23)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Apache Spark

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time (Seconds, Fewer Is Better)
  16 vCPUs: 44.39 (SE +/- 0.73, N = 3; Min: 42.98 / Max: 45.40)
  8 vCPUs: 78.00 (SE +/- 1.11, N = 3; Min: 76.07 / Max: 79.92)
  32 vCPUs: 28.66 (SE +/- 0.19, N = 12; Min: 27.96 / Max: 29.92)

TensorFlow Lite

TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
  16 vCPUs: 45445.9 (SE +/- 16.18, N = 3; Min: 45415.2 / Max: 45470.1)
  8 vCPUs: 91592.6 (SE +/- 70.02, N = 3; Min: 91501.4 / Max: 91730.2)
  32 vCPUs: 33994.9 (SE +/- 379.42, N = 3; Min: 33299.0 / Max: 34604.9)

ASKAP

ASKAP 1.0 - Test: Hogbom Clean OpenMP (Iterations Per Second, More Is Better)
  16 vCPUs: 645.16 (SE +/- 0.00, N = 3; Min: 645.16 / Max: 645.16)
  8 vCPUs: 371.30 (SE +/- 1.21, N = 3; Min: 369.00 / Max: 373.13)
  32 vCPUs: 996.70 (SE +/- 3.30, N = 3; Min: 990.10 / Max: 1000.00)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Apache Spark

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Inner Join Test Time (Seconds, Fewer Is Better)
  16 vCPUs: 44.62 (SE +/- 0.72, N = 3; Min: 43.42 / Max: 45.90)
  8 vCPUs: 80.02 (SE +/- 0.19, N = 3; Min: 79.74 / Max: 80.37)
  32 vCPUs: 30.32 (SE +/- 0.44, N = 9; Min: 28.69 / Max: 33.24)

Stress-NG

Stress-NG 0.14 - Test: NUMA (Bogo Ops/s, More Is Better)
  16 vCPUs: 1113.53 (SE +/- 4.29, N = 3; Min: 1108.66 / Max: 1122.08)
  8 vCPUs: 1387.80 (SE +/- 1.98, N = 3; Min: 1385.42 / Max: 1391.73)
  32 vCPUs: 549.13 (SE +/- 1.69, N = 3; Min: 545.77 / Max: 551.12)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

ASKAP

ASKAP 1.0 - Test: tConvolve MT - Degridding (Million Grid Points Per Second, More Is Better)
  16 vCPUs: 4083.16 (SE +/- 2.61, N = 3; Min: 4080.55 / Max: 4088.38)
  8 vCPUs: 2196.74 (SE +/- 7.87, N = 3; Min: 2182.43 / Max: 2209.59)
  32 vCPUs: 5522.07 (SE +/- 80.56, N = 15; Min: 5023.70 / Max: 5974.89)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Apache Spark

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better)
  16 vCPUs: 45.29 (SE +/- 0.44, N = 3; Min: 44.71 / Max: 46.16)
  8 vCPUs: 80.26 (SE +/- 0.22, N = 3; Min: 79.97 / Max: 80.69)
  32 vCPUs: 31.98 (SE +/- 0.26, N = 9; Min: 30.81 / Max: 33.46)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better)
  16 vCPUs: 2.65 (SE +/- 0.05, N = 3; Min: 2.58 / Max: 2.74)
  8 vCPUs: 5.18 (SE +/- 0.07, N = 3; Min: 5.05 / Max: 5.30)
  32 vCPUs: 2.12 (SE +/- 0.02, N = 15; Min: 1.94 / Max: 2.28)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 9 - Input: drivaerFastback, Medium Mesh Size - Execution Time (Seconds, Fewer Is Better)
  16 vCPUs: 1534.72
  8 vCPUs: 2426.16
  32 vCPUs: 994.53
  1. (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lgenericPatchFields -lOpenFOAM -ldl -lm (per-run libraries: -lfoamToVTK -ldynamicMesh -llagrangian -lfileFormats; -ltransportModels -lspecie -lfiniteVolume -lfvModels -lmeshTools -lsampling)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Read Random Write Random (Op/s, More Is Better)
  16 vCPUs: 884700 (SE +/- 1701.74, N = 3; Min: 882824 / Max: 888097)
  8 vCPUs: 548353 (SE +/- 4976.00, N = 15; Min: 512356 / Max: 577817)
  32 vCPUs: 1321827 (SE +/- 9643.50, N = 15; Min: 1278014 / Max: 1419584)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Apache Spark

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Repartition Test Time (Seconds, Fewer Is Better)
  16 vCPUs: 2.55 (SE +/- 0.03, N = 12; Min: 2.45 / Max: 2.83)
  8 vCPUs: 4.58 (SE +/- 0.02, N = 3; Min: 4.55 / Max: 4.61)
  32 vCPUs: 2.01 (SE +/- 0.03, N = 15; Min: 1.85 / Max: 2.17)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time (Seconds, Fewer Is Better)
  16 vCPUs: 51.55 (SE +/- 0.08, N = 3; Min: 51.38 / Max: 51.66)
  8 vCPUs: 89.23 (SE +/- 0.52, N = 3; Min: 88.48 / Max: 90.22)
  32 vCPUs: 39.22 (SE +/- 0.55, N = 12; Min: 37.02 / Max: 43.53)

libavif avifenc

libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better)
  16 vCPUs: 14.70 (SE +/- 0.19, N = 3; Min: 14.37 / Max: 15.03)
  8 vCPUs: 23.31 (SE +/- 0.11, N = 3; Min: 23.18 / Max: 23.54)
  32 vCPUs: 10.34 (SE +/- 0.00, N = 3; Min: 10.33 / Max: 10.35)
  1. (CXX) g++ options: -O3 -fPIC -march=native -lm

Apache Spark

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Repartition Test Time (Seconds, Fewer Is Better)
  16 vCPUs: 3.36 (SE +/- 0.01, N = 3; Min: 3.34 / Max: 3.39)
  8 vCPUs: 5.67 (SE +/- 0.01, N = 3; Min: 5.65 / Max: 5.69)
  32 vCPUs: 2.60 (SE +/- 0.03, N = 15; Min: 2.41 / Max: 2.87)

TensorFlow Lite

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
  16 vCPUs: 2481.96 (SE +/- 4.32, N = 3; Min: 2473.88 / Max: 2488.67)
  8 vCPUs: 4395.73 (SE +/- 3.06, N = 3; Min: 4390.71 / Max: 4401.27)
  32 vCPUs: 2093.25 (SE +/- 17.55, N = 3; Min: 2075.66 / Max: 2128.36)

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) to generate test data and run various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Apache Spark 3.3 — Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time (Seconds; Fewer Is Better): 16 vCPUs: 3.65 (SE ±0.11, N=3; Min 3.44 / Max 3.84); 8 vCPUs: 5.98 (SE ±0.09, N=3; Min 5.87 / Max 6.16); 32 vCPUs: 2.87 (SE ±0.04, N=15; Min 2.65 / Max 3.14)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — OpenFOAM 9 — Input: drivaerFastback, Medium Mesh Size - Mesh Time (Seconds; Fewer Is Better): 16 vCPUs: 303.71; 8 vCPUs: 425.95; 32 vCPUs: 206.40. Per-run link flags included -lfoamToVTK -ldynamicMesh -llagrangian -lfileFormats -ltransportModels -lspecie -lfiniteVolume -lfvModels -lmeshTools -lsampling. 1. (CXX) g++ options: -std=c++14 -O3 -mcpu=native -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lgenericPatchFields -lOpenFOAM -ldl -lm

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) to generate test data and run various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Apache Spark 3.3 — Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds; Fewer Is Better): 16 vCPUs: 51.40 (SE ±0.22, N=3; Min 50.98 / Max 51.69); 8 vCPUs: 93.67 (SE ±0.85, N=3; Min 92.53 / Max 95.35); 32 vCPUs: 46.30 (SE ±0.45, N=9; Min 45.06 / Max 49.46)

OpenBenchmarking.org — Apache Spark 3.3 — Row Count: 40000000 - Partitions: 2000 - Group By Test Time (Seconds; Fewer Is Better): 16 vCPUs: 30.70 (SE ±0.23, N=3; Min 30.25 / Max 30.94); 8 vCPUs: 45.61 (SE ±0.57, N=3; Min 44.97 / Max 46.74); 32 vCPUs: 22.84 (SE ±0.32, N=12; Min 21.59 / Max 25)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads, in contrast to HPCC. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — High Performance Conjugate Gradient 3.1 (GFLOP/s; More Is Better): 16 vCPUs: 17.10 (SE ±0.03, N=3; Min 17.03 / Max 17.15); 8 vCPUs: 11.10 (SE ±0.00, N=3; Min 11.09 / Max 11.1); 32 vCPUs: 22.09 (SE ±0.01, N=3; Min 22.08 / Max 22.1). 1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some earlier ASKAP benchmarks included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — ASKAP 1.0 — Test: tConvolve MPI - Gridding (Mpix/sec; More Is Better): 16 vCPUs: 3343.25 (SE ±32.26, N=3; Min 3279.95 / Max 3385.75); 8 vCPUs: 1977.89 (SE ±23.42, N=15; Min 1822.19 / Max 2066.11); 32 vCPUs: 3899.28 (SE ±42.99, N=15; Min 3430.01 / Max 4036.86). 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data-intensive loads and commonly run on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Graph500 3.0 — Scale: 26 (bfs max_TEPS; More Is Better): 16 vCPUs: 262563000; 32 vCPUs: 508372000. 1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Scale: 26

Tau T2A: 8 vCPUs: The test quit with a non-zero exit status. E: mpirun noticed that process rank 2 with PID 0 on node instance-2 exited on signal 9 (Killed).
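TEPS (traversed edges per second) is simply the number of edges processed divided by the kernel time. As a rough back-of-the-envelope sketch, assuming the Graph500 reference defaults of 2^scale vertices and an edge factor of 16 (the timing below is hypothetical, not taken from this result file):

```python
def approx_teps(scale: int, kernel_seconds: float, edgefactor: int = 16) -> float:
    """Approximate TEPS for a Graph500 run: edges traversed / kernel time.

    Assumes the reference defaults: 2**scale vertices and
    edgefactor * 2**scale edges in the generated graph.
    """
    edges = edgefactor * (2 ** scale)
    return edges / kernel_seconds

# At scale 26 the graph has ~67M vertices and ~1.07B edges; a hypothetical
# 2.5-second BFS kernel would correspond to roughly 4.3e8 TEPS.
print(f"{approx_teps(26, 2.5):.3e}")
```

This also hints at why the 8 vCPUs run was killed on signal 9: a scale-26 graph of this size simply exhausts the smaller instance's memory.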

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some earlier ASKAP benchmarks included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — ASKAP 1.0 — Test: tConvolve MT - Gridding (Million Grid Points Per Second; More Is Better): 16 vCPUs: 3789.01 (SE ±5.95, N=3; Min 3780.03 / Max 3800.26); 8 vCPUs: 2360.63 (SE ±5.73, N=3; Min 2353.64 / Max 2371.99); 32 vCPUs: 4456.55 (SE ±35.89, N=15; Min 4180.66 / Max 4653.3). 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — TensorFlow Lite 2022-05-18 — Model: Mobilenet Quant (Microseconds; Fewer Is Better): 16 vCPUs: 1898.41 (SE ±6.49, N=3; Min 1888.63 / Max 1910.69); 8 vCPUs: 2482.37 (SE ±6.00, N=3; Min 2475.08 / Max 2494.26); 32 vCPUs: 3550.65 (SE ±14.04, N=3; Min 3528.98 / Max 3576.94)

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data-intensive loads and commonly run on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Graph500 3.0 — Scale: 26 (bfs median_TEPS; More Is Better): 16 vCPUs: 257478000; 32 vCPUs: 477377000. 1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Scale: 26

Tau T2A: 8 vCPUs: The test quit with a non-zero exit status. E: mpirun noticed that process rank 2 with PID 0 on node instance-2 exited on signal 9 (Killed).

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) to generate test data and run various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Apache Spark 3.3 — Row Count: 40000000 - Partitions: 100 - Group By Test Time (Seconds; Fewer Is Better): 16 vCPUs: 35.64 (SE ±0.24, N=3; Min 35.22 / Max 36.04); 8 vCPUs: 50.87 (SE ±0.53, N=3; Min 50.1 / Max 51.89); 32 vCPUs: 27.64 (SE ±0.16, N=9; Min 26.97 / Max 28.3)

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — NAS Parallel Benchmarks 3.4 — Test / Class: MG.C (Total Mop/s; More Is Better): 16 vCPUs: 33309.76 (SE ±102.49, N=3; Min 33174.43 / Max 33510.76); 8 vCPUs: 27703.33 (SE ±46.23, N=3; Min 27619.18 / Max 27778.6); 32 vCPUs: 50939.05 (SE ±31.40, N=3; Min 50888.97 / Max 50996.9). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, spanning workloads from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Renaissance 0.14 — Test: Akka Unbalanced Cobwebbed Tree (ms; Fewer Is Better): 16 vCPUs: 24004.7 (SE ±164.12, N=3; Min 23826.67 / Max 24332.49; per-run MIN 18739.09 / MAX 24332.49); 8 vCPUs: 16322.4 (SE ±267.82, N=9; Min 15082.33 / Max 17398.5; per-run MIN 10733.44 / MAX 19126.93); 32 vCPUs: 29296.7 (SE ±344.06, N=4; Min 28772.63 / Max 30225.51; per-run MIN 20859.52 / MAX 30225.51)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — TensorFlow Lite 2022-05-18 — Model: NASNet Mobile (Microseconds; Fewer Is Better): 16 vCPUs: 15855.1 (SE ±180.13, N=3; Min 15674.3 / Max 16215.4); 8 vCPUs: 16159.1 (SE ±27.10, N=3; Min 16120.7 / Max 16211.4); 32 vCPUs: 28372.8 (SE ±355.48, N=3; Min 27971.5 / Max 29081.7)

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data-intensive loads and commonly run on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Graph500 3.0 — Scale: 26 (sssp max_TEPS; More Is Better): 16 vCPUs: 95265500; 32 vCPUs: 169542000. 1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Scale: 26

Tau T2A: 8 vCPUs: The test quit with a non-zero exit status. E: mpirun noticed that process rank 2 with PID 0 on node instance-2 exited on signal 9 (Killed).

OpenBenchmarking.org — Graph500 3.0 — Scale: 26 (sssp median_TEPS; More Is Better): 16 vCPUs: 70750200; 32 vCPUs: 124702000. 1. (CC) gcc options: -fcommon -O3 -march=native -lpthread -lm -lmpi

Scale: 26

Tau T2A: 8 vCPUs: The test quit with a non-zero exit status. E: mpirun noticed that process rank 2 with PID 0 on node instance-2 exited on signal 9 (Killed).

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — TensorFlow Lite 2022-05-18 — Model: SqueezeNet (Microseconds; Fewer Is Better): 16 vCPUs: 3955.89 (SE ±11.05, N=3; Min 3934.13 / Max 3970.08); 8 vCPUs: 6618.32 (SE ±9.96, N=3; Min 6606.33 / Max 6638.08); 32 vCPUs: 3853.90 (SE ±31.57, N=8; Min 3683.15 / Max 3940.72)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — libavif avifenc 0.10 — Encoder Speed: 0 (Seconds; Fewer Is Better): 16 vCPUs: 328.97 (SE ±0.80, N=3; Min 327.62 / Max 330.39); 8 vCPUs: 456.24 (SE ±0.90, N=3; Min 454.45 / Max 457.35); 32 vCPUs: 266.34 (SE ±0.65, N=3; Min 265.62 / Max 267.64). 1. (CXX) g++ options: -O3 -fPIC -march=native -lm

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — NAS Parallel Benchmarks 3.4 — Test / Class: IS.D (Total Mop/s; More Is Better): 16 vCPUs: 1498.45 (SE ±14.70, N=3; Min 1470.82 / Max 1520.98); 8 vCPUs: 1104.26 (SE ±1.14, N=3; Min 1102.35 / Max 1106.3); 32 vCPUs: 1822.77 (SE ±0.86, N=3; Min 1821.35 / Max 1824.31). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) to generate test data and run various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Apache Spark 3.3 — Row Count: 1000000 - Partitions: 100 - Inner Join Test Time (Seconds; Fewer Is Better): 16 vCPUs: 2.22 (SE ±0.03, N=12; Min 2.03 / Max 2.32); 8 vCPUs: 3.48 (SE ±0.04, N=3; Min 3.41 / Max 3.54); 32 vCPUs: 2.13 (SE ±0.02, N=15; Min 1.93 / Max 2.3)

OpenBenchmarking.org — Apache Spark 3.3 — Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time (Seconds; Fewer Is Better): 16 vCPUs: 5.91 (SE ±0.05, N=3; Min 5.82 / Max 5.98); 8 vCPUs: 8.09 (SE ±0.08, N=3; Min 8 / Max 8.26); 32 vCPUs: 4.96 (SE ±0.04, N=15; Min 4.73 / Max 5.21)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Facebook RocksDB 7.0.1 — Test: Update Random (Op/s; More Is Better): 16 vCPUs: 315972 (SE ±2705.38, N=8; Min 301107 / Max 325155); 8 vCPUs: 204688 (SE ±1758.40, N=3; Min 201659 / Max 207750); 32 vCPUs: 211735 (SE ±705.83, N=3; Min 210492 / Max 212936). 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — ASTC Encoder 3.2 — Preset: Medium (Seconds; Fewer Is Better): 16 vCPUs: 6.9449 (SE ±0.0194, N=3; Min 6.91 / Max 6.98); 8 vCPUs: 9.0505 (SE ±0.0253, N=3; Min 9 / Max 9.08); 32 vCPUs: 5.9825 (SE ±0.0035, N=3; Min 5.98 / Max 5.99). 1. (CXX) g++ options: -O3 -march=native -flto -pthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — libavif avifenc 0.10 — Encoder Speed: 10, Lossless (Seconds; Fewer Is Better): 16 vCPUs: 7.658 (SE ±0.065, N=8; Min 7.56 / Max 8.11); 8 vCPUs: 9.997 (SE ±0.096, N=3; Min 9.81 / Max 10.13); 32 vCPUs: 6.775 (SE ±0.072, N=3; Min 6.66 / Max 6.91). 1. (CXX) g++ options: -O3 -fPIC -march=native -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, spanning workloads from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Renaissance 0.14 — Test: Apache Spark ALS (ms; Fewer Is Better): 16 vCPUs: 3906.7 (SE ±27.36, N=3; Min 3869.32 / Max 3959.96; per-run MIN 3730.93 / MAX 4142.3); 8 vCPUs: 5725.2 (SE ±7.85, N=3; Min 5709.48 / Max 5733.25; per-run MIN 5528.56 / MAX 5954.26); 32 vCPUs: 4118.3 (SE ±32.04, N=3; Min 4055.57 / Max 4160.9; per-run MIN 3925.84 / MAX 4358.22)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — libavif avifenc 0.10 — Encoder Speed: 2 (Seconds; Fewer Is Better): 16 vCPUs: 194.77 (SE ±0.47, N=3; Min 194.12 / Max 195.7); 8 vCPUs: 245.60 (SE ±0.32, N=3; Min 244.98 / Max 246.05); 32 vCPUs: 169.64 (SE ±0.13, N=3; Min 169.44 / Max 169.9). 1. (CXX) g++ options: -O3 -fPIC -march=native -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Stress-NG 0.14 — Test: System V Message Passing (Bogo Ops/s; More Is Better): 16 vCPUs: 5475267.36 (SE ±15929.45, N=3; Min 5444646.96 / Max 5498195.68); 8 vCPUs: 4507844.17 (SE ±12538.93, N=3; Min 4494799.53 / Max 4532915.15); 32 vCPUs: 6128517.10 (SE ±7551.56, N=3; Min 6118435.12 / Max 6143296.86). 1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — PostgreSQL pgbench 14.0 — Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS; More Is Better): 16 vCPUs: 2508 (SE ±17.11, N=3; Min 2480.5 / Max 2539.48); 8 vCPUs: 2515 (SE ±9.80, N=3; Min 2502.51 / Max 2533.95); 32 vCPUs: 3383 (SE ±6.74, N=3; Min 3374.15 / Max 3396.28). 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.org — PostgreSQL pgbench 14.0 — Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms; Fewer Is Better): 16 vCPUs: 39.87 (SE ±0.27, N=3; Min 39.38 / Max 40.31); 8 vCPUs: 39.77 (SE ±0.15, N=3; Min 39.46 / Max 39.96); 32 vCPUs: 29.56 (SE ±0.06, N=3; Min 29.44 / Max 29.64). 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm
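With a fixed number of concurrent clients, the reported average latency follows directly from the throughput via Little's law: latency ≈ clients / TPS. A small sanity check against the pgbench figures above:

```python
def avg_latency_ms(clients: int, tps: float) -> float:
    """Little's law for a closed-loop benchmark: latency = clients / throughput."""
    return clients / tps * 1000.0

# 100 clients at ~3383 TPS works out to ~29.56 ms,
# matching the reported 32 vCPUs average latency.
print(round(avg_latency_ms(100, 3383.07), 2))
```

This is why the TPS and average-latency graphs for the same pgbench configuration always mirror each other.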

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) to generate test data and run various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Apache Spark 3.3 — Row Count: 1000000 - Partitions: 2000 - Group By Test Time (Seconds; Fewer Is Better): 16 vCPUs: 7.43 (SE ±0.10, N=3; Min 7.23 / Max 7.56); 8 vCPUs: 8.85 (SE ±0.13, N=3; Min 8.64 / Max 9.09); 32 vCPUs: 6.72 (SE ±0.05, N=15; Min 6.22 / Max 7)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Stress-NG 0.14 — Test: CPU Cache (Bogo Ops/s; More Is Better): 16 vCPUs: 551.25 (SE ±2.05, N=3; Min 547.93 / Max 554.99); 8 vCPUs: 436.31 (SE ±2.30, N=3; Min 431.86 / Max 439.51); 32 vCPUs: 566.91 (SE ±0.28, N=3; Min 566.61 / Max 567.47). 1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — TNN 0.3 — Target: CPU - Model: DenseNet (ms; Fewer Is Better): 16 vCPUs: 3358.73 (SE ±12.43, N=3; Min 3337.6 / Max 3380.64; per-run MIN 3163.2 / MAX 3575.85); 8 vCPUs: 3842.12 (SE ±9.75, N=3; Min 3823.3 / Max 3855.97; per-run MIN 3619.38 / MAX 4060.16); 32 vCPUs: 3056.90 (SE ±6.90, N=3; Min 3043.51 / Max 3066.48; per-run MIN 2928.19 / MAX 3237.58). 1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, spanning workloads from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Renaissance 0.14 — Test: In-Memory Database Shootout (ms; Fewer Is Better): 16 vCPUs: 6261.4 (SE ±41.18, N=15; Min 6006.53 / Max 6665.52; per-run MIN 5664.61 / MAX 10501.86); 8 vCPUs: 5537.1 (SE ±66.45, N=4; Min 5372.16 / Max 5697.52; per-run MIN 5069.67 / MAX 6244.83); 32 vCPUs: 6566.5 (SE ±37.00, N=3; Min 6527.84 / Max 6640.45; per-run MIN 5609.26 / MAX 13128.6)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — VP9 libvpx Encoding 1.10.0 — Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second; More Is Better): 16 vCPUs: 6.68 (SE ±0.01, N=3; Min 6.66 / Max 6.7); 8 vCPUs: 6.11 (SE ±0.01, N=3; Min 6.09 / Max 6.13); 32 vCPUs: 6.99 (SE ±0.02, N=3; Min 6.97 / Max 7.03). 1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

OpenBenchmarking.org — VP9 libvpx Encoding 1.10.0 — Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second; More Is Better): 16 vCPUs: 2.04 (SE ±0.00, N=3; Min 2.04 / Max 2.04); 8 vCPUs: 1.92 (SE ±0.02, N=6; Min 1.83 / Max 1.95); 32 vCPUs: 2.13 (SE ±0.00, N=3; Min 2.13 / Max 2.13). 1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — nginx 1.21.1 — Concurrent Requests: 500 (Requests Per Second; More Is Better): 16 vCPUs: 261253.72 (SE ±676.50, N=3; Min 259914.09 / Max 262087.89); 8 vCPUs: 245968.24 (SE ±840.72, N=3; Min 244981.25 / Max 247640.65); 32 vCPUs: 235749.36 (SE ±245.82, N=3; Min 235285.7 / Max 236122.79). 1. (CC) gcc options: -lcrypt -lz -O3 -march=native

OpenBenchmarking.org — nginx 1.21.1 — Concurrent Requests: 1000 (Requests Per Second; More Is Better): 16 vCPUs: 258672.88 (SE ±385.23, N=3; Min 257942.91 / Max 259251.34); 8 vCPUs: 240628.22 (SE ±107.45, N=3; Min 240458.13 / Max 240827); 32 vCPUs: 233484.08 (SE ±479.91, N=3; Min 232524.74 / Max 233990). 1. (CC) gcc options: -lcrypt -lz -O3 -march=native

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Redis 6.0.9 — Test: SET (Requests Per Second; More Is Better): 16 vCPUs: 1305143.58 (SE ±10176.81, N=3; Min 1291489.12 / Max 1325042.5); 8 vCPUs: 1440521.55 (SE ±15381.83, N=3; Min 1424943.88 / Max 1471284.38); 32 vCPUs: 1411234.92 (SE ±9294.72, N=3; Min 1395868.25 / Max 1427977.75). 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

OpenBenchmarking.org — Redis 6.0.9 — Test: GET (Requests Per Second; More Is Better): 16 vCPUs: 1798635.37 (SE ±17762.42, N=3; Min 1777208.5 / Max 1833888.12); 8 vCPUs: 1980967.48 (SE ±19964.38, N=5; Min 1920491.75 / Max 2027626.88); 32 vCPUs: 1926297.79 (SE ±10764.67, N=3; Min 1914987.38 / Max 1947817.75). 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, spanning workloads from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Renaissance 0.14 — Test: Scala Dotty (ms; Fewer Is Better): 16 vCPUs: 1979.2 (SE ±12.38, N=3; Min 1954.65 / Max 1994.27; per-run MIN 1396.31 / MAX 2843.97); 8 vCPUs: 1800.2 (SE ±18.70, N=5; Min 1758.19 / Max 1864.35; per-run MIN 1442.18 / MAX 3536.53); 32 vCPUs: 1871.7 (SE ±32.38, N=11; Min 1755.54 / Max 2085.76; per-run MIN 1370.64 / MAX 3034.49)

OpenBenchmarking.org — Renaissance 0.14 — Test: ALS Movie Lens (ms; Fewer Is Better): 16 vCPUs: 16797.5 (SE ±79.58, N=3; Min 16713.33 / Max 16956.52; per-run MIN 16713.33 / MAX 18436.45); 8 vCPUs: 16183.5 (SE ±101.59, N=3; Min 16078.67 / Max 16386.66; per-run MIN 16078.67 / MAX 18111.73); 32 vCPUs: 17606.6 (SE ±57.21, N=3; Min 17544.26 / Max 17720.91; per-run MIN 17544.26 / MAX 19037.24)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, Fewer Is Better)
  16 vCPUs: 5524 (SE +/- 59.44, N = 4; runs: 5417 - 5693)
  8 vCPUs:  5538 (SE +/- 38.11, N = 20; runs: 5186 - 5869)
  32 vCPUs: 5954 (SE +/- 28.01, N = 4; runs: 5886 - 6023)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  16 vCPUs: 11.78 (SE +/- 0.02, N = 3; runs: 11.74 - 11.81)
  8 vCPUs:  11.27 (SE +/- 0.01, N = 3; runs: 11.26 - 11.29)
  32 vCPUs: 12.11 (SE +/- 0.01, N = 3; runs: 12.09 - 12.13)
  1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  16 vCPUs: 4.84 (SE +/- 0.01, N = 3; runs: 4.83 - 4.85)
  8 vCPUs:  4.65 (SE +/- 0.01, N = 3; runs: 4.63 - 4.67)
  32 vCPUs: 4.99 (SE +/- 0.01, N = 3; runs: 4.97 - 5.01)
  1. (CXX) g++ options: -lm -lpthread -O3 -march=native -march=armv8-a -fPIC -U_FORTIFY_SOURCE -std=gnu++11

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
  16 vCPUs: 0.015 (SE +/- 0.000, N = 3; runs: 0.02 - 0.02)
  8 vCPUs:  0.015 (SE +/- 0.000, N = 15; runs: 0.01 - 0.02)
  32 vCPUs: 0.014 (SE +/- 0.000, N = 15; runs: 0.01 - 0.02)

Renaissance

Renaissance 0.14 - Test: Random Forest (ms, Fewer Is Better)
  16 vCPUs: 997.7 (SE +/- 4.55, N = 3; runs: 989.63 - 1005.37; overall: MIN 900.54 / MAX 1197.51)
  8 vCPUs:  985.9 (SE +/- 5.46, N = 3; runs: 977.35 - 996.04; overall: MIN 902.84 / MAX 1247.45)
  32 vCPUs: 1047.3 (SE +/- 12.92, N = 3; runs: 1026.01 - 1070.63; overall: MIN 904.64 / MAX 1280.13)

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, Fewer Is Better)
  16 vCPUs: 8932.5 (SE +/- 62.21, N = 3; runs: 8809.97 - 9012.39; overall: MIN 8338.21 / MAX 10055.02)
  8 vCPUs:  9234.8 (SE +/- 40.11, N = 3; runs: 9169.64 - 9307.91; overall: MIN 8668.63 / MAX 9879.54)
  32 vCPUs: 9430.8 (SE +/- 122.37, N = 3; runs: 9216.45 - 9640.26; overall: MIN 8793.75 / MAX 9955.78)

PyHPC Benchmarks

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State (Seconds, Fewer Is Better)
  16 vCPUs: 0.400 (SE +/- 0.003, N = 3; runs: 0.39 - 0.41)
  8 vCPUs:  0.381 (SE +/- 0.000, N = 3; runs: 0.38 - 0.38)
  32 vCPUs: 0.392 (SE +/- 0.001, N = 3; runs: 0.39 - 0.39)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
  16 vCPUs: 3.801 (SE +/- 0.018, N = 3; runs: 3.77 - 3.82)
  8 vCPUs:  3.640 (SE +/- 0.020, N = 3; runs: 3.62 - 3.68)
  32 vCPUs: 3.723 (SE +/- 0.016, N = 3; runs: 3.71 - 3.76)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
  16 vCPUs: 0.940 (SE +/- 0.004, N = 3; runs: 0.93 - 0.95)
  8 vCPUs:  0.903 (SE +/- 0.004, N = 3; runs: 0.90 - 0.91)
  32 vCPUs: 0.915 (SE +/- 0.005, N = 3; runs: 0.91 - 0.92)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better)
  16 vCPUs: 2.058 (SE +/- 0.002, N = 3; runs: 2.06 - 2.06)
  8 vCPUs:  1.981 (SE +/- 0.000, N = 3; runs: 1.98 - 1.98)
  32 vCPUs: 2.055 (SE +/- 0.002, N = 3; runs: 2.05 - 2.06)
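
These PyHPC timings are nearly identical across 8, 16, and 32 vCPUs, consistent with the sequential single-core focus of these benchmarks, and they track problem size almost linearly. A quick illustrative check (not part of the test suite) using the 8 vCPUs Isoneutral Mixing averages from the results above:

```python
# 8 vCPUs Isoneutral Mixing averages (seconds) keyed by project size
times = {16384: 0.015, 1048576: 0.903, 4194304: 3.640}

# Seconds per grid point; near-constant values indicate near-linear scaling
per_point = {n: t / n for n, t in times.items()}
ratio = max(per_point.values()) / min(per_point.values())  # ~1.06
```

A ratio close to 1 means runtime grows in proportion to project size, so the per-point cost is essentially fixed regardless of instance size.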

DaCapo Benchmark

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better)
  16 vCPUs: 5010 (SE +/- 36.80, N = 4; runs: 4953 - 5116)
  8 vCPUs:  5188 (SE +/- 30.34, N = 4; runs: 5123 - 5269)
  32 vCPUs: 5079 (SE +/- 5.52, N = 4; runs: 5069 - 5090)

Renaissance

Renaissance 0.14 - Test: Apache Spark PageRank (ms, Fewer Is Better)
  16 vCPUs: 5197.0 (SE +/- 49.42, N = 3; runs: 5125.12 - 5291.71; overall: MIN 4732.96 / MAX 5397.5)
  8 vCPUs:  5020.8 (SE +/- 37.35, N = 11; runs: 4900.52 - 5316.46; overall: MIN 4537.14 / MAX 5676.41)
  32 vCPUs: 5174.3 (SE +/- 77.61, N = 12; runs: 4649.9 - 5678.23; overall: MIN 4316.47 / MAX 6446.52)

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better)
  16 vCPUs: 3067.3 (SE +/- 14.58, N = 3; runs: 3042.17 - 3092.67; overall: MIN 3011.77 / MAX 3250.64)
  8 vCPUs:  3001.6 (SE +/- 33.55, N = 3; runs: 2954.37 - 3066.51; overall: MIN 2862.8 / MAX 3206.85)
  32 vCPUs: 3084.2 (SE +/- 8.81, N = 3; runs: 3067.59 - 3097.64; overall: MIN 2993.8 / MAX 3192.9)

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  16 vCPUs: 328.89 (SE +/- 1.36, N = 3; runs: 326.83 - 331.45; overall: MIN 322.15 / MAX 373.8)
  8 vCPUs:  331.34 (SE +/- 0.78, N = 3; runs: 330.44 - 332.9; overall: MIN 327.36 / MAX 339.94)
  32 vCPUs: 322.77 (SE +/- 0.05, N = 3; runs: 322.67 - 322.86; overall: MIN 319.63 / MAX 326.43)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
  16 vCPUs: 305.23 (SE +/- 0.80, N = 3; runs: 303.64 - 306.26; overall: MIN 298.53 / MAX 370.85)
  8 vCPUs:  303.80 (SE +/- 0.64, N = 3; runs: 302.52 - 304.54; overall: MIN 300.11 / MAX 314.72)
  32 vCPUs: 301.15 (SE +/- 0.07, N = 3; runs: 301.02 - 301.26; overall: MIN 299.13 / MAX 307.2)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
  16 vCPUs: 96.21 (SE +/- 0.34, N = 3; runs: 95.77 - 96.87; overall: MIN 95.08 / MAX 100.72)
  8 vCPUs:  95.73 (SE +/- 0.10, N = 3; runs: 95.54 - 95.86; overall: MIN 95.23 / MAX 97.46)
  32 vCPUs: 95.47 (SE +/- 0.00, N = 3; runs: 95.47 - 95.48; overall: MIN 95.15 / MAX 96.88)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

PyHPC Benchmarks

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State (Seconds, Fewer Is Better)
  16 vCPUs: 0.006 (SE +/- 0.000, N = 15; runs: 0.01 - 0.01)
  8 vCPUs:  0.005 (SE +/- 0.000, N = 3; runs: 0.01 - 0.01)
  32 vCPUs: 0.005 (SE +/- 0.000, N = 14; runs: 0.01 - 0.01)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Read While Writing (Op/s, More Is Better)
  16 vCPUs: 1264826 (SE +/- 20446.66, N = 15; runs: 1163299 - 1416458)
  8 vCPUs:  594702 (SE +/- 9067.95, N = 15; runs: 551868 - 680011)
  32 vCPUs: 2610992 (SE +/- 32390.32, N = 12; runs: 2501493 - 2841216)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
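
Read-while-writing is one of the workloads here where the extra vCPUs pay off almost linearly: relative to the 8 vCPU instance, throughput roughly doubles at 16 vCPUs and more than quadruples at 32. A small sketch (using the rounded averages from the table above) makes the scaling explicit:

```python
# Average Op/s by vCPU count, from the read-while-writing results above
ops = {8: 594702, 16: 1264826, 32: 2610992}

# Throughput relative to the 8 vCPU baseline; ~2.1x and ~4.4x
speedup = {v: ops[v] / ops[8] for v in ops}
```

Perfect linear scaling would give 2.0x and 4.0x, so this workload is actually slightly super-linear here, likely helped by the larger instances' additional memory and cache.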

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Futex (Bogo Ops/s, More Is Better)
  16 vCPUs: 1198681.87 (SE +/- 30917.20, N = 15; runs: 990512.22 - 1343625.03)
  8 vCPUs:  937451.99 (SE +/- 36001.77, N = 15; runs: 813798.38 - 1278731.9)
  32 vCPUs: 1437660.62 (SE +/- 15026.23, N = 3; runs: 1416471.01 - 1466711.14)
  1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  16 vCPUs: 204.92 (SE +/- 2.67, N = 3; runs: 201.85 - 210.23)
  8 vCPUs:  215.10 (SE +/- 2.96, N = 12; runs: 203.68 - 230.11)
  32 vCPUs: 114.80 (SE +/- 7.17, N = 12; runs: 73.6 - 144.41)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, More Is Better)
  16 vCPUs: 1220 (SE +/- 15.69, N = 3; runs: 1189.15 - 1238.53)
  8 vCPUs:  1165 (SE +/- 15.88, N = 12; runs: 1086.46 - 1227.4)
  32 vCPUs: 2282 (SE +/- 154.25, N = 12; runs: 1731.22 - 3396.54)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lm
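
The pgbench average-latency and TPS results are two views of the same measurement: with a fixed pool of 250 concurrent clients, average latency in ms is approximately clients x 1000 / TPS. A rough check against the run averages above (the 32 vCPU case is off by a few percent because its individual runs vary widely):

```python
clients = 250

# Average TPS and average latency (ms) for 250 read-write clients,
# taken from the two pgbench tables above
tps = {"16 vCPUs": 1220.40, "8 vCPUs": 1164.68, "32 vCPUs": 2281.55}
lat = {"16 vCPUs": 204.92, "8 vCPUs": 215.10, "32 vCPUs": 114.80}

# Little's-law-style identity: latency_ms ~= clients * 1000 / TPS
est = {cfg: clients * 1000 / t for cfg, t in tps.items()}
```

This identity is a handy cross-check when reading pgbench results: if reported latency and TPS diverge from it badly, the client count or averaging window differed between runs.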

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MPI - Degridding (Mpix/sec, More Is Better)
  16 vCPUs: 2585.31 (SE +/- 12.80, N = 3; runs: 2572.51 - 2610.9)
  8 vCPUs:  1325.06 (SE +/- 24.91, N = 15; runs: 1135.91 - 1441.73)
  32 vCPUs: 3962.08 (SE +/- 54.84, N = 15; runs: 3452.57 - 4181.6)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better)
  16 vCPUs: 1.79 (SE +/- 0.04, N = 12; runs: 1.68 - 2.22)
  8 vCPUs:  2.86 (SE +/- 0.03, N = 3; runs: 2.79 - 2.91)
  32 vCPUs: 1.68 (SE +/- 0.03, N = 15; runs: 1.49 - 1.94)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Group By Test Time (Seconds, Fewer Is Better)
  16 vCPUs: 6.00 (SE +/- 0.05, N = 12; runs: 5.79 - 6.43)
  8 vCPUs:  6.90 (SE +/- 0.17, N = 3; runs: 6.69 - 7.23)
  32 vCPUs: 6.72 (SE +/- 0.23, N = 15; runs: 6.07 - 9.41)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, Fewer Is Better)
  16 vCPUs: 4.93 (SE +/- 0.05, N = 12; runs: 4.69 - 5.29)
  8 vCPUs:  6.28 (SE +/- 0.03, N = 3; runs: 6.22 - 6.32)
  32 vCPUs: 4.79 (SE +/- 0.11, N = 15; runs: 4.41 - 5.83)

Renaissance

Renaissance 0.14 - Test: Savina Reactors.IO (ms, Fewer Is Better)
  16 vCPUs: 15981.5 (SE +/- 583.25, N = 12; runs: 12776.53 - 20011; overall: MIN 12776.53 / MAX 36273.51)
  8 vCPUs:  26456.4 (SE +/- 435.60, N = 9; runs: 23597.29 - 28542.29; overall: MIN 13667.16 / MAX 42318.14)
  32 vCPUs: 10705.9 (SE +/- 131.70, N = 4; runs: 10505.49 - 11080.73; overall: MIN 10505.49 / MAX 14847.21)

Renaissance 0.14 - Test: Apache Spark Bayes (ms, Fewer Is Better)
  16 vCPUs: 1262.0 (SE +/- 7.45, N = 3; runs: 1248.7 - 1274.47; overall: MIN 877.37 / MAX 1398.23)
  8 vCPUs:  2249.8 (SE +/- 42.77, N = 15; runs: 1963.15 - 2434.18; overall: MIN 1478.18 / MAX 2434.18)
  32 vCPUs: 766.4 (SE +/- 9.73, N = 3; runs: 754.77 - 785.77; overall: MIN 495.95 / MAX 1178.88)

DaCapo Benchmark

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, Fewer Is Better)
  16 vCPUs: 5598 (SE +/- 52.52, N = 4; runs: 5518 - 5751)
  8 vCPUs:  8040 (SE +/- 68.63, N = 4; runs: 7846 - 8151)
  32 vCPUs: 5015 (SE +/- 95.95, N = 20; runs: 4595 - 6201)

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better)
  16 vCPUs: 4843 (SE +/- 39.59, N = 20; runs: 4604 - 5280)
  8 vCPUs:  4346 (SE +/- 38.62, N = 20; runs: 4115 - 4706)
  32 vCPUs: 5175 (SE +/- 83.04, N = 20; runs: 4867 - 6440)

136 Results Shown

PostgreSQL pgbench:
  100 - 250 - Read Only
  100 - 250 - Read Only - Average Latency
  100 - 100 - Read Only
  100 - 100 - Read Only - Average Latency
SPECjbb 2015
Apache Cassandra
NAS Parallel Benchmarks:
  BT.C
  SP.B
Blender
ASTC Encoder
Aircrack-ng
ASTC Encoder
Facebook RocksDB
Coremark
Apache Spark
OpenSSL:
  SHA256
  RSA4096
  RSA4096
Apache Spark:
  1000000 - 100 - Calculate Pi Benchmark
  1000000 - 2000 - Calculate Pi Benchmark
  40000000 - 2000 - Calculate Pi Benchmark
Blender
NAS Parallel Benchmarks
Stress-NG
Sysbench
Stress-NG:
  Matrix Math
  Vector Math
Blender
SPECjbb 2015
GROMACS
ASKAP
NAS Parallel Benchmarks
LAMMPS Molecular Dynamics Simulator:
  20k Atoms
  Rhodopsin Protein
Apache Spark:
  1000000 - 100 - Calculate Pi Benchmark Using Dataframe
  1000000 - 2000 - Calculate Pi Benchmark Using Dataframe
  40000000 - 100 - Calculate Pi Benchmark Using Dataframe
  40000000 - 2000 - Calculate Pi Benchmark Using Dataframe
ASKAP
NAS Parallel Benchmarks
TensorFlow Lite
Timed MPlayer Compilation
libavif avifenc
Apache Spark
Timed Gem5 Compilation
GPAW
Timed FFmpeg Compilation
Apache Spark
NAS Parallel Benchmarks
Apache Spark
NAS Parallel Benchmarks
Apache Spark
TensorFlow Lite
ASKAP
Apache Spark
Stress-NG
ASKAP
Apache Spark:
  40000000 - 100 - Broadcast Inner Join Test Time
  1000000 - 2000 - Broadcast Inner Join Test Time
OpenFOAM
Facebook RocksDB
Apache Spark:
  1000000 - 100 - Repartition Test Time
  40000000 - 2000 - SHA-512 Benchmark Time
libavif avifenc
Apache Spark
TensorFlow Lite
Apache Spark
OpenFOAM
Apache Spark:
  40000000 - 100 - SHA-512 Benchmark Time
  40000000 - 2000 - Group By Test Time
High Performance Conjugate Gradient
ASKAP
Graph500
ASKAP
TensorFlow Lite
Graph500
Apache Spark
NAS Parallel Benchmarks
Renaissance
TensorFlow Lite
Graph500:
  26:
    sssp max_TEPS
    sssp median_TEPS
TensorFlow Lite
libavif avifenc
NAS Parallel Benchmarks
Apache Spark:
  1000000 - 100 - Inner Join Test Time
  1000000 - 2000 - SHA-512 Benchmark Time
Facebook RocksDB
ASTC Encoder
libavif avifenc
Renaissance
libavif avifenc
Stress-NG
PostgreSQL pgbench:
  100 - 100 - Read Write
  100 - 100 - Read Write - Average Latency
Apache Spark
Stress-NG
TNN
Renaissance
VP9 libvpx Encoding:
  Speed 5 - Bosphorus 4K
  Speed 0 - Bosphorus 4K
nginx:
  500
  1000
Redis:
  SET
  GET
Renaissance:
  Scala Dotty
  ALS Movie Lens
DaCapo Benchmark
VP9 libvpx Encoding:
  Speed 5 - Bosphorus 1080p
  Speed 0 - Bosphorus 1080p
PyHPC Benchmarks
Renaissance:
  Rand Forest
  Finagle HTTP Requests
PyHPC Benchmarks:
  CPU - Numpy - 1048576 - Equation of State
  CPU - Numpy - 4194304 - Isoneutral Mixing
  CPU - Numpy - 1048576 - Isoneutral Mixing
  CPU - Numpy - 4194304 - Equation of State
DaCapo Benchmark
Renaissance:
  Apache Spark PageRank
  Genetic Algorithm Using Jenetics + Futures
TNN:
  CPU - MobileNet v2
  CPU - SqueezeNet v1.1
  CPU - SqueezeNet v2
PyHPC Benchmarks
Facebook RocksDB
Stress-NG
PostgreSQL pgbench:
  100 - 250 - Read Write - Average Latency
  100 - 250 - Read Write
ASKAP
Apache Spark:
  1000000 - 100 - Broadcast Inner Join Test Time
  1000000 - 100 - Group By Test Time
  1000000 - 100 - SHA-512 Benchmark Time
Renaissance:
  Savina Reactors.IO
  Apache Spark Bayes
DaCapo Benchmark:
  Tradesoap
  H2