2023 new

AMD Ryzen Threadripper 3970X 32-Core testing with a ASUS ROG ZENITH II EXTREME (1603 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2302069-NE-2023NEW4563
Run Management

Result Identifier    Date                Test Duration
a                    February 05 2023    7 Hours, 42 Minutes
b                    February 06 2023    7 Hours, 40 Minutes
c                    February 06 2023    7 Hours, 42 Minutes


2023 New Benchmarks - System Details (identical configuration for runs a, b and c)

Processor: AMD Ryzen Threadripper 3970X 32-Core @ 3.70GHz (32 Cores / 64 Threads)
Motherboard: ASUS ROG ZENITH II EXTREME (1603 BIOS)
Chipset: AMD Starship/Matisse
Memory: 4 x 16 GB DDR4-3600MT/s Corsair CMT64GX4M4Z3600C16
Disk: Samsung SSD 980 PRO 500GB
Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: ASUS VP28U
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc7-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47)
Vulkan: 1.2.204
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs:
- Transparent Huge Pages: madvise
- GCC configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0x830104d
- OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)
- Python 3.10.6
- Security mitigations: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): the per-test geometric means of runs a, b and c fall within roughly 100% to 104% of one another across RocksDB, ClickHouse, CloudSuite Web Serving, CloudSuite Data Analytics, VVenC, OpenEMS, Memcached, Apache Spark, Neural Magic DeepSparse, CloudSuite Graph Analytics, and CloudSuite In-Memory Analytics.


VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7 (Frames Per Second, More Is Better; compiled with (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto)

Video Input: Bosphorus 4K - Video Preset: Fast: a: 3.809, b: 3.803, c: 3.758
Video Input: Bosphorus 4K - Video Preset: Faster: a: 8.933, b: 8.883, c: 8.856
Video Input: Bosphorus 1080p - Video Preset: Fast: a: 7.434, c: 7.415, b: 7.407
Video Input: Bosphorus 1080p - Video Preset: Faster: a: 17.35, b: 17.31, c: 17.31

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100-million-row web analytics dataset. The reported value is the geometric mean across all of the separate queries performed, expressed as an aggregate queries-per-minute figure. Learn more via the OpenBenchmarking.org test page.
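
For reference, the geometric-mean aggregation works as sketched below; this is a minimal Python illustration (not the ClickBench harness), and the per-query runtimes in it are made-up placeholders rather than measured values.

    # Minimal sketch: deriving a "Queries Per Minute, Geo Mean" figure from
    # individual query runtimes. The runtimes below are hypothetical.
    from statistics import geometric_mean

    query_runtimes_sec = [0.12, 0.85, 3.4, 0.27]        # placeholder per-query times
    queries_per_minute = [60.0 / t for t in query_runtimes_sec]
    print(f"Geo-mean QPM: {geometric_mean(queries_per_minute):.2f}")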

ClickHouse 22.12.3.5 (Queries Per Minute, Geo Mean, More Is Better)

100M Rows Hits Dataset, First Run / Cold Cache: b: 217.01 (MIN: 15.02 / MAX: 2608.7), a: 216.80 (MIN: 14.82 / MAX: 3529.41), c: 206.93 (MIN: 15.36 / MAX: 3000)
100M Rows Hits Dataset, Second Run: a: 240.84 (MIN: 24.26 / MAX: 3750), c: 229.25 (MIN: 24.51 / MAX: 3157.89), b: 227.29 (MIN: 24.14 / MAX: 3333.33)
100M Rows Hits Dataset, Third Run: b: 233.48 (MIN: 23.7 / MAX: 3157.89), a: 231.92 (MIN: 22.59 / MAX: 2608.7), c: 226.34 (MIN: 22.8 / MAX: 3333.33)

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating the test data and for the various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
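
The operation types timed by this profile (group by, repartition, inner join, broadcast inner join) look roughly like the PySpark sketch below; this is an illustration assuming a local pyspark installation, not the DIYBigData benchmark script itself.

    # Minimal PySpark sketch of the operation types this profile times.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("spark-ops-sketch").getOrCreate()

    df = spark.range(0, 1_000_000).withColumnRenamed("id", "value")
    keys = df.selectExpr("value % 100 AS key", "value")

    keys.groupBy("key").count().collect()                                # group-by test
    keys.repartition(100).count()                                        # repartition test
    keys.join(keys.select("key").distinct(), "key").count()              # inner join test
    keys.join(broadcast(keys.select("key").distinct()), "key").count()   # broadcast inner join test

    spark.stop()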

Apache Spark 3.3 (Seconds, Fewer Is Better)

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time: b: 3.09, a: 3.16, c: 3.21
Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark: b: 55.60, c: 56.04, a: 56.16
Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe: a: 3.43, c: 3.46, b: 3.49
Row Count: 1000000 - Partitions: 100 - Group By Test Time: b: 4.27, a: 4.55, c: 4.63
Row Count: 1000000 - Partitions: 100 - Repartition Test Time: a: 1.611880247, c: 1.63, b: 1.79
Row Count: 1000000 - Partitions: 100 - Inner Join Test Time: b: 1.38, a: 1.46, c: 1.51
Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time: a: 1.01, b: 1.05, c: 1.06

Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time: c: 3.25, b: 3.32, a: 3.34
Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark: a: 56.41, c: 56.54, b: 56.66
Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe: b: 3.37, c: 3.37, a: 3.39
Row Count: 1000000 - Partitions: 500 - Group By Test Time: a: 4.55, c: 4.55, b: 4.84
Row Count: 1000000 - Partitions: 500 - Repartition Test Time: b: 1.67, c: 1.71, a: 1.75
Row Count: 1000000 - Partitions: 500 - Inner Join Test Time: c: 1.57, a: 1.79, b: 1.80
Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time: c: 1.18, a: 1.31, b: 1.39

Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark Time: a: 3.42, c: 3.49, b: 3.50
Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark: c: 55.22, b: 55.83, a: 56.47
Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe: b: 3.40, c: 3.40, a: 3.46
Row Count: 1000000 - Partitions: 1000 - Group By Test Time: c: 4.45, b: 4.77, a: 4.91
Row Count: 1000000 - Partitions: 1000 - Repartition Test Time: b: 1.80, a: 1.914188324, c: 1.98
Row Count: 1000000 - Partitions: 1000 - Inner Join Test Time: b: 1.74, c: 1.88, a: 1.89
Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test Time: c: 1.43, a: 1.50, b: 1.62

Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time: b: 3.48, a: 3.52, c: 3.53
Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark: b: 55.60, a: 55.89, c: 56.58
Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe: a: 3.383115781, c: 3.46, b: 3.49
Row Count: 1000000 - Partitions: 2000 - Group By Test Time: c: 4.71, b: 4.76, a: 4.93
Row Count: 1000000 - Partitions: 2000 - Repartition Test Time: c: 2.03, a: 2.19, b: 2.20
Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time: a: 1.94, b: 2.08, c: 2.13
Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time: c: 1.86, b: 1.92, a: 2.16

Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark Time: b: 8.48, a: 8.644669927, c: 8.84
Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark: a: 55.27, c: 55.32, b: 56.08
Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe: c: 3.38, a: 3.39, b: 3.40
Row Count: 10000000 - Partitions: 100 - Group By Test Time: b: 6.48, c: 6.51, a: 6.63
Row Count: 10000000 - Partitions: 100 - Repartition Test Time: a: 4.87, c: 5.17, b: 5.19
Row Count: 10000000 - Partitions: 100 - Inner Join Test Time: a: 5.62, b: 5.63, c: 5.70
Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test Time: a: 4.95, c: 5.13, b: 5.18

Row Count: 10000000 - Partitions: 500 - SHA-512 Benchmark Time: b: 8.69, a: 8.72, c: 8.75
Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark: b: 55.81, c: 55.95, a: 57.47
Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe: a: 3.38, b: 3.43, c: 3.51
Row Count: 10000000 - Partitions: 500 - Group By Test Time: a: 6.28, c: 6.36, b: 6.48
Row Count: 10000000 - Partitions: 500 - Repartition Test Time: a: 4.73, c: 4.76, b: 4.86
Row Count: 10000000 - Partitions: 500 - Inner Join Test Time: b: 5.56, a: 5.72, c: 5.81
Row Count: 10000000 - Partitions: 500 - Broadcast Inner Join Test Time: a: 4.82, b: 5.04, c: 5.08

Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark Time: a: 14.16, b: 14.56, c: 14.94
Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark: b: 55.16, a: 56.05, c: 56.41
Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe: a: 3.39, c: 3.44, b: 3.70
Row Count: 20000000 - Partitions: 100 - Group By Test Time: b: 7.87, a: 8.24, c: 8.31
Row Count: 20000000 - Partitions: 100 - Repartition Test Time: b: 8.82, c: 8.85, a: 8.92
Row Count: 20000000 - Partitions: 100 - Inner Join Test Time: a: 9.86, b: 9.92, c: 10.441529534
Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test Time: b: 9.04, a: 9.14, c: 9.24

Row Count: 20000000 - Partitions: 500 - SHA-512 Benchmark Time: c: 14.17, b: 14.44, a: 14.51
Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark: a: 54.87, c: 54.88, b: 56.08
Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe: b: 3.38, c: 3.38, a: 3.44
Row Count: 20000000 - Partitions: 500 - Group By Test Time: c: 8.09, a: 8.30, b: 8.39
Row Count: 20000000 - Partitions: 500 - Repartition Test Time: a: 8.85, b: 8.93, c: 9.22
Row Count: 20000000 - Partitions: 500 - Inner Join Test Time: b: 9.62, c: 9.73, a: 9.86
Row Count: 20000000 - Partitions: 500 - Broadcast Inner Join Test Time: b: 9.14, c: 9.24, a: 9.35

Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time: a: 25.49, c: 25.64, b: 26.56
Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark: a: 54.84, b: 55.43, c: 56.38
Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe: a: 3.38, b: 3.46, c: 3.47
Row Count: 40000000 - Partitions: 100 - Group By Test Time: b: 16.53, c: 16.74, a: 16.85
Row Count: 40000000 - Partitions: 100 - Repartition Test Time: c: 16.05, a: 16.23, b: 16.24
Row Count: 40000000 - Partitions: 100 - Inner Join Test Time: c: 18.11, a: 18.15, b: 18.62
Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time: c: 17.07, a: 17.07, b: 17.24

Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time: a: 24.98, b: 25.35, c: 26.81
Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark: c: 55.60, a: 56.75, b: 56.95
Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe: a: 3.43, c: 3.43, b: 3.44
Row Count: 40000000 - Partitions: 500 - Group By Test Time: b: 17.42, a: 17.48, c: 17.48
Row Count: 40000000 - Partitions: 500 - Repartition Test Time: c: 15.70, a: 16.30, b: 16.38
Row Count: 40000000 - Partitions: 500 - Inner Join Test Time: a: 18.08, c: 18.15, b: 18.22
Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time: c: 17.20, b: 17.32, a: 17.80

Row Count: 10000000 - Partitions: 1000 - SHA-512 Benchmark Time: b: 8.64, c: 8.65, a: 8.70
Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark: b: 54.80, c: 55.10, a: 55.96
Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe: a: 3.42, c: 3.44, b: 3.51
Row Count: 10000000 - Partitions: 1000 - Group By Test Time: b: 6.23, c: 6.29, a: 6.40
Row Count: 10000000 - Partitions: 1000 - Repartition Test Time: a: 4.87, c: 4.88, b: 4.97
Row Count: 10000000 - Partitions: 1000 - Inner Join Test Time: a: 5.58, b: 5.66, c: 6.64
Row Count: 10000000 - Partitions: 1000 - Broadcast Inner Join Test Time: a: 4.95, c: 5.40, b: 5.43

Row Count: 10000000 - Partitions: 2000 - SHA-512 Benchmark Time: a: 8.94, b: 9.03, c: 9.16
Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark: c: 55.12, a: 55.47, b: 56.34
Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe: a: 3.37, b: 3.40, c: 3.43
Row Count: 10000000 - Partitions: 2000 - Group By Test Time: a: 6.72, b: 6.81, c: 6.90
Row Count: 10000000 - Partitions: 2000 - Repartition Test Time: c: 5.20, b: 5.28, a: 5.61
Row Count: 10000000 - Partitions: 2000 - Inner Join Test Time: a: 6.36, c: 6.43, b: 6.57
Row Count: 10000000 - Partitions: 2000 - Broadcast Inner Join Test Time: a: 6.09, b: 6.10, c: 6.16

Row Count: 20000000 - Partitions: 1000 - SHA-512 Benchmark Time: c: 14.16, b: 14.51, a: 14.52
Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark: b: 54.98, c: 55.55, a: 57.36
Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe: b: 3.38, a: 3.39, c: 3.39
Row Count: 20000000 - Partitions: 1000 - Group By Test Time: a: 8.08, b: 8.11, c: 8.19
Row Count: 20000000 - Partitions: 1000 - Repartition Test Time: a: 8.33, b: 8.71, c: 9.27
Row Count: 20000000 - Partitions: 1000 - Inner Join Test Time: c: 9.79, a: 9.93, b: 10.70
Row Count: 20000000 - Partitions: 1000 - Broadcast Inner Join Test Time: c: 9.32, b: 9.48, a: 10.09

Row Count: 20000000 - Partitions: 2000 - SHA-512 Benchmark Time: b: 14.29, c: 14.71, a: 15.97
Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark: a: 55.16, b: 55.92, c: 56.36
Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe: b: 3.36, a: 3.41, c: 3.41
Row Count: 20000000 - Partitions: 2000 - Group By Test Time: c: 8.76, b: 8.84, a: 9.01
Row Count: 20000000 - Partitions: 2000 - Repartition Test Time: b: 8.97, c: 9.26, a: 9.28
Row Count: 20000000 - Partitions: 2000 - Inner Join Test Time: a: 10.43, c: 10.44, b: 10.58
Row Count: 20000000 - Partitions: 2000 - Broadcast Inner Join Test Time: a: 10.28, c: 10.43, b: 10.55

Row Count: 40000000 - Partitions: 1000 - SHA-512 Benchmark Time: b: 25.88, a: 26.04, c: 26.23
Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark: a: 55.60, b: 56.12, c: 56.87
Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe: b: 3.40, a: 3.44, c: 3.45
Row Count: 40000000 - Partitions: 1000 - Group By Test Time: c: 17.65, b: 17.73, a: 17.85
Row Count: 40000000 - Partitions: 1000 - Repartition Test Time: a: 15.62, c: 15.97, b: 16.59
Row Count: 40000000 - Partitions: 1000 - Inner Join Test Time: a: 17.76, b: 17.79, c: 18.45
Row Count: 40000000 - Partitions: 1000 - Broadcast Inner Join Test Time: a: 17.46, b: 17.48, c: 17.79

Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time: a: 25.76, c: 25.98, b: 26.38
Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark: b: 55.25, c: 55.30, a: 56.78
Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe: c: 3.37, b: 3.39, a: 3.40
Row Count: 40000000 - Partitions: 2000 - Group By Test Time: a: 17.58, b: 17.60, c: 17.65
Row Count: 40000000 - Partitions: 2000 - Repartition Test Time: b: 16.06, a: 16.17, c: 16.38
Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time: b: 18.88, a: 19.04, c: 19.51
Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time: b: 18.17, a: 18.23, c: 18.81

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
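
The 1:1, 1:5, 1:10 and 1:100 configurations refer to the ratio of SET to GET operations issued by the load generator. A minimal Python illustration of such a ratio is sketched below, assuming the third-party pymemcache client and a memcached instance on localhost; the results above were produced with memtier_benchmark, not this snippet.

    # Illustration of a 1:10 set:get ratio against a local memcached instance.
    # Assumes the third-party pymemcache client; the test profile itself uses
    # memtier_benchmark to drive the server.
    from pymemcache.client.base import Client

    client = Client(("127.0.0.1", 11211))
    set_get_ratio = 10                     # one SET per ten GETs, as in the 1:10 workload

    for i in range(1000):
        key = f"key-{i % 100}"
        if i % (set_get_ratio + 1) == 0:
            client.set(key, b"payload")    # the occasional write
        else:
            client.get(key)                # the dominant read traffic
    client.close()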

Memcached 1.6.18 (Ops/sec, More Is Better; compiled with (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre)

Set To Get Ratio: 1:1: c: 1614717.75, b: 1602278.84, a: 1591288.82
Set To Get Ratio: 1:5: b: 3667823.02, a: 3640442.11, c: 3637603.67
Set To Get Ratio: 1:10: b: 3712642.45, c: 3711642.32, a: 3706097.76
Set To Get Ratio: 1:100: b: 3238867.13, a: 3226477.91, c: 3212714.28

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
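
A rough sketch of driving DeepSparse from Python is shown below; it assumes the deepsparse package's Pipeline API and uses a placeholder SparseZoo stub, whereas the results above come from the deepsparse.benchmark command-line utility rather than this snippet.

    # Rough sketch of running inference through DeepSparse's Python Pipeline API.
    # The SparseZoo stub below is a placeholder, not one of the models benchmarked above.
    from deepsparse import Pipeline

    pipeline = Pipeline.create(
        task="text-classification",
        model_path="zoo:<sparsezoo-model-stub>",   # placeholder stub, fill in a real one
        batch_size=1,
    )
    print(pipeline(sequences=["DeepSparse runs sparse transformers on CPUs."]))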

Neural Magic DeepSparse 1.3.2 (items/sec: More Is Better; ms/batch: Fewer Is Better)

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: items/sec: a: 25.28, c: 25.26, b: 24.90; ms/batch: a: 632.27, c: 632.76, b: 637.16
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream: items/sec: b: 16.21, a: 16.17, c: 16.10; ms/batch: b: 61.68, a: 61.82, c: 62.10
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream: items/sec: c: 268.21, a: 267.42, b: 266.86; ms/batch: c: 59.62, a: 59.76, b: 59.90
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream: items/sec: a: 77.87, c: 77.12, b: 76.62; ms/batch: a: 12.83, c: 12.96, b: 13.04
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream: items/sec: a: 89.97, b: 89.39, c: 89.04; ms/batch: a: 177.80, b: 178.96, c: 179.64
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream: items/sec: a: 29.71, b: 28.69, c: 28.37; ms/batch: a: 33.65, b: 34.85, c: 35.24
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream: items/sec: a: 138.82, c: 138.27, b: 137.96; ms/batch: a: 115.10, c: 115.49, b: 115.76
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream: items/sec: b: 67.14, c: 67.13, a: 67.09; ms/batch: b: 14.88, c: 14.89, a: 14.89
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream: items/sec: c: 299.93, b: 299.12, a: 297.47; ms/batch: c: 53.30, b: 53.47, a: 53.76
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream: items/sec: a: 134.77, b: 133.89, c: 132.28; ms/batch: a: 7.4128, b: 7.4619, c: 7.5530
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream: items/sec: b: 198.69, c: 198.47, a: 198.24; ms/batch: b: 80.50, c: 80.59, a: 80.68
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream: items/sec: a: 88.67, c: 88.22, b: 87.42; ms/batch: a: 11.27, c: 11.33, b: 11.43
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream: items/sec: b: 32.82, c: 32.44, a: 32.19; ms/batch: b: 486.15, a: 491.96, c: 492.94
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream: items/sec: b: 21.45, c: 21.40, a: 21.31; ms/batch: b: 46.61, c: 46.73, a: 46.92
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream: items/sec: b: 100.69, c: 100.60, a: 100.48; ms/batch: b: 158.87, c: 159.01, a: 159.21
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream: items/sec: c: 45.64, b: 45.29, a: 42.93; ms/batch: c: 21.91, b: 22.07, a: 23.29
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: items/sec: a: 25.32, c: 25.31, b: 24.87; ms/batch: a: 630.03, c: 630.95, b: 636.95
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream: items/sec: b: 16.23, c: 16.21, a: 16.10; ms/batch: b: 61.63, c: 61.68, a: 62.12

CloudSuite Data Analytics

CloudSuite Data Analytics is a Docker-based benchmark and runs a Naive Bayes classifier on a Wikimedia dataset with Hadoop and Mahout. Learn more via the OpenBenchmarking.org test page.
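
As a rough illustration of the workload class only (not the CloudSuite harness, which runs Mahout's Naive Bayes on a Wikimedia dataset inside Hadoop containers), a Naive Bayes text classifier can be sketched in a few lines of Python, here assuming scikit-learn is installed.

    # Illustrative stand-in for the workload class: a tiny Naive Bayes text
    # classifier with scikit-learn (an assumption; not the Hadoop/Mahout setup
    # that CloudSuite Data Analytics actually benchmarks).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    docs = ["spark cluster benchmark", "football match result",
            "hadoop mapreduce job", "tennis open final"]
    labels = ["tech", "sport", "tech", "sport"]

    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(docs)        # bag-of-words features
    model = MultinomialNB().fit(features, labels)    # train the classifier
    print(model.predict(vectorizer.transform(["mapreduce benchmark"])))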

CloudSuite Data Analytics 4.0 (ms, Fewer Is Better)

Hadoop Slaves: 1: b: 62263, a: 63396, c: 64351
Hadoop Slaves: 4: a: 1282161, b: 1285523, c: 1290818
Hadoop Slaves: 8: b: 1276206, c: 1283805, a: 1283837
Hadoop Slaves: 32: b: 1274682, c: 1277138, a: 1286055
Hadoop Slaves: 64: c: 1280550, a: 1280792, b: 1283914

CloudSuite Graph Analytics

CloudSuite Graph Analytics uses Apache GraphX + Spark to perform graph analytics (PageRank) on sample Twitter data. Learn more via the OpenBenchmarking.org test page.
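
As an illustration of the PageRank computation itself (not the GraphX + Spark harness that CloudSuite uses), a minimal Python sketch with the networkx package might look like the following.

    # Illustrative stand-in only: PageRank on a toy directed graph via networkx
    # (an assumption; the benchmark itself runs Apache GraphX + Spark on a
    # sample Twitter dataset inside Docker).
    import networkx as nx

    g = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 1)])
    ranks = nx.pagerank(g, alpha=0.85)    # damping factor 0.85, the common default
    for node, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
        print(node, round(score, 4))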

CloudSuite Graph Analytics 4.0 (ms, Fewer Is Better)

GraphX Algorithm: Page Rank: b: 11307, a: 11438, c: 11584

GraphX Algorithm: Triangle Count

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

GraphX Algorithm: Connected Components: c: 14198, a: 14338, b: 15387

CloudSuite In-Memory Analytics

CloudSuite In-Memory Analytics uses Apache Spark and runs a collaborative filtering algorithm in-memory on a dataset of user-movie ratings from Movielens. Learn more via the OpenBenchmarking.org test page.
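
A minimal sketch of the same workload class, collaborative filtering with Spark's ALS recommender on a few synthetic ratings, is shown below; the benchmark itself uses the CloudSuite harness with the MovieLens dataset rather than this snippet.

    # Minimal PySpark sketch of in-memory collaborative filtering with ALS
    # on synthetic user-movie ratings (an illustration, not the CloudSuite harness).
    from pyspark.sql import SparkSession
    from pyspark.ml.recommendation import ALS

    spark = SparkSession.builder.appName("als-sketch").getOrCreate()
    ratings = spark.createDataFrame(
        [(0, 10, 4.0), (0, 11, 1.0), (1, 10, 5.0), (1, 12, 2.0), (2, 11, 3.0)],
        ["user", "movie", "rating"],
    )
    model = ALS(userCol="user", itemCol="movie", ratingCol="rating",
                rank=5, maxIter=5, coldStartStrategy="drop").fit(ratings)
    model.recommendForAllUsers(2).show()
    spark.stop()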

CloudSuite In-Memory Analytics 4.0 (ms, Fewer Is Better)

Training Set Size: Large: c: 143016, a: 143356, b: 144863
Training Set Size: Small: c: 45505, b: 45704, a: 45941

CloudSuite Web Serving

CloudSuite Web Serving is a Docker-based web server benchmark making use of a web server with Memcached and a MySQL database server. Learn more via the OpenBenchmarking.org test page.

CloudSuite Web Serving (ops/sec, More Is Better)

Load Scale: 100: c: 19.60, a: 19.35, b: 18.20
Load Scale: 400: a: 37.15, c: 37.05, b: 37.02
Load Scale: 500: b: 35.73, a: 35.08, c: 34.60

OpenEMS

OpenEMS is a free and open electromagnetic field solver using the FDTD method. This test profile runs OpenEMS and pyEMS benchmark demos. Learn more via the OpenBenchmarking.org test page.

OpenEMS 0.0.35-86 (MCells/s, More Is Better; compiled with (CXX) g++ options: -O3 -rdynamic -ltinyxml -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -lexpat)

Test: pyEMS Coupler: c: 19.88, a: 19.86, b: 19.60
Test: openEMS MSL_NotchFilter: b: 14.64, a: 14.63, c: 14.50

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
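
The fill/read/update access patterns measured above can be illustrated with a small Python sketch against the third-party python-rocksdb binding (an assumption; the test profile itself drives RocksDB through its bundled db_bench tool rather than through Python).

    # Minimal sketch of the access patterns behind the fill/read/update results,
    # using the third-party python-rocksdb binding (an assumption).
    import rocksdb

    db = rocksdb.DB("bench.db", rocksdb.Options(create_if_missing=True))

    for i in range(10_000):                       # sequential fill
        db.put(f"key{i:08d}".encode(), b"value")

    value = db.get(b"key00000042")                # point read, as in the read tests
    db.put(b"key00000042", b"updated")            # update of an existing key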

RocksDB 7.9.2 (Op/s, More Is Better; compiled with (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread)

Test: Random Fill: a: 911640, c: 911359, b: 896530
Test: Random Read: b: 132034808, c: 131994679, a: 131073357
Test: Update Random: a: 788635, b: 787714, c: 786611
Test: Sequential Fill: c: 1035331, a: 1030362, b: 1003634
Test: Random Fill Sync: a: 5559, c: 3996, b: 3954
Test: Read While Writing: c: 4696807, a: 4696370, b: 4664918
Test: Read Random Write Random: c: 2776604, a: 2763110, b: 2755551

180 Results Shown

VVenC:
  Bosphorus 4K - Fast
  Bosphorus 4K - Faster
  Bosphorus 1080p - Fast
  Bosphorus 1080p - Faster
ClickHouse:
  100M Rows Hits Dataset, First Run / Cold Cache
  100M Rows Hits Dataset, Second Run
  100M Rows Hits Dataset, Third Run
Apache Spark:
  1000000 - 100 - SHA-512 Benchmark Time
  1000000 - 100 - Calculate Pi Benchmark
  1000000 - 100 - Calculate Pi Benchmark Using Dataframe
  1000000 - 100 - Group By Test Time
  1000000 - 100 - Repartition Test Time
  1000000 - 100 - Inner Join Test Time
  1000000 - 100 - Broadcast Inner Join Test Time
  1000000 - 500 - SHA-512 Benchmark Time
  1000000 - 500 - Calculate Pi Benchmark
  1000000 - 500 - Calculate Pi Benchmark Using Dataframe
  1000000 - 500 - Group By Test Time
  1000000 - 500 - Repartition Test Time
  1000000 - 500 - Inner Join Test Time
  1000000 - 500 - Broadcast Inner Join Test Time
  1000000 - 1000 - SHA-512 Benchmark Time
  1000000 - 1000 - Calculate Pi Benchmark
  1000000 - 1000 - Calculate Pi Benchmark Using Dataframe
  1000000 - 1000 - Group By Test Time
  1000000 - 1000 - Repartition Test Time
  1000000 - 1000 - Inner Join Test Time
  1000000 - 1000 - Broadcast Inner Join Test Time
  1000000 - 2000 - SHA-512 Benchmark Time
  1000000 - 2000 - Calculate Pi Benchmark
  1000000 - 2000 - Calculate Pi Benchmark Using Dataframe
  1000000 - 2000 - Group By Test Time
  1000000 - 2000 - Repartition Test Time
  1000000 - 2000 - Inner Join Test Time
  1000000 - 2000 - Broadcast Inner Join Test Time
  10000000 - 100 - SHA-512 Benchmark Time
  10000000 - 100 - Calculate Pi Benchmark
  10000000 - 100 - Calculate Pi Benchmark Using Dataframe
  10000000 - 100 - Group By Test Time
  10000000 - 100 - Repartition Test Time
  10000000 - 100 - Inner Join Test Time
  10000000 - 100 - Broadcast Inner Join Test Time
  10000000 - 500 - SHA-512 Benchmark Time
  10000000 - 500 - Calculate Pi Benchmark
  10000000 - 500 - Calculate Pi Benchmark Using Dataframe
  10000000 - 500 - Group By Test Time
  10000000 - 500 - Repartition Test Time
  10000000 - 500 - Inner Join Test Time
  10000000 - 500 - Broadcast Inner Join Test Time
  20000000 - 100 - SHA-512 Benchmark Time
  20000000 - 100 - Calculate Pi Benchmark
  20000000 - 100 - Calculate Pi Benchmark Using Dataframe
  20000000 - 100 - Group By Test Time
  20000000 - 100 - Repartition Test Time
  20000000 - 100 - Inner Join Test Time
  20000000 - 100 - Broadcast Inner Join Test Time
  20000000 - 500 - SHA-512 Benchmark Time
  20000000 - 500 - Calculate Pi Benchmark
  20000000 - 500 - Calculate Pi Benchmark Using Dataframe
  20000000 - 500 - Group By Test Time
  20000000 - 500 - Repartition Test Time
  20000000 - 500 - Inner Join Test Time
  20000000 - 500 - Broadcast Inner Join Test Time
  40000000 - 100 - SHA-512 Benchmark Time
  40000000 - 100 - Calculate Pi Benchmark
  40000000 - 100 - Calculate Pi Benchmark Using Dataframe
  40000000 - 100 - Group By Test Time
  40000000 - 100 - Repartition Test Time
  40000000 - 100 - Inner Join Test Time
  40000000 - 100 - Broadcast Inner Join Test Time
  40000000 - 500 - SHA-512 Benchmark Time
  40000000 - 500 - Calculate Pi Benchmark
  40000000 - 500 - Calculate Pi Benchmark Using Dataframe
  40000000 - 500 - Group By Test Time
  40000000 - 500 - Repartition Test Time
  40000000 - 500 - Inner Join Test Time
  40000000 - 500 - Broadcast Inner Join Test Time
  10000000 - 1000 - SHA-512 Benchmark Time
  10000000 - 1000 - Calculate Pi Benchmark
  10000000 - 1000 - Calculate Pi Benchmark Using Dataframe
  10000000 - 1000 - Group By Test Time
  10000000 - 1000 - Repartition Test Time
  10000000 - 1000 - Inner Join Test Time
  10000000 - 1000 - Broadcast Inner Join Test Time
  10000000 - 2000 - SHA-512 Benchmark Time
  10000000 - 2000 - Calculate Pi Benchmark
  10000000 - 2000 - Calculate Pi Benchmark Using Dataframe
  10000000 - 2000 - Group By Test Time
  10000000 - 2000 - Repartition Test Time
  10000000 - 2000 - Inner Join Test Time
  10000000 - 2000 - Broadcast Inner Join Test Time
  20000000 - 1000 - SHA-512 Benchmark Time
  20000000 - 1000 - Calculate Pi Benchmark
  20000000 - 1000 - Calculate Pi Benchmark Using Dataframe
  20000000 - 1000 - Group By Test Time
  20000000 - 1000 - Repartition Test Time
  20000000 - 1000 - Inner Join Test Time
  20000000 - 1000 - Broadcast Inner Join Test Time
  20000000 - 2000 - SHA-512 Benchmark Time
  20000000 - 2000 - Calculate Pi Benchmark
  20000000 - 2000 - Calculate Pi Benchmark Using Dataframe
  20000000 - 2000 - Group By Test Time
  20000000 - 2000 - Repartition Test Time
  20000000 - 2000 - Inner Join Test Time
  20000000 - 2000 - Broadcast Inner Join Test Time
  40000000 - 1000 - SHA-512 Benchmark Time
  40000000 - 1000 - Calculate Pi Benchmark
  40000000 - 1000 - Calculate Pi Benchmark Using Dataframe
  40000000 - 1000 - Group By Test Time
  40000000 - 1000 - Repartition Test Time
  40000000 - 1000 - Inner Join Test Time
  40000000 - 1000 - Broadcast Inner Join Test Time
  40000000 - 2000 - SHA-512 Benchmark Time
  40000000 - 2000 - Calculate Pi Benchmark
  40000000 - 2000 - Calculate Pi Benchmark Using Dataframe
  40000000 - 2000 - Group By Test Time
  40000000 - 2000 - Repartition Test Time
  40000000 - 2000 - Inner Join Test Time
  40000000 - 2000 - Broadcast Inner Join Test Time
Memcached:
  1:1
  1:5
  1:10
  1:100
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    items/sec
    ms/batch
CloudSuite Data Analytics:
  1
  4
  8
  32
  64
CloudSuite Graph Analytics:
  Page Rank
  Connected Components
CloudSuite In-Memory Analytics:
  Large
  Small
CloudSuite Web Serving:
  100
  400
  500
OpenEMS:
  pyEMS Coupler
  openEMS MSL_NotchFilter
RocksDB:
  Rand Fill
  Rand Read
  Update Rand
  Seq Fill
  Rand Fill Sync
  Read While Writing
  Read Rand Write Rand