2023 ryzen 5

AMD Ryzen 5 4500U testing with a LENOVO LNVNB161216 (EECN20WW BIOS) and AMD Renoir 512MB on Pop 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2302063-NE-2023RYZEN21

Test Runs

Identifier   Date               Test Duration
a            February 05 2023   7 Hours, 14 Minutes
b            February 06 2023   7 Hours, 10 Minutes
c            February 06 2023   7 Hours, 17 Minutes
Average:                        7 Hours, 13 Minutes



2023 Ryzen 5 Benchmarks - System Details

Processor: AMD Ryzen 5 4500U @ 2.38GHz (6 Cores)
Motherboard: LENOVO LNVNB161216 (EECN20WW BIOS)
Chipset: AMD Renoir/Cezanne
Memory: 16GB
Disk: 256GB SK hynix HFM256GDHTNI-87A0B
Graphics: AMD Renoir 512MB (1500/400MHz)
Audio: AMD Renoir Radeon HD Audio
Network: Realtek RTL8822CE 802.11ac PCIe
OS: Pop 22.04
Kernel: 5.17.5-76051705-generic (x86_64)
Desktop: GNOME Shell 42.1
Display Server: X Server 1.21.1.3
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.44)
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - Platform Profile: balanced - CPU Microcode: 0x8600102 - ACPI Profile: balanced
- GLAMOR - BAR1 / Visible vRAM Size: 512 MB - vBIOS Version: 113-RENOIR-025
- OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1) - Python 3.10.6
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (relative performance of runs a/b/c, normalized 100% to 123%): Unvanquished, ET: Legacy, PostgreSQL, OpenEMS, Memcached, VVenC, ClickHouse, Apache Spark, Neural Magic DeepSparse

[Condensed table of all 128 results across runs a, b, and c; the individual results are listed per test below.]

ET: Legacy

ETLegacy is an open-source engine evolution of Wolfenstein: Enemy Territory, a World War II era first person shooter that was released for free by Splash Damage using the id Tech 3 engine. Learn more via the OpenBenchmarking.org test page.

ET: Legacy 2.81 (Frames Per Second, More Is Better)
Resolution: 1920 x 1080: a: 125.9, b: 104.0, c: 104.2

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.54 (Frames Per Second, More Is Better)

Resolution: 1920 x 1080 - Effects Quality: High: a: 162.2, b: 130.4, c: 123.8
Resolution: 1920 x 1080 - Effects Quality: Ultra: a: 126.7, b: 90.7, c: 86.6
Resolution: 1920 x 1080 - Effects Quality: Medium: a: 146.3, b: 120.1, c: 124.1

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.
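
For reference, the result labels below ("Video Input" and "Video Preset") map onto vvencapp command-line options. The following is a rough, hypothetical invocation sketch, not a command taken from this result file; the input file name, resolution, and frame rate are placeholders.

    import subprocess

    # Hypothetical vvencapp run roughly corresponding to the
    # "Bosphorus 1080p - Video Preset: Fast" result label; the input
    # path, resolution, and frame rate are illustrative placeholders.
    subprocess.run([
        "vvencapp",
        "-i", "Bosphorus_1920x1080.yuv",   # raw YUV input (placeholder path)
        "-s", "1920x1080",                 # source resolution
        "-r", "60",                        # frame rate
        "--preset", "fast",                # the "Video Preset" in the results
        "-o", "out.266",                   # H.266/VVC bitstream output
    ], check=True)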

VVenC 1.7 (Frames Per Second, More Is Better)

Video Input: Bosphorus 4K - Video Preset: Fast: a: 1.591, b: 1.648, c: 1.621
Video Input: Bosphorus 4K - Video Preset: Faster: a: 3.580, b: 3.644, c: 3.601
Video Input: Bosphorus 1080p - Video Preset: Fast: a: 5.335, b: 5.445, c: 5.400
Video Input: Bosphorus 1080p - Video Preset: Faster: a: 13.07, b: 13.29, c: 13.07

Compiler notes: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million row web analytics dataset. The reported value is derived from the geometric mean of the processing times of all the individual queries. Learn more via the OpenBenchmarking.org test page.
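
As a minimal sketch of that aggregation (not the Phoronix Test Suite's own code), the queries-per-minute figure can be thought of as the geometric mean of the individual query runtimes converted into a per-minute rate; the runtimes below are illustrative values only.

    import math

    # Illustrative per-query runtimes in seconds (placeholder values).
    query_times_s = [0.95, 1.40, 23.0, 0.60]

    # Geometric mean of the runtimes, then converted to queries per minute.
    geo_mean_s = math.exp(sum(math.log(t) for t in query_times_s) / len(query_times_s))
    queries_per_minute = 60.0 / geo_mean_s
    print(f"geometric mean: {geo_mean_s:.3f} s -> {queries_per_minute:.1f} queries/minute")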

ClickHouse 22.12.3.5 (Queries Per Minute, Geo Mean, More Is Better)

100M Rows Hits Dataset, First Run / Cold Cache: a: 62.09 (MIN: 3.48 / MAX: 2608.7), b: 61.80 (MIN: 3.46 / MAX: 2307.69), c: 60.32 (MIN: 3.49 / MAX: 3157.89)
100M Rows Hits Dataset, Second Run: a: 66.77 (MIN: 3.58 / MAX: 2400), b: 67.00 (MIN: 3.55 / MAX: 3529.41), c: 69.10 (MIN: 3.59 / MAX: 4615.38)
100M Rows Hits Dataset, Third Run: a: 68.21 (MIN: 3.59 / MAX: 2608.7), b: 67.68 (MIN: 3.57 / MAX: 2727.27), c: 70.22 (MIN: 3.6 / MAX: 3157.89)

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
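
As a rough illustration of the kind of operation behind a result label such as "Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time", the sketch below times a SHA-512 hash over a generated DataFrame. It is a simplified sketch assuming a local PySpark installation, not the pyspark-benchmark code itself.

    import time
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, sha2

    spark = SparkSession.builder.master("local[*]").appName("sha512-sketch").getOrCreate()

    # 1,000,000 rows split across 100 partitions, as in the result labels below.
    df = spark.range(1_000_000).repartition(100)

    start = time.time()
    # Hash each row and force evaluation with an action.
    df.select(sha2(col("id").cast("string"), 512).alias("digest")).count()
    print(f"SHA-512 benchmark time: {time.time() - start:.2f} s")

    spark.stop()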

Apache Spark 3.3 (Seconds, Fewer Is Better) - Row Count: 1000000

Partitions: 100 - SHA-512 Benchmark Time: a: 6.20, b: 6.09, c: 6.18
Partitions: 100 - Calculate Pi Benchmark: a: 382.94, b: 376.45, c: 379.30
Partitions: 100 - Calculate Pi Benchmark Using Dataframe: a: 27.82, b: 27.29, c: 27.47
Partitions: 100 - Group By Test Time: a: 5.41, b: 5.59, c: 5.38
Partitions: 100 - Repartition Test Time: a: 5.31, b: 5.40, c: 5.38
Partitions: 100 - Inner Join Test Time: a: 3.57, b: 3.88, c: 3.77
Partitions: 100 - Broadcast Inner Join Test Time: a: 3.19, b: 3.10, c: 3.15
Partitions: 500 - SHA-512 Benchmark Time: a: 6.60, b: 6.96, c: 6.88
Partitions: 500 - Calculate Pi Benchmark: a: 380.63, b: 376.19, c: 377.48
Partitions: 500 - Calculate Pi Benchmark Using Dataframe: a: 27.70, b: 26.91, c: 27.50
Partitions: 500 - Group By Test Time: a: 6.30, b: 6.53, c: 6.65
Partitions: 500 - Repartition Test Time: a: 5.41, b: 5.46, c: 5.48
Partitions: 500 - Inner Join Test Time: a: 4.36, b: 4.53, c: 4.47
Partitions: 500 - Broadcast Inner Join Test Time: a: 3.84, b: 3.84, c: 3.80
Partitions: 1000 - SHA-512 Benchmark Time: a: 7.25, b: 7.75, c: 7.27
Partitions: 1000 - Calculate Pi Benchmark: a: 380.69, b: 379.89, c: 378.64
Partitions: 1000 - Calculate Pi Benchmark Using Dataframe: a: 27.39, b: 27.36, c: 27.33
Partitions: 1000 - Group By Test Time: a: 7.04, b: 6.80, c: 6.93
Partitions: 1000 - Repartition Test Time: a: 6.14, b: 5.86, c: 6.31
Partitions: 1000 - Inner Join Test Time: a: 5.41, b: 5.44, c: 5.52
Partitions: 1000 - Broadcast Inner Join Test Time: a: 4.57, b: 4.60, c: 4.60
Partitions: 2000 - SHA-512 Benchmark Time: a: 7.89, b: 7.93, c: 7.88
Partitions: 2000 - Calculate Pi Benchmark: a: 379.06, b: 378.35, c: 379.19
Partitions: 2000 - Calculate Pi Benchmark Using Dataframe: a: 27.35, b: 27.62, c: 27.22
Partitions: 2000 - Group By Test Time: a: 8.09, b: 8.02, c: 8.05
Partitions: 2000 - Repartition Test Time: a: 7.20, b: 7.03, c: 7.03
Partitions: 2000 - Inner Join Test Time: a: 6.90, b: 7.35, c: 7.36
Partitions: 2000 - Broadcast Inner Join Test Time: a: 5.57, b: 6.12, c: 5.54

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
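
The "Set To Get Ratio" labels below correspond to memtier_benchmark's SET:GET workload mix. A hypothetical invocation might look like the following; the server address, protocol, and run length are assumptions rather than details taken from this result file.

    import subprocess

    # Illustrative memtier_benchmark run against a local memcached instance.
    subprocess.run([
        "memtier_benchmark",
        "-s", "127.0.0.1", "-p", "11211",   # memcached server and port (placeholders)
        "--protocol=memcache_binary",        # protocol choice is an assumption
        "--ratio=1:10",                      # SET:GET ratio, e.g. the 1:10 result below
        "--test-time=60",                    # run length in seconds (assumption)
    ], check=True)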

Memcached 1.6.18 (Ops/sec, More Is Better)

Set To Get Ratio: 1:5: a: 686914.92, b: 680062.15, c: 673400.06
Set To Get Ratio: 1:10: a: 647418.98, b: 661606.77, c: 652041.20
Set To Get Ratio: 1:100: a: 646102.05, b: 604441.11, c: 664570.89

Compiler notes: (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
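
The "Scaling Factor", "Clients", and "Mode" labels below map onto standard pgbench options. Below is a minimal sketch of such a run, assuming a local PostgreSQL server, a database named pgbench_db, and pgbench on the PATH; none of these specifics come from this result file.

    import subprocess

    def run_pgbench(scale: int, clients: int, read_only: bool, seconds: int = 60) -> str:
        # Initialize the benchmark tables at the given scaling factor.
        subprocess.run(["pgbench", "-i", "-s", str(scale), "pgbench_db"], check=True)
        cmd = ["pgbench", "-c", str(clients), "-j", "4", "-T", str(seconds), "pgbench_db"]
        if read_only:
            cmd.insert(1, "-S")  # built-in select-only script, i.e. the "Read Only" mode
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return result.stdout  # includes TPS and average latency

    # e.g. the "Scaling Factor: 100 - Clients: 50 - Mode: Read Only" configuration
    print(run_pgbench(scale=100, clients=50, read_only=True))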

PostgreSQL 15 (TPS: More Is Better; Average Latency in ms: Fewer Is Better)

Scaling Factor: 1 - Clients: 1 - Mode: Read Only: a: 18849, b: 18350, c: 18694 TPS
Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency: a: 0.053, b: 0.054, c: 0.053 ms
Scaling Factor: 1 - Clients: 1 - Mode: Read Write: a: 568, b: 574, c: 564 TPS
Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency: a: 1.759, b: 1.743, c: 1.773 ms
Scaling Factor: 1 - Clients: 50 - Mode: Read Only: a: 189855, b: 190735, c: 187167 TPS
Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency: a: 0.263, b: 0.262, c: 0.267 ms
Scaling Factor: 1 - Clients: 100 - Mode: Read Only: a: 188986, b: 172730, c: 187969 TPS
Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency: a: 0.529, b: 0.579, c: 0.532 ms
Scaling Factor: 1 - Clients: 250 - Mode: Read Only: a: 183892, b: 178424, c: 184157 TPS
Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average Latency: a: 1.359, b: 1.401, c: 1.358 ms
Scaling Factor: 1 - Clients: 50 - Mode: Read Write: a: 557, b: 551, c: 555 TPS
Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency: a: 89.70, b: 90.76, c: 90.04 ms
Scaling Factor: 1 - Clients: 500 - Mode: Read Only: a: 180688, b: 164072, c: 176550 TPS
Scaling Factor: 1 - Clients: 500 - Mode: Read Only - Average Latency: a: 2.767, b: 3.047, c: 2.832 ms
Scaling Factor: 1 - Clients: 800 - Mode: Read Only: a: 137557, b: 110229, c: 120638 TPS
Scaling Factor: 1 - Clients: 800 - Mode: Read Only - Average Latency: a: 5.816, b: 7.258, c: 6.631 ms
Scaling Factor: 100 - Clients: 1 - Mode: Read Only: a: 19549, b: 16747, c: 20223 TPS
Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency: a: 0.051, b: 0.060, c: 0.049 ms
Scaling Factor: 1 - Clients: 100 - Mode: Read Write: a: 418, b: 545, c: 474 TPS
Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency: a: 239.14, b: 183.36, c: 210.77 ms
Scaling Factor: 1 - Clients: 250 - Mode: Read Write: a: 500, b: 497, c: 489 TPS
Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average Latency: a: 499.81, b: 503.30, c: 510.90 ms
Scaling Factor: 1 - Clients: 500 - Mode: Read Write: a: 388, b: 421, c: 387 TPS
Scaling Factor: 1 - Clients: 500 - Mode: Read Write - Average Latency: a: 1288.17, b: 1188.78, c: 1293.04 ms
Scaling Factor: 1 - Clients: 800 - Mode: Read Write: a: 338, b: 329, c: 306 TPS
Scaling Factor: 1 - Clients: 800 - Mode: Read Write - Average Latency: a: 2368.33, b: 2434.47, c: 2613.03 ms
Scaling Factor: 100 - Clients: 1 - Mode: Read Write: a: 339, b: 350, c: 340 TPS
Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency: a: 2.946, b: 2.856, c: 2.939 ms
Scaling Factor: 100 - Clients: 50 - Mode: Read Only: a: 185082, b: 175404, c: 174601 TPS
Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency: a: 0.270, b: 0.285, c: 0.286 ms
Scaling Factor: 100 - Clients: 100 - Mode: Read Only: a: 163009, b: 170386, c: 162692 TPS
Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency: a: 0.613, b: 0.587, c: 0.615 ms
Scaling Factor: 100 - Clients: 250 - Mode: Read Only: a: 154762, b: 173250, c: 167141 TPS
Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency: a: 1.615, b: 1.443, c: 1.496 ms
Scaling Factor: 100 - Clients: 50 - Mode: Read Write: a: 4491, b: 4770, c: 3807 TPS
Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency: a: 11.13, b: 10.48, c: 13.13 ms
Scaling Factor: 100 - Clients: 500 - Mode: Read Only: a: 168618, b: 133815, c: 98790 TPS
Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency: a: 2.965, b: 3.736, c: 5.061 ms
Scaling Factor: 100 - Clients: 800 - Mode: Read Only: a: 106427, b: 127387, c: 121459 TPS
Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency: a: 7.517, b: 6.280, c: 6.587 ms
Scaling Factor: 100 - Clients: 100 - Mode: Read Write: a: 5148, b: 5036, c: 4346 TPS
Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency: a: 19.42, b: 19.86, c: 23.01 ms
Scaling Factor: 100 - Clients: 250 - Mode: Read Write: a: 5258, b: 5060, c: 4505 TPS
Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency: a: 47.55, b: 49.41, c: 55.49 ms
Scaling Factor: 100 - Clients: 500 - Mode: Read Write: a: 5162, b: 4721, c: 4217 TPS
Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency: a: 96.86, b: 105.91, c: 118.56 ms
Scaling Factor: 100 - Clients: 800 - Mode: Read Write: a: 4518, b: 4514, c: 4104 TPS
Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency: a: 177.09, b: 177.23, c: 194.94 ms

Compiler notes: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
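
The "Scenario" labels below (Asynchronous Multi-Stream vs Synchronous Single-Stream) correspond to scenario options of the deepsparse.benchmark utility. A rough invocation sketch follows; the SparseZoo model stub is a placeholder, and the option names reflect common deepsparse.benchmark usage, which may differ between versions.

    import subprocess

    # Hypothetical deepsparse.benchmark invocation; the zoo stub below is a
    # placeholder, not a model identifier taken from this result file.
    subprocess.run([
        "deepsparse.benchmark",
        "zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none",
        "-s", "sync",   # "Synchronous Single-Stream" scenario
        "-b", "1",      # batch size
    ], check=True)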

Neural Magic DeepSparse 1.3.2 (items/sec: More Is Better; ms/batch: Fewer Is Better)

NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (items/sec): a: 3.4136, b: 3.3567, c: 3.4003
NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (ms/batch): a: 875.43, b: 887.62, c: 876.73
NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream (items/sec): a: 3.2897, b: 3.2950, c: 3.2326
NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream (ms/batch): a: 303.97, b: 303.48, c: 309.33
NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream (items/sec): a: 29.47, b: 29.62, c: 28.38
NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream (ms/batch): a: 101.73, b: 101.21, c: 105.59
NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream (items/sec): a: 18.10, b: 16.98, c: 16.59
NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream (ms/batch): a: 55.24, b: 58.86, c: 60.26
NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream (items/sec): a: 10.67, b: 10.69, c: 10.57
NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream (ms/batch): a: 280.50, b: 280.50, c: 282.94
NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream (items/sec): a: 6.4244, b: 6.4058, c: 6.4858
NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream (ms/batch): a: 155.63, b: 156.09, c: 154.16
CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream (items/sec): a: 20.10, b: 20.16, c: 20.04
CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream (ms/batch): a: 149.04, b: 148.43, c: 149.48
CV Detection, YOLOv5s COCO - Synchronous Single-Stream (items/sec): a: 17.72, b: 17.95, c: 17.72
CV Detection, YOLOv5s COCO - Synchronous Single-Stream (ms/batch): a: 56.41, b: 55.69, c: 56.42
CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream (items/sec): a: 43.15, b: 43.05, c: 42.88
CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream (ms/batch): a: 69.44, b: 69.64, c: 69.90
CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream (items/sec): a: 37.99, b: 38.79, c: 38.62
CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream (ms/batch): a: 26.31, b: 25.76, c: 25.88
NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream (items/sec): a: 30.14, b: 30.15, c: 30.22
NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream (ms/batch): a: 99.49, b: 99.45, c: 99.22
NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream (items/sec): a: 24.70, b: 25.24, c: 25.27
NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream (ms/batch): a: 40.48, b: 39.60, c: 39.56
CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream (items/sec): a: 4.0502, b: 4.0529, c: 4.0287
CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream (ms/batch): a: 734.59, b: 735.40, c: 738.29
CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream (items/sec): a: 4.2014, b: 4.1893, c: 4.1841
CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream (ms/batch): a: 238.00, b: 238.68, c: 238.98
NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream (items/sec): a: 15.06, b: 15.05, c: 14.94
NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream (ms/batch): a: 199.13, b: 199.07, c: 200.01
NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream (items/sec): a: 10.05, b: 10.52, c: 11.06
NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream (ms/batch): a: 99.50, b: 95.09, c: 90.43
NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream (items/sec): a: 3.4341, b: 3.4302, c: 3.4656
NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream (ms/batch): a: 866.76, b: 864.38, c: 860.17
NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream (items/sec): a: 3.2499, b: 3.2812, c: 3.3428
NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream (ms/batch): a: 307.70, b: 304.76, c: 299.14

OpenEMS

OpenEMS is a free and open electromagnetic field solver using the FDTD method. This test profile runs OpenEMS and pyEMS benchmark demos. Learn more via the OpenBenchmarking.org test page.

OpenEMS 0.0.35-86 (MCells/s, More Is Better)

Test: pyEMS Coupler: a: 15.88, b: 16.07, c: 15.43
Test: openEMS MSL_NotchFilter: a: 60.99, b: 60.86, c: 69.33

Compiler notes: (CXX) g++ options: -O3 -rdynamic -ltinyxml -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -lexpat

128 Results Shown

ET: Legacy
Unvanquished:
  1920 x 1080 - High
  1920 x 1080 - Ultra
  1920 x 1080 - Medium
VVenC:
  Bosphorus 4K - Fast
  Bosphorus 4K - Faster
  Bosphorus 1080p - Fast
  Bosphorus 1080p - Faster
ClickHouse:
  100M Rows Hits Dataset, First Run / Cold Cache
  100M Rows Hits Dataset, Second Run
  100M Rows Hits Dataset, Third Run
Apache Spark:
  1000000 - 100 - SHA-512 Benchmark Time
  1000000 - 100 - Calculate Pi Benchmark
  1000000 - 100 - Calculate Pi Benchmark Using Dataframe
  1000000 - 100 - Group By Test Time
  1000000 - 100 - Repartition Test Time
  1000000 - 100 - Inner Join Test Time
  1000000 - 100 - Broadcast Inner Join Test Time
  1000000 - 500 - SHA-512 Benchmark Time
  1000000 - 500 - Calculate Pi Benchmark
  1000000 - 500 - Calculate Pi Benchmark Using Dataframe
  1000000 - 500 - Group By Test Time
  1000000 - 500 - Repartition Test Time
  1000000 - 500 - Inner Join Test Time
  1000000 - 500 - Broadcast Inner Join Test Time
  1000000 - 1000 - SHA-512 Benchmark Time
  1000000 - 1000 - Calculate Pi Benchmark
  1000000 - 1000 - Calculate Pi Benchmark Using Dataframe
  1000000 - 1000 - Group By Test Time
  1000000 - 1000 - Repartition Test Time
  1000000 - 1000 - Inner Join Test Time
  1000000 - 1000 - Broadcast Inner Join Test Time
  1000000 - 2000 - SHA-512 Benchmark Time
  1000000 - 2000 - Calculate Pi Benchmark
  1000000 - 2000 - Calculate Pi Benchmark Using Dataframe
  1000000 - 2000 - Group By Test Time
  1000000 - 2000 - Repartition Test Time
  1000000 - 2000 - Inner Join Test Time
  1000000 - 2000 - Broadcast Inner Join Test Time
Memcached:
  1:5
  1:10
  1:100
PostgreSQL:
  1 - 1 - Read Only
  1 - 1 - Read Only - Average Latency
  1 - 1 - Read Write
  1 - 1 - Read Write - Average Latency
  1 - 50 - Read Only
  1 - 50 - Read Only - Average Latency
  1 - 100 - Read Only
  1 - 100 - Read Only - Average Latency
  1 - 250 - Read Only
  1 - 250 - Read Only - Average Latency
  1 - 50 - Read Write
  1 - 50 - Read Write - Average Latency
  1 - 500 - Read Only
  1 - 500 - Read Only - Average Latency
  1 - 800 - Read Only
  1 - 800 - Read Only - Average Latency
  100 - 1 - Read Only
  100 - 1 - Read Only - Average Latency
  1 - 100 - Read Write
  1 - 100 - Read Write - Average Latency
  1 - 250 - Read Write
  1 - 250 - Read Write - Average Latency
  1 - 500 - Read Write
  1 - 500 - Read Write - Average Latency
  1 - 800 - Read Write
  1 - 800 - Read Write - Average Latency
  100 - 1 - Read Write
  100 - 1 - Read Write - Average Latency
  100 - 50 - Read Only
  100 - 50 - Read Only - Average Latency
  100 - 100 - Read Only
  100 - 100 - Read Only - Average Latency
  100 - 250 - Read Only
  100 - 250 - Read Only - Average Latency
  100 - 50 - Read Write
  100 - 50 - Read Write - Average Latency
  100 - 500 - Read Only
  100 - 500 - Read Only - Average Latency
  100 - 800 - Read Only
  100 - 800 - Read Only - Average Latency
  100 - 100 - Read Write
  100 - 100 - Read Write - Average Latency
  100 - 250 - Read Write
  100 - 250 - Read Write - Average Latency
  100 - 500 - Read Write
  100 - 500 - Read Write - Average Latency
  100 - 800 - Read Write
  100 - 800 - Read Write - Average Latency
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    items/sec
    ms/batch
OpenEMS:
  pyEMS Coupler
  openEMS MSL_NotchFilter