2023 new

AMD Ryzen Threadripper 3970X 32-Core testing with an ASUS ROG ZENITH II EXTREME (1603 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2302069-NE-2023NEW4563&grs.
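Results like these can typically be compared against locally using the Phoronix Test Suite itself; a minimal sketch, assuming the public result ID 2302069-NE-2023NEW4563 above is still available on OpenBenchmarking.org:

    # Run the same test selection locally and compare side-by-side against this public result
    phoronix-test-suite benchmark 2302069-NE-2023NEW4563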

2023 new - System Details (identical for configurations a, b, c)

Processor: AMD Ryzen Threadripper 3970X 32-Core @ 3.70GHz (32 Cores / 64 Threads)
Motherboard: ASUS ROG ZENITH II EXTREME (1603 BIOS)
Chipset: AMD Starship/Matisse
Memory: 4 x 16 GB DDR4-3600MT/s Corsair CMT64GX4M4Z3600C16
Disk: Samsung SSD 980 PRO 500GB
Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: ASUS VP28U
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc7-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47)
Vulkan: 1.2.204
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x830104d
Java Details: OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview: the exported summary table aggregates every individual test result below (RocksDB, Apache Spark, CloudSuite, Neural Magic DeepSparse, ClickHouse, Memcached, OpenEMS, VVenC) for configurations a, b, and c; the per-test listings that follow contain the same data.

RocksDB

Test: Random Fill Sync

Op/s, More Is Better - RocksDB 7.9.2: a: 5559, b: 3954, c: 3996. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 5.58, b: 5.66, c: 6.64

Apache Spark

Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 1.31, b: 1.39, c: 1.18

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 2.16, b: 1.92, c: 1.86

Apache Spark

Row Count: 1000000 - Partitions: 500 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 1.79, b: 1.80, c: 1.57

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 1.50, b: 1.62, c: 1.43

Apache Spark

Row Count: 20000000 - Partitions: 2000 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 15.97, b: 14.29, c: 14.71

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 8.33, b: 8.71, c: 9.27

Apache Spark

Row Count: 1000000 - Partitions: 100 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 1.611880247, b: 1.79, c: 1.63

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 4.91, b: 4.77, c: 4.45

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 1.914188324, b: 1.80, c: 1.98

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 1.94, b: 2.08, c: 2.13

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 4.95, b: 5.43, c: 5.40

Apache Spark

Row Count: 1000000 - Partitions: 100 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 1.46, b: 1.38, c: 1.51

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 9.93, b: 10.70, c: 9.79

Apache Spark

Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.39, b: 3.70, c: 3.44

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 1.89, b: 1.74, c: 1.88

Apache Spark

Row Count: 1000000 - Partitions: 100 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 4.55, b: 4.27, c: 4.63

CloudSuite Graph Analytics

GraphX Algorithm: Connected Components

ms, Fewer Is Better - CloudSuite Graph Analytics 4.0: a: 14338, b: 15387, c: 14198

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 2.19, b: 2.20, c: 2.03

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 10.09, b: 9.48, c: 9.32

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 5.61, b: 5.28, c: 5.20

CloudSuite Web Serving

Load Scale: 100

ops/sec, More Is Better - CloudSuite Web Serving: a: 19.35, b: 18.20, c: 19.60

Apache Spark

Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 24.98, b: 25.35, c: 26.81

Apache Spark

Row Count: 10000000 - Partitions: 100 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 4.87, b: 5.19, c: 5.17

Apache Spark

Row Count: 1000000 - Partitions: 500 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 4.55, b: 4.84, c: 4.55

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 23.29, b: 22.07, c: 21.91

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 42.93, b: 45.29, c: 45.64

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 15.62, b: 16.59, c: 15.97

ClickHouse

100M Rows Hits Dataset, Second Run

Queries Per Minute, Geo Mean, More Is Better - ClickHouse 22.12.3.5: a: 240.84 (MIN: 24.26 / MAX: 3750), b: 227.29 (MIN: 24.14 / MAX: 3333.33), c: 229.25 (MIN: 24.51 / MAX: 3157.89)

Apache Spark

Row Count: 20000000 - Partitions: 100 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 9.86, b: 9.92, c: 10.441529534

Apache Spark

Row Count: 20000000 - Partitions: 100 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 8.24, b: 7.87, c: 8.31

Apache Spark

Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 14.16, b: 14.56, c: 14.94

Apache Spark

Row Count: 10000000 - Partitions: 500 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 4.82, b: 5.04, c: 5.08

Apache Spark

Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 1.01, b: 1.05, c: 1.06

ClickHouse

100M Rows Hits Dataset, First Run / Cold Cache

Queries Per Minute, Geo Mean, More Is Better - ClickHouse 22.12.3.5: a: 216.80 (MIN: 14.82 / MAX: 3529.41), b: 217.01 (MIN: 15.02 / MAX: 2608.7), c: 206.93 (MIN: 15.36 / MAX: 3000)

Apache Spark

Row Count: 1000000 - Partitions: 500 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 1.75, b: 1.67, c: 1.71

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 33.65, b: 34.85, c: 35.24

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 29.71, b: 28.69, c: 28.37

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 4.93, b: 4.76, c: 4.71

Apache Spark

Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 4.95, b: 5.18, c: 5.13

Apache Spark

Row Count: 10000000 - Partitions: 500 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 5.72, b: 5.56, c: 5.81

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 57.36, b: 54.98, c: 55.55

Apache Spark

Row Count: 40000000 - Partitions: 500 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 16.30, b: 16.38, c: 15.70

Apache Spark

Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 8.644669927, b: 8.48, c: 8.84

Apache Spark

Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 25.49, b: 26.56, c: 25.64

Apache Spark

Row Count: 20000000 - Partitions: 500 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 8.85, b: 8.93, c: 9.22

Apache Spark

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.16, b: 3.09, c: 3.21

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 17.76, b: 17.79, c: 18.45

Apache Spark

Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.38, b: 3.43, c: 3.51

Apache Spark

Row Count: 20000000 - Partitions: 500 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 8.30, b: 8.39, c: 8.09

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 18.23, b: 18.17, c: 18.81

Apache Spark

Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 17.80, b: 17.32, c: 17.20

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 9.28, b: 8.97, c: 9.26

CloudSuite Data Analytics

Hadoop Slaves: 1

ms, Fewer Is Better - CloudSuite Data Analytics 4.0: a: 63396, b: 62263, c: 64351

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 19.04, b: 18.88, c: 19.51

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 6.36, b: 6.57, c: 6.43

CloudSuite Web Serving

Load Scale: 500

ops/sec, More Is Better - CloudSuite Web Serving: a: 35.08, b: 35.73, c: 34.60

Apache Spark

Row Count: 10000000 - Partitions: 500 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 6.28, b: 6.48, c: 6.36

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.383115781, b: 3.49, c: 3.46

RocksDB

Test: Sequential Fill

Op/s, More Is Better - RocksDB 7.9.2: a: 1030362, b: 1003634, c: 1035331. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

ClickHouse

100M Rows Hits Dataset, Third Run

Queries Per Minute, Geo Mean, More Is Better - ClickHouse 22.12.3.5: a: 231.92 (MIN: 22.59 / MAX: 2608.7), b: 233.48 (MIN: 23.7 / MAX: 3157.89), c: 226.34 (MIN: 22.8 / MAX: 3333.33)

Apache Spark

Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 57.47, b: 55.81, c: 55.95

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 9.01, b: 8.84, c: 8.76

Apache Spark

Row Count: 40000000 - Partitions: 100 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 18.15, b: 18.62, c: 18.11

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 54.84, b: 55.43, c: 56.38

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 56.78, b: 55.25, c: 55.30

Apache Spark

Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.34, b: 3.32, c: 3.25

Apache Spark

Row Count: 10000000 - Partitions: 500 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 4.73, b: 4.86, c: 4.76

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 6.40, b: 6.23, c: 6.29

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 6.72, b: 6.81, c: 6.90

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.38, b: 3.46, c: 3.47

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.42, b: 3.51, c: 3.44

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 10.28, b: 10.55, c: 10.43

Apache Spark

Row Count: 20000000 - Partitions: 1000 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 14.52, b: 14.51, c: 14.16

Apache Spark

Row Count: 20000000 - Partitions: 500 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 9.86, b: 9.62, c: 9.73

Apache Spark

Row Count: 10000000 - Partitions: 2000 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 8.94, b: 9.03, c: 9.16

CloudSuite Graph Analytics

GraphX Algorithm: Page Rank

ms, Fewer Is Better - CloudSuite Graph Analytics 4.0: a: 11438, b: 11307, c: 11584

Apache Spark

Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 56.75, b: 56.95, c: 55.60

Apache Spark

Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 25.76, b: 26.38, c: 25.98

Apache Spark

Row Count: 20000000 - Partitions: 500 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 14.51, b: 14.44, c: 14.17

Apache Spark

Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.42, b: 3.50, c: 3.49

Apache Spark

Row Count: 10000000 - Partitions: 100 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 6.63, b: 6.48, c: 6.51

Apache Spark

Row Count: 20000000 - Partitions: 500 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 9.35, b: 9.14, c: 9.24

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 55.60, b: 56.12, c: 56.87

Apache Spark

Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 56.05, b: 55.16, c: 56.41

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 56.47, b: 55.83, c: 55.22

Apache Spark

Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 54.87, b: 56.08, c: 54.88

Apache Spark

Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 9.14, b: 9.04, c: 9.24

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 55.47, b: 56.34, c: 55.12

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 55.16, b: 55.92, c: 56.36

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 55.96, b: 54.80, c: 55.10

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 4.87, b: 4.97, c: 4.88

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 16.17, b: 16.06, c: 16.38

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 32.19, b: 32.82, c: 32.44

Apache Spark

Row Count: 40000000 - Partitions: 100 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 16.85, b: 16.53, c: 16.74

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 7.4128, b: 7.4619, c: 7.5530

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 17.46, b: 17.48, c: 17.79

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 134.77, b: 133.89, c: 132.28

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 25.32, b: 24.87, c: 25.31

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.37, b: 3.40, c: 3.43

Apache Spark

Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.44, b: 3.38, c: 3.38

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.46, b: 3.40, c: 3.40

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 55.89, b: 55.60, c: 56.58

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.43, b: 3.49, c: 3.46

RocksDB

Test: Random Fill

Op/s, More Is Better - RocksDB 7.9.2: a: 911640, b: 896530, c: 911359. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 77.87, b: 76.62, c: 77.12

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 12.83, b: 13.04, c: 12.96

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 25.28, b: 24.90, c: 25.26

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.41, b: 3.36, c: 3.41

Memcached

Set To Get Ratio: 1:1

Ops/sec, More Is Better - Memcached 1.6.18: a: 1591288.82, b: 1602278.84, c: 1614717.75. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.44, b: 3.40, c: 3.45

Apache Spark

Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 55.27, b: 56.08, c: 55.32

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 10.43, b: 10.58, c: 10.44

Apache Spark

Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.52, b: 3.48, c: 3.53

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 11.27, b: 11.43, c: 11.33

OpenEMS

Test: pyEMS Coupler

MCells/s, More Is Better - OpenEMS 0.0.35-86: a: 19.86, b: 19.60, c: 19.88. (CXX) g++ options: -O3 -rdynamic -ltinyxml -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -lexpat

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 88.67, b: 87.42, c: 88.22

Apache Spark

Row Count: 10000000 - Partitions: 100 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 5.62, b: 5.63, c: 5.70

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 491.96, b: 486.15, c: 492.94

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 8.08, b: 8.11, c: 8.19

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

Frames Per Second, More Is Better - VVenC 1.7: a: 3.809, b: 3.803, c: 3.758. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Apache Spark

Row Count: 40000000 - Partitions: 1000 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 26.04, b: 25.88, c: 26.23

CloudSuite In-Memory Analytics

Training Set Size: Large

ms, Fewer Is Better - CloudSuite In-Memory Analytics 4.0: a: 143356, b: 144863, c: 143016

Apache Spark

Row Count: 40000000 - Partitions: 100 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 16.23, b: 16.24, c: 16.05

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 6.09, b: 6.10, c: 6.16

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 17.85, b: 17.73, c: 17.65

Apache Spark

Row Count: 20000000 - Partitions: 100 - Repartition Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 8.92, b: 8.82, c: 8.85

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 630.03, b: 636.95, c: 630.95

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 177.80, b: 178.96, c: 179.64

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 89.97, b: 89.39, c: 89.04

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 56.16, b: 55.60, c: 56.04

Apache Spark

Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 17.07, b: 17.24, c: 17.07

OpenEMS

Test: openEMS MSL_NotchFilter

MCells/s, More Is Better - OpenEMS 0.0.35-86: a: 14.63, b: 14.64, c: 14.50. (CXX) g++ options: -O3 -rdynamic -ltinyxml -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -lexpat

CloudSuite In-Memory Analytics

Training Set Size: Small

ms, Fewer Is Better - CloudSuite In-Memory Analytics 4.0: a: 45941, b: 45704, c: 45505

CloudSuite Data Analytics

Hadoop Slaves: 32

ms, Fewer Is Better - CloudSuite Data Analytics 4.0: a: 1286055, b: 1274682, c: 1277138

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.40, b: 3.39, c: 3.37

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 53.76, b: 53.47, c: 53.30

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

Frames Per Second, More Is Better - VVenC 1.7: a: 8.933, b: 8.883, c: 8.856. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Memcached

Set To Get Ratio: 1:5

Ops/sec, More Is Better - Memcached 1.6.18: a: 3640442.11, b: 3667823.02, c: 3637603.67. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 297.47, b: 299.12, c: 299.93

Memcached

Set To Get Ratio: 1:100

Ops/sec, More Is Better - Memcached 1.6.18: a: 3226477.91, b: 3238867.13, c: 3212714.28. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 62.12, b: 61.63, c: 61.68

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 16.10, b: 16.23, c: 16.21

Apache Spark

Row Count: 40000000 - Partitions: 500 - Inner Join Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 18.08, b: 18.22, c: 18.15

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 632.27, b: 637.16, c: 632.76

RocksDB

Test: Read Random Write Random

Op/s, More Is Better - RocksDB 7.9.2: a: 2763110, b: 2755551, c: 2776604. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Read

Op/s, More Is Better - RocksDB 7.9.2: a: 131073357, b: 132034808, c: 131994679. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Apache Spark

Row Count: 10000000 - Partitions: 1000 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 8.70, b: 8.64, c: 8.65

Apache Spark

Row Count: 10000000 - Partitions: 500 - SHA-512 Benchmark Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 8.72, b: 8.69, c: 8.75

RocksDB

Test: Read While Writing

Op/s, More Is Better - RocksDB 7.9.2: a: 4696370, b: 4664918, c: 4696807. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

CloudSuite Data Analytics

Hadoop Slaves: 4

ms, Fewer Is Better - CloudSuite Data Analytics 4.0: a: 1282161, b: 1285523, c: 1290818

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 61.82, b: 61.68, c: 62.10

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 16.17, b: 16.21, c: 16.10

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 21.31, b: 21.45, c: 21.40

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 46.92, b: 46.61, c: 46.73

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 138.82, b: 137.96, c: 138.27

CloudSuite Data Analytics

Hadoop Slaves: 8

ms, Fewer Is Better - CloudSuite Data Analytics 4.0: a: 1283837, b: 1276206, c: 1283805

Apache Spark

Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.39, b: 3.37, c: 3.37

Apache Spark

Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.39, b: 3.40, c: 3.38

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 115.10, b: 115.76, c: 115.49

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 267.42, b: 266.86, c: 268.21

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 59.76, b: 59.90, c: 59.62

Apache Spark

Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark

Seconds, Fewer Is Better - Apache Spark 3.3: a: 56.41, b: 56.66, c: 56.54

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 17.58, b: 17.60, c: 17.65

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

Frames Per Second, More Is Better - VVenC 1.7: a: 7.434, b: 7.407, c: 7.415. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

CloudSuite Web Serving

Load Scale: 400

ops/sec, More Is Better - CloudSuite Web Serving: a: 37.15, b: 37.02, c: 37.05

Apache Spark

Row Count: 40000000 - Partitions: 500 - Group By Test Time

Seconds, Fewer Is Better - Apache Spark 3.3: a: 17.48, b: 17.42, c: 17.48

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.39, b: 3.38, c: 3.39

Apache Spark

Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Seconds, Fewer Is Better - Apache Spark 3.3: a: 3.43, b: 3.44, c: 3.43

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

Frames Per Second, More Is Better - VVenC 1.7: a: 17.35, b: 17.31, c: 17.31. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

CloudSuite Data Analytics

Hadoop Slaves: 64

ms, Fewer Is Better - CloudSuite Data Analytics 4.0: a: 1280792, b: 1283914, c: 1280550

RocksDB

Test: Update Random

Op/s, More Is Better - RocksDB 7.9.2: a: 788635, b: 787714, c: 786611. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 80.68, b: 80.50, c: 80.59

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 198.24, b: 198.69, c: 198.47

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 100.48, b: 100.69, c: 100.60

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 159.21, b: 158.87, c: 159.01

Memcached

Set To Get Ratio: 1:10

Ops/sec, More Is Better - Memcached 1.6.18: a: 3706097.76, b: 3712642.45, c: 3711642.32. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

items/sec, More Is Better - Neural Magic DeepSparse 1.3.2: a: 67.09, b: 67.14, c: 67.13

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.3.2: a: 14.89, b: 14.88, c: 14.89


Phoronix Test Suite v10.8.4