sfsd

Intel Core i7-8700K testing with a ASUS TUF Z370-PLUS GAMING (2001 BIOS) and ASUS Intel UHD 630 CFL GT2 16GB on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2209060-NE-SFSD7970842&sro&grr.

sfsdProcessorMotherboardChipsetMemoryDiskGraphicsAudioMonitorNetworkOSKernelDesktopDisplay ServerOpenGLVulkanCompilerFile-SystemScreen ResolutionABCIntel Core i7-8700K @ 4.70GHz (6 Cores / 12 Threads)ASUS TUF Z370-PLUS GAMING (2001 BIOS)Intel 8th Gen Core16GB128GB Toshiba THNSN5128GPU7ASUS Intel UHD 630 CFL GT2 16GB (1200MHz)Realtek ALC887-VDDELL S2409WIntel I219-VUbuntu 22.045.19.0-rc6-phx-retbleed (x86_64)GNOME Shell 42.4X Server + Wayland4.6 Mesa 22.0.11.2.204GCC 11.2.0ext41920x1080OpenBenchmarking.orgKernel Details- Transparent Huge Pages: madviseCompiler Details- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v Disk Details- NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Details- Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xf0 - Thermald 2.4.9 Java Details- OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)Python Details- Python 3.10.4Security Details- itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of IBRS IBPB: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of TSX disabled

sfsdbuild-nodejs: Time To Compileai-benchmark: Device AI Scoreai-benchmark: Device Training Scoreai-benchmark: Device Inference Scorespark: 1000000 - 2000 - Broadcast Inner Join Test Timespark: 1000000 - 2000 - Inner Join Test Timespark: 1000000 - 2000 - Repartition Test Timespark: 1000000 - 2000 - Group By Test Timespark: 1000000 - 2000 - Calculate Pi Benchmark Using Dataframespark: 1000000 - 2000 - Calculate Pi Benchmarkspark: 1000000 - 2000 - SHA-512 Benchmark Timespark: 1000000 - 500 - Broadcast Inner Join Test Timespark: 1000000 - 500 - Inner Join Test Timespark: 1000000 - 500 - Repartition Test Timespark: 1000000 - 500 - Group By Test Timespark: 1000000 - 500 - Calculate Pi Benchmark Using Dataframespark: 1000000 - 500 - Calculate Pi Benchmarkspark: 1000000 - 500 - SHA-512 Benchmark Timespark: 40000000 - 100 - Broadcast Inner Join Test Timespark: 40000000 - 100 - Inner Join Test Timespark: 40000000 - 100 - Repartition Test Timespark: 40000000 - 100 - Group By Test Timespark: 40000000 - 100 - Calculate Pi Benchmark Using Dataframespark: 40000000 - 100 - Calculate Pi Benchmarkspark: 40000000 - 100 - SHA-512 Benchmark Timespark: 1000000 - 100 - Broadcast Inner Join Test Timespark: 1000000 - 100 - Inner Join Test Timespark: 1000000 - 100 - Repartition Test Timespark: 1000000 - 100 - Group By Test Timespark: 1000000 - 100 - Calculate Pi Benchmark Using Dataframespark: 1000000 - 100 - Calculate Pi Benchmarkspark: 1000000 - 100 - SHA-512 Benchmark Timespark: 40000000 - 2000 - Broadcast Inner Join Test Timespark: 40000000 - 2000 - Inner Join Test Timespark: 40000000 - 2000 - Repartition Test Timespark: 40000000 - 2000 - Group By Test Timespark: 40000000 - 2000 - Calculate Pi Benchmark Using Dataframespark: 40000000 - 2000 - Calculate Pi Benchmarkspark: 40000000 - 2000 - SHA-512 Benchmark Timespark: 40000000 - 500 - Broadcast Inner Join Test Timespark: 40000000 - 500 - Inner Join Test Timespark: 40000000 - 500 - Repartition Test Timespark: 40000000 - 500 - Group By Test Timespark: 40000000 - 500 - Calculate Pi Benchmark Using Dataframespark: 40000000 - 500 - Calculate Pi Benchmarkspark: 40000000 - 500 - SHA-512 Benchmark Timespark: 40000000 - 1000 - Broadcast Inner Join Test Timespark: 40000000 - 1000 - Inner Join Test Timespark: 40000000 - 1000 - Repartition Test Timespark: 40000000 - 1000 - Group By Test Timespark: 40000000 - 1000 - Calculate Pi Benchmark Using Dataframespark: 40000000 - 1000 - Calculate Pi Benchmarkspark: 40000000 - 1000 - SHA-512 Benchmark Timespark: 1000000 - 1000 - Broadcast Inner Join Test Timespark: 1000000 - 1000 - Inner Join Test Timespark: 1000000 - 1000 - Repartition Test Timespark: 1000000 - 1000 - Group By Test Timespark: 1000000 - 1000 - Calculate Pi Benchmark Using Dataframespark: 1000000 - 1000 - Calculate Pi Benchmarkspark: 1000000 - 1000 - SHA-512 Benchmark Timespark: 20000000 - 100 - Broadcast Inner Join Test Timespark: 20000000 - 100 - Inner Join Test Timespark: 20000000 - 100 - Repartition Test Timespark: 20000000 - 100 - Group By Test Timespark: 20000000 - 100 - Calculate Pi Benchmark Using Dataframespark: 20000000 - 100 - Calculate Pi Benchmarkspark: 20000000 - 100 - SHA-512 Benchmark Timespark: 20000000 - 2000 - Broadcast Inner Join Test Timespark: 20000000 - 2000 - Inner Join Test Timespark: 20000000 - 2000 - Repartition Test Timespark: 20000000 - 2000 - Group By Test Timespark: 20000000 - 2000 - Calculate Pi Benchmark Using Dataframespark: 20000000 - 2000 - Calculate Pi Benchmarkspark: 20000000 - 2000 - SHA-512 Benchmark Timespark: 20000000 - 500 - 
Broadcast Inner Join Test Timespark: 20000000 - 500 - Inner Join Test Timespark: 20000000 - 500 - Repartition Test Timespark: 20000000 - 500 - Group By Test Timespark: 20000000 - 500 - Calculate Pi Benchmark Using Dataframespark: 20000000 - 500 - Calculate Pi Benchmarkspark: 20000000 - 500 - SHA-512 Benchmark Timespark: 20000000 - 1000 - Broadcast Inner Join Test Timespark: 20000000 - 1000 - Inner Join Test Timespark: 20000000 - 1000 - Repartition Test Timespark: 20000000 - 1000 - Group By Test Timespark: 20000000 - 1000 - Calculate Pi Benchmark Using Dataframespark: 20000000 - 1000 - Calculate Pi Benchmarkspark: 20000000 - 1000 - SHA-512 Benchmark Timeprimesieve: 1e13spark: 10000000 - 2000 - Broadcast Inner Join Test Timespark: 10000000 - 2000 - Inner Join Test Timespark: 10000000 - 2000 - Repartition Test Timespark: 10000000 - 2000 - Group By Test Timespark: 10000000 - 2000 - Calculate Pi Benchmark Using Dataframespark: 10000000 - 2000 - Calculate Pi Benchmarkspark: 10000000 - 2000 - SHA-512 Benchmark Timespark: 10000000 - 1000 - Broadcast Inner Join Test Timespark: 10000000 - 1000 - Inner Join Test Timespark: 10000000 - 1000 - Repartition Test Timespark: 10000000 - 1000 - Group By Test Timespark: 10000000 - 1000 - Calculate Pi Benchmark Using Dataframespark: 10000000 - 1000 - Calculate Pi Benchmarkspark: 10000000 - 1000 - SHA-512 Benchmark Timespark: 10000000 - 100 - Broadcast Inner Join Test Timespark: 10000000 - 100 - Inner Join Test Timespark: 10000000 - 100 - Repartition Test Timespark: 10000000 - 100 - Group By Test Timespark: 10000000 - 100 - Calculate Pi Benchmark Using Dataframespark: 10000000 - 100 - Calculate Pi Benchmarkspark: 10000000 - 100 - SHA-512 Benchmark Timespark: 10000000 - 500 - Broadcast Inner Join Test Timespark: 10000000 - 500 - Inner Join Test Timespark: 10000000 - 500 - Repartition Test Timespark: 10000000 - 500 - Group By Test Timespark: 10000000 - 500 - Calculate Pi Benchmark Using Dataframespark: 10000000 - 500 - Calculate Pi Benchmarkspark: 10000000 - 500 - SHA-512 Benchmark Timeclickhouse: 100M Rows Web Analytics Dataset, Third Runclickhouse: 100M Rows Web Analytics Dataset, Second Runclickhouse: 100M Rows Web Analytics Dataset, First Run / Cold Cachemnn: inception-v3mnn: mobilenet-v1-1.0mnn: MobileNetV2_224mnn: SqueezeNetV1.0mnn: resnet-v2-50mnn: squeezenetv1.1mnn: mobilenetV3mnn: nasnetbuild-python: Released Build, PGO + LTO Optimizedbuild-erlang: Time To Compilencnn: CPU - FastestDetncnn: CPU - vision_transformerncnn: CPU - regnety_400mncnn: CPU - squeezenet_ssdncnn: CPU - yolov4-tinyncnn: CPU - resnet50ncnn: CPU - alexnetncnn: CPU - resnet18ncnn: CPU - vgg16ncnn: CPU - googlenetncnn: CPU - blazefacencnn: CPU - efficientnet-b0ncnn: CPU - mnasnetncnn: CPU - shufflenet-v2ncnn: CPU-v3-v3 - mobilenet-v3ncnn: CPU-v2-v2 - mobilenet-v2ncnn: CPU - mobilenetsvt-av1: Preset 4 - Bosphorus 4Kredis: LPUSH - 1000unvanquished: 1920 x 1080 - Ultraredis: LPUSH - 50redis: LPUSH - 500redis: LPOP - 1000redis: SET - 1000astcenc: Exhaustiveredis: SADD - 50redis: GET - 1000build-php: Time To Compilebuild-wasmer: Time To Compileredis: GET - 50redis: SET - 500redis: SADD - 500redis: LPOP - 50redis: LPOP - 500node-web-tooling: dragonflydb: 200 - 5:1memtier-benchmark: Redis - 500 - 5:1dragonflydb: 200 - 1:1dragonflydb: 200 - 1:5memtier-benchmark: Redis - 100 - 1:1memtier-benchmark: Redis - 100 - 1:10memtier-benchmark: Redis - 100 - 5:1memtier-benchmark: Redis - 500 - 1:1memtier-benchmark: Redis - 500 - 1:5memtier-benchmark: Redis - 500 - 1:10dragonflydb: 50 - 1:5dragonflydb: 50 - 
5:1openvino: Person Detection FP32 - CPUopenvino: Person Detection FP32 - CPUmemtier-benchmark: Redis - 100 - 1:5dragonflydb: 50 - 1:1memtier-benchmark: Redis - 50 - 1:5memtier-benchmark: Redis - 50 - 5:1openvino: Person Detection FP16 - CPUopenvino: Person Detection FP16 - CPUmemtier-benchmark: Redis - 50 - 1:10memtier-benchmark: Redis - 50 - 1:1openvino: Face Detection FP16 - CPUopenvino: Face Detection FP16 - CPUopenvino: Face Detection FP16-INT8 - CPUopenvino: Face Detection FP16-INT8 - CPUopenvino: Machine Translation EN To DE FP16 - CPUopenvino: Machine Translation EN To DE FP16 - CPUopenvino: Person Vehicle Bike Detection FP16 - CPUopenvino: Person Vehicle Bike Detection FP16 - CPUopenvino: Vehicle Detection FP16-INT8 - CPUopenvino: Vehicle Detection FP16-INT8 - CPUopenvino: Vehicle Detection FP16 - CPUopenvino: Vehicle Detection FP16 - CPUopenvino: Weld Porosity Detection FP16 - CPUopenvino: Weld Porosity Detection FP16 - CPUgraphics-magick: Enhancedgraphics-magick: Noise-Gaussianopenvino: Weld Porosity Detection FP16-INT8 - CPUopenvino: Weld Porosity Detection FP16-INT8 - CPUgraphics-magick: Swirlopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUopenvino: Age Gender Recognition Retail 0013 FP16 - CPUopenvino: Age Gender Recognition Retail 0013 FP16 - CPUgraphics-magick: Resizinggraphics-magick: Sharpengraphics-magick: Rotategraphics-magick: HWB Color Spaceunvanquished: 1920 x 1080 - Highastcenc: Thoroughunvanquished: 1920 x 1080 - Mediumnatron: Spaceshipredis: SET - 50svt-av1: Preset 4 - Bosphorus 1080pastcenc: Fastcompress-7zip: Decompression Ratingcompress-7zip: Compression Ratingprimesieve: 1e12aircrack-ng: redis: SADD - 1000inkscape: SVG Files To PNGsvt-av1: Preset 8 - Bosphorus 4Kredis: GET - 500astcenc: Mediumbuild-python: Defaultsvt-av1: Preset 10 - Bosphorus 4Kblosc: blosclz bitshufflesvt-av1: Preset 8 - Bosphorus 1080psvt-av1: Preset 12 - Bosphorus 4Kunpack-linux: linux-5.19.tar.xzblosc: blosclz shufflesvt-av1: Preset 10 - Bosphorus 1080plammps: Rhodopsin Proteinsvt-av1: Preset 12 - Bosphorus 
1080pABC957.97119489869623.273.5982461734.065.2314.043861196263.6525689344.942.232.573.574.4813.905990534264.2265477574.6757.0258.6051.91613444435.6813.94263.31580113364.3875738691.862.343.313.8214.04265.1532263544.4155.73883161859.1550.0133.3213.86263.76705939561.2657.4155.3148.5834.5813.99264.98196995462.2958.09702831159.0546.6552317232.9514.00264.00564107662.082.569569622.803.664.8014.07264.0529416174.66422185831.6029.61382041225.98692184814.97058239314.05264.38178005735.4427.9529.1225.2315.3413.924694595265.64273126233.4828.69613542929.1924.8814.6329841114.099430503263.63965716733.7027.3329.2024.35018248814.5113.909854317264.54099840233.09102933369.55114.5915.8713.665397629.6014.20266.09530546418.7314.5115.0513.539.9613.75263.34891066118.7714.9615.7813.8070766589.4714.05263.65208859818.7213.4414.39640577213.140971229.0514.02263.75441167517.86115.74113.48102.8133.9784.2593.0664.86735.3923.3791.63513.323288.398141.8633.97230.1510.1916.3524.3421.188.2310.2655.2312.351.16.823.473.313.594.6415.521.1951848508.38571906321.122046449.252090338.52074037.50.48972629399.25291261786.84287.613273258823313702627615.753444536.753301260.7512.161992128.411512323.242087416.282211895.731806170.521992499.121716481.411623068.241761822.771790527.722144617.321927867.313194.291.241970123.241988043.91989301.281726134.013200.371.231950043.181945581.312022.731.971051.993.818221.9616.13247.6718.22219.3337.49106.6520.61193.914317215.76380.42640.69929.881.115321.464091655628128.44.7866188.32.12361094.54.079100.6353387384819631.1523361.8542609105.527.85424.9382815841.537.180524.53251.151598968.85576.4437.24110148.2158.0824.942285.723957.36319509859653.553.703.925.2614.35263.9604442585.312.342.453.564.3013.78264.1409513154.5156.92380852558.3444501757.9235.0814.02263.85494709964.851.932.283.103.9913.88263.5407499893.9855.2460.76290474851.18643296233.0114.865667127263.76369023261.7255.5356.9947.21524385434.5413.96263.96197845562.6156.3855.6550.6033.8713.99262.92088035561.829100472.403.003.634.4814.10263.7763673064.6429.2528.9425.1715.1914.085522088265.85055167634.9328.7928.9924.8714.74399345913.99266.22516373333.3728.75054509328.7926.8415.1913.957632938263.10588012333.7526.9528.2523.9714.24417666313.80264.29858919432.985493107373.91514.7116.0413.7810.1013.80263.78608643718.90245495314.5915.1613.6910.0714.02264.01511800718.9814.62926564815.49599405613.479.0614.05263.14342138918.70207534113.7314.9413.2574311288.9914.04263.97171472317.762720002121.84111.19100.9933.9524.2713.1234.85135.3043.3991.6513.653288.253142.0443.86228.891016.3424.3521.058.2410.2755.2112.271.16.783.413.183.574.615.591.203189733656.61786065.121778839.882043605.622222163.250.48952605961.25234701987.02186.6692861609.2523276802492194.52141549.52035452.6212.432005750.941533754.232051704.422194182.241761644.312022525.581737144.541593958.741772298.561789992.052154148.51931122.183299.941.21919288.211994368.211942966.661713885.813196.881.231899683.691711625.322019.271.981049.733.81181.1322.0616.18246.9818.1220.7932.5122.9820.43195.6614317215.76380.422640.69884.041.125297.9664491625638132.74.78761822.11924962.254.068100.3432390494834530.72423363.4612158587.2527.96525.4572823473.7537.234324.49553.3155946.468.3478.2487.10810174.6158.0564.941285.843955.94619339849493.383.624.035.0314.07265.8117260435.322.372.623.414.3714.00266.504.6958.7557.9450.0336.3214.19266.0465.481.912.283.283.9014.10265.964.2454.8458.0547.4833.8913.88266.1861.8754.7657.1948.1834.2913.97266.7162.4355.6755.9947.2033.1813.95266.4862.142.522.943.664.6514.11266.0250281624.7631.1829.7126.7415.3513.
99265.8334.5727.6529.1325.4115.3013.93266.7133.5427.6928.3224.5614.7813.89265.92085674033.7127.7627.8124.4514.5814.12265.9932.77382.85914.5716.6013.9010.1513.98265.8518.8014.2315.1013.599.8214.80266.0718.8115.6815.8513.909.3413.96265.6318.6414.1414.9613.299.3514.00266.28317523617.89115.00113.79100.0333.9054.2753.0804.87735.2823.4081.63613.104289.768142.3163.97228.5910.2316.3324.3121.038.2310.2655.1112.381.106.843.483.323.614.6215.611.2011945768.3357.01878914.921933436.952028878.432226848.230.48592552054.122732949.3086.24585.9022830668.702201393.842568888.293105232.873208768.7712.142032385.141545189.592074466.952217552.261813065.591996437.791711961.581620962.121720148.881748466.512159425.651934424.723338.271.171943695.241996857.541929613.471695938.523310.581.181944401.051819804.242056.361.931100.093.63190.3921.0016.49242.2218.68213.9733.55119.1321.76183.6514317216.46364.282650.639454.831.165106.2264391656639134.94.7847179.22.12306357.424.072100.2210389624724930.98923366.5982594956.1727.66625.6522799995.5837.187524.30253.4075977.669.09578.1087.06210229.1158.6344.912285.997OpenBenchmarking.org

Timed Node.js Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Node.js Compilation 18.8Time To CompileABC2004006008001000SE +/- 0.84, N = 3957.97957.36955.95

AI Benchmark Alpha

Device AI Score

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device AI ScoreABC400800120016002000194819501933

AI Benchmark Alpha

Device Training Score

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device Training ScoreABC2004006008001000986985984

AI Benchmark Alpha

Device Inference Score

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device Inference ScoreABC2004006008001000962965949

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test TimeABC0.79881.59762.39643.19523.994SE +/- 0.06, N = 93.273.553.38

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - Inner Join Test TimeABC0.83251.6652.49753.334.1625SE +/- 0.039029208, N = 93.5982461733.7000000003.620000000

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - Repartition Test TimeABC0.91351.8272.74053.6544.5675SE +/- 0.05, N = 94.063.924.03

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - Group By Test TimeABC1.18352.3673.55054.7345.9175SE +/- 0.03, N = 95.235.265.03

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.10, N = 914.0414.3514.07

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.16, N = 9263.65263.96265.81

Apache Spark

Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark TimeABC1.1972.3943.5914.7885.985SE +/- 0.05, N = 94.945.315.32

Apache Spark

Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test TimeABC0.53331.06661.59992.13322.6665SE +/- 0.11, N = 92.232.342.37

Apache Spark

Row Count: 1000000 - Partitions: 500 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Inner Join Test TimeABC0.58951.1791.76852.3582.9475SE +/- 0.03, N = 92.572.452.62

Apache Spark

Row Count: 1000000 - Partitions: 500 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Repartition Test TimeABC0.80331.60662.40993.21324.0165SE +/- 0.02, N = 93.573.563.41

Apache Spark

Row Count: 1000000 - Partitions: 500 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Group By Test TimeABC1.0082.0163.0244.0325.04SE +/- 0.02, N = 94.484.304.37

Apache Spark

Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.03, N = 913.9113.7814.00

Apache Spark

Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.36, N = 9264.23264.14266.50

Apache Spark

Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark TimeABC1.05532.11063.16594.22125.2765SE +/- 0.05, N = 94.674.514.69

Apache Spark

Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test TimeABC1326395265SE +/- 0.97, N = 457.0256.9258.75

Apache Spark

Row Count: 40000000 - Partitions: 100 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - Inner Join Test TimeABC1326395265SE +/- 1.35, N = 458.6058.3457.94

Apache Spark

Row Count: 40000000 - Partitions: 100 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - Repartition Test TimeABC1326395265SE +/- 0.71, N = 451.9257.9250.03

Apache Spark

Row Count: 40000000 - Partitions: 100 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - Group By Test TimeABC816243240SE +/- 0.10, N = 435.6835.0836.32

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.26, N = 413.9414.0214.19

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.20, N = 4263.32263.85266.04

Apache Spark

Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark TimeABC1530456075SE +/- 0.79, N = 464.3964.8565.48

Apache Spark

Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test TimeABC0.43430.86861.30291.73722.1715SE +/- 0.02, N = 91.861.931.91

Apache Spark

Row Count: 1000000 - Partitions: 100 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Inner Join Test TimeABC0.52651.0531.57952.1062.6325SE +/- 0.02, N = 92.342.282.28

Apache Spark

Row Count: 1000000 - Partitions: 100 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Repartition Test TimeABC0.74481.48962.23442.97923.724SE +/- 0.02, N = 93.313.103.28

Apache Spark

Row Count: 1000000 - Partitions: 100 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Group By Test TimeABC0.89781.79562.69343.59124.489SE +/- 0.04, N = 93.823.993.90

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.09, N = 914.0413.8814.10

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.43, N = 9265.15263.54265.96

Apache Spark

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark TimeABC0.99231.98462.97693.96924.9615SE +/- 0.06, N = 94.413.984.24

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test TimeABC1326395265SE +/- 0.68, N = 355.7455.2454.84

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - Inner Join Test TimeABC1428425670SE +/- 0.81, N = 359.1560.7658.05

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - Repartition Test TimeABC1224364860SE +/- 0.42, N = 350.0151.1947.48

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - Group By Test TimeABC816243240SE +/- 0.10, N = 333.3233.0133.89

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.06, N = 313.8614.8713.88

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.18, N = 3263.77263.76266.18

Apache Spark

Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark TimeABC1428425670SE +/- 0.31, N = 361.2661.7261.87

Apache Spark

Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test TimeABC1326395265SE +/- 1.23, N = 357.4155.5354.76

Apache Spark

Row Count: 40000000 - Partitions: 500 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - Inner Join Test TimeABC1326395265SE +/- 0.91, N = 355.3156.9957.19

Apache Spark

Row Count: 40000000 - Partitions: 500 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - Repartition Test TimeABC1122334455SE +/- 0.93, N = 348.5847.2248.18

Apache Spark

Row Count: 40000000 - Partitions: 500 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - Group By Test TimeABC816243240SE +/- 0.40, N = 334.5834.5434.29

Apache Spark

Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.02, N = 313.9913.9613.97

Apache Spark

Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.59, N = 3264.98263.96266.71

Apache Spark

Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark TimeABC1428425670SE +/- 0.45, N = 362.2962.6162.43

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - Broadcast Inner Join Test TimeABC1326395265SE +/- 1.93, N = 358.1056.3855.67

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - Inner Join Test TimeABC1326395265SE +/- 0.99, N = 359.0555.6555.99

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - Repartition Test TimeABC1122334455SE +/- 0.26, N = 346.6650.6047.20

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - Group By Test TimeABC816243240SE +/- 0.03, N = 332.9533.8733.18

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.07, N = 314.0013.9913.95

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.27, N = 3264.01262.92266.48

Apache Spark

Row Count: 40000000 - Partitions: 1000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - SHA-512 Benchmark TimeABC1428425670SE +/- 0.13, N = 362.0861.8362.14

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test TimeABC0.57821.15641.73462.31282.891SE +/- 0.02035353, N = 62.569569622.400000002.52000000

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - Inner Join Test TimeABC0.6751.352.0252.73.375SE +/- 0.03, N = 62.803.002.94

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - Repartition Test TimeABC0.82351.6472.47053.2944.1175SE +/- 0.02, N = 63.663.633.66

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - Group By Test TimeABC1.082.163.244.325.4SE +/- 0.06, N = 64.804.484.65

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.14, N = 614.0714.1014.11

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.19, N = 6264.05263.78266.03

Apache Spark

Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark TimeABC1.0712.1423.2134.2845.355SE +/- 0.083377130, N = 64.6642218584.6400000004.760000000

Apache Spark

Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test TimeABC714212835SE +/- 0.32, N = 331.6029.2531.18

Apache Spark

Row Count: 20000000 - Partitions: 100 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - Inner Join Test TimeABC714212835SE +/- 0.45, N = 329.6128.9429.71

Apache Spark

Row Count: 20000000 - Partitions: 100 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - Repartition Test TimeABC612182430SE +/- 0.16, N = 325.9925.1726.74

Apache Spark

Row Count: 20000000 - Partitions: 100 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - Group By Test TimeABC48121620SE +/- 0.19, N = 314.9715.1915.35

Apache Spark

Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.03, N = 314.0514.0913.99

Apache Spark

Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.45, N = 3264.38265.85265.83

Apache Spark

Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark TimeABC816243240SE +/- 0.15, N = 335.4434.9334.57

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - Broadcast Inner Join Test TimeABC714212835SE +/- 0.38, N = 327.9528.7927.65

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - Inner Join Test TimeABC714212835SE +/- 0.30, N = 329.1228.9929.13

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - Repartition Test TimeABC612182430SE +/- 0.18, N = 325.2324.8725.41

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - Group By Test TimeABC48121620SE +/- 0.33, N = 315.3414.7415.30

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.08, N = 313.9213.9913.93

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.31, N = 3265.64266.23266.71

Apache Spark

Row Count: 20000000 - Partitions: 2000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - SHA-512 Benchmark TimeABC816243240SE +/- 0.08, N = 333.4833.3733.54

Apache Spark

Row Count: 20000000 - Partitions: 500 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - Broadcast Inner Join Test TimeABC714212835SE +/- 0.24, N = 328.7028.7527.69

Apache Spark

Row Count: 20000000 - Partitions: 500 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - Inner Join Test TimeABC714212835SE +/- 0.16, N = 329.1928.7928.32

Apache Spark

Row Count: 20000000 - Partitions: 500 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - Repartition Test TimeABC612182430SE +/- 0.08, N = 324.8826.8424.56

Apache Spark

Row Count: 20000000 - Partitions: 500 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - Group By Test TimeABC48121620SE +/- 0.19, N = 314.6315.1914.78

Apache Spark

Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.08, N = 314.1013.9613.89

Apache Spark

Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.37, N = 3263.64263.11265.92

Apache Spark

Row Count: 20000000 - Partitions: 500 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - SHA-512 Benchmark TimeABC816243240SE +/- 0.27, N = 333.7033.7533.71

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - Broadcast Inner Join Test TimeABC714212835SE +/- 0.49, N = 327.3326.9527.76

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - Inner Join Test TimeABC714212835SE +/- 0.22, N = 329.2028.2527.81

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - Repartition Test TimeABC612182430SE +/- 0.06, N = 324.3523.9724.45

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - Group By Test TimeABC48121620SE +/- 0.40, N = 314.5114.2414.58

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.30, N = 313.9113.8014.12

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.39, N = 3264.54264.30265.99

Apache Spark

Row Count: 20000000 - Partitions: 1000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - SHA-512 Benchmark TimeABC816243240SE +/- 0.23, N = 333.0932.9932.77

Primesieve

Length: 1e13

OpenBenchmarking.orgSeconds, Fewer Is BetterPrimesieve 8.0Length: 1e13ABC80160240320400SE +/- 1.09, N = 3369.55373.92382.861. (CXX) g++ options: -O3

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - Broadcast Inner Join Test TimeABC48121620SE +/- 0.04, N = 314.5914.7114.57

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - Inner Join Test TimeABC48121620SE +/- 0.34, N = 315.8716.0416.60

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - Repartition Test TimeABC48121620SE +/- 0.18, N = 313.6713.7813.90

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - Group By Test TimeABC3691215SE +/- 0.09, N = 39.6010.1010.15

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.09, N = 314.2013.8013.98

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.20, N = 3266.10263.79265.85

Apache Spark

Row Count: 10000000 - Partitions: 2000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - SHA-512 Benchmark TimeABC510152025SE +/- 0.01, N = 318.7318.9018.80

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - Broadcast Inner Join Test TimeABC48121620SE +/- 0.14, N = 314.5114.5914.23

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - Inner Join Test TimeABC48121620SE +/- 0.25, N = 315.0515.1615.10

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - Repartition Test TimeABC48121620SE +/- 0.10, N = 313.5313.6913.59

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - Group By Test TimeABC3691215SE +/- 0.21, N = 39.9610.079.82

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.46, N = 313.7514.0214.80

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.24, N = 3263.35264.02266.07

Apache Spark

Row Count: 10000000 - Partitions: 1000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - SHA-512 Benchmark TimeABC510152025SE +/- 0.15, N = 318.7718.9818.81

Apache Spark

Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test TimeABC48121620SE +/- 0.54, N = 314.9614.6315.68

Apache Spark

Row Count: 10000000 - Partitions: 100 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - Inner Join Test TimeABC48121620SE +/- 0.27, N = 315.7815.5015.85

Apache Spark

Row Count: 10000000 - Partitions: 100 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - Repartition Test TimeABC48121620SE +/- 0.19, N = 313.8113.4713.90

Apache Spark

Row Count: 10000000 - Partitions: 100 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - Group By Test TimeABC3691215SE +/- 0.06, N = 39.479.069.34

Apache Spark

Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.09, N = 314.0514.0513.96

Apache Spark

Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.13, N = 3263.65263.14265.63

Apache Spark

Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark TimeABC510152025SE +/- 0.13, N = 318.7218.7018.64

Apache Spark

Row Count: 10000000 - Partitions: 500 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - Broadcast Inner Join Test TimeABC48121620SE +/- 0.50, N = 313.4413.7314.14

Apache Spark

Row Count: 10000000 - Partitions: 500 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - Inner Join Test TimeABC48121620SE +/- 0.31, N = 314.4014.9414.96

Apache Spark

Row Count: 10000000 - Partitions: 500 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - Repartition Test TimeABC3691215SE +/- 0.08, N = 313.1413.2613.29

Apache Spark

Row Count: 10000000 - Partitions: 500 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - Group By Test TimeABC3691215SE +/- 0.04, N = 39.058.999.35

Apache Spark

Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark Using DataframeABC48121620SE +/- 0.06, N = 314.0214.0414.00

Apache Spark

Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - Calculate Pi BenchmarkABC60120180240300SE +/- 0.09, N = 3263.75263.97266.28

Apache Spark

Row Count: 10000000 - Partitions: 500 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - SHA-512 Benchmark TimeABC48121620SE +/- 0.14, N = 317.8617.7617.89

ClickHouse

100M Rows Web Analytics Dataset, Third Run

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.5.4.19100M Rows Web Analytics Dataset, Third RunABC306090120150SE +/- 1.03, N = 5115.74121.84115.00MIN: 8.6 / MAX: 20000MIN: 8.5 / MAX: 20000MIN: 7.14 / MAX: 120001. ClickHouse server version 22.5.4.19 (official build).

ClickHouse

100M Rows Web Analytics Dataset, Second Run

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.5.4.19100M Rows Web Analytics Dataset, Second RunABC306090120150SE +/- 1.47, N = 5113.48111.19113.79MIN: 7.71 / MAX: 12000MIN: 8.34 / MAX: 12000MIN: 7.08 / MAX: 200001. ClickHouse server version 22.5.4.19 (official build).

ClickHouse

100M Rows Web Analytics Dataset, First Run / Cold Cache

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.5.4.19100M Rows Web Analytics Dataset, First Run / Cold CacheABC20406080100SE +/- 1.12, N = 5102.81100.99100.03MIN: 7.39 / MAX: 15000MIN: 7.71 / MAX: 8571.43MIN: 6.85 / MAX: 120001. ClickHouse server version 22.5.4.19 (official build).

Mobile Neural Network

Model: inception-v3

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: inception-v3ABC816243240SE +/- 0.05, N = 333.9833.9533.91MIN: 33.82 / MAX: 46.32MIN: 33.83 / MAX: 45.2MIN: 33.51 / MAX: 45.921. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenet-v1-1.0

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: mobilenet-v1-1.0ABC0.96191.92382.88573.84764.8095SE +/- 0.003, N = 34.2594.2714.275MIN: 4.21 / MAX: 5.18MIN: 4.24 / MAX: 5.01MIN: 4.18 / MAX: 5.251. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: MobileNetV2_224

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: MobileNetV2_224ABC0.70271.40542.10812.81083.5135SE +/- 0.003, N = 33.0663.1233.080MIN: 2.98 / MAX: 4.12MIN: 3.01 / MAX: 4.22MIN: 3 / MAX: 4.621. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: SqueezeNetV1.0

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: SqueezeNetV1.0ABC1.09732.19463.29194.38925.4865SE +/- 0.010, N = 34.8674.8514.877MIN: 4.75 / MAX: 6.13MIN: 4.76 / MAX: 7.9MIN: 4.74 / MAX: 6.331. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: resnet-v2-50

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: resnet-v2-50ABC816243240SE +/- 0.08, N = 335.3935.3035.28MIN: 35.23 / MAX: 46.59MIN: 35.1 / MAX: 46.47MIN: 35.07 / MAX: 47.221. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: squeezenetv1.1

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: squeezenetv1.1ABC0.76681.53362.30043.06723.834SE +/- 0.007, N = 33.3793.3993.408MIN: 3.3 / MAX: 15.39MIN: 3.32 / MAX: 4.38MIN: 3.3 / MAX: 4.71. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenetV3

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: mobilenetV3ABC0.37130.74261.11391.48521.8565SE +/- 0.014, N = 31.6351.6501.636MIN: 1.58 / MAX: 2.02MIN: 1.59 / MAX: 2.58MIN: 1.5 / MAX: 2.61. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: nasnet

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: nasnetABC48121620SE +/- 0.15, N = 313.3213.6513.10MIN: 12.9 / MAX: 14.46MIN: 13.28 / MAX: 25.73MIN: 12.57 / MAX: 25.821. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Timed CPython Compilation

Build Configuration: Released Build, PGO + LTO Optimized

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: Released Build, PGO + LTO OptimizedABC60120180240300288.40288.25289.77

Timed Erlang/OTP Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Erlang/OTP Compilation 25.0Time To CompileABC306090120150SE +/- 0.19, N = 3141.86142.04142.32

NCNN

Target: CPU - Model: FastestDet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: FastestDetABC0.89331.78662.67993.57324.4665SE +/- 0.03, N = 33.973.863.97MIN: 3.85 / MAX: 4.15MIN: 3.78 / MAX: 4.09MIN: 3.82 / MAX: 11.191. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: vision_transformer

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vision_transformerABC50100150200250SE +/- 0.02, N = 3230.15228.89228.59MIN: 229.93 / MAX: 237.36MIN: 228.7 / MAX: 235.46MIN: 228.35 / MAX: 235.871. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: regnety_400m

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: regnety_400mABC3691215SE +/- 0.02, N = 310.1910.0010.23MIN: 10.11 / MAX: 11.07MIN: 9.9 / MAX: 10.76MIN: 10.13 / MAX: 11.31. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: squeezenet_ssd

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: squeezenet_ssdABC48121620SE +/- 0.01, N = 316.3516.3416.33MIN: 16.25 / MAX: 16.66MIN: 16.26 / MAX: 16.69MIN: 16.2 / MAX: 17.411. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: yolov4-tiny

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: yolov4-tinyABC612182430SE +/- 0.01, N = 324.3424.3524.31MIN: 24.19 / MAX: 26.13MIN: 24.26 / MAX: 24.91MIN: 24.15 / MAX: 24.931. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: resnet50

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet50ABC510152025SE +/- 0.02, N = 321.1821.0521.03MIN: 20.9 / MAX: 22.52MIN: 20.93 / MAX: 22.12MIN: 20.8 / MAX: 22.421. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: alexnet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: alexnetABC246810SE +/- 0.00, N = 38.238.248.23MIN: 8.16 / MAX: 9.38MIN: 8.17 / MAX: 9.15MIN: 8.14 / MAX: 8.981. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: resnet18

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet18ABC3691215SE +/- 0.03, N = 310.2610.2710.26MIN: 10.14 / MAX: 11.15MIN: 10.11 / MAX: 17.12MIN: 10.11 / MAX: 11.511. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: vgg16

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vgg16ABC1224364860SE +/- 0.02, N = 355.2355.2155.11MIN: 54.95 / MAX: 61.87MIN: 54.99 / MAX: 56.79MIN: 54.86 / MAX: 61.921. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: googlenet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: googlenetABC3691215SE +/- 0.04, N = 312.3512.2712.38MIN: 12.26 / MAX: 13.23MIN: 12.17 / MAX: 13.31MIN: 12.25 / MAX: 19.741. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: blazeface

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: blazefaceABC0.24750.4950.74250.991.2375SE +/- 0.00, N = 31.101.101.10MIN: 1.08 / MAX: 1.72MIN: 1.07 / MAX: 1.75MIN: 1.07 / MAX: 1.741. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: efficientnet-b0

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: efficientnet-b0ABC246810SE +/- 0.01, N = 36.826.786.84MIN: 6.74 / MAX: 7.75MIN: 6.72 / MAX: 7.62MIN: 6.72 / MAX: 7.91. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: mnasnet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: mnasnetABC0.7831.5662.3493.1323.915SE +/- 0.01, N = 33.473.413.48MIN: 3.43 / MAX: 4.15MIN: 3.37 / MAX: 4.1MIN: 3.38 / MAX: 4.511. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: shufflenet-v2

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: shufflenet-v2ABC0.7471.4942.2412.9883.735SE +/- 0.06, N = 33.313.183.32MIN: 3.26 / MAX: 4.01MIN: 3.15 / MAX: 3.87MIN: 3.11 / MAX: 4.441. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU-v3-v3 - Model: mobilenet-v3ABC0.81231.62462.43693.24924.0615SE +/- 0.02, N = 33.593.573.61MIN: 3.51 / MAX: 4.29MIN: 3.51 / MAX: 4.6MIN: 3.49 / MAX: 4.61. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU-v2-v2 - Model: mobilenet-v2ABC1.0442.0883.1324.1765.22SE +/- 0.03, N = 34.644.604.62MIN: 4.51 / MAX: 5.33MIN: 4.49 / MAX: 5.66MIN: 4.46 / MAX: 5.651. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: mobilenet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: mobilenetABC48121620SE +/- 0.01, N = 315.5215.5915.61MIN: 15.45 / MAX: 15.79MIN: 15.39 / MAX: 16.29MIN: 15.4 / MAX: 16.251. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.2Encoder Mode: Preset 4 - Input: Bosphorus 4KABC0.27070.54140.81211.08281.3535SE +/- 0.004, N = 31.1951.2031.2011. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Redis

Test: LPUSH - Parallel Connections: 1000

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: LPUSH - Parallel Connections: 1000ABC400K800K1200K1600K2000KSE +/- 33163.21, N = 151848508.381897336.001945768.331. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: Ultra

Unvanquished 0.53 - Frames Per Second, More Is Better - A: 57.0 / B: 56.6 / C: 57.0 (SE +/- 0.03, N = 3)

Redis

Test: LPUSH - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better - A: 1906321.12 / B: 1786065.12 / C: 1878914.92 (SE +/- 18846.80, N = 15)

Redis

Test: LPUSH - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2046449.25 / B: 1778839.88 / C: 1933436.95 (SE +/- 32181.23, N = 15)

Redis

Test: LPOP - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2090338.50 / B: 2043605.62 / C: 2028878.43 (SE +/- 38452.70, N = 15)

Redis

Test: SET - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2074037.50 / B: 2222163.25 / C: 2226848.23 (SE +/- 36124.68, N = 15)

ASTC Encoder

Preset: Exhaustive

ASTC Encoder 4.0 - MT/s, More Is Better - A: 0.4897 / B: 0.4895 / C: 0.4859 (SE +/- 0.0010, N = 3)

Redis

Test: SADD - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2629399.25 / B: 2605961.25 / C: 2552054.12 (SE +/- 49552.86, N = 15)

Redis

Test: GET - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2912617.00 / B: 2347019.00 / C: 2732949.30 (SE +/- 60647.19, N = 15)

Timed PHP Compilation

Time To Compile

Timed PHP Compilation 8.1.9 - Seconds, Fewer Is Better - A: 86.84 / B: 87.02 / C: 86.25 (SE +/- 0.42, N = 3)

Timed Wasmer Compilation

Time To Compile

Timed Wasmer Compilation 2.3 - Seconds, Fewer Is Better - A: 87.61 / B: 86.67 / C: 85.90 (SE +/- 0.49, N = 3)

Redis

Test: GET - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2732588.00 / B: 2861609.25 / C: 2830668.70 (SE +/- 36134.78, N = 15)

Redis

Test: SET - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2331370.00 / B: 2327680.00 / C: 2201393.84 (SE +/- 55052.29, N = 12)

Redis

Test: SADD - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2627615.75 / B: 2492194.50 / C: 2568888.29 (SE +/- 38463.64, N = 12)

Redis

Test: LPOP - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better - A: 3444536.75 / B: 2141549.50 / C: 3105232.87 (SE +/- 120387.99, N = 15)

Redis

Test: LPOP - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better - A: 3301260.75 / B: 2035452.62 / C: 3208768.77 (SE +/- 69012.03, N = 15)

Node.js V8 Web Tooling Benchmark

Node.js V8 Web Tooling Benchmark - runs/s, More Is Better - A: 12.16 / B: 12.43 / C: 12.14 (SE +/- 0.06, N = 3)

Dragonflydb

Clients: 200 - Set To Get Ratio: 5:1

Dragonflydb 0.6 - Ops/sec, More Is Better - A: 1992128.41 / B: 2005750.94 / C: 2032385.14 (SE +/- 6872.33, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1512323.24 / B: 1533754.23 / C: 1545189.59 (SE +/- 7898.00, N = 3)

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:1

Dragonflydb 0.6 - Ops/sec, More Is Better - A: 2087416.28 / B: 2051704.42 / C: 2074466.95 (SE +/- 8670.43, N = 3)

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:5

Dragonflydb 0.6 - Ops/sec, More Is Better - A: 2211895.73 / B: 2194182.24 / C: 2217552.26 (SE +/- 1318.89, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1806170.52 / B: 1761644.31 / C: 1813065.59 (SE +/- 8267.67, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1992499.12 / B: 2022525.58 / C: 1996437.79 (SE +/- 10080.98, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1716481.41 / B: 1737144.54 / C: 1711961.58 (SE +/- 8849.68, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1623068.24 / B: 1593958.74 / C: 1620962.12 (SE +/- 13712.27, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1761822.77 / B: 1772298.56 / C: 1720148.88 (SE +/- 8909.71, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1790527.72 / B: 1789992.05 / C: 1748466.51 (SE +/- 8041.06, N = 3)

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:5

Dragonflydb 0.6 - Ops/sec, More Is Better - A: 2144617.32 / B: 2154148.50 / C: 2159425.65 (SE +/- 4648.71, N = 3)

Dragonflydb

Clients: 50 - Set To Get Ratio: 5:1

Dragonflydb 0.6 - Ops/sec, More Is Better - A: 1927867.31 / B: 1931122.18 / C: 1934424.72 (SE +/- 5831.26, N = 3)

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 3194.29 / B: 3299.94 / C: 3338.27 (SE +/- 8.61, N = 3)

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 1.24 / B: 1.20 / C: 1.17 (SE +/- 0.00, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1970123.24 / B: 1919288.21 / C: 1943695.24 (SE +/- 18068.84, N = 3)

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:1

Dragonflydb 0.6 - Ops/sec, More Is Better - A: 1988043.90 / B: 1994368.21 / C: 1996857.54 (SE +/- 6994.81, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1989301.28 / B: 1942966.66 / C: 1929613.47 (SE +/- 12369.07, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1726134.01 / B: 1713885.81 / C: 1695938.52 (SE +/- 4351.00, N = 3)

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 3200.37 / B: 3196.88 / C: 3310.58 (SE +/- 4.37, N = 3)

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 1.23 / B: 1.23 / C: 1.18 (SE +/- 0.00, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1950043.18 / B: 1899683.69 / C: 1944401.05 (SE +/- 27485.35, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1945581.31 / B: 1711625.32 / C: 1819804.24 (SE +/- 11256.90, N = 3)

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 2022.73 / B: 2019.27 / C: 2056.36 (SE +/- 14.58, N = 3)

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 1.97 / B: 1.98 / C: 1.93 (SE +/- 0.01, N = 3)

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 1051.99 / B: 1049.73 / C: 1100.09 (SE +/- 4.29, N = 3)

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 3.80 / B: 3.81 / C: 3.63 (SE +/- 0.01, N = 3)

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 182.00 / B: 181.13 / C: 190.39 (SE +/- 0.66, N = 3)

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 21.96 / B: 22.06 / C: 21.00 (SE +/- 0.07, N = 3)

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 16.13 / B: 16.18 / C: 16.49 (SE +/- 0.05, N = 3)

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 247.67 / B: 246.98 / C: 242.22 (SE +/- 0.67, N = 3)

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 18.22 / B: 18.10 / C: 18.68 (SE +/- 0.03, N = 3)

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 219.33 / B: 220.79 / C: 213.97 (SE +/- 0.38, N = 3)

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 37.49 / B: 32.50 / C: 33.55 (SE +/- 0.33, N = 3)

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 106.65 / B: 122.98 / C: 119.13 (SE +/- 1.14, N = 3)

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 20.61 / B: 20.43 / C: 21.76 (SE +/- 0.08, N = 3)

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 193.90 / B: 195.66 / C: 183.65 (SE +/- 0.66, N = 3)

GraphicsMagick

Operation: Enhanced

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better - A: 143 / B: 143 / C: 143 (SE +/- 0.00, N = 3)

GraphicsMagick

Operation: Noise-Gaussian

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better - A: 172 / B: 172 / C: 172 (SE +/- 0.33, N = 3)

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 15.76 / B: 15.76 / C: 16.46 (SE +/- 0.06, N = 3)

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 380.40 / B: 380.42 / C: 364.28 (SE +/- 1.30, N = 3)

GraphicsMagick

Operation: Swirl

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better - A: 264 / B: 264 / C: 265 (SE +/- 0.00, N = 3)

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 0.60 / B: 0.60 / C: 0.63 (SE +/- 0.00, N = 3)

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 9929.88 / B: 9884.04 / C: 9454.83 (SE +/- 41.17, N = 3)

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 1.11 / B: 1.12 / C: 1.16 (SE +/- 0.01, N = 3)

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 5321.40 / B: 5297.96 / C: 5106.22 (SE +/- 18.95, N = 3)

GraphicsMagick

Operation: Resizing

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better - A: 640 / B: 644 / C: 643 (SE +/- 0.58, N = 3)

GraphicsMagick

Operation: Sharpen

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better - A: 91 / B: 91 / C: 91 (SE +/- 0.00, N = 3)

GraphicsMagick

Operation: Rotate

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better - A: 655 / B: 625 / C: 656 (SE +/- 3.71, N = 3)

GraphicsMagick

Operation: HWB Color Space

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better - A: 628 / B: 638 / C: 639 (SE +/- 0.33, N = 3)

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: High

Unvanquished 0.53 - Frames Per Second, More Is Better - A: 128.4 / B: 132.7 / C: 134.9 (SE +/- 1.15, N = 3)

ASTC Encoder

Preset: Thorough

ASTC Encoder 4.0 - MT/s, More Is Better - A: 4.7866 / B: 4.7876 / C: 4.7847 (SE +/- 0.0015, N = 3)

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: Medium

Unvanquished 0.53 - Frames Per Second, More Is Better - A: 188.3 / B: 182.0 / C: 179.2 (SE +/- 2.18, N = 4)

Natron

Input: Spaceship

Natron 2.4.3 - FPS, More Is Better - A: 2.1 / B: 2.1 / C: 2.1 (SE +/- 0.00, N = 3)

Redis

Test: SET - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2361094.50 / B: 1924962.25 / C: 2306357.42 (SE +/- 21826.28, N = 6)

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.2 - Frames Per Second, More Is Better - A: 4.079 / B: 4.068 / C: 4.072 (SE +/- 0.007, N = 3)

ASTC Encoder

Preset: Fast

ASTC Encoder 4.0 - MT/s, More Is Better - A: 100.64 / B: 100.34 / C: 100.22 (SE +/- 0.07, N = 3)

7-Zip Compression

Test: Decompression Rating

7-Zip Compression 22.01 - MIPS, More Is Better - A: 38738 / B: 39049 / C: 38962 (SE +/- 65.37, N = 3)

7-Zip Compression

Test: Compression Rating

7-Zip Compression 22.01 - MIPS, More Is Better - A: 48196 / B: 48345 / C: 47249 (SE +/- 84.10, N = 3)

Primesieve

Length: 1e12

Primesieve 8.0 - Seconds, Fewer Is Better - A: 31.15 / B: 30.72 / C: 30.99 (SE +/- 0.03, N = 3)

Aircrack-ng

Aircrack-ng 1.7 - k/s, More Is Better - A: 23361.85 / B: 23363.46 / C: 23366.60 (SE +/- 2.01, N = 3)

Redis

Test: SADD - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2609105.50 / B: 2158587.25 / C: 2594956.17 (SE +/- 35265.51, N = 3)

Inkscape

Operation: SVG Files To PNG

Inkscape 1.1.2 - Seconds, Fewer Is Better - A: 27.85 / B: 27.97 / C: 27.67 (SE +/- 0.13, N = 3)

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.2 - Frames Per Second, More Is Better - A: 24.94 / B: 25.46 / C: 25.65 (SE +/- 0.09, N = 3)

Redis

Test: GET - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2815841.50 / B: 2823473.75 / C: 2799995.58 (SE +/- 16101.19, N = 3)

ASTC Encoder

Preset: Medium

ASTC Encoder 4.0 - MT/s, More Is Better - A: 37.18 / B: 37.23 / C: 37.19 (SE +/- 0.01, N = 3)

Timed CPython Compilation

Build Configuration: Default

Timed CPython Compilation 3.10.6 - Seconds, Fewer Is Better - A: 24.53 / B: 24.50 / C: 24.30

SVT-AV1

Encoder Mode: Preset 10 - Input: Bosphorus 4K

SVT-AV1 1.2 - Frames Per Second, More Is Better - A: 51.15 / B: 53.32 / C: 53.41 (SE +/- 0.09, N = 3)

C-Blosc

Test: blosclz bitshuffle

C-Blosc 2.3 - MB/s, More Is Better - A: 5989.0 / B: 5946.4 / C: 5977.6 (SE +/- 28.37, N = 3)

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.2 - Frames Per Second, More Is Better - A: 68.86 / B: 68.34 / C: 69.10 (SE +/- 0.23, N = 3)

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.2 - Frames Per Second, More Is Better - A: 76.44 / B: 78.25 / C: 78.11 (SE +/- 0.25, N = 3)

Unpacking The Linux Kernel

linux-5.19.tar.xz

Unpacking The Linux Kernel 5.19 - Seconds, Fewer Is Better - A: 7.241 / B: 7.108 / C: 7.062 (SE +/- 0.031, N = 4)

C-Blosc

Test: blosclz shuffle

C-Blosc 2.3 - MB/s, More Is Better - A: 10148.2 / B: 10174.6 / C: 10229.1 (SE +/- 23.89, N = 3)

SVT-AV1

Encoder Mode: Preset 10 - Input: Bosphorus 1080p

SVT-AV1 1.2 - Frames Per Second, More Is Better - A: 158.08 / B: 158.06 / C: 158.63 (SE +/- 0.48, N = 3)

LAMMPS Molecular Dynamics Simulator

Model: Rhodopsin Protein

LAMMPS Molecular Dynamics Simulator 23Jun2022 - ns/day, More Is Better - A: 4.942 / B: 4.941 / C: 4.912 (SE +/- 0.012, N = 3)

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.2 - Frames Per Second, More Is Better - A: 285.72 / B: 285.84 / C: 286.00 (SE +/- 1.67, N = 3)


Phoronix Test Suite v10.8.4