sfsd

Intel Core i7-8700K testing with an ASUS TUF Z370-PLUS GAMING (2001 BIOS) motherboard and ASUS Intel UHD 630 CFL GT2 16GB graphics on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2209060-NE-SFSD7970842&sor.

Test System Details (configurations A, B, and C used an identical setup):

Processor: Intel Core i7-8700K @ 4.70GHz (6 Cores / 12 Threads)
Motherboard: ASUS TUF Z370-PLUS GAMING (2001 BIOS)
Chipset: Intel 8th Gen Core
Memory: 16GB
Disk: 128GB Toshiba THNSN5128GPU7
Graphics: ASUS Intel UHD 630 CFL GT2 16GB (1200MHz)
Audio: Realtek ALC887-VD
Monitor: DELL S2409W
Network: Intel I219-V
OS: Ubuntu 22.04
Kernel: 5.19.0-rc6-phx-retbleed (x86_64)
Desktop: GNOME Shell 42.4
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xf0 - Thermald 2.4.9
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of IBRS IBPB: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of TSX disabled

Overview result table for configurations A, B, and C (covering the Linux kernel unpack, Unvanquished, C-Blosc, LAMMPS, GraphicsMagick, SVT-AV1, 7-Zip Compression, timed Node.js/PHP/CPython/Erlang/Wasmer compilation, Primesieve, Aircrack-ng, Node.js V8 Web Tooling, ClickHouse, Apache Spark, Dragonflydb, Redis, ASTC Encoder, Inkscape, memtier_benchmark, MNN, NCNN, OpenVINO, Natron, and AI Benchmark Alpha tests); the per-test entries below present these results individually.

Unpacking The Linux Kernel

linux-5.19.tar.xz

Unpacking The Linux Kernel 5.19 (Seconds, Fewer Is Better): C: 7.062, B: 7.108, A: 7.241. SE +/- 0.031, N = 4.
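
Each result is the average of the listed number of runs, and the "SE +/- x, N = y" annotation is the standard error of the mean across those runs. A minimal Python sketch of that calculation follows; the four per-run times are hypothetical, since the report only publishes the aggregate values.

    # Standard error of the mean for a set of benchmark runs.
    # The sample times below are hypothetical (the report only shows aggregates).
    import statistics

    runs = [7.03, 7.05, 7.09, 7.08]   # per-run times in seconds, N = 4
    n = len(runs)
    mean = statistics.mean(runs)
    se = statistics.stdev(runs) / n ** 0.5   # sample std dev / sqrt(N)

    print(f"{mean:.3f} seconds, SE +/- {se:.3f}, N = {n}")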

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: High

Unvanquished 0.53 (Frames Per Second, More Is Better): C: 134.9, B: 132.7, A: 128.4. SE +/- 1.15, N = 3.

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: Ultra

Unvanquished 0.53 (Frames Per Second, More Is Better): C: 57.0, A: 57.0, B: 56.6. SE +/- 0.03, N = 3.

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: Medium

Unvanquished 0.53 (Frames Per Second, More Is Better): A: 188.3, B: 182.0, C: 179.2. SE +/- 2.18, N = 4.

C-Blosc

Test: blosclz shuffle

C-Blosc 2.3 (MB/s, More Is Better): C: 10229.1, B: 10174.6, A: 10148.2. SE +/- 23.89, N = 3. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

C-Blosc

Test: blosclz bitshuffle

C-Blosc 2.3 (MB/s, More Is Better): A: 5989.0, C: 5977.6, B: 5946.4. SE +/- 28.37, N = 3.

LAMMPS Molecular Dynamics Simulator

Model: Rhodopsin Protein

LAMMPS Molecular Dynamics Simulator 23Jun2022 (ns/day, More Is Better): A: 4.942, B: 4.941, C: 4.912. SE +/- 0.012, N = 3. (CXX) g++ options: -O3 -lm -ldl

GraphicsMagick

Operation: Swirl

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): C: 265, B: 264, A: 264. SE +/- 0.00, N = 3. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Rotate

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): C: 656, A: 655, B: 625. SE +/- 3.71, N = 3.

GraphicsMagick

Operation: Sharpen

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): C: 91, B: 91, A: 91. SE +/- 0.00, N = 3.

GraphicsMagick

Operation: Enhanced

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): C: 143, B: 143, A: 143. SE +/- 0.00, N = 3.

GraphicsMagick

Operation: Resizing

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): B: 644, C: 643, A: 640. SE +/- 0.58, N = 3.

GraphicsMagick

Operation: Noise-Gaussian

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): C: 172, B: 172, A: 172. SE +/- 0.33, N = 3.

GraphicsMagick

Operation: HWB Color Space

GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better): C: 639, B: 638, A: 628. SE +/- 0.33, N = 3.

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.2 (Frames Per Second, More Is Better): B: 1.203, C: 1.201, A: 1.195. SE +/- 0.004, N = 3. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.2 (Frames Per Second, More Is Better): C: 25.65, B: 25.46, A: 24.94. SE +/- 0.09, N = 3.

SVT-AV1

Encoder Mode: Preset 10 - Input: Bosphorus 4K

SVT-AV1 1.2 (Frames Per Second, More Is Better): C: 53.41, B: 53.32, A: 51.15. SE +/- 0.09, N = 3.

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.2 (Frames Per Second, More Is Better): B: 78.25, C: 78.11, A: 76.44. SE +/- 0.25, N = 3.

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.2 (Frames Per Second, More Is Better): A: 4.079, C: 4.072, B: 4.068. SE +/- 0.007, N = 3.

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.2 (Frames Per Second, More Is Better): C: 69.10, A: 68.86, B: 68.34. SE +/- 0.23, N = 3.

SVT-AV1

Encoder Mode: Preset 10 - Input: Bosphorus 1080p

SVT-AV1 1.2 (Frames Per Second, More Is Better): C: 158.63, A: 158.08, B: 158.06. SE +/- 0.48, N = 3.

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.2 (Frames Per Second, More Is Better): C: 286.00, B: 285.84, A: 285.72. SE +/- 1.67, N = 3.

7-Zip Compression

Test: Compression Rating

7-Zip Compression 22.01 (MIPS, More Is Better): B: 48345, A: 48196, C: 47249. SE +/- 84.10, N = 3. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression

Test: Decompression Rating

7-Zip Compression 22.01 (MIPS, More Is Better): B: 39049, C: 38962, A: 38738. SE +/- 65.37, N = 3.

Timed Node.js Compilation

Time To Compile

Timed Node.js Compilation 18.8 (Seconds, Fewer Is Better): C: 955.95, B: 957.36, A: 957.97. SE +/- 0.84, N = 3.

Timed PHP Compilation

Time To Compile

Timed PHP Compilation 8.1.9 (Seconds, Fewer Is Better): C: 86.25, A: 86.84, B: 87.02. SE +/- 0.42, N = 3.

Timed CPython Compilation

Build Configuration: Default

Timed CPython Compilation 3.10.6 (Seconds, Fewer Is Better): C: 24.30, B: 24.50, A: 24.53.

Timed CPython Compilation

Build Configuration: Released Build, PGO + LTO Optimized

Timed CPython Compilation 3.10.6 (Seconds, Fewer Is Better): B: 288.25, A: 288.40, C: 289.77.

Primesieve

Length: 1e12

Primesieve 8.0 (Seconds, Fewer Is Better): B: 30.72, C: 30.99, A: 31.15. SE +/- 0.03, N = 3. (CXX) g++ options: -O3

Primesieve

Length: 1e13

Primesieve 8.0 (Seconds, Fewer Is Better): A: 369.55, B: 373.92, C: 382.86. SE +/- 1.09, N = 3.

Timed Erlang/OTP Compilation

Time To Compile

Timed Erlang/OTP Compilation 25.0 (Seconds, Fewer Is Better): A: 141.86, B: 142.04, C: 142.32. SE +/- 0.19, N = 3.

Timed Wasmer Compilation

Time To Compile

Timed Wasmer Compilation 2.3 (Seconds, Fewer Is Better): C: 85.90, B: 86.67, A: 87.61. SE +/- 0.49, N = 3. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Aircrack-ng

Aircrack-ng 1.7 (k/s, More Is Better): C: 23366.60, B: 23363.46, A: 23361.85. SE +/- 2.01, N = 3. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

Node.js V8 Web Tooling Benchmark

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better): B: 12.43, A: 12.16, C: 12.14. SE +/- 0.06, N = 3.

ClickHouse

100M Rows Web Analytics Dataset, First Run / Cold Cache

ClickHouse 22.5.4.19 (Queries Per Minute, Geo Mean, More Is Better): A: 102.81 (MIN: 7.39 / MAX: 15000), B: 100.99 (MIN: 7.71 / MAX: 8571.43), C: 100.03 (MIN: 6.85 / MAX: 12000). SE +/- 1.12, N = 5. ClickHouse server version 22.5.4.19 (official build).
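
The ClickHouse score is a geometric mean of per-query throughput, so a single unusually fast or slow query does not dominate the figure (hence the wide MIN/MAX range attached to each result). A rough Python sketch of this style of aggregate is below; the per-query runtimes are hypothetical and this is not ClickHouse's or the test profile's exact scoring code.

    # Geometric-mean "queries per minute" aggregate over several queries.
    # The runtimes are hypothetical; this only illustrates the aggregation.
    from math import prod

    query_seconds = [0.42, 1.7, 12.5]            # hypothetical per-query runtimes
    qpm = [60.0 / s for s in query_seconds]      # queries per minute for each query
    geo_mean = prod(qpm) ** (1.0 / len(qpm))

    print(f"Geo mean: {geo_mean:.2f} queries per minute")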

ClickHouse

100M Rows Web Analytics Dataset, Second Run

ClickHouse 22.5.4.19 (Queries Per Minute, Geo Mean, More Is Better): C: 113.79 (MIN: 7.08 / MAX: 20000), A: 113.48 (MIN: 7.71 / MAX: 12000), B: 111.19 (MIN: 8.34 / MAX: 12000). SE +/- 1.47, N = 5.

ClickHouse

100M Rows Web Analytics Dataset, Third Run

ClickHouse 22.5.4.19 (Queries Per Minute, Geo Mean, More Is Better): B: 121.84 (MIN: 8.5 / MAX: 20000), A: 115.74 (MIN: 8.6 / MAX: 20000), C: 115.00 (MIN: 7.14 / MAX: 12000). SE +/- 1.03, N = 5.

Apache Spark

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 3.98, C: 4.24, A: 4.41. SE +/- 0.06, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 263.54, A: 265.15, C: 265.96. SE +/- 0.43, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 13.88, A: 14.04, C: 14.10. SE +/- 0.09, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 3.82, C: 3.90, B: 3.99. SE +/- 0.04, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 3.10, C: 3.28, A: 3.31. SE +/- 0.02, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 2.28, C: 2.28, A: 2.34. SE +/- 0.02, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 1.86, C: 1.91, B: 1.93. SE +/- 0.02, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 4.51, A: 4.67, C: 4.69. SE +/- 0.05, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 264.14, A: 264.23, C: 266.50. SE +/- 0.36, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 13.78, A: 13.91, C: 14.00. SE +/- 0.03, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 4.30, C: 4.37, A: 4.48. SE +/- 0.02, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 3.41, B: 3.56, A: 3.57. SE +/- 0.02, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 2.45, A: 2.57, C: 2.62. SE +/- 0.03, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 2.23, B: 2.34, C: 2.37. SE +/- 0.11, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 4.640000000, A: 4.664221858, C: 4.760000000. SE +/- 0.083377130, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 263.78, A: 264.05, C: 266.03. SE +/- 0.19, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 14.07, B: 14.10, C: 14.11. SE +/- 0.14, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 4.48, C: 4.65, A: 4.80. SE +/- 0.06, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 3.63, A: 3.66, C: 3.66. SE +/- 0.02, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 2.80, C: 2.94, B: 3.00. SE +/- 0.03, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 2.40000000, C: 2.52000000, A: 2.56956962. SE +/- 0.02035353, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 4.94, B: 5.31, C: 5.32. SE +/- 0.05, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 263.65, B: 263.96, C: 265.81. SE +/- 0.16, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 14.04, C: 14.07, B: 14.35. SE +/- 0.10, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 5.03, A: 5.23, B: 5.26. SE +/- 0.03, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 3.92, C: 4.03, A: 4.06. SE +/- 0.05, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 3.598246173, C: 3.620000000, B: 3.700000000. SE +/- 0.039029208, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 3.27, C: 3.38, B: 3.55. SE +/- 0.06, N = 9.

Apache Spark

Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 18.64, B: 18.70, A: 18.72. SE +/- 0.13, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 263.14, A: 263.65, C: 265.63. SE +/- 0.13, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 13.96, A: 14.05, B: 14.05. SE +/- 0.09, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 9.06, C: 9.34, A: 9.47. SE +/- 0.06, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 13.47, A: 13.81, C: 13.90. SE +/- 0.19, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 15.50, A: 15.78, C: 15.85. SE +/- 0.27, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 14.63, A: 14.96, C: 15.68. SE +/- 0.54, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 17.76, A: 17.86, C: 17.89. SE +/- 0.14, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 263.75, B: 263.97, C: 266.28. SE +/- 0.09, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 14.00, A: 14.02, B: 14.04. SE +/- 0.06, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 8.99, A: 9.05, C: 9.35. SE +/- 0.04, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 13.14, B: 13.26, C: 13.29. SE +/- 0.08, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 14.40, B: 14.94, C: 14.96. SE +/- 0.31, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 13.44, B: 13.73, C: 14.14. SE +/- 0.50, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 34.57, B: 34.93, A: 35.44. SE +/- 0.15, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 264.38, C: 265.83, B: 265.85. SE +/- 0.45, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 13.99, A: 14.05, B: 14.09. SE +/- 0.03, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 14.97, B: 15.19, C: 15.35. SE +/- 0.19, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 25.17, A: 25.99, C: 26.74. SE +/- 0.16, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 28.94, A: 29.61, C: 29.71. SE +/- 0.45, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 29.25, C: 31.18, A: 31.60. SE +/- 0.32, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 33.70, C: 33.71, B: 33.75. SE +/- 0.27, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 263.11, A: 263.64, C: 265.92. SE +/- 0.37, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 13.89, B: 13.96, A: 14.10. SE +/- 0.08, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 14.63, C: 14.78, B: 15.19. SE +/- 0.19, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 24.56, A: 24.88, B: 26.84. SE +/- 0.08, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 28.32, B: 28.79, A: 29.19. SE +/- 0.16, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 27.69, A: 28.70, B: 28.75. SE +/- 0.24, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 64.39, B: 64.85, C: 65.48. SE +/- 0.79, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 263.32, B: 263.85, C: 266.04. SE +/- 0.20, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 13.94, B: 14.02, C: 14.19. SE +/- 0.26, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 35.08, A: 35.68, C: 36.32. SE +/- 0.10, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 50.03, A: 51.92, B: 57.92. SE +/- 0.71, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 57.94, B: 58.34, A: 58.60. SE +/- 1.35, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 56.92, A: 57.02, C: 58.75. SE +/- 0.97, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 62.29, C: 62.43, B: 62.61. SE +/- 0.45, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 263.96, A: 264.98, C: 266.71. SE +/- 0.59, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 13.96, C: 13.97, A: 13.99. SE +/- 0.02, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 500 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 34.29, B: 34.54, A: 34.58. SE +/- 0.40, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 500 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 47.22, C: 48.18, A: 48.58. SE +/- 0.93, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 500 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 55.31, B: 56.99, C: 57.19. SE +/- 0.91, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 54.76, B: 55.53, A: 57.41. SE +/- 1.23, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 18.77, C: 18.81, B: 18.98. SE +/- 0.15, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 263.35, B: 264.02, C: 266.07. SE +/- 0.24, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 13.75, B: 14.02, C: 14.80. SE +/- 0.46, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 9.82, A: 9.96, B: 10.07. SE +/- 0.21, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 13.53, C: 13.59, B: 13.69. SE +/- 0.10, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 15.05, C: 15.10, B: 15.16. SE +/- 0.25, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 14.23, A: 14.51, B: 14.59. SE +/- 0.14, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 18.73, C: 18.80, B: 18.90. SE +/- 0.01, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 263.79, C: 265.85, A: 266.10. SE +/- 0.20, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 13.80, C: 13.98, A: 14.20. SE +/- 0.09, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 9.60, B: 10.10, C: 10.15. SE +/- 0.09, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 13.67, B: 13.78, C: 13.90. SE +/- 0.18, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 15.87, B: 16.04, C: 16.60. SE +/- 0.34, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 14.57, A: 14.59, B: 14.71. SE +/- 0.04, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 32.77, B: 32.99, A: 33.09. SE +/- 0.23, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 264.30, A: 264.54, C: 265.99. SE +/- 0.39, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 13.80, A: 13.91, C: 14.12. SE +/- 0.30, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 14.24, A: 14.51, C: 14.58. SE +/- 0.40, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 23.97, A: 24.35, C: 24.45. SE +/- 0.06, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 27.81, B: 28.25, A: 29.20. SE +/- 0.22, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 26.95, A: 27.33, C: 27.76. SE +/- 0.49, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 33.37, A: 33.48, C: 33.54. SE +/- 0.08, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 265.64, B: 266.23, C: 266.71. SE +/- 0.31, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 13.92, C: 13.93, B: 13.99. SE +/- 0.08, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 14.74, C: 15.30, A: 15.34. SE +/- 0.33, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 24.87, A: 25.23, C: 25.41. SE +/- 0.18, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 28.99, A: 29.12, C: 29.13. SE +/- 0.30, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 27.65, A: 27.95, B: 28.79. SE +/- 0.38, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 61.83, A: 62.08, C: 62.14. SE +/- 0.13, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 262.92, A: 264.01, C: 266.48. SE +/- 0.27, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 13.95, B: 13.99, A: 14.00. SE +/- 0.07, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 32.95, C: 33.18, B: 33.87. SE +/- 0.03, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 46.66, C: 47.20, B: 50.60. SE +/- 0.26, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 55.65, C: 55.99, A: 59.05. SE +/- 0.99, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 55.67, B: 56.38, A: 58.10. SE +/- 1.93, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 61.26, B: 61.72, C: 61.87. SE +/- 0.31, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 263.76, A: 263.77, C: 266.18. SE +/- 0.18, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 (Seconds, Fewer Is Better): A: 13.86, C: 13.88, B: 14.87. SE +/- 0.06, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Group By Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): B: 33.01, A: 33.32, C: 33.89. SE +/- 0.10, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Repartition Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 47.48, A: 50.01, B: 51.19. SE +/- 0.42, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 58.05, A: 59.15, B: 60.76. SE +/- 0.81, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Apache Spark 3.3 (Seconds, Fewer Is Better): C: 54.84, B: 55.24, A: 55.74. SE +/- 0.68, N = 3.

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:1

Dragonflydb 0.6 (Ops/sec, More Is Better): C: 1996857.54, B: 1994368.21, A: 1988043.90. SE +/- 6994.81, N = 3. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:5

Dragonflydb 0.6 (Ops/sec, More Is Better): C: 2159425.65, B: 2154148.50, A: 2144617.32. SE +/- 4648.71, N = 3.

Dragonflydb

Clients: 50 - Set To Get Ratio: 5:1

Dragonflydb 0.6 (Ops/sec, More Is Better): C: 1934424.72, B: 1931122.18, A: 1927867.31. SE +/- 5831.26, N = 3.

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:1

Dragonflydb 0.6 (Ops/sec, More Is Better): A: 2087416.28, C: 2074466.95, B: 2051704.42. SE +/- 8670.43, N = 3.

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:5

Dragonflydb 0.6 (Ops/sec, More Is Better): C: 2217552.26, A: 2211895.73, B: 2194182.24. SE +/- 1318.89, N = 3.

Dragonflydb

Clients: 200 - Set To Get Ratio: 5:1

Dragonflydb 0.6 (Ops/sec, More Is Better): C: 2032385.14, B: 2005750.94, A: 1992128.41. SE +/- 6872.33, N = 3.

Redis

Test: GET - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better: B: 2861609.25, C: 2830668.70, A: 2732588.00; SE +/- 36134.78, N = 15
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: SET - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better: A: 2361094.50, C: 2306357.42, B: 1924962.25; SE +/- 21826.28, N = 6

Redis

Test: GET - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better: B: 2823473.75, A: 2815841.50, C: 2799995.58; SE +/- 16101.19, N = 3

Redis

Test: LPOP - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better: A: 3444536.75, C: 3105232.87, B: 2141549.50; SE +/- 120387.99, N = 15

Redis

Test: SADD - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better: A: 2629399.25, B: 2605961.25, C: 2552054.12; SE +/- 49552.86, N = 15

Redis

Test: SET - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better: A: 2331370.00, B: 2327680.00, C: 2201393.84; SE +/- 55052.29, N = 12

Redis

Test: GET - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better: A: 2912617.00, C: 2732949.30, B: 2347019.00; SE +/- 60647.19, N = 15

Redis

Test: LPOP - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better: A: 3301260.75, C: 3208768.77, B: 2035452.62; SE +/- 69012.03, N = 15

Redis

Test: LPUSH - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better: A: 1906321.12, C: 1878914.92, B: 1786065.12; SE +/- 18846.80, N = 15

Redis

Test: SADD - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better: A: 2627615.75, C: 2568888.29, B: 2492194.50; SE +/- 38463.64, N = 12

Redis

Test: SET - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better: C: 2226848.23, B: 2222163.25, A: 2074037.50; SE +/- 36124.68, N = 15

Redis

Test: LPOP - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better: A: 2090338.50, B: 2043605.62, C: 2028878.43; SE +/- 38452.70, N = 15

Redis

Test: LPUSH - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better: A: 2046449.25, C: 1933436.95, B: 1778839.88; SE +/- 32181.23, N = 15

Redis

Test: SADD - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better: A: 2609105.50, C: 2594956.17, B: 2158587.25; SE +/- 35265.51, N = 3

Redis

Test: LPUSH - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better: C: 1945768.33, B: 1897336.00, A: 1848508.38; SE +/- 33163.21, N = 15

ASTC Encoder

Preset: Fast

ASTC Encoder 4.0 - MT/s, More Is Better: A: 100.64, B: 100.34, C: 100.22; SE +/- 0.07, N = 3
1. (CXX) g++ options: -O3 -flto -pthread

ASTC Encoder

Preset: Medium

ASTC Encoder 4.0 - MT/s, More Is Better: B: 37.23, C: 37.19, A: 37.18; SE +/- 0.01, N = 3

ASTC Encoder

Preset: Thorough

ASTC Encoder 4.0 - MT/s, More Is Better: B: 4.7876, A: 4.7866, C: 4.7847; SE +/- 0.0015, N = 3

ASTC Encoder

Preset: Exhaustive

ASTC Encoder 4.0 - MT/s, More Is Better: A: 0.4897, B: 0.4895, C: 0.4859; SE +/- 0.0010, N = 3

Inkscape

Operation: SVG Files To PNG

Inkscape - Seconds, Fewer Is Better: C: 27.67, A: 27.85, B: 27.97; SE +/- 0.13, N = 3
1. Inkscape 1.1.2 (0a00cf5339, 2022-02-04)

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1

memtier_benchmark 1.4 - Ops/sec, More Is Better: A: 1945581.31, C: 1819804.24, B: 1711625.32; SE +/- 11256.90, N = 3
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5

memtier_benchmark 1.4 - Ops/sec, More Is Better: A: 1989301.28, B: 1942966.66, C: 1929613.47; SE +/- 12369.07, N = 3

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1

memtier_benchmark 1.4 - Ops/sec, More Is Better: A: 1726134.01, B: 1713885.81, C: 1695938.52; SE +/- 4351.00, N = 3

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1

memtier_benchmark 1.4 - Ops/sec, More Is Better: C: 1813065.59, A: 1806170.52, B: 1761644.31; SE +/- 8267.67, N = 3

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5

memtier_benchmark 1.4 - Ops/sec, More Is Better: A: 1970123.24, C: 1943695.24, B: 1919288.21; SE +/- 18068.84, N = 3

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1

memtier_benchmark 1.4 - Ops/sec, More Is Better: B: 1737144.54, A: 1716481.41, C: 1711961.58; SE +/- 8849.68, N = 3

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10

memtier_benchmark 1.4 - Ops/sec, More Is Better: A: 1950043.18, C: 1944401.05, B: 1899683.69; SE +/- 27485.35, N = 3

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1

memtier_benchmark 1.4 - Ops/sec, More Is Better: A: 1623068.24, C: 1620962.12, B: 1593958.74; SE +/- 13712.27, N = 3

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5

memtier_benchmark 1.4 - Ops/sec, More Is Better: B: 1772298.56, A: 1761822.77, C: 1720148.88; SE +/- 8909.71, N = 3

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1

memtier_benchmark 1.4 - Ops/sec, More Is Better: C: 1545189.59, B: 1533754.23, A: 1512323.24; SE +/- 7898.00, N = 3

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10

memtier_benchmark 1.4 - Ops/sec, More Is Better: B: 2022525.58, C: 1996437.79, A: 1992499.12; SE +/- 10080.98, N = 3

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10

memtier_benchmark 1.4 - Ops/sec, More Is Better: A: 1790527.72, B: 1789992.05, C: 1748466.51; SE +/- 8041.06, N = 3

Mobile Neural Network

Model: nasnet

Mobile Neural Network 2.1 - ms, Fewer Is Better: C: 13.10 (MIN: 12.57 / MAX: 25.82), A: 13.32 (MIN: 12.9 / MAX: 14.46), B: 13.65 (MIN: 13.28 / MAX: 25.73); SE +/- 0.15, N = 3
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenetV3

Mobile Neural Network 2.1 - ms, Fewer Is Better: A: 1.635 (MIN: 1.58 / MAX: 2.02), C: 1.636 (MIN: 1.5 / MAX: 2.6), B: 1.650 (MIN: 1.59 / MAX: 2.58); SE +/- 0.014, N = 3

Mobile Neural Network

Model: squeezenetv1.1

Mobile Neural Network 2.1 - ms, Fewer Is Better: A: 3.379 (MIN: 3.3 / MAX: 15.39), B: 3.399 (MIN: 3.32 / MAX: 4.38), C: 3.408 (MIN: 3.3 / MAX: 4.7); SE +/- 0.007, N = 3

Mobile Neural Network

Model: resnet-v2-50

Mobile Neural Network 2.1 - ms, Fewer Is Better: C: 35.28 (MIN: 35.07 / MAX: 47.22), B: 35.30 (MIN: 35.1 / MAX: 46.47), A: 35.39 (MIN: 35.23 / MAX: 46.59); SE +/- 0.08, N = 3

Mobile Neural Network

Model: SqueezeNetV1.0

Mobile Neural Network 2.1 - ms, Fewer Is Better: B: 4.851 (MIN: 4.76 / MAX: 7.9), A: 4.867 (MIN: 4.75 / MAX: 6.13), C: 4.877 (MIN: 4.74 / MAX: 6.33); SE +/- 0.010, N = 3

Mobile Neural Network

Model: MobileNetV2_224

Mobile Neural Network 2.1 - ms, Fewer Is Better: A: 3.066 (MIN: 2.98 / MAX: 4.12), C: 3.080 (MIN: 3 / MAX: 4.62), B: 3.123 (MIN: 3.01 / MAX: 4.22); SE +/- 0.003, N = 3

Mobile Neural Network

Model: mobilenet-v1-1.0

Mobile Neural Network 2.1 - ms, Fewer Is Better: A: 4.259 (MIN: 4.21 / MAX: 5.18), B: 4.271 (MIN: 4.24 / MAX: 5.01), C: 4.275 (MIN: 4.18 / MAX: 5.25); SE +/- 0.003, N = 3

Mobile Neural Network

Model: inception-v3

Mobile Neural Network 2.1 - ms, Fewer Is Better: C: 33.91 (MIN: 33.51 / MAX: 45.92), B: 33.95 (MIN: 33.83 / MAX: 45.2), A: 33.98 (MIN: 33.82 / MAX: 46.32); SE +/- 0.05, N = 3

NCNN

Target: CPU - Model: mobilenet

NCNN 20220729 - ms, Fewer Is Better: A: 15.52 (MIN: 15.45 / MAX: 15.79), B: 15.59 (MIN: 15.39 / MAX: 16.29), C: 15.61 (MIN: 15.4 / MAX: 16.25); SE +/- 0.01, N = 3
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

NCNN 20220729 - ms, Fewer Is Better: B: 4.60 (MIN: 4.49 / MAX: 5.66), C: 4.62 (MIN: 4.46 / MAX: 5.65), A: 4.64 (MIN: 4.51 / MAX: 5.33); SE +/- 0.03, N = 3

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

NCNN 20220729 - ms, Fewer Is Better: B: 3.57 (MIN: 3.51 / MAX: 4.6), A: 3.59 (MIN: 3.51 / MAX: 4.29), C: 3.61 (MIN: 3.49 / MAX: 4.6); SE +/- 0.02, N = 3

NCNN

Target: CPU - Model: shufflenet-v2

NCNN 20220729 - ms, Fewer Is Better: B: 3.18 (MIN: 3.15 / MAX: 3.87), A: 3.31 (MIN: 3.26 / MAX: 4.01), C: 3.32 (MIN: 3.11 / MAX: 4.44); SE +/- 0.06, N = 3

NCNN

Target: CPU - Model: mnasnet

NCNN 20220729 - ms, Fewer Is Better: B: 3.41 (MIN: 3.37 / MAX: 4.1), A: 3.47 (MIN: 3.43 / MAX: 4.15), C: 3.48 (MIN: 3.38 / MAX: 4.51); SE +/- 0.01, N = 3

NCNN

Target: CPU - Model: efficientnet-b0

NCNN 20220729 - ms, Fewer Is Better: B: 6.78 (MIN: 6.72 / MAX: 7.62), A: 6.82 (MIN: 6.74 / MAX: 7.75), C: 6.84 (MIN: 6.72 / MAX: 7.9); SE +/- 0.01, N = 3

NCNN

Target: CPU - Model: blazeface

NCNN 20220729 - ms, Fewer Is Better: A: 1.10 (MIN: 1.08 / MAX: 1.72), B: 1.10 (MIN: 1.07 / MAX: 1.75), C: 1.10 (MIN: 1.07 / MAX: 1.74); SE +/- 0.00, N = 3

NCNN

Target: CPU - Model: googlenet

NCNN 20220729 - ms, Fewer Is Better: B: 12.27 (MIN: 12.17 / MAX: 13.31), A: 12.35 (MIN: 12.26 / MAX: 13.23), C: 12.38 (MIN: 12.25 / MAX: 19.74); SE +/- 0.04, N = 3

NCNN

Target: CPU - Model: vgg16

NCNN 20220729 - ms, Fewer Is Better: C: 55.11 (MIN: 54.86 / MAX: 61.92), B: 55.21 (MIN: 54.99 / MAX: 56.79), A: 55.23 (MIN: 54.95 / MAX: 61.87); SE +/- 0.02, N = 3

NCNN

Target: CPU - Model: resnet18

NCNN 20220729 - ms, Fewer Is Better: A: 10.26 (MIN: 10.14 / MAX: 11.15), C: 10.26 (MIN: 10.11 / MAX: 11.51), B: 10.27 (MIN: 10.11 / MAX: 17.12); SE +/- 0.03, N = 3

NCNN

Target: CPU - Model: alexnet

NCNN 20220729 - ms, Fewer Is Better: A: 8.23 (MIN: 8.16 / MAX: 9.38), C: 8.23 (MIN: 8.14 / MAX: 8.98), B: 8.24 (MIN: 8.17 / MAX: 9.15); SE +/- 0.00, N = 3

NCNN

Target: CPU - Model: resnet50

NCNN 20220729 - ms, Fewer Is Better: C: 21.03 (MIN: 20.8 / MAX: 22.42), B: 21.05 (MIN: 20.93 / MAX: 22.12), A: 21.18 (MIN: 20.9 / MAX: 22.52); SE +/- 0.02, N = 3

NCNN

Target: CPU - Model: yolov4-tiny

NCNN 20220729 - ms, Fewer Is Better: C: 24.31 (MIN: 24.15 / MAX: 24.93), A: 24.34 (MIN: 24.19 / MAX: 26.13), B: 24.35 (MIN: 24.26 / MAX: 24.91); SE +/- 0.01, N = 3

NCNN

Target: CPU - Model: squeezenet_ssd

NCNN 20220729 - ms, Fewer Is Better: C: 16.33 (MIN: 16.2 / MAX: 17.41), B: 16.34 (MIN: 16.26 / MAX: 16.69), A: 16.35 (MIN: 16.25 / MAX: 16.66); SE +/- 0.01, N = 3

NCNN

Target: CPU - Model: regnety_400m

NCNN 20220729 - ms, Fewer Is Better: B: 10.00 (MIN: 9.9 / MAX: 10.76), A: 10.19 (MIN: 10.11 / MAX: 11.07), C: 10.23 (MIN: 10.13 / MAX: 11.3); SE +/- 0.02, N = 3

NCNN

Target: CPU - Model: vision_transformer

NCNN 20220729 - ms, Fewer Is Better: C: 228.59 (MIN: 228.35 / MAX: 235.87), B: 228.89 (MIN: 228.7 / MAX: 235.46), A: 230.15 (MIN: 229.93 / MAX: 237.36); SE +/- 0.02, N = 3

NCNN

Target: CPU - Model: FastestDet

NCNN 20220729 - ms, Fewer Is Better: B: 3.86 (MIN: 3.78 / MAX: 4.09), A: 3.97 (MIN: 3.85 / MAX: 4.15), C: 3.97 (MIN: 3.82 / MAX: 11.19); SE +/- 0.03, N = 3

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better: B: 1.98, A: 1.97, C: 1.93; SE +/- 0.01, N = 3
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better: B: 2019.27 (MIN: 1952.63 / MAX: 2066.71), A: 2022.73 (MIN: 1942.04 / MAX: 2048.56), C: 2056.36 (MIN: 1731.28 / MAX: 2158.61); SE +/- 14.58, N = 3

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better: B: 1.23, A: 1.23, C: 1.18; SE +/- 0.00, N = 3

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better: B: 3196.88 (MIN: 1702.1 / MAX: 3449.98), A: 3200.37 (MIN: 1758.1 / MAX: 3458.98), C: 3310.58 (MIN: 1782.74 / MAX: 3536.08); SE +/- 4.37, N = 3

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better: A: 1.24, B: 1.20, C: 1.17; SE +/- 0.00, N = 3

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better: A: 3194.29 (MIN: 1761.36 / MAX: 3455.63), B: 3299.94 (MIN: 3117.38 / MAX: 3443.22), C: 3338.27 (MIN: 2906.46 / MAX: 3515.46); SE +/- 8.61, N = 3

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better: B: 122.98, C: 119.13, A: 106.65; SE +/- 1.14, N = 3

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better: B: 32.50 (MIN: 15.55 / MAX: 40.11), C: 33.55 (MIN: 15.9 / MAX: 44.39), A: 37.49 (MIN: 29.62 / MAX: 50.43); SE +/- 0.33, N = 3

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better: B: 3.81, A: 3.80, C: 3.63; SE +/- 0.01, N = 3

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better: B: 1049.73 (MIN: 1041.12 / MAX: 1057.94), A: 1051.99 (MIN: 1040.91 / MAX: 1059.93), C: 1100.09 (MIN: 1042 / MAX: 1134.78); SE +/- 4.29, N = 3

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better: B: 220.79, A: 219.33, C: 213.97; SE +/- 0.38, N = 3

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better: B: 18.10 (MIN: 10.18 / MAX: 27.41), A: 18.22 (MIN: 16.77 / MAX: 28.3), C: 18.68 (MIN: 10.54 / MAX: 28.1); SE +/- 0.03, N = 3

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better: B: 195.66, A: 193.90, C: 183.65; SE +/- 0.66, N = 3

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better: B: 20.43 (MIN: 18.15 / MAX: 29.83), A: 20.61 (MIN: 15.16 / MAX: 28.91), C: 21.76 (MIN: 14.5 / MAX: 32.25); SE +/- 0.08, N = 3

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better: B: 22.06, A: 21.96, C: 21.00; SE +/- 0.07, N = 3

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better: B: 181.13 (MIN: 93.18 / MAX: 198.15), A: 182.00 (MIN: 97.01 / MAX: 200.14), C: 190.39 (MIN: 137.7 / MAX: 208.9); SE +/- 0.66, N = 3

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better: B: 380.42, A: 380.40, C: 364.28; SE +/- 1.30, N = 3

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better: A: 15.76 (MIN: 9.85 / MAX: 25.38), B: 15.76 (MIN: 11.99 / MAX: 24.27), C: 16.46 (MIN: 12.44 / MAX: 25.62); SE +/- 0.06, N = 3

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better: A: 247.67, B: 246.98, C: 242.22; SE +/- 0.67, N = 3

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better: A: 16.13 (MIN: 14.36 / MAX: 19.06), B: 16.18 (MIN: 9 / MAX: 21.75), C: 16.49 (MIN: 9.1 / MAX: 33.49); SE +/- 0.05, N = 3

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better: A: 5321.40, B: 5297.96, C: 5106.22; SE +/- 18.95, N = 3

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better: A: 1.11 (MIN: 0.67 / MAX: 12.18), B: 1.12 (MIN: 0.66 / MAX: 2.71), C: 1.16 (MIN: 0.67 / MAX: 13.85); SE +/- 0.01, N = 3

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better: A: 9929.88, B: 9884.04, C: 9454.83; SE +/- 41.17, N = 3

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better: A: 0.60 (MIN: 0.39 / MAX: 8.97), B: 0.60 (MIN: 0.36 / MAX: 9.05), C: 0.63 (MIN: 0.38 / MAX: 10.06); SE +/- 0.00, N = 3

Natron

Input: Spaceship

Natron 2.4.3 - FPS, More Is Better: C: 2.1, B: 2.1, A: 2.1; SE +/- 0.00, N = 3

AI Benchmark Alpha

Device Inference Score

AI Benchmark Alpha 0.1.2 - Score, More Is Better: B: 965, A: 962, C: 949

AI Benchmark Alpha

Device Training Score

AI Benchmark Alpha 0.1.2 - Score, More Is Better: A: 986, B: 985, C: 984

AI Benchmark Alpha

Device AI Score

AI Benchmark Alpha 0.1.2 - Score, More Is Better: B: 1950, A: 1948, C: 1933


Phoronix Test Suite v10.8.4