sfsd

Intel Core i7-8700K testing with an ASUS TUF Z370-PLUS GAMING motherboard (2001 BIOS) and ASUS Intel UHD 630 CFL GT2 16GB graphics on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2209060-NE-SFSD7970842.
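To compare another system against this result, the Phoronix Test Suite can typically fetch and re-run the same test selection directly from the OpenBenchmarking.org identifier in the URL above. A usage sketch, assuming phoronix-test-suite is installed and the public result ID is unchanged:

phoronix-test-suite benchmark 2209060-NE-SFSD7970842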

System Details (identical for configurations A, B, and C)

Processor: Intel Core i7-8700K @ 4.70GHz (6 Cores / 12 Threads)
Motherboard: ASUS TUF Z370-PLUS GAMING (2001 BIOS)
Chipset: Intel 8th Gen Core
Memory: 16GB
Disk: 128GB Toshiba THNSN5128GPU7
Graphics: ASUS Intel UHD 630 CFL GT2 16GB (1200MHz)
Audio: Realtek ALC887-VD
Monitor: DELL S2409W
Network: Intel I219-V
OS: Ubuntu 22.04
Kernel: 5.19.0-rc6-phx-retbleed (x86_64)
Desktop: GNOME Shell 42.4
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xf0 - Thermald 2.4.9
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of IBRS IBPB: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of TSX disabled

Results overview: the exported side-by-side summary table (every test with its A/B/C values, covering Unpacking The Linux Kernel, Unvanquished, C-Blosc, LAMMPS, GraphicsMagick, SVT-AV1, 7-Zip Compression, the timed Node.js/PHP/CPython/Erlang/Wasmer builds, Primesieve, Aircrack-ng, Node.js V8 Web Tooling, ClickHouse, Apache Spark, Dragonflydb, Redis, ASTC Encoder, Inkscape, memtier_benchmark, MNN, NCNN, OpenVINO, Natron, and AI Benchmark Alpha) is not reproduced here; the individual results are reported test by test below.

Unpacking The Linux Kernel

linux-5.19.tar.xz

Unpacking The Linux Kernel 5.19 - Seconds, Fewer Is Better: A: 7.241, B: 7.108, C: 7.062. SE +/- 0.031, N = 4.

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: High

Unvanquished 0.53 - Frames Per Second, More Is Better: A: 128.4, B: 132.7, C: 134.9. SE +/- 1.15, N = 3.

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: Ultra

Unvanquished 0.53 - Frames Per Second, More Is Better: A: 57.0, B: 56.6, C: 57.0. SE +/- 0.03, N = 3.

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: Medium

Unvanquished 0.53 - Frames Per Second, More Is Better: A: 188.3, B: 182.0, C: 179.2. SE +/- 2.18, N = 4.

C-Blosc

Test: blosclz shuffle

C-Blosc 2.3 - MB/s, More Is Better: A: 10148.2, B: 10174.6, C: 10229.1. SE +/- 23.89, N = 3. 1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

C-Blosc

Test: blosclz bitshuffle

C-Blosc 2.3 - MB/s, More Is Better: A: 5989.0, B: 5946.4, C: 5977.6. SE +/- 28.37, N = 3. 1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

LAMMPS Molecular Dynamics Simulator

Model: Rhodopsin Protein

LAMMPS Molecular Dynamics Simulator 23Jun2022 - ns/day, More Is Better: A: 4.942, B: 4.941, C: 4.912. SE +/- 0.012, N = 3. 1. (CXX) g++ options: -O3 -lm -ldl

GraphicsMagick

Operation: Swirl

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better: A: 264, B: 264, C: 265. SE +/- 0.00, N = 3. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Rotate

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better: A: 655, B: 625, C: 656. SE +/- 3.71, N = 3. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Sharpen

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better: A: 91, B: 91, C: 91. SE +/- 0.00, N = 3. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Enhanced

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better: A: 143, B: 143, C: 143. SE +/- 0.00, N = 3. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Resizing

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better: A: 640, B: 644, C: 643. SE +/- 0.58, N = 3. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Noise-Gaussian

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better: A: 172, B: 172, C: 172. SE +/- 0.33, N = 3. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: HWB Color Space

GraphicsMagick 1.3.38 - Iterations Per Minute, More Is Better: A: 628, B: 638, C: 639. SE +/- 0.33, N = 3. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.2 - Frames Per Second, More Is Better: A: 1.195, B: 1.203, C: 1.201. SE +/- 0.004, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.2 - Frames Per Second, More Is Better: A: 24.94, B: 25.46, C: 25.65. SE +/- 0.09, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 10 - Input: Bosphorus 4K

SVT-AV1 1.2 - Frames Per Second, More Is Better: A: 51.15, B: 53.32, C: 53.41. SE +/- 0.09, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.2 - Frames Per Second, More Is Better: A: 76.44, B: 78.25, C: 78.11. SE +/- 0.25, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.2 - Frames Per Second, More Is Better: A: 4.079, B: 4.068, C: 4.072. SE +/- 0.007, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.2 - Frames Per Second, More Is Better: A: 68.86, B: 68.34, C: 69.10. SE +/- 0.23, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 10 - Input: Bosphorus 1080p

SVT-AV1 1.2 - Frames Per Second, More Is Better: A: 158.08, B: 158.06, C: 158.63. SE +/- 0.48, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.2 - Frames Per Second, More Is Better: A: 285.72, B: 285.84, C: 286.00. SE +/- 1.67, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

7-Zip Compression

Test: Compression Rating

7-Zip Compression 22.01 - MIPS, More Is Better: A: 48196, B: 48345, C: 47249. SE +/- 84.10, N = 3. 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression

Test: Decompression Rating

7-Zip Compression 22.01 - MIPS, More Is Better: A: 38738, B: 39049, C: 38962. SE +/- 65.37, N = 3. 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Timed Node.js Compilation

Time To Compile

Timed Node.js Compilation 18.8 - Seconds, Fewer Is Better: A: 957.97, B: 957.36, C: 955.95. SE +/- 0.84, N = 3.

Timed PHP Compilation

Time To Compile

Timed PHP Compilation 8.1.9 - Seconds, Fewer Is Better: A: 86.84, B: 87.02, C: 86.25. SE +/- 0.42, N = 3.

Timed CPython Compilation

Build Configuration: Default

Timed CPython Compilation 3.10.6 - Seconds, Fewer Is Better: A: 24.53, B: 24.50, C: 24.30.

Timed CPython Compilation

Build Configuration: Released Build, PGO + LTO Optimized

Timed CPython Compilation 3.10.6 - Seconds, Fewer Is Better: A: 288.40, B: 288.25, C: 289.77.

Primesieve

Length: 1e12

Primesieve 8.0 - Seconds, Fewer Is Better: A: 31.15, B: 30.72, C: 30.99. SE +/- 0.03, N = 3. 1. (CXX) g++ options: -O3

Primesieve

Length: 1e13

Primesieve 8.0 - Seconds, Fewer Is Better: A: 369.55, B: 373.92, C: 382.86. SE +/- 1.09, N = 3. 1. (CXX) g++ options: -O3

Timed Erlang/OTP Compilation

Time To Compile

Timed Erlang/OTP Compilation 25.0 - Seconds, Fewer Is Better: A: 141.86, B: 142.04, C: 142.32. SE +/- 0.19, N = 3.

Timed Wasmer Compilation

Time To Compile

Timed Wasmer Compilation 2.3 - Seconds, Fewer Is Better: A: 87.61, B: 86.67, C: 85.90. SE +/- 0.49, N = 3. 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Aircrack-ng

Aircrack-ng 1.7 - k/s, More Is Better: A: 23361.85, B: 23363.46, C: 23366.60. SE +/- 2.01, N = 3. 1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

Node.js V8 Web Tooling Benchmark

Node.js V8 Web Tooling Benchmark - runs/s, More Is Better: A: 12.16, B: 12.43, C: 12.14. SE +/- 0.06, N = 3.

ClickHouse

100M Rows Web Analytics Dataset, First Run / Cold Cache

ClickHouse 22.5.4.19 - Queries Per Minute, Geo Mean, More Is Better: A: 102.81 (MIN: 7.39 / MAX: 15000), B: 100.99 (MIN: 7.71 / MAX: 8571.43), C: 100.03 (MIN: 6.85 / MAX: 12000). SE +/- 1.12, N = 5. 1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse

100M Rows Web Analytics Dataset, Second Run

ClickHouse 22.5.4.19 - Queries Per Minute, Geo Mean, More Is Better: A: 113.48 (MIN: 7.71 / MAX: 12000), B: 111.19 (MIN: 8.34 / MAX: 12000), C: 113.79 (MIN: 7.08 / MAX: 20000). SE +/- 1.47, N = 5. 1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse

100M Rows Web Analytics Dataset, Third Run

ClickHouse 22.5.4.19 - Queries Per Minute, Geo Mean, More Is Better: A: 115.74 (MIN: 8.6 / MAX: 20000), B: 121.84 (MIN: 8.5 / MAX: 20000), C: 115.00 (MIN: 7.14 / MAX: 12000). SE +/- 1.03, N = 5. 1. ClickHouse server version 22.5.4.19 (official build).

Apache Spark

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 4.41, B: 3.98, C: 4.24. SE +/- 0.06, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 265.15, B: 263.54, C: 265.96. SE +/- 0.43, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.04, B: 13.88, C: 14.10. SE +/- 0.09, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 3.82, B: 3.99, C: 3.90. SE +/- 0.04, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 3.31, B: 3.10, C: 3.28. SE +/- 0.02, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 2.34, B: 2.28, C: 2.28. SE +/- 0.02, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 1.86, B: 1.93, C: 1.91. SE +/- 0.02, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 4.67, B: 4.51, C: 4.69. SE +/- 0.05, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 264.23, B: 264.14, C: 266.50. SE +/- 0.36, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 13.91, B: 13.78, C: 14.00. SE +/- 0.03, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 4.48, B: 4.30, C: 4.37. SE +/- 0.02, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 3.57, B: 3.56, C: 3.41. SE +/- 0.02, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 2.57, B: 2.45, C: 2.62. SE +/- 0.03, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 2.23, B: 2.34, C: 2.37. SE +/- 0.11, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 4.664221858, B: 4.64, C: 4.76. SE +/- 0.083377130, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 264.05, B: 263.78, C: 266.03. SE +/- 0.19, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.07, B: 14.10, C: 14.11. SE +/- 0.14, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 4.80, B: 4.48, C: 4.65. SE +/- 0.06, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 3.66, B: 3.63, C: 3.66. SE +/- 0.02, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 2.80, B: 3.00, C: 2.94. SE +/- 0.03, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 2.56956962, B: 2.40, C: 2.52. SE +/- 0.02035353, N = 6.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 4.94, B: 5.31, C: 5.32. SE +/- 0.05, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 263.65, B: 263.96, C: 265.81. SE +/- 0.16, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.04, B: 14.35, C: 14.07. SE +/- 0.10, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 5.23, B: 5.26, C: 5.03. SE +/- 0.03, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 4.06, B: 3.92, C: 4.03. SE +/- 0.05, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 3.598246173, B: 3.70, C: 3.62. SE +/- 0.039029208, N = 9.

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 3.27, B: 3.55, C: 3.38. SE +/- 0.06, N = 9.

Apache Spark

Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 18.72, B: 18.70, C: 18.64. SE +/- 0.13, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 263.65, B: 263.14, C: 265.63. SE +/- 0.13, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.05, B: 14.05, C: 13.96. SE +/- 0.09, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 9.47, B: 9.06, C: 9.34. SE +/- 0.06, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 13.81, B: 13.47, C: 13.90. SE +/- 0.19, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 15.78, B: 15.50, C: 15.85. SE +/- 0.27, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.96, B: 14.63, C: 15.68. SE +/- 0.54, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 17.86, B: 17.76, C: 17.89. SE +/- 0.14, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 263.75, B: 263.97, C: 266.28. SE +/- 0.09, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.02, B: 14.04, C: 14.00. SE +/- 0.06, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 9.05, B: 8.99, C: 9.35. SE +/- 0.04, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 13.14, B: 13.26, C: 13.29. SE +/- 0.08, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.40, B: 14.94, C: 14.96. SE +/- 0.31, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 500 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 13.44, B: 13.73, C: 14.14. SE +/- 0.50, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 35.44, B: 34.93, C: 34.57. SE +/- 0.15, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 264.38, B: 265.85, C: 265.83. SE +/- 0.45, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.05, B: 14.09, C: 13.99. SE +/- 0.03, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.97, B: 15.19, C: 15.35. SE +/- 0.19, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 25.99, B: 25.17, C: 26.74. SE +/- 0.16, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 29.61, B: 28.94, C: 29.71. SE +/- 0.45, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 31.60, B: 29.25, C: 31.18. SE +/- 0.32, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 33.70, B: 33.75, C: 33.71. SE +/- 0.27, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 263.64, B: 263.11, C: 265.92. SE +/- 0.37, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.10, B: 13.96, C: 13.89. SE +/- 0.08, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.63, B: 15.19, C: 14.78. SE +/- 0.19, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 24.88, B: 26.84, C: 24.56. SE +/- 0.08, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 29.19, B: 28.79, C: 28.32. SE +/- 0.16, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 500 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 28.70, B: 28.75, C: 27.69. SE +/- 0.24, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 64.39, B: 64.85, C: 65.48. SE +/- 0.79, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 263.32, B: 263.85, C: 266.04. SE +/- 0.20, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 13.94, B: 14.02, C: 14.19. SE +/- 0.26, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 35.68, B: 35.08, C: 36.32. SE +/- 0.10, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 51.92, B: 57.92, C: 50.03. SE +/- 0.71, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 58.60, B: 58.34, C: 57.94. SE +/- 1.35, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 57.02, B: 56.92, C: 58.75. SE +/- 0.97, N = 4.

Apache Spark

Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 62.29, B: 62.61, C: 62.43. SE +/- 0.45, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 264.98, B: 263.96, C: 266.71. SE +/- 0.59, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 13.99, B: 13.96, C: 13.97. SE +/- 0.02, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 500 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 34.58, B: 34.54, C: 34.29. SE +/- 0.40, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 500 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 48.58, B: 47.22, C: 48.18. SE +/- 0.93, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 500 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 55.31, B: 56.99, C: 57.19. SE +/- 0.91, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 57.41, B: 55.53, C: 54.76. SE +/- 1.23, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 18.77, B: 18.98, C: 18.81. SE +/- 0.15, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 263.35, B: 264.02, C: 266.07. SE +/- 0.24, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 13.75, B: 14.02, C: 14.80. SE +/- 0.46, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 9.96, B: 10.07, C: 9.82. SE +/- 0.21, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 13.53, B: 13.69, C: 13.59. SE +/- 0.10, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 15.05, B: 15.16, C: 15.10. SE +/- 0.25, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.51, B: 14.59, C: 14.23. SE +/- 0.14, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 18.73, B: 18.90, C: 18.80. SE +/- 0.01, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 266.10, B: 263.79, C: 265.85. SE +/- 0.20, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.20, B: 13.80, C: 13.98. SE +/- 0.09, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 9.60, B: 10.10, C: 10.15. SE +/- 0.09, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 13.67, B: 13.78, C: 13.90. SE +/- 0.18, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 15.87, B: 16.04, C: 16.60. SE +/- 0.34, N = 3.

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.59, B: 14.71, C: 14.57. SE +/- 0.04, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 33.09, B: 32.99, C: 32.77. SE +/- 0.23, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 264.54, B: 264.30, C: 265.99. SE +/- 0.39, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 13.91, B: 13.80, C: 14.12. SE +/- 0.30, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.51, B: 14.24, C: 14.58. SE +/- 0.40, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 24.35, B: 23.97, C: 24.45. SE +/- 0.06, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 29.20, B: 28.25, C: 27.81. SE +/- 0.22, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 27.33, B: 26.95, C: 27.76. SE +/- 0.49, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 33.48, B: 33.37, C: 33.54. SE +/- 0.08, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 265.64, B: 266.23, C: 266.71. SE +/- 0.31, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 13.92, B: 13.99, C: 13.93. SE +/- 0.08, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 15.34, B: 14.74, C: 15.30. SE +/- 0.33, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 25.23, B: 24.87, C: 25.41. SE +/- 0.18, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 29.12, B: 28.99, C: 29.13. SE +/- 0.30, N = 3.

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 27.95, B: 28.79, C: 27.65. SE +/- 0.38, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 62.08, B: 61.83, C: 62.14. SE +/- 0.13, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 264.01, B: 262.92, C: 266.48. SE +/- 0.27, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 14.00, B: 13.99, C: 13.95. SE +/- 0.07, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 32.95, B: 33.87, C: 33.18. SE +/- 0.03, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 46.66, B: 50.60, C: 47.20. SE +/- 0.26, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 59.05, B: 55.65, C: 55.99. SE +/- 0.99, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 58.10, B: 56.38, C: 55.67. SE +/- 1.93, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 61.26, B: 61.72, C: 61.87. SE +/- 0.31, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 263.77, B: 263.76, C: 266.18. SE +/- 0.18, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 13.86, B: 14.87, C: 13.88. SE +/- 0.06, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 33.32, B: 33.01, C: 33.89. SE +/- 0.10, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 50.01, B: 51.19, C: 47.48. SE +/- 0.42, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 59.15, B: 60.76, C: 58.05. SE +/- 0.81, N = 3.

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better: A: 55.74, B: 55.24, C: 54.84. SE +/- 0.68, N = 3.

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:1

Dragonflydb 0.6 - Ops/sec, More Is Better: A: 1988043.90, B: 1994368.21, C: 1996857.54. SE +/- 6994.81, N = 3. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:5

Dragonflydb 0.6 - Ops/sec, More Is Better: A: 2144617.32, B: 2154148.50, C: 2159425.65. SE +/- 4648.71, N = 3. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 50 - Set To Get Ratio: 5:1

Dragonflydb 0.6 - Ops/sec, More Is Better: A: 1927867.31, B: 1931122.18, C: 1934424.72. SE +/- 5831.26, N = 3. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:1

Dragonflydb 0.6 - Ops/sec, More Is Better: A: 2087416.28, B: 2051704.42, C: 2074466.95. SE +/- 8670.43, N = 3. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:5

Dragonflydb 0.6 - Ops/sec, More Is Better: A: 2211895.73, B: 2194182.24, C: 2217552.26. SE +/- 1318.89, N = 3. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 200 - Set To Get Ratio: 5:1

Dragonflydb 0.6 - Ops/sec, More Is Better: A: 1992128.41, B: 2005750.94, C: 2032385.14. SE +/- 6872.33, N = 3. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Redis

Test: GET - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2732588.00, B: 2861609.25, C: 2830668.70 - SE +/- 36134.78, N = 15 - Build flags: g++ -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: SET - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2361094.50, B: 1924962.25, C: 2306357.42 - SE +/- 21826.28, N = 6

Redis

Test: GET - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2815841.50, B: 2823473.75, C: 2799995.58 - SE +/- 16101.19, N = 3

Redis

Test: LPOP - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better - A: 3444536.75, B: 2141549.50, C: 3105232.87 - SE +/- 120387.99, N = 15

Redis

Test: SADD - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2629399.25, B: 2605961.25, C: 2552054.12 - SE +/- 49552.86, N = 15

Redis

Test: SET - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2331370.00, B: 2327680.00, C: 2201393.84 - SE +/- 55052.29, N = 12

Redis

Test: GET - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2912617.00, B: 2347019.00, C: 2732949.30 - SE +/- 60647.19, N = 15

Redis

Test: LPOP - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better - A: 3301260.75, B: 2035452.62, C: 3208768.77 - SE +/- 69012.03, N = 15

Redis

Test: LPUSH - Parallel Connections: 50

Redis 7.0.4 - Requests Per Second, More Is Better - A: 1906321.12, B: 1786065.12, C: 1878914.92 - SE +/- 18846.80, N = 15

Redis

Test: SADD - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2627615.75, B: 2492194.50, C: 2568888.29 - SE +/- 38463.64, N = 12

Redis

Test: SET - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2074037.50, B: 2222163.25, C: 2226848.23 - SE +/- 36124.68, N = 15

Redis

Test: LPOP - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2090338.50, B: 2043605.62, C: 2028878.43 - SE +/- 38452.70, N = 15

Redis

Test: LPUSH - Parallel Connections: 500

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2046449.25, B: 1778839.88, C: 1933436.95 - SE +/- 32181.23, N = 15

Redis

Test: SADD - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better - A: 2609105.50, B: 2158587.25, C: 2594956.17 - SE +/- 35265.51, N = 3

Redis

Test: LPUSH - Parallel Connections: 1000

Redis 7.0.4 - Requests Per Second, More Is Better - A: 1848508.38, B: 1897336.00, C: 1945768.33 - SE +/- 33163.21, N = 15
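
The Redis numbers above exercise individual commands (GET, SET, LPUSH, LPOP, SADD) at 50, 500, and 1000 parallel connections. A sketch of driving a local redis-server with the stock redis-benchmark tool across that matrix is shown below; the request count, and the assumption that the test profile uses redis-benchmark with exactly these flags, are illustrative.

# Sketch: run redis-benchmark for each command / connection-count combination
# from the charts above; the request count is an illustrative assumption.
import subprocess

for test in ("GET", "SET", "LPUSH", "LPOP", "SADD"):
    for clients in (50, 500, 1000):
        subprocess.run(
            ["redis-benchmark",
             "-t", test,            # command under test
             "-c", str(clients),    # parallel connections
             "-n", "1000000",       # total requests (illustrative)
             "-q"],                 # quiet: one summary line per test
            check=True,
        )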

ASTC Encoder

Preset: Fast

ASTC Encoder 4.0 - MT/s, More Is Better - A: 100.64, B: 100.34, C: 100.22 - SE +/- 0.07, N = 3 - Build flags: g++ -O3 -flto -pthread

ASTC Encoder

Preset: Medium

ASTC Encoder 4.0 - MT/s, More Is Better - A: 37.18, B: 37.23, C: 37.19 - SE +/- 0.01, N = 3

ASTC Encoder

Preset: Thorough

ASTC Encoder 4.0 - MT/s, More Is Better - A: 4.7866, B: 4.7876, C: 4.7847 - SE +/- 0.0015, N = 3

ASTC Encoder

Preset: Exhaustive

ASTC Encoder 4.0 - MT/s, More Is Better - A: 0.4897, B: 0.4895, C: 0.4859 - SE +/- 0.0010, N = 3
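
The four ASTC Encoder rows correspond to astcenc's quality presets. A sketch of one command-line invocation per preset follows; the binary name, the input image, and the 6x6 block size are placeholder assumptions for illustration, not necessarily the options used by the test profile.

# Sketch: compress an LDR image once per astcenc quality preset.
# "astcenc", input.png, and the 6x6 block size are placeholder assumptions.
import subprocess

for preset in ("-fast", "-medium", "-thorough", "-exhaustive"):
    subprocess.run(
        ["astcenc", "-cl",            # compress an LDR image
         "input.png",                 # hypothetical source image
         f"out{preset}.astc",         # compressed output
         "6x6",                       # block footprint
         preset],                     # quality preset matching the charts
        check=True,
    )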

Inkscape

Operation: SVG Files To PNG

Inkscape - Seconds, Fewer Is Better - A: 27.85, B: 27.97, C: 27.67 - SE +/- 0.13, N = 3 - Inkscape 1.1.2 (0a00cf5339, 2022-02-04)
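
The Inkscape result times a batch of SVG-to-PNG exports. With the Inkscape 1.x command line that kind of conversion can be scripted as in the sketch below; the directory and file names are placeholders.

# Sketch: batch-export SVG files to PNG with the Inkscape 1.x CLI;
# the "svgs" directory is a placeholder.
import subprocess
from pathlib import Path

for svg in Path("svgs").glob("*.svg"):
    subprocess.run(
        ["inkscape",
         "--export-type=png",                            # raster output format
         f"--export-filename={svg.with_suffix('.png')}", # output path per file
         str(svg)],
        check=True,
    )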

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1945581.31, B: 1711625.32, C: 1819804.24 - SE +/- 11256.90, N = 3 - Build flags: g++ -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1989301.28, B: 1942966.66, C: 1929613.47 - SE +/- 12369.07, N = 3

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1726134.01, B: 1713885.81, C: 1695938.52 - SE +/- 4351.00, N = 3

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1806170.52, B: 1761644.31, C: 1813065.59 - SE +/- 8267.67, N = 3

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1970123.24, B: 1919288.21, C: 1943695.24 - SE +/- 18068.84, N = 3

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1716481.41, B: 1737144.54, C: 1711961.58 - SE +/- 8849.68, N = 3

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1950043.18, B: 1899683.69, C: 1944401.05 - SE +/- 27485.35, N = 3

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1623068.24, B: 1593958.74, C: 1620962.12 - SE +/- 13712.27, N = 3

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1761822.77, B: 1772298.56, C: 1720148.88 - SE +/- 8909.71, N = 3

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1512323.24, B: 1533754.23, C: 1545189.59 - SE +/- 7898.00, N = 3

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1992499.12, B: 2022525.58, C: 1996437.79 - SE +/- 10080.98, N = 3

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10

memtier_benchmark 1.4 - Ops/sec, More Is Better - A: 1790527.72, B: 1789992.05, C: 1748466.51 - SE +/- 8041.06, N = 3
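
memtier_benchmark builds the client-count / set-to-get-ratio matrix above directly from command-line options. One cell of that matrix might be launched as in the sketch below; the server address, thread count, and 60-second duration are illustrative assumptions (note that memtier counts --clients per thread, so total connections are clients x threads).

# Sketch: one memtier_benchmark run for a single clients/ratio cell of the matrix.
# Server address, thread count, and duration are illustrative assumptions.
import subprocess

subprocess.run(
    ["memtier_benchmark",
     "--server=127.0.0.1", "--port=6379",
     "--protocol=redis",
     "--clients=50",        # clients per thread
     "--threads=4",         # illustrative thread count
     "--ratio=1:10",        # set:get ratio, as in the chart titles
     "--test-time=60"],     # run for a fixed time instead of a request count
    check=True,
)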

Mobile Neural Network

Model: nasnet

Mobile Neural Network 2.1 - ms, Fewer Is Better - A: 13.32 (min 12.9 / max 14.46), B: 13.65 (min 13.28 / max 25.73), C: 13.10 (min 12.57 / max 25.82) - SE +/- 0.15, N = 3 - Build flags: g++ -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenetV3

Mobile Neural Network 2.1 - ms, Fewer Is Better - A: 1.635 (min 1.58 / max 2.02), B: 1.650 (min 1.59 / max 2.58), C: 1.636 (min 1.5 / max 2.6) - SE +/- 0.014, N = 3

Mobile Neural Network

Model: squeezenetv1.1

Mobile Neural Network 2.1 - ms, Fewer Is Better - A: 3.379 (min 3.3 / max 15.39), B: 3.399 (min 3.32 / max 4.38), C: 3.408 (min 3.3 / max 4.7) - SE +/- 0.007, N = 3

Mobile Neural Network

Model: resnet-v2-50

Mobile Neural Network 2.1 - ms, Fewer Is Better - A: 35.39 (min 35.23 / max 46.59), B: 35.30 (min 35.1 / max 46.47), C: 35.28 (min 35.07 / max 47.22) - SE +/- 0.08, N = 3

Mobile Neural Network

Model: SqueezeNetV1.0

Mobile Neural Network 2.1 - ms, Fewer Is Better - A: 4.867 (min 4.75 / max 6.13), B: 4.851 (min 4.76 / max 7.9), C: 4.877 (min 4.74 / max 6.33) - SE +/- 0.010, N = 3

Mobile Neural Network

Model: MobileNetV2_224

Mobile Neural Network 2.1 - ms, Fewer Is Better - A: 3.066 (min 2.98 / max 4.12), B: 3.123 (min 3.01 / max 4.22), C: 3.080 (min 3 / max 4.62) - SE +/- 0.003, N = 3

Mobile Neural Network

Model: mobilenet-v1-1.0

Mobile Neural Network 2.1 - ms, Fewer Is Better - A: 4.259 (min 4.21 / max 5.18), B: 4.271 (min 4.24 / max 5.01), C: 4.275 (min 4.18 / max 5.25) - SE +/- 0.003, N = 3

Mobile Neural Network

Model: inception-v3

Mobile Neural Network 2.1 - ms, Fewer Is Better - A: 33.98 (min 33.82 / max 46.32), B: 33.95 (min 33.83 / max 45.2), C: 33.91 (min 33.51 / max 45.92) - SE +/- 0.05, N = 3

NCNN

Target: CPU - Model: mobilenet

NCNN 20220729 - ms, Fewer Is Better - A: 15.52 (min 15.45 / max 15.79), B: 15.59 (min 15.39 / max 16.29), C: 15.61 (min 15.4 / max 16.25) - SE +/- 0.01, N = 3 - Build flags: g++ -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

NCNN 20220729 - ms, Fewer Is Better - A: 4.64 (min 4.51 / max 5.33), B: 4.60 (min 4.49 / max 5.66), C: 4.62 (min 4.46 / max 5.65) - SE +/- 0.03, N = 3

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

NCNN 20220729 - ms, Fewer Is Better - A: 3.59 (min 3.51 / max 4.29), B: 3.57 (min 3.51 / max 4.6), C: 3.61 (min 3.49 / max 4.6) - SE +/- 0.02, N = 3

NCNN

Target: CPU - Model: shufflenet-v2

NCNN 20220729 - ms, Fewer Is Better - A: 3.31 (min 3.26 / max 4.01), B: 3.18 (min 3.15 / max 3.87), C: 3.32 (min 3.11 / max 4.44) - SE +/- 0.06, N = 3

NCNN

Target: CPU - Model: mnasnet

NCNN 20220729 - ms, Fewer Is Better - A: 3.47 (min 3.43 / max 4.15), B: 3.41 (min 3.37 / max 4.1), C: 3.48 (min 3.38 / max 4.51) - SE +/- 0.01, N = 3

NCNN

Target: CPU - Model: efficientnet-b0

NCNN 20220729 - ms, Fewer Is Better - A: 6.82 (min 6.74 / max 7.75), B: 6.78 (min 6.72 / max 7.62), C: 6.84 (min 6.72 / max 7.9) - SE +/- 0.01, N = 3

NCNN

Target: CPU - Model: blazeface

NCNN 20220729 - ms, Fewer Is Better - A: 1.10 (min 1.08 / max 1.72), B: 1.10 (min 1.07 / max 1.75), C: 1.10 (min 1.07 / max 1.74) - SE +/- 0.00, N = 3

NCNN

Target: CPU - Model: googlenet

NCNN 20220729 - ms, Fewer Is Better - A: 12.35 (min 12.26 / max 13.23), B: 12.27 (min 12.17 / max 13.31), C: 12.38 (min 12.25 / max 19.74) - SE +/- 0.04, N = 3

NCNN

Target: CPU - Model: vgg16

NCNN 20220729 - ms, Fewer Is Better - A: 55.23 (min 54.95 / max 61.87), B: 55.21 (min 54.99 / max 56.79), C: 55.11 (min 54.86 / max 61.92) - SE +/- 0.02, N = 3

NCNN

Target: CPU - Model: resnet18

NCNN 20220729 - ms, Fewer Is Better - A: 10.26 (min 10.14 / max 11.15), B: 10.27 (min 10.11 / max 17.12), C: 10.26 (min 10.11 / max 11.51) - SE +/- 0.03, N = 3

NCNN

Target: CPU - Model: alexnet

NCNN 20220729 - ms, Fewer Is Better - A: 8.23 (min 8.16 / max 9.38), B: 8.24 (min 8.17 / max 9.15), C: 8.23 (min 8.14 / max 8.98) - SE +/- 0.00, N = 3

NCNN

Target: CPU - Model: resnet50

NCNN 20220729 - ms, Fewer Is Better - A: 21.18 (min 20.9 / max 22.52), B: 21.05 (min 20.93 / max 22.12), C: 21.03 (min 20.8 / max 22.42) - SE +/- 0.02, N = 3

NCNN

Target: CPU - Model: yolov4-tiny

NCNN 20220729 - ms, Fewer Is Better - A: 24.34 (min 24.19 / max 26.13), B: 24.35 (min 24.26 / max 24.91), C: 24.31 (min 24.15 / max 24.93) - SE +/- 0.01, N = 3

NCNN

Target: CPU - Model: squeezenet_ssd

NCNN 20220729 - ms, Fewer Is Better - A: 16.35 (min 16.25 / max 16.66), B: 16.34 (min 16.26 / max 16.69), C: 16.33 (min 16.2 / max 17.41) - SE +/- 0.01, N = 3

NCNN

Target: CPU - Model: regnety_400m

NCNN 20220729 - ms, Fewer Is Better - A: 10.19 (min 10.11 / max 11.07), B: 10.00 (min 9.9 / max 10.76), C: 10.23 (min 10.13 / max 11.3) - SE +/- 0.02, N = 3

NCNN

Target: CPU - Model: vision_transformer

NCNN 20220729 - ms, Fewer Is Better - A: 230.15 (min 229.93 / max 237.36), B: 228.89 (min 228.7 / max 235.46), C: 228.59 (min 228.35 / max 235.87) - SE +/- 0.02, N = 3

NCNN

Target: CPU - Model: FastestDet

NCNN 20220729 - ms, Fewer Is Better - A: 3.97 (min 3.85 / max 4.15), B: 3.86 (min 3.78 / max 4.09), C: 3.97 (min 3.82 / max 11.19) - SE +/- 0.03, N = 3

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 1.97, B: 1.98, C: 1.93 - SE +/- 0.01, N = 3 - Build flags: g++ -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 2022.73 (min 1942.04 / max 2048.56), B: 2019.27 (min 1952.63 / max 2066.71), C: 2056.36 (min 1731.28 / max 2158.61) - SE +/- 14.58, N = 3

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 1.23, B: 1.23, C: 1.18 - SE +/- 0.00, N = 3

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 3200.37 (min 1758.1 / max 3458.98), B: 3196.88 (min 1702.1 / max 3449.98), C: 3310.58 (min 1782.74 / max 3536.08) - SE +/- 4.37, N = 3

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 1.24, B: 1.20, C: 1.17 - SE +/- 0.00, N = 3

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 3194.29 (min 1761.36 / max 3455.63), B: 3299.94 (min 3117.38 / max 3443.22), C: 3338.27 (min 2906.46 / max 3515.46) - SE +/- 8.61, N = 3

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 106.65, B: 122.98, C: 119.13 - SE +/- 1.14, N = 3

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 37.49 (min 29.62 / max 50.43), B: 32.50 (min 15.55 / max 40.11), C: 33.55 (min 15.9 / max 44.39) - SE +/- 0.33, N = 3

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 3.80, B: 3.81, C: 3.63 - SE +/- 0.01, N = 3

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 1051.99 (min 1040.91 / max 1059.93), B: 1049.73 (min 1041.12 / max 1057.94), C: 1100.09 (min 1042 / max 1134.78) - SE +/- 4.29, N = 3

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 219.33, B: 220.79, C: 213.97 - SE +/- 0.38, N = 3

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 18.22 (min 16.77 / max 28.3), B: 18.10 (min 10.18 / max 27.41), C: 18.68 (min 10.54 / max 28.1) - SE +/- 0.03, N = 3

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 193.90, B: 195.66, C: 183.65 - SE +/- 0.66, N = 3

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 20.61 (min 15.16 / max 28.91), B: 20.43 (min 18.15 / max 29.83), C: 21.76 (min 14.5 / max 32.25) - SE +/- 0.08, N = 3

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 21.96, B: 22.06, C: 21.00 - SE +/- 0.07, N = 3

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 182.00 (min 97.01 / max 200.14), B: 181.13 (min 93.18 / max 198.15), C: 190.39 (min 137.7 / max 208.9) - SE +/- 0.66, N = 3

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 380.40, B: 380.42, C: 364.28 - SE +/- 1.30, N = 3

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 15.76 (min 9.85 / max 25.38), B: 15.76 (min 11.99 / max 24.27), C: 16.46 (min 12.44 / max 25.62) - SE +/- 0.06, N = 3

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 247.67, B: 246.98, C: 242.22 - SE +/- 0.67, N = 3

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 16.13 (min 14.36 / max 19.06), B: 16.18 (min 9 / max 21.75), C: 16.49 (min 9.1 / max 33.49) - SE +/- 0.05, N = 3

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 5321.40, B: 5297.96, C: 5106.22 - SE +/- 18.95, N = 3

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 1.11 (min 0.67 / max 12.18), B: 1.12 (min 0.66 / max 2.71), C: 1.16 (min 0.67 / max 13.85) - SE +/- 0.01, N = 3

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better - A: 9929.88, B: 9884.04, C: 9454.83 - SE +/- 41.17, N = 3

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better - A: 0.60 (min 0.39 / max 8.97), B: 0.60 (min 0.36 / max 9.05), C: 0.63 (min 0.38 / max 10.06) - SE +/- 0.00, N = 3
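
Each OpenVINO model above is reported twice: once as throughput (FPS, more is better) and once as average latency (ms, fewer is better) on the CPU device. A minimal synchronous-inference sketch with the OpenVINO 2022.x Python API follows; the IR path and the random input are placeholders rather than the exact models packaged with the test profile.

# Minimal OpenVINO 2022.x synchronous CPU inference sketch.
# "model.xml" and the random input are placeholders for illustration.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")             # hypothetical IR file
compiled = core.compile_model(model, "CPU")      # CPU device, as in the charts

shape = list(compiled.input(0).shape)            # static input shape of the IR
dummy = np.random.rand(*shape).astype(np.float32)

results = compiled([dummy])                      # one synchronous inference
for out in compiled.outputs:
    print(out.get_any_name(), results[out].shape)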

Natron

Input: Spaceship

Natron 2.4.3 - FPS, More Is Better - A: 2.1, B: 2.1, C: 2.1 - SE +/- 0.00, N = 3

AI Benchmark Alpha

Device Inference Score

AI Benchmark Alpha 0.1.2 - Score, More Is Better - A: 962, B: 965, C: 949

AI Benchmark Alpha

Device Training Score

AI Benchmark Alpha 0.1.2 - Score, More Is Better - A: 986, B: 985, C: 984

AI Benchmark Alpha

Device AI Score

AI Benchmark Alpha 0.1.2 - Score, More Is Better - A: 1948, B: 1950, C: 1933
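
AI Benchmark Alpha is a Python package that runs a fixed set of TensorFlow models and reports the device inference, training, and combined AI scores shown above. Typical usage is sketched below; the workloads and the contents of the returned results object come from the package itself.

# Sketch: produce AI Benchmark Alpha device scores; requires the ai_benchmark
# package and a working TensorFlow install.
from ai_benchmark import AIBenchmark

benchmark = AIBenchmark()
results = benchmark.run()   # runs the inference and training workloads
# The returned results object carries the device inference, training,
# and combined AI scores reported above.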


Phoronix Test Suite v10.8.4