aa

AMD Ryzen 7 4700U testing with a LENOVO LNVNB161216 (DTCN18WWV1.04 BIOS) and AMD Renoir 512MB on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2209013-NE-AA419570266.

aaProcessorMotherboardChipsetMemoryDiskGraphicsAudioNetworkOSKernelDesktopDisplay ServerOpenGLVulkanCompilerFile-SystemScreen ResolutionAAABCAMD Ryzen 7 4700U @ 2.00GHz (8 Cores)LENOVO LNVNB161216 (DTCN18WWV1.04 BIOS)AMD Renoir/Cezanne16GB512GB SAMSUNG MZALQ512HALU-000L2AMD Renoir 512MB (1600/400MHz)AMD Renoir Radeon HD AudioIntel Wi-Fi 6 AX200Ubuntu 22.045.18.8-051808-generic (x86_64)GNOME Shell 42.2X Server + Wayland4.6 Mesa 22.0.5 (LLVM 13.0.1 DRM 3.46)1.3.204GCC 11.2.0ext41920x1080OpenBenchmarking.orgKernel Details- Transparent Huge Pages: madviseCompiler Details- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v Disk Details- NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Details- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - Platform Profile: balanced - CPU Microcode: 0x8600102 - ACPI Profile: balanced Java Details- OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04) Python Details- Python 3.10.4Security Details- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

aaunpack-linux: linux-5.19.tar.xzblosc: blosclz shuffleblosc: blosclz bitshufflegraphics-magick: Swirlgraphics-magick: Rotategraphics-magick: Sharpengraphics-magick: Enhancedgraphics-magick: Resizinggraphics-magick: Noise-Gaussiangraphics-magick: HWB Color Spacecompress-7zip: Compression Ratingcompress-7zip: Decompression Ratingspark: 1000000 - 100 - SHA-512 Benchmark Timespark: 1000000 - 100 - Calculate Pi Benchmarkspark: 1000000 - 100 - Calculate Pi Benchmark Using Dataframespark: 1000000 - 100 - Group By Test Timespark: 1000000 - 100 - Repartition Test Timespark: 1000000 - 100 - Inner Join Test Timespark: 1000000 - 100 - Broadcast Inner Join Test Timespark: 1000000 - 500 - SHA-512 Benchmark Timespark: 1000000 - 500 - Calculate Pi Benchmarkspark: 1000000 - 500 - Calculate Pi Benchmark Using Dataframespark: 1000000 - 500 - Group By Test Timespark: 1000000 - 500 - Repartition Test Timespark: 1000000 - 500 - Inner Join Test Timespark: 1000000 - 500 - Broadcast Inner Join Test Timespark: 1000000 - 1000 - SHA-512 Benchmark Timespark: 1000000 - 1000 - Calculate Pi Benchmarkspark: 1000000 - 1000 - Calculate Pi Benchmark Using Dataframespark: 1000000 - 1000 - Group By Test Timespark: 1000000 - 1000 - Repartition Test Timespark: 1000000 - 1000 - Inner Join Test Timespark: 1000000 - 1000 - Broadcast Inner Join Test Timespark: 1000000 - 2000 - SHA-512 Benchmark Timespark: 1000000 - 2000 - Calculate Pi Benchmarkspark: 1000000 - 2000 - Calculate Pi Benchmark Using Dataframespark: 1000000 - 2000 - Group By Test Timespark: 1000000 - 2000 - Repartition Test Timespark: 1000000 - 2000 - Inner Join Test Timespark: 1000000 - 2000 - Broadcast Inner Join Test Timespark: 10000000 - 100 - SHA-512 Benchmark Timespark: 10000000 - 100 - Calculate Pi Benchmarkspark: 10000000 - 100 - Calculate Pi Benchmark Using Dataframespark: 10000000 - 100 - Group By Test Timespark: 10000000 - 100 - Repartition Test Timespark: 10000000 - 100 - Inner Join Test Timespark: 10000000 - 100 - Broadcast Inner Join Test Timespark: 10000000 - 500 - SHA-512 Benchmark Timespark: 10000000 - 500 - Calculate Pi Benchmarkspark: 10000000 - 500 - Calculate Pi Benchmark Using Dataframespark: 10000000 - 500 - Group By Test Timespark: 10000000 - 500 - Repartition Test Timespark: 10000000 - 500 - Inner Join Test Timespark: 10000000 - 500 - Broadcast Inner Join Test Timespark: 20000000 - 100 - SHA-512 Benchmark Timespark: 20000000 - 100 - Calculate Pi Benchmarkspark: 20000000 - 100 - Calculate Pi Benchmark Using Dataframespark: 20000000 - 100 - Group By Test Timespark: 20000000 - 100 - Repartition Test Timespark: 20000000 - 100 - Inner Join Test Timespark: 20000000 - 100 - Broadcast Inner Join Test Timespark: 20000000 - 500 - SHA-512 Benchmark Timespark: 20000000 - 500 - Calculate Pi Benchmarkspark: 20000000 - 500 - Calculate Pi Benchmark Using Dataframespark: 20000000 - 500 - Group By Test Timespark: 20000000 - 500 - Repartition Test Timespark: 20000000 - 500 - Inner Join Test Timespark: 20000000 - 500 - Broadcast Inner Join Test Timespark: 40000000 - 100 - SHA-512 Benchmark Timespark: 40000000 - 100 - Calculate Pi Benchmarkspark: 40000000 - 100 - Calculate Pi Benchmark Using Dataframespark: 40000000 - 100 - Group By Test Timespark: 40000000 - 100 - Repartition Test Timespark: 40000000 - 100 - Inner Join Test Timespark: 40000000 - 100 - Broadcast Inner Join Test Timespark: 40000000 - 500 - SHA-512 Benchmark Timespark: 40000000 - 500 - Calculate Pi Benchmarkspark: 40000000 - 500 - Calculate Pi Benchmark Using Dataframespark: 40000000 - 500 - Group By Test Timespark: 40000000 - 500 - Repartition Test Timespark: 40000000 - 500 - Inner Join Test Timespark: 40000000 - 500 - Broadcast Inner Join Test Timespark: 10000000 - 1000 - SHA-512 Benchmark Timespark: 10000000 - 1000 - Calculate Pi Benchmarkspark: 10000000 - 1000 - Calculate Pi Benchmark Using Dataframespark: 10000000 - 1000 - Group By Test Timespark: 10000000 - 1000 - Repartition Test Timespark: 10000000 - 1000 - Inner Join Test Timespark: 10000000 - 1000 - Broadcast Inner Join Test Timespark: 10000000 - 2000 - SHA-512 Benchmark Timespark: 10000000 - 2000 - Calculate Pi Benchmarkspark: 10000000 - 2000 - Calculate Pi Benchmark Using Dataframespark: 10000000 - 2000 - Group By Test Timespark: 10000000 - 2000 - Repartition Test Timespark: 10000000 - 2000 - Inner Join Test Timespark: 10000000 - 2000 - Broadcast Inner Join Test Timespark: 20000000 - 1000 - SHA-512 Benchmark Timespark: 20000000 - 1000 - Calculate Pi Benchmarkspark: 20000000 - 1000 - Calculate Pi Benchmark Using Dataframespark: 20000000 - 1000 - Group By Test Timespark: 20000000 - 1000 - Repartition Test Timespark: 20000000 - 1000 - Inner Join Test Timespark: 20000000 - 1000 - Broadcast Inner Join Test Timespark: 20000000 - 2000 - SHA-512 Benchmark Timespark: 20000000 - 2000 - Calculate Pi Benchmarkspark: 20000000 - 2000 - Calculate Pi Benchmark Using Dataframespark: 20000000 - 2000 - Group By Test Timespark: 20000000 - 2000 - Repartition Test Timespark: 20000000 - 2000 - Inner Join Test Timespark: 20000000 - 2000 - Broadcast Inner Join Test Timespark: 40000000 - 1000 - SHA-512 Benchmark Timespark: 40000000 - 1000 - Calculate Pi Benchmarkspark: 40000000 - 1000 - Calculate Pi Benchmark Using Dataframespark: 40000000 - 1000 - Group By Test Timespark: 40000000 - 1000 - Repartition Test Timespark: 40000000 - 1000 - Inner Join Test Timespark: 40000000 - 1000 - Broadcast Inner Join Test Timespark: 40000000 - 2000 - SHA-512 Benchmark Timespark: 40000000 - 2000 - Calculate Pi Benchmarkspark: 40000000 - 2000 - Calculate Pi Benchmark Using Dataframespark: 40000000 - 2000 - Group By Test Timespark: 40000000 - 2000 - Repartition Test Timespark: 40000000 - 2000 - Inner Join Test Timespark: 40000000 - 2000 - Broadcast Inner Join Test Timedragonflydb: 50 - 1:1dragonflydb: 50 - 1:5dragonflydb: 50 - 5:1dragonflydb: 200 - 1:1dragonflydb: 200 - 1:5dragonflydb: 200 - 5:1redis: GET - 50redis: SET - 50redis: GET - 500redis: LPOP - 50redis: SADD - 50redis: SET - 500redis: GET - 1000redis: LPOP - 500redis: LPUSH - 50redis: SADD - 500redis: SET - 1000redis: LPOP - 1000redis: LPUSH - 500redis: SADD - 1000redis: LPUSH - 1000memtier-benchmark: Redis - 50 - 1:1memtier-benchmark: Redis - 50 - 1:5memtier-benchmark: Redis - 50 - 5:1memtier-benchmark: Redis - 100 - 1:1memtier-benchmark: Redis - 100 - 1:5memtier-benchmark: Redis - 100 - 5:1memtier-benchmark: Redis - 50 - 1:10memtier-benchmark: Redis - 500 - 1:1memtier-benchmark: Redis - 500 - 1:5memtier-benchmark: Redis - 500 - 5:1memtier-benchmark: Redis - 100 - 1:10memtier-benchmark: Redis - 500 - 1:10mnn: nasnetmnn: mobilenetV3mnn: squeezenetv1.1mnn: resnet-v2-50mnn: SqueezeNetV1.0mnn: MobileNetV2_224mnn: mobilenet-v1-1.0mnn: inception-v3ncnn: CPU - mobilenetncnn: CPU-v2-v2 - mobilenet-v2ncnn: CPU-v3-v3 - mobilenet-v3ncnn: CPU - shufflenet-v2ncnn: CPU - mnasnetncnn: CPU - efficientnet-b0ncnn: CPU - blazefacencnn: CPU - googlenetncnn: CPU - vgg16ncnn: CPU - resnet18ncnn: CPU - alexnetncnn: CPU - resnet50ncnn: CPU - yolov4-tinyncnn: CPU - squeezenet_ssdncnn: CPU - regnety_400mncnn: CPU - vision_transformerncnn: CPU - FastestDetncnn: Vulkan GPU - mobilenetncnn: Vulkan GPU-v2-v2 - mobilenet-v2ncnn: Vulkan GPU-v3-v3 - mobilenet-v3ncnn: Vulkan GPU - shufflenet-v2ncnn: Vulkan GPU - mnasnetncnn: Vulkan GPU - efficientnet-b0ncnn: Vulkan GPU - blazefacencnn: Vulkan GPU - googlenetncnn: Vulkan GPU - vgg16ncnn: Vulkan GPU - resnet18ncnn: Vulkan GPU - alexnetncnn: Vulkan GPU - resnet50ncnn: Vulkan GPU - yolov4-tinyncnn: Vulkan GPU - squeezenet_ssdncnn: Vulkan GPU - regnety_400mncnn: Vulkan GPU - vision_transformerncnn: Vulkan GPU - FastestDetopenvino: Face Detection FP16 - CPUopenvino: Face Detection FP16 - CPUopenvino: Person Detection FP16 - CPUopenvino: Person Detection FP16 - CPUopenvino: Person Detection FP32 - CPUopenvino: Person Detection FP32 - CPUopenvino: Vehicle Detection FP16 - CPUopenvino: Vehicle Detection FP16 - CPUopenvino: Face Detection FP16-INT8 - CPUopenvino: Face Detection FP16-INT8 - CPUopenvino: Vehicle Detection FP16-INT8 - CPUopenvino: Vehicle Detection FP16-INT8 - CPUopenvino: Weld Porosity Detection FP16 - CPUopenvino: Weld Porosity Detection FP16 - CPUopenvino: Machine Translation EN To DE FP16 - CPUopenvino: Machine Translation EN To DE FP16 - CPUopenvino: Weld Porosity Detection FP16-INT8 - CPUopenvino: Weld Porosity Detection FP16-INT8 - CPUopenvino: Person Vehicle Bike Detection FP16 - CPUopenvino: Person Vehicle Bike Detection FP16 - CPUopenvino: Age Gender Recognition Retail 0013 FP16 - CPUopenvino: Age Gender Recognition Retail 0013 FP16 - CPUopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUAAABC7.6586659.93870.62365888212261213772730943241916.25330.12689774423.575.715.413.823.256.90330.19312169823.596.735.504.463.647.46329.46751997623.597.396.015.204.428.37330.47637988523.518.426.777.105.5033.58329.67604675723.6715.2423.8626.7926.7331.09329.87143933523.5914.0922.6125.1724.0956.88329.13689098323.5721.9842.2548.3145.8458.82329.8823.7222.2142.5147.9648.57114.15328.95452474123.6654.8284.0297.0294.69109.614286225330.5123.7152.4881.1793.4590.8531.64329.72266563223.6414.9222.9926.0224.2232.77328.81303659423.4416.0223.5627.9825.4758.52330.58506464023.4822.8042.1547.6547.9858.70329.72420805323.5422.8143.0349.6047.3619.5152041.31961.6100397375622313367032186249336.38329.20762897623.495.804.973.663.056.98329.4623.686.735.514.543.597.41330.73227746823.427.176.235.2061022294.4296788188.53329.44260724723.558.226.537.295.6233.57330.80735765823.61742626115.3923.3526.5426.5230.90330.03025664323.6213.8722.7125.3024.0556.71331.16885822523.6821.7043.28073513947.0045.3858.21330.83664896123.5522.4142.2149.4847.75113.33327.78858228423.4354.9583.6698.2896.13109.3054662330.41748831223.4957.2788.3896.1090.1231.846070779328.85444698823.39929460915.5623.0026.2424.3932.44328.81807491423.5715.6223.4627.4425.1458.30329.83623403223.3723.63915149342.6247.6946.7658.91331.45820926123.4323.0442.6149.2247.304964395110.862700796329.12435257123.5950.7584706281.0995.1390.83111.633289225328.56015521523.2848.2482.2399.1794.78796747.83813010.93780280.39749818.06776782.14725143.51320102268060.59309037.19320860310780.78290489.44330218.31331846.09271275.84318993.25309386.81285137.56284773.97316457.03299774.22273887.51284199.18262758.93268901.7284706.65265509.86289936.46284363.39281040.25286263.13291704.33305443.2721.7363.2356.1646.36110.1426.4876.11756.4135.578.889.156.566.6515.042.3531.81115.8325.8118.6753.6255.7241.0414.81501.828.2716.544.465.063.324.4911.683.1810.4836.58.229.6717.7322.0715.196.39827.094.861.283059.750.854647.90.854673.3488.745.051.792205.09120.4233.19129.6130.8312.82311.5191.1641.82170.3623.452722.732.884315.591.827.7186568.63807.72845838212260913870331717249616.22331.7123.665.675.373.792.926.92328.9823.656.685.574.253.777.39331.50590813823.477.315.935.064.268.29328.6923.458.416.596.765.7733.77330.75073532623.5514.9723.5325.6826.2930.95329.42250993523.4814.4022.3325.3323.7057.60328.5623.6421.8042.0246.8145.4158.22326.72413958223.4921.5942.7948.7747.88113.28327.8823.6255.7086.88102.31364564993.45108.514224164327.00453192623.5451.8681.67100.4290.2931.65329.7523.6114.4522.7725.7224.2132.82328.14882934223.4516.0223.5627.6625.3558.74332.2323.4622.8642.5347.8247.2658.61330.02330947623.5223.8042.4448.8746.70110.78328.4927055723.2548.8181.1794.3389.59111.44329.7323.5051.2783.8196.9392.18810806.99835036.6769072.59741192.5781721.28719579.27310163.88283503.22320454.91275379.31297396.16303422.44321775.91282959.69278031.44306787.91299055.94293415.81291080.62327407.59301030.28287394.48286161.35274932.95281470281771.29270055.68286611.92297614.25314677.22292101.3292217.18288777.4222.5043.5425.92346.7219.9056.6426.49757.57935.738.7695.296.2513.632.2733.5117.12718.7354.345541.8315.14504.118.3416.734.465.083.144.3911.463.5310.6636.548.19.617.4322.7514.946.28811.415.091.293069.090.854608.420.854677.1288.7145.041.812197.4912133.03129.4230.8812.91309.61190.9641.86171.823.252726.82.884311.481.827.7076639.738392825848212361013771131548242896.22324.98689848723.535.365.333.503.146.49330.8723.456.545.394.443.627.45331.64842608223.657.206.225.355.338.47328.4623.598.486.667.085.5633.34330.21623167723.5714.8623.0927.9526.2831.01329.9823.6613.8322.2725.0023.5856.55331.34188531724.3021.2940.9045.6744.2458.41330.6423.6622.6142.0047.7247.61113.130596046329.9323.7359.2883.5099.7995.31110.428019839330.24169428523.6550.0982.20100.58521294691.7831.72330.11994192823.4614.4422.9125.9324.4332.832996735332.85261355423.4615.4423.7427.3925.1158.76328.5723.4422.7342.6648.3048.0159.13327.3623.6224.2443.6149.3147.21110.362753712330.27837921123.5349.6083.04101.8091.65112.59330.5723.5850.5883.99103.68806746991.65781971.83842699.99769887.29724339.05782267.03747087.03304392.81284041.12314921.25273024.72309378.94283691.09322020.16290364.78288119.94303208.34297926.47286827.97293367.09313468.75298437.59279268.54291203.06285003.98280952.6290482.46268728.44295478.01309359.47281818.75290931.14296536.07302765.0422.3233.3755.92346.1999.9146.5736.12857.80736.8311.427.226.478.2618.061.8731.17118.7426.7118.954.1552.7537.9517.14504.158.1516.134.415.133.074.411.361.8110.6136.368.149.5617.2722.4913.286.61816.045.511.273079.260.854627.170.854652.8688.8844.961.82207.08120.4533.17129.3230.9112.86310.52191.1841.81171.5423.292717.022.894310.921.82OpenBenchmarking.org

Unpacking The Linux Kernel

linux-5.19.tar.xz

OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking The Linux Kernel 5.19linux-5.19.tar.xzAAABC510152025SE +/- 0.008, N = 47.65819.5157.7187.707

C-Blosc

Test: blosclz shuffle

OpenBenchmarking.orgMB/s, More Is BetterC-Blosc 2.3Test: blosclz shuffleAAABC14002800420056007000SE +/- 15.08, N = 36659.92041.36568.66639.71. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

C-Blosc

Test: blosclz bitshuffle

OpenBenchmarking.orgMB/s, More Is BetterC-Blosc 2.3Test: blosclz bitshuffleAAABC8001600240032004000SE +/- 48.36, N = 33870.61961.63807.73839.01. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

GraphicsMagick

Operation: Swirl

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: SwirlAAABC60120180240300SE +/- 3.99, N = 152361002842821. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Rotate

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: RotateAAABC130260390520650SE +/- 0.33, N = 35883975835841. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Sharpen

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: SharpenAAABC20406080100SE +/- 0.33, N = 3823782821. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Enhanced

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: EnhancedAAABC306090120150SE +/- 0.33, N = 3122561221231. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Resizing

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: ResizingAAABC130260390520650SE +/- 1.86, N = 36122236096101. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: Noise-Gaussian

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: Noise-GaussianAAABC306090120150SE +/- 0.33, N = 31371331381371. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lm -lpthread

GraphicsMagick

Operation: HWB Color Space

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: HWB Color SpaceAAABC160320480640800SE +/- 1.00, N = 37276707037111. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lm -lpthread

7-Zip Compression

Test: Compression Rating

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression RatingAAABC7K14K21K28K35KSE +/- 158.98, N = 3309433218631717315481. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression

Test: Decompression Rating

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression RatingAAABC5K10K15K20K25KSE +/- 185.44, N = 3241912493324961242891. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Apache Spark

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark TimeAAABC246810SE +/- 0.05, N = 36.256.386.226.22

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.47, N = 3330.13329.21331.71324.99

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.09, N = 323.5723.4923.6623.53

Apache Spark

Row Count: 1000000 - Partitions: 100 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Group By Test TimeAAABC1.3052.613.9155.226.525SE +/- 0.05, N = 35.715.805.675.36

Apache Spark

Row Count: 1000000 - Partitions: 100 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Repartition Test TimeAAABC1.21732.43463.65194.86926.0865SE +/- 0.09, N = 35.414.975.375.33

Apache Spark

Row Count: 1000000 - Partitions: 100 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Inner Join Test TimeAAABC0.85951.7192.57853.4384.2975SE +/- 0.05, N = 33.823.663.793.50

Apache Spark

Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test TimeAAABC0.73131.46262.19392.92523.6565SE +/- 0.15, N = 33.253.052.923.14

Apache Spark

Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark TimeAAABC246810SE +/- 0.08, N = 46.906.986.926.49

Apache Spark

Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.41, N = 4330.19329.46328.98330.87

Apache Spark

Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.04, N = 423.5923.6823.6523.45

Apache Spark

Row Count: 1000000 - Partitions: 500 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Group By Test TimeAAABC246810SE +/- 0.05, N = 46.736.736.686.54

Apache Spark

Row Count: 1000000 - Partitions: 500 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Repartition Test TimeAAABC1.25332.50663.75995.01326.2665SE +/- 0.08, N = 45.505.515.575.39

Apache Spark

Row Count: 1000000 - Partitions: 500 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Inner Join Test TimeAAABC1.02152.0433.06454.0865.1075SE +/- 0.03, N = 44.464.544.254.44

Apache Spark

Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test TimeAAABC0.84831.69662.54493.39324.2415SE +/- 0.04, N = 43.643.593.773.62

Apache Spark

Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark TimeAAABC246810SE +/- 0.08, N = 97.467.417.397.45

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.43, N = 9329.47330.73331.51331.65

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.04, N = 923.5923.4223.4723.65

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - Group By Test TimeAAABC246810SE +/- 0.04, N = 97.397.177.317.20

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - Repartition Test TimeAAABC246810SE +/- 0.04, N = 96.016.235.936.22

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - Inner Join Test TimeAAABC1.20382.40763.61144.81526.019SE +/- 0.030529150, N = 95.2000000005.2061022295.0600000005.350000000

Apache Spark

Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test TimeAAABC1.19932.39863.59794.79725.9965SE +/- 0.110604999, N = 94.4200000004.4296788184.2600000005.330000000

Apache Spark

Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark TimeAAABC246810SE +/- 0.08, N = 78.378.538.298.47

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.30, N = 7330.48329.44328.69328.46

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.05, N = 723.5123.5523.4523.59

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - Group By Test TimeAAABC246810SE +/- 0.06, N = 78.428.228.418.48

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - Repartition Test TimeAAABC246810SE +/- 0.09, N = 76.776.536.596.66

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - Inner Join Test TimeAAABC246810SE +/- 0.09, N = 77.107.296.767.08

Apache Spark

Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test TimeAAABC1.29832.59663.89495.19326.4915SE +/- 0.10, N = 75.505.625.775.56

Apache Spark

Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark TimeAAABC816243240SE +/- 0.12, N = 333.5833.5733.7733.34

Apache Spark

Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.36, N = 3329.68330.81330.75330.22

Apache Spark

Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.07, N = 323.6723.6223.5523.57

Apache Spark

Row Count: 10000000 - Partitions: 100 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - Group By Test TimeAAABC48121620SE +/- 0.09, N = 315.2415.3914.9714.86

Apache Spark

Row Count: 10000000 - Partitions: 100 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - Repartition Test TimeAAABC612182430SE +/- 0.56, N = 323.8623.3523.5323.09

Apache Spark

Row Count: 10000000 - Partitions: 100 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - Inner Join Test TimeAAABC714212835SE +/- 0.70, N = 326.7926.5425.6827.95

Apache Spark

Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test TimeAAABC612182430SE +/- 0.16, N = 326.7326.5226.2926.28

Apache Spark

Row Count: 10000000 - Partitions: 500 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - SHA-512 Benchmark TimeAAABC714212835SE +/- 0.15, N = 331.0930.9030.9531.01

Apache Spark

Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.79, N = 3329.87330.03329.42329.98

Apache Spark

Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.09, N = 323.5923.6223.4823.66

Apache Spark

Row Count: 10000000 - Partitions: 500 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - Group By Test TimeAAABC48121620SE +/- 0.37, N = 314.0913.8714.4013.83

Apache Spark

Row Count: 10000000 - Partitions: 500 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - Repartition Test TimeAAABC510152025SE +/- 0.12, N = 322.6122.7122.3322.27

Apache Spark

Row Count: 10000000 - Partitions: 500 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - Inner Join Test TimeAAABC612182430SE +/- 0.17, N = 325.1725.3025.3325.00

Apache Spark

Row Count: 10000000 - Partitions: 500 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 500 - Broadcast Inner Join Test TimeAAABC612182430SE +/- 0.08, N = 324.0924.0523.7023.58

Apache Spark

Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark TimeAAABC1326395265SE +/- 0.07, N = 356.8856.7157.6056.55

Apache Spark

Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 1.50, N = 3329.14331.17328.56331.34

Apache Spark

Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.06, N = 323.5723.6823.6424.30

Apache Spark

Row Count: 20000000 - Partitions: 100 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - Group By Test TimeAAABC510152025SE +/- 0.07, N = 321.9821.7021.8021.29

Apache Spark

Row Count: 20000000 - Partitions: 100 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - Repartition Test TimeAAABC1020304050SE +/- 0.19, N = 342.2543.2842.0240.90

Apache Spark

Row Count: 20000000 - Partitions: 100 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - Inner Join Test TimeAAABC1122334455SE +/- 0.69, N = 348.3147.0046.8145.67

Apache Spark

Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test TimeAAABC1020304050SE +/- 0.24, N = 345.8445.3845.4144.24

Apache Spark

Row Count: 20000000 - Partitions: 500 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - SHA-512 Benchmark TimeAAABC1326395265SE +/- 0.16, N = 358.8258.2158.2258.41

Apache Spark

Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.60, N = 3329.88330.84326.72330.64

Apache Spark

Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.02, N = 323.7223.5523.4923.66

Apache Spark

Row Count: 20000000 - Partitions: 500 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - Group By Test TimeAAABC510152025SE +/- 0.17, N = 322.2122.4121.5922.61

Apache Spark

Row Count: 20000000 - Partitions: 500 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - Repartition Test TimeAAABC1020304050SE +/- 0.08, N = 342.5142.2142.7942.00

Apache Spark

Row Count: 20000000 - Partitions: 500 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - Inner Join Test TimeAAABC1122334455SE +/- 0.07, N = 347.9649.4848.7747.72

Apache Spark

Row Count: 20000000 - Partitions: 500 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 500 - Broadcast Inner Join Test TimeAAABC1122334455SE +/- 0.71, N = 348.5747.7547.8847.61

Apache Spark

Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark TimeAAABC306090120150SE +/- 0.09, N = 3114.15113.33113.28113.13

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.47, N = 3328.95327.79327.88329.93

Apache Spark

Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.01, N = 323.6623.4323.6223.73

Apache Spark

Row Count: 40000000 - Partitions: 100 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - Group By Test TimeAAABC1326395265SE +/- 0.04, N = 354.8254.9555.7059.28

Apache Spark

Row Count: 40000000 - Partitions: 100 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - Repartition Test TimeAAABC20406080100SE +/- 1.65, N = 384.0283.6686.8883.50

Apache Spark

Row Count: 40000000 - Partitions: 100 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - Inner Join Test TimeAAABC20406080100SE +/- 2.32, N = 397.0298.28102.3199.79

Apache Spark

Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test TimeAAABC20406080100SE +/- 0.61, N = 394.6996.1393.4595.31

Apache Spark

Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark TimeAAABC20406080100SE +/- 0.36, N = 3109.61109.31108.51110.43

Apache Spark

Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.75, N = 3330.51330.42327.00330.24

Apache Spark

Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.05, N = 323.7123.4923.5423.65

Apache Spark

Row Count: 40000000 - Partitions: 500 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - Group By Test TimeAAABC1326395265SE +/- 2.15, N = 352.4857.2751.8650.09

Apache Spark

Row Count: 40000000 - Partitions: 500 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - Repartition Test TimeAAABC20406080100SE +/- 0.06, N = 381.1788.3881.6782.20

Apache Spark

Row Count: 40000000 - Partitions: 500 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - Inner Join Test TimeAAABC20406080100SE +/- 0.89, N = 393.4596.10100.42100.59

Apache Spark

Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test TimeAAABC20406080100SE +/- 0.67, N = 390.8590.1290.2991.78

Apache Spark

Row Count: 10000000 - Partitions: 1000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - SHA-512 Benchmark TimeAAABC714212835SE +/- 0.02, N = 331.6431.8531.6531.72

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.20, N = 3329.72328.85329.75330.12

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.07, N = 323.6423.4023.6123.46

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - Group By Test TimeAAABC48121620SE +/- 0.21, N = 314.9215.5614.4514.44

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - Repartition Test TimeAAABC612182430SE +/- 0.03, N = 322.9923.0022.7722.91

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - Inner Join Test TimeAAABC612182430SE +/- 0.04, N = 326.0226.2425.7225.93

Apache Spark

Row Count: 10000000 - Partitions: 1000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 1000 - Broadcast Inner Join Test TimeAAABC612182430SE +/- 0.02, N = 324.2224.3924.2124.43

Apache Spark

Row Count: 10000000 - Partitions: 2000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - SHA-512 Benchmark TimeAAABC816243240SE +/- 0.09, N = 332.7732.4432.8232.83

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.62, N = 3328.81328.82328.15332.85

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.04, N = 323.4423.5723.4523.46

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - Group By Test TimeAAABC48121620SE +/- 0.22, N = 316.0215.6216.0215.44

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - Repartition Test TimeAAABC612182430SE +/- 0.12, N = 323.5623.4623.5623.74

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - Inner Join Test TimeAAABC714212835SE +/- 0.22, N = 327.9827.4427.6627.39

Apache Spark

Row Count: 10000000 - Partitions: 2000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 10000000 - Partitions: 2000 - Broadcast Inner Join Test TimeAAABC612182430SE +/- 0.08, N = 325.4725.1425.3525.11

Apache Spark

Row Count: 20000000 - Partitions: 1000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - SHA-512 Benchmark TimeAAABC1326395265SE +/- 0.07, N = 358.5258.3058.7458.76

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.58, N = 3330.59329.84332.23328.57

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.05, N = 323.4823.3723.4623.44

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - Group By Test TimeAAABC612182430SE +/- 0.45, N = 322.8023.6422.8622.73

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - Repartition Test TimeAAABC1020304050SE +/- 0.20, N = 342.1542.6242.5342.66

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - Inner Join Test TimeAAABC1122334455SE +/- 0.16, N = 347.6547.6947.8248.30

Apache Spark

Row Count: 20000000 - Partitions: 1000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 1000 - Broadcast Inner Join Test TimeAAABC1122334455SE +/- 0.73, N = 347.9846.7647.2648.01

Apache Spark

Row Count: 20000000 - Partitions: 2000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - SHA-512 Benchmark TimeAAABC1326395265SE +/- 0.17, N = 358.7058.9158.6159.13

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - Calculate Pi BenchmarkAAABC70140210280350SE +/- 0.29, N = 3329.72331.46330.02327.36

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark Using DataframeAAABC612182430SE +/- 0.13, N = 323.5423.4323.5223.62

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - Group By Test TimeAAABC612182430SE +/- 0.06, N = 322.8123.0423.8024.24

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - Repartition Test TimeAAABC1020304050SE +/- 0.02, N = 343.0342.6142.4443.61

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - Inner Join Test TimeAAABC1122334455SE +/- 0.29, N = 349.6049.2248.8749.31

Apache Spark

Row Count: 20000000 - Partitions: 2000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 20000000 - Partitions: 2000 - Broadcast Inner Join Test TimeAAABC1122334455SE +/- 0.08, N = 347.3647.3046.7047.21

Apache Spark

Row Count: 40000000 - Partitions: 1000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - SHA-512 Benchmark TimeAABC20406080100110.86110.78110.36

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - Calculate Pi BenchmarkAABC70140210280350329.12328.49330.28

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark Using DataframeAABC61218243023.5923.2523.53

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - Group By Test TimeAABC112233445550.7648.8149.60

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - Repartition Test TimeAABC2040608010081.0981.1783.04

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - Inner Join Test TimeAABC2040608010095.1394.33101.80

Apache Spark

Row Count: 40000000 - Partitions: 1000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 1000 - Broadcast Inner Join Test TimeAABC2040608010090.8389.5991.65

Apache Spark

Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark TimeAABC306090120150111.63111.44112.59

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - Calculate Pi BenchmarkAABC70140210280350328.56329.73330.57

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using DataframeAABC61218243023.2823.5023.58

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Group By Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - Group By Test TimeAABC122436486048.2451.2750.58

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Repartition Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - Repartition Test TimeAABC2040608010082.2383.8183.99

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - Inner Join Test TimeAABC2040608010099.1796.93103.69

Apache Spark

Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test TimeAABC2040608010094.7892.1891.65

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:1

OpenBenchmarking.orgOps/sec, More Is BetterDragonflydb 0.6Clients: 50 - Set To Get Ratio: 1:1AABC200K400K600K800K1000K796747.83810806.99781971.831. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:5

OpenBenchmarking.orgOps/sec, More Is BetterDragonflydb 0.6Clients: 50 - Set To Get Ratio: 1:5AABC200K400K600K800K1000K813010.93835036.60842699.991. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 50 - Set To Get Ratio: 5:1

OpenBenchmarking.orgOps/sec, More Is BetterDragonflydb 0.6Clients: 50 - Set To Get Ratio: 5:1AABC200K400K600K800K1000K780280.39769072.59769887.291. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:1

OpenBenchmarking.orgOps/sec, More Is BetterDragonflydb 0.6Clients: 200 - Set To Get Ratio: 1:1AABC160K320K480K640K800K749818.06741192.50724339.051. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:5

OpenBenchmarking.orgOps/sec, More Is BetterDragonflydb 0.6Clients: 200 - Set To Get Ratio: 1:5AABC200K400K600K800K1000K776782.14781721.28782267.031. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 200 - Set To Get Ratio: 5:1

OpenBenchmarking.orgOps/sec, More Is BetterDragonflydb 0.6Clients: 200 - Set To Get Ratio: 5:1AABC160K320K480K640K800K725143.51719579.27747087.031. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Redis

Test: GET - Parallel Connections: 50

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: GET - Parallel Connections: 50AABC70K140K210K280K350K320102.00310163.88304392.811. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: SET - Parallel Connections: 50

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: SET - Parallel Connections: 50AABC60K120K180K240K300K268060.59283503.22284041.121. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: GET - Parallel Connections: 500

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: GET - Parallel Connections: 500AABC70K140K210K280K350K309037.19320454.91314921.251. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: LPOP - Parallel Connections: 50

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: LPOP - Parallel Connections: 50AABC70K140K210K280K350K320860.00275379.31273024.721. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: SADD - Parallel Connections: 50

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: SADD - Parallel Connections: 50AABC70K140K210K280K350K310780.78297396.16309378.941. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: SET - Parallel Connections: 500

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: SET - Parallel Connections: 500AABC60K120K180K240K300K290489.44303422.44283691.091. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: GET - Parallel Connections: 1000

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: GET - Parallel Connections: 1000AABC70K140K210K280K350K330218.31321775.91322020.161. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: LPOP - Parallel Connections: 500

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: LPOP - Parallel Connections: 500AABC70K140K210K280K350K331846.09282959.69290364.781. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: LPUSH - Parallel Connections: 50

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: LPUSH - Parallel Connections: 50AABC60K120K180K240K300K271275.84278031.44288119.941. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: SADD - Parallel Connections: 500

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: SADD - Parallel Connections: 500AABC70K140K210K280K350K318993.25306787.91303208.341. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: SET - Parallel Connections: 1000

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: SET - Parallel Connections: 1000AABC70K140K210K280K350K309386.81299055.94297926.471. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: LPOP - Parallel Connections: 1000

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: LPOP - Parallel Connections: 1000AABC60K120K180K240K300K285137.56293415.81286827.971. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: LPUSH - Parallel Connections: 500

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: LPUSH - Parallel Connections: 500AABC60K120K180K240K300K284773.97291080.62293367.091. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: SADD - Parallel Connections: 1000

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: SADD - Parallel Connections: 1000AABC70K140K210K280K350K316457.03327407.59313468.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis

Test: LPUSH - Parallel Connections: 1000

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: LPUSH - Parallel Connections: 1000AABC60K120K180K240K300K299774.22301030.28298437.591. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1AABC60K120K180K240K300K273887.51287394.48279268.541. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5AABC60K120K180K240K300K284199.18286161.35291203.061. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1AABC60K120K180K240K300K262758.93274932.95285003.981. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1AABC60K120K180K240K300K268901.7281470.0280952.61. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5AABC60K120K180K240K300K284706.65281771.29290482.461. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1AABC60K120K180K240K300K265509.86270055.68268728.441. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10AABC60K120K180K240K300K289936.46286611.92295478.011. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1AABC70K140K210K280K350K284363.39297614.25309359.471. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5AABC70K140K210K280K350K281040.25314677.22281818.751. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1AABC60K120K180K240K300K286263.13292101.30290931.141. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10AABC60K120K180K240K300K291704.33292217.18296536.071. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10AABC70K140K210K280K350K305443.27288777.42302765.041. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Mobile Neural Network

Model: nasnet

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: nasnetAABC51015202521.7422.5022.32MIN: 14.28 / MAX: 46.61MIN: 12.78 / MAX: 45.64MIN: 12.95 / MAX: 40.451. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenetV3

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: mobilenetV3AABC0.7971.5942.3913.1883.9853.2353.5423.375MIN: 1.88 / MAX: 16.85MIN: 1.94 / MAX: 89.13MIN: 1.95 / MAX: 15.341. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: squeezenetv1.1

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: squeezenetv1.1AABC2468106.1605.9235.923MIN: 3.68 / MAX: 15.06MIN: 3.48 / MAX: 27.65MIN: 3.48 / MAX: 17.231. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: resnet-v2-50

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: resnet-v2-50AABC112233445546.3646.7246.20MIN: 34.31 / MAX: 69.8MIN: 33.65 / MAX: 67.19MIN: 34.21 / MAX: 74.371. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: SqueezeNetV1.0

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: SqueezeNetV1.0AABC369121510.1429.9059.914MIN: 5.71 / MAX: 27.46MIN: 5.59 / MAX: 18.27MIN: 5.64 / MAX: 29.61. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: MobileNetV2_224

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: MobileNetV2_224AABC2468106.4876.6426.573MIN: 3.77 / MAX: 17.93MIN: 3.77 / MAX: 20.69MIN: 3.89 / MAX: 16.871. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenet-v1-1.0

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: mobilenet-v1-1.0AABC2468106.1176.4976.128MIN: 3.6 / MAX: 18.96MIN: 3.52 / MAX: 24.6MIN: 3.57 / MAX: 26.691. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: inception-v3

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: inception-v3AABC132639526556.4157.5857.81MIN: 41.8 / MAX: 83.17MIN: 41.67 / MAX: 87.29MIN: 42.16 / MAX: 99.941. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

Target: CPU - Model: mobilenet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: mobilenetAABC81624324035.5735.7336.83MIN: 26.52 / MAX: 169.88MIN: 26.48 / MAX: 136.57MIN: 27.03 / MAX: 154.81. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU-v2-v2 - Model: mobilenet-v2AABC36912158.888.7611.42MIN: 6.61 / MAX: 13.35MIN: 6.34 / MAX: 15.86MIN: 6.56 / MAX: 23.541. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU-v3-v3 - Model: mobilenet-v3AABC36912159.159.007.22MIN: 5.36 / MAX: 13.49MIN: 5.09 / MAX: 15.16MIN: 5.35 / MAX: 10.781. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: shufflenet-v2

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: shufflenet-v2AABC2468106.565.296.47MIN: 3.73 / MAX: 132.3MIN: 3.67 / MAX: 11.66MIN: 3.83 / MAX: 12.771. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: mnasnet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: mnasnetAABC2468106.656.258.26MIN: 4.67 / MAX: 11.74MIN: 4.54 / MAX: 11.48MIN: 4.8 / MAX: 16.131. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: efficientnet-b0

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: efficientnet-b0AABC4812162015.0413.6318.06MIN: 10.15 / MAX: 24.19MIN: 10.05 / MAX: 91.74MIN: 10.15 / MAX: 118.381. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: blazeface

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: blazefaceAABC0.52881.05761.58642.11522.6442.352.271.87MIN: 1.34 / MAX: 9.4MIN: 1.36 / MAX: 50.49MIN: 1.36 / MAX: 5.521. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: googlenet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: googlenetAABC81624324031.8133.5031.17MIN: 23.69 / MAX: 168.51MIN: 23.68 / MAX: 263.03MIN: 23.61 / MAX: 74.721. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: vgg16

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vgg16AABC306090120150115.83117.10118.74MIN: 94.75 / MAX: 217.23MIN: 97.95 / MAX: 249.41MIN: 99.32 / MAX: 211.731. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: resnet18

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet18AABC61218243025.8127.0026.71MIN: 19.93 / MAX: 39.86MIN: 20.28 / MAX: 95.45MIN: 19.79 / MAX: 76.81. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: alexnet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: alexnetAABC51015202518.6718.7318.90MIN: 11.48 / MAX: 84.49MIN: 11.43 / MAX: 61.21MIN: 11.57 / MAX: 52.911. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: resnet50

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet50AABC122436486053.6254.3454.15MIN: 40.86 / MAX: 101.46MIN: 41.01 / MAX: 110.19MIN: 40.98 / MAX: 104.51. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: yolov4-tiny

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: yolov4-tinyAABC132639526555.7255.0052.75MIN: 39.48 / MAX: 236.11MIN: 40.15 / MAX: 131.98MIN: 39.56 / MAX: 172.971. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: squeezenet_ssd

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: squeezenet_ssdAABC102030405041.0441.8337.95MIN: 32.46 / MAX: 115.38MIN: 33.02 / MAX: 159.48MIN: 30.85 / MAX: 52.681. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: regnety_400m

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: regnety_400mAABC4812162014.8115.1417.14MIN: 10.27 / MAX: 30.69MIN: 10.3 / MAX: 24.97MIN: 10.17 / MAX: 61.121. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: vision_transformer

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vision_transformerAABC110220330440550501.82504.11504.15MIN: 444.51 / MAX: 600.73MIN: 436.79 / MAX: 669.94MIN: 445.1 / MAX: 603.161. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: FastestDet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: FastestDetAABC2468108.278.348.15MIN: 4.75 / MAX: 12.51MIN: 4.76 / MAX: 161.53MIN: 4.68 / MAX: 13.211. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: mobilenet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: mobilenetAABC4812162016.5416.7316.13MIN: 12.22 / MAX: 25.64MIN: 12.55 / MAX: 30.48MIN: 12.1 / MAX: 28.381. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2AABC1.00352.0073.01054.0145.01754.464.464.41MIN: 3.98 / MAX: 5.68MIN: 3.94 / MAX: 5.79MIN: 3.9 / MAX: 5.621. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3AABC1.15432.30863.46294.61725.77155.065.085.13MIN: 4.46 / MAX: 6.14MIN: 4.54 / MAX: 6.14MIN: 4.57 / MAX: 6.081. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: shufflenet-v2

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: shufflenet-v2AABC0.7471.4942.2412.9883.7353.323.143.07MIN: 2.71 / MAX: 4.7MIN: 2.68 / MAX: 4.58MIN: 2.65 / MAX: 3.811. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: mnasnet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: mnasnetAABC1.01032.02063.03094.04125.05154.494.394.40MIN: 4.05 / MAX: 5.67MIN: 3.92 / MAX: 5.38MIN: 3.99 / MAX: 5.831. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: efficientnet-b0

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: efficientnet-b0AABC369121511.6811.4611.36MIN: 10.58 / MAX: 12.98MIN: 10.61 / MAX: 12.77MIN: 10.56 / MAX: 12.61. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: blazeface

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: blazefaceAABC0.79431.58862.38293.17723.97153.183.531.81MIN: 1.36 / MAX: 14.97MIN: 1.37 / MAX: 11.8MIN: 1.36 / MAX: 4.211. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: googlenet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: googlenetAABC369121510.4810.6610.61MIN: 9.8 / MAX: 11.61MIN: 9.81 / MAX: 12.07MIN: 9.52 / MAX: 12.031. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: vgg16

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: vgg16AABC81624324036.5036.5436.36MIN: 34.42 / MAX: 38.28MIN: 33.88 / MAX: 38.01MIN: 34.01 / MAX: 38.391. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: resnet18

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: resnet18AABC2468108.228.108.14MIN: 7.39 / MAX: 9.4MIN: 7.03 / MAX: 9.7MIN: 7.43 / MAX: 9.291. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: alexnet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: alexnetAABC36912159.679.609.56MIN: 8.59 / MAX: 10.73MIN: 8.55 / MAX: 10.76MIN: 8.65 / MAX: 10.81. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: resnet50

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: resnet50AABC4812162017.7317.4317.27MIN: 16.67 / MAX: 19.27MIN: 16.28 / MAX: 18.92MIN: 16.38 / MAX: 18.761. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: yolov4-tiny

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: yolov4-tinyAABC51015202522.0722.7522.49MIN: 16.94 / MAX: 41.21MIN: 17.06 / MAX: 35.35MIN: 16.96 / MAX: 39.041. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: squeezenet_ssd

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: squeezenet_ssdAABC4812162015.1914.9413.28MIN: 10.9 / MAX: 25.85MIN: 10.6 / MAX: 29.44MIN: 9.87 / MAX: 29.211. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: regnety_400m

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: regnety_400mAABC2468106.396.286.61MIN: 5.6 / MAX: 7.45MIN: 5.58 / MAX: 7.97MIN: 5.64 / MAX: 7.521. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: vision_transformer

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: vision_transformerAABC2004006008001000827.09811.41816.04MIN: 718.29 / MAX: 1429.76MIN: 701.49 / MAX: 926.09MIN: 697.06 / MAX: 919.541. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: FastestDet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: FastestDetAABC1.23982.47963.71944.95926.1994.865.095.51MIN: 3.95 / MAX: 5.79MIN: 4.04 / MAX: 6.32MIN: 4.59 / MAX: 6.391. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Face Detection FP16 - Device: CPUAABC0.29030.58060.87091.16121.45151.281.291.271. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Face Detection FP16 - Device: CPUAABC70014002100280035003059.753069.093079.26MIN: 2237.09 / MAX: 3438.3MIN: 2441.25 / MAX: 3388.77MIN: 2239.51 / MAX: 3421.321. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Person Detection FP16 - Device: CPUAABC0.19130.38260.57390.76520.95650.850.850.851. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Detection FP16 - Device: CPUAABC100020003000400050004647.904608.424627.17MIN: 4000.95 / MAX: 5050.1MIN: 3672.48 / MAX: 4981.34MIN: 4012.32 / MAX: 5085.751. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Person Detection FP32 - Device: CPUAABC0.19130.38260.57390.76520.95650.850.850.851. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Detection FP32 - Device: CPUAABC100020003000400050004673.344677.124652.86MIN: 4075.37 / MAX: 5052.59MIN: 4084.68 / MAX: 5063.45MIN: 4098.86 / MAX: 5126.421. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16 - Device: CPUAABC2040608010088.7088.7188.881. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16 - Device: CPUAABC102030405045.0545.0444.96MIN: 30.2 / MAX: 92.13MIN: 28.76 / MAX: 94.57MIN: 29.76 / MAX: 94.171. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Face Detection FP16-INT8 - Device: CPUAABC0.40730.81461.22191.62922.03651.791.811.801. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Face Detection FP16-INT8 - Device: CPUAABC50010001500200025002205.092197.492207.08MIN: 1689.25 / MAX: 2583.9MIN: 1894.85 / MAX: 2523.12MIN: 1918.22 / MAX: 2596.591. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUAABC306090120150120.42121.00120.451. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUAABC81624324033.1933.0333.17MIN: 23.22 / MAX: 61.03MIN: 21.21 / MAX: 60.43MIN: 21.31 / MAX: 63.11. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16 - Device: CPUAABC306090120150129.61129.42129.321. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16 - Device: CPUAABC71421283530.8330.8830.91MIN: 22.11 / MAX: 60.46MIN: 22.18 / MAX: 55.27MIN: 19.5 / MAX: 58.311. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Machine Translation EN To DE FP16 - Device: CPUAABC369121512.8212.9112.861. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Machine Translation EN To DE FP16 - Device: CPUAABC70140210280350311.50309.61310.52MIN: 193.89 / MAX: 477.7MIN: 244.19 / MAX: 450.45MIN: 231.46 / MAX: 464.561. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUAABC4080120160200191.16190.96191.181. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUAABC102030405041.8241.8641.81MIN: 28.74 / MAX: 69.91MIN: 28.73 / MAX: 125.76MIN: 28.65 / MAX: 73.181. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUAABC4080120160200170.36171.80171.541. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUAABC61218243023.4523.2523.29MIN: 16.28 / MAX: 49.04MIN: 16.1 / MAX: 49.56MIN: 17.1 / MAX: 49.661. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUAABC60012001800240030002722.732726.802717.021. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUAABC0.65031.30061.95092.60123.25152.882.882.89MIN: 1.92 / MAX: 22.65MIN: 1.92 / MAX: 21.46MIN: 1.94 / MAX: 22.011. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUAABC90018002700360045004315.594311.484310.921. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUAABC0.40950.8191.22851.6382.04751.821.821.82MIN: 1.15 / MAX: 16.03MIN: 1.17 / MAX: 18.13MIN: 1.14 / MAX: 15.941. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared


Phoronix Test Suite v10.8.4