3900X Seppy: AMD Ryzen 9 3900X 12-Core testing with an ASUS TUF GAMING X570-PLUS (WI-FI) (2203 BIOS) motherboard and an MSI AMD Radeon RX 580 8GB graphics card on Ubuntu 20.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2209074-NE-3900XSEPP02.
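To compare another system against these numbers, the Phoronix Test Suite can typically fetch and re-run a public OpenBenchmarking.org result by its identifier, for example: phoronix-test-suite benchmark 2209074-NE-3900XSEPP02 (this invocation is assumed from general Phoronix Test Suite usage rather than taken from the result file itself).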
3900X Seppy - System Details (configurations A, B, and C are identical):
Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads)
Motherboard: ASUS TUF GAMING X570-PLUS (WI-FI) (2203 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: Samsung SSD 970 EVO Plus 250GB
Graphics: MSI AMD Radeon RX 580 8GB (1366/2000MHz)
Audio: AMD Ellesmere HDMI Audio
Monitor: MX279
Network: Realtek RTL8111/8168/8411 + Intel-AC 9260
OS: Ubuntu 20.04
Kernel: 5.11.0-rc1-phx (x86_64) 20201228
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.13
OpenGL: 4.6 Mesa 21.2.6 (LLVM 12.0.0)
Vulkan: 1.2.182
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1920x1080
Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details - NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details - Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x8701021
Graphics Details - GLAMOR - BAR1 / Visible vRAM Size: 256 MB
Java Details - OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu120.04)
Python Details - Python 3.8.10
Security Details - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
3900X Seppy unpack-linux: linux-5.19.tar.xz gravitymark: 1920 x 1080 - OpenGL gravitymark: 1920 x 1080 - Vulkan gravitymark: 1920 x 1080 - OpenGL ES unvanquished: 1920 x 1080 - High unvanquished: 1920 x 1080 - Ultra unvanquished: 1920 x 1080 - Medium xonotic: 1920 x 1080 - Low xonotic: 1920 x 1080 - High xonotic: 1920 x 1080 - Ultra xonotic: 1920 x 1080 - Ultimate blosc: blosclz shuffle blosc: blosclz bitshuffle lammps: 20k Atoms lammps: Rhodopsin Protein srsran: OFDM_Test srsran: 4G PHY_DL_Test 100 PRB MIMO 64-QAM srsran: 4G PHY_DL_Test 100 PRB MIMO 64-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 64-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 64-QAM srsran: 4G PHY_DL_Test 100 PRB MIMO 256-QAM srsran: 4G PHY_DL_Test 100 PRB MIMO 256-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 256-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 256-QAM srsran: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM srsran: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM graphics-magick: Swirl graphics-magick: Rotate graphics-magick: Sharpen graphics-magick: Enhanced graphics-magick: Resizing graphics-magick: Noise-Gaussian graphics-magick: HWB Color Space svt-av1: Preset 4 - Bosphorus 4K svt-av1: Preset 8 - Bosphorus 4K svt-av1: Preset 10 - Bosphorus 4K svt-av1: Preset 12 - Bosphorus 4K svt-av1: Preset 4 - Bosphorus 1080p svt-av1: Preset 8 - Bosphorus 1080p svt-av1: Preset 10 - Bosphorus 1080p svt-av1: Preset 12 - Bosphorus 1080p compress-7zip: Compression Rating compress-7zip: Decompression Rating build-nodejs: Time To Compile build-php: Time To Compile build-python: Default build-python: Released Build, PGO + LTO Optimized primesieve: 1e12 primesieve: 1e13 ospray-studio: 1 - 4K - 1 - Path Tracer ospray-studio: 2 - 4K - 1 - Path Tracer ospray-studio: 3 - 4K - 1 - Path Tracer ospray-studio: 1 - 4K - 16 - Path Tracer ospray-studio: 1 - 4K - 32 - Path Tracer ospray-studio: 2 - 4K - 16 - Path Tracer ospray-studio: 2 - 4K - 32 - Path Tracer ospray-studio: 3 - 4K - 16 - Path Tracer ospray-studio: 3 - 4K - 32 - Path Tracer ospray-studio: 1 - 1080p - 1 - Path Tracer ospray-studio: 2 - 1080p - 1 - Path Tracer ospray-studio: 3 - 1080p - 1 - Path Tracer ospray-studio: 1 - 1080p - 16 - Path Tracer ospray-studio: 1 - 1080p - 32 - Path Tracer ospray-studio: 2 - 1080p - 16 - Path Tracer ospray-studio: 2 - 1080p - 32 - Path Tracer ospray-studio: 3 - 1080p - 16 - Path Tracer ospray-studio: 3 - 1080p - 32 - Path Tracer build-erlang: Time To Compile aircrack-ng: node-web-tooling: spark: 1000000 - 100 - SHA-512 Benchmark Time spark: 1000000 - 100 - Calculate Pi Benchmark spark: 1000000 - 100 - Calculate Pi Benchmark Using Dataframe spark: 1000000 - 100 - Group By Test Time spark: 1000000 - 100 - Repartition Test Time spark: 1000000 - 100 - Inner Join Test Time spark: 1000000 - 100 - Broadcast Inner Join Test Time spark: 1000000 - 500 - SHA-512 Benchmark Time spark: 1000000 - 500 - Calculate Pi Benchmark spark: 1000000 - 500 - Calculate Pi Benchmark Using Dataframe spark: 1000000 - 500 - Group By Test Time spark: 1000000 - 500 - Repartition Test Time spark: 1000000 - 500 - Inner Join Test Time spark: 1000000 - 500 - Broadcast Inner Join Test Time spark: 1000000 - 1000 - SHA-512 Benchmark Time spark: 1000000 - 1000 - Calculate Pi Benchmark spark: 1000000 - 1000 - Calculate Pi Benchmark Using Dataframe spark: 1000000 - 1000 - Group By Test Time spark: 1000000 - 1000 - Repartition Test Time spark: 1000000 - 1000 - Inner Join Test Time spark: 1000000 - 1000 - Broadcast Inner Join Test Time spark: 1000000 - 2000 - SHA-512 Benchmark Time spark: 1000000 - 2000 - Calculate Pi 
Benchmark spark: 1000000 - 2000 - Calculate Pi Benchmark Using Dataframe spark: 1000000 - 2000 - Group By Test Time spark: 1000000 - 2000 - Repartition Test Time spark: 1000000 - 2000 - Inner Join Test Time spark: 1000000 - 2000 - Broadcast Inner Join Test Time spark: 10000000 - 100 - SHA-512 Benchmark Time spark: 10000000 - 100 - Calculate Pi Benchmark spark: 10000000 - 100 - Calculate Pi Benchmark Using Dataframe spark: 10000000 - 100 - Group By Test Time spark: 10000000 - 100 - Repartition Test Time spark: 10000000 - 100 - Inner Join Test Time spark: 10000000 - 100 - Broadcast Inner Join Test Time spark: 10000000 - 500 - SHA-512 Benchmark Time spark: 10000000 - 500 - Calculate Pi Benchmark spark: 10000000 - 500 - Calculate Pi Benchmark Using Dataframe spark: 10000000 - 500 - Group By Test Time spark: 10000000 - 500 - Repartition Test Time spark: 10000000 - 500 - Inner Join Test Time spark: 10000000 - 500 - Broadcast Inner Join Test Time spark: 20000000 - 100 - SHA-512 Benchmark Time spark: 20000000 - 100 - Calculate Pi Benchmark spark: 20000000 - 100 - Calculate Pi Benchmark Using Dataframe spark: 20000000 - 100 - Group By Test Time spark: 20000000 - 100 - Repartition Test Time spark: 20000000 - 100 - Inner Join Test Time spark: 20000000 - 100 - Broadcast Inner Join Test Time spark: 20000000 - 500 - SHA-512 Benchmark Time spark: 20000000 - 500 - Calculate Pi Benchmark spark: 20000000 - 500 - Calculate Pi Benchmark Using Dataframe spark: 20000000 - 500 - Group By Test Time spark: 20000000 - 500 - Repartition Test Time spark: 20000000 - 500 - Inner Join Test Time spark: 20000000 - 500 - Broadcast Inner Join Test Time spark: 40000000 - 100 - SHA-512 Benchmark Time spark: 40000000 - 100 - Calculate Pi Benchmark spark: 40000000 - 100 - Calculate Pi Benchmark Using Dataframe spark: 40000000 - 100 - Group By Test Time spark: 40000000 - 100 - Repartition Test Time spark: 40000000 - 100 - Inner Join Test Time spark: 40000000 - 100 - Broadcast Inner Join Test Time spark: 40000000 - 500 - SHA-512 Benchmark Time spark: 40000000 - 500 - Calculate Pi Benchmark spark: 40000000 - 500 - Calculate Pi Benchmark Using Dataframe spark: 40000000 - 500 - Group By Test Time spark: 40000000 - 500 - Repartition Test Time spark: 40000000 - 500 - Inner Join Test Time spark: 40000000 - 500 - Broadcast Inner Join Test Time spark: 10000000 - 1000 - SHA-512 Benchmark Time spark: 10000000 - 1000 - Calculate Pi Benchmark spark: 10000000 - 1000 - Calculate Pi Benchmark Using Dataframe spark: 10000000 - 1000 - Group By Test Time spark: 10000000 - 1000 - Repartition Test Time spark: 10000000 - 1000 - Inner Join Test Time spark: 10000000 - 1000 - Broadcast Inner Join Test Time spark: 10000000 - 2000 - SHA-512 Benchmark Time spark: 10000000 - 2000 - Calculate Pi Benchmark spark: 10000000 - 2000 - Calculate Pi Benchmark Using Dataframe spark: 10000000 - 2000 - Group By Test Time spark: 10000000 - 2000 - Repartition Test Time spark: 10000000 - 2000 - Inner Join Test Time spark: 10000000 - 2000 - Broadcast Inner Join Test Time spark: 20000000 - 1000 - SHA-512 Benchmark Time spark: 20000000 - 1000 - Calculate Pi Benchmark spark: 20000000 - 1000 - Calculate Pi Benchmark Using Dataframe spark: 20000000 - 1000 - Group By Test Time spark: 20000000 - 1000 - Repartition Test Time spark: 20000000 - 1000 - Inner Join Test Time spark: 20000000 - 1000 - Broadcast Inner Join Test Time spark: 20000000 - 2000 - SHA-512 Benchmark Time spark: 20000000 - 2000 - Calculate Pi Benchmark spark: 20000000 - 2000 - Calculate Pi Benchmark Using 
Dataframe spark: 20000000 - 2000 - Group By Test Time spark: 20000000 - 2000 - Repartition Test Time spark: 20000000 - 2000 - Inner Join Test Time spark: 20000000 - 2000 - Broadcast Inner Join Test Time spark: 40000000 - 1000 - SHA-512 Benchmark Time spark: 40000000 - 1000 - Calculate Pi Benchmark spark: 40000000 - 1000 - Calculate Pi Benchmark Using Dataframe spark: 40000000 - 1000 - Group By Test Time spark: 40000000 - 1000 - Repartition Test Time spark: 40000000 - 1000 - Inner Join Test Time spark: 40000000 - 1000 - Broadcast Inner Join Test Time spark: 40000000 - 2000 - SHA-512 Benchmark Time spark: 40000000 - 2000 - Calculate Pi Benchmark spark: 40000000 - 2000 - Calculate Pi Benchmark Using Dataframe spark: 40000000 - 2000 - Group By Test Time spark: 40000000 - 2000 - Repartition Test Time spark: 40000000 - 2000 - Inner Join Test Time spark: 40000000 - 2000 - Broadcast Inner Join Test Time dragonflydb: 50 - 1:1 dragonflydb: 50 - 1:5 dragonflydb: 50 - 5:1 dragonflydb: 200 - 1:1 dragonflydb: 200 - 1:5 dragonflydb: 200 - 5:1 redis: GET - 50 redis: SET - 50 redis: GET - 500 redis: LPOP - 50 redis: SADD - 50 redis: SET - 500 redis: GET - 1000 redis: LPOP - 500 redis: LPUSH - 50 redis: SADD - 500 redis: SET - 1000 redis: LPOP - 1000 redis: LPUSH - 500 redis: SADD - 1000 redis: LPUSH - 1000 astcenc: Fast astcenc: Medium astcenc: Thorough astcenc: Exhaustive memtier-benchmark: Redis - 50 - 1:1 memtier-benchmark: Redis - 50 - 1:5 memtier-benchmark: Redis - 50 - 5:1 memtier-benchmark: Redis - 100 - 1:1 memtier-benchmark: Redis - 100 - 1:5 memtier-benchmark: Redis - 100 - 5:1 memtier-benchmark: Redis - 50 - 1:10 memtier-benchmark: Redis - 500 - 1:1 memtier-benchmark: Redis - 500 - 1:5 memtier-benchmark: Redis - 500 - 5:1 memtier-benchmark: Redis - 100 - 1:10 memtier-benchmark: Redis - 500 - 1:10 mnn: nasnet mnn: mobilenetV3 mnn: squeezenetv1.1 mnn: resnet-v2-50 mnn: SqueezeNetV1.0 mnn: MobileNetV2_224 mnn: mobilenet-v1-1.0 mnn: inception-v3 ncnn: CPU - mobilenet ncnn: CPU-v2-v2 - mobilenet-v2 ncnn: CPU-v3-v3 - mobilenet-v3 ncnn: CPU - shufflenet-v2 ncnn: CPU - mnasnet ncnn: CPU - efficientnet-b0 ncnn: CPU - blazeface ncnn: CPU - googlenet ncnn: CPU - vgg16 ncnn: CPU - resnet18 ncnn: CPU - alexnet ncnn: CPU - resnet50 ncnn: CPU - yolov4-tiny ncnn: CPU - squeezenet_ssd ncnn: CPU - regnety_400m ncnn: CPU - vision_transformer ncnn: CPU - FastestDet ncnn: Vulkan GPU - mobilenet ncnn: Vulkan GPU-v2-v2 - mobilenet-v2 ncnn: Vulkan GPU-v3-v3 - mobilenet-v3 ncnn: Vulkan GPU - shufflenet-v2 ncnn: Vulkan GPU - mnasnet ncnn: Vulkan GPU - efficientnet-b0 ncnn: Vulkan GPU - blazeface ncnn: Vulkan GPU - googlenet ncnn: Vulkan GPU - vgg16 ncnn: Vulkan GPU - resnet18 ncnn: Vulkan GPU - alexnet ncnn: Vulkan GPU - resnet50 ncnn: Vulkan GPU - yolov4-tiny ncnn: Vulkan GPU - squeezenet_ssd ncnn: Vulkan GPU - regnety_400m ncnn: Vulkan GPU - vision_transformer ncnn: Vulkan GPU - FastestDet openvino: Face Detection FP16 - CPU openvino: Face Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP32 - CPU openvino: Person Detection FP32 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Machine Translation 
EN To DE FP16 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU rocksdb: Rand Fill rocksdb: Rand Read rocksdb: Update Rand rocksdb: Seq Fill rocksdb: Rand Fill Sync rocksdb: Read While Writing rocksdb: Read Rand Write Rand natron: Spaceship brl-cad: VGR Performance Metric A B C 7.004 69.6 66.8 50.3 320.4 314.9 328.1 536.8385533 449.7914841 412.5012465 307.8368633 16249.2 9647.4 8.291 8.285 129100000 347.3 142.7 368.1 164.6 389.2 158.6 402.7 175.4 111.1 61.6 642 687 182 291 1379 352 1249 1.845 39.723 70.773 94.642 5.339 105.216 220.139 329.43 85631 88576 469.416 59.577 18.833 309.971 14.674 185.415 10233 10532 12209 169810 333972 174430 342152 201629 397802 2583 2647 3090 41284 88798 42208 90100 49131 104130 100.819 43713.723 11.63 3.70 121.398043618 7.56 3.99 1.84 1.85 1.57 3.99 124.240940193 7.61 4.48 2.05 1.84 1.70 4.18 123.451797119 7.66 4.39 2.23 2.55 2.09 4.40 124.290781801 7.58 5.04 2.83 3.14 2.90 17.58 124.62804651 7.60 8.32 10.60 13.094310098 13.90 16.41 124.736721537 7.56 7.71 9.97 11.98 11.53 32.600203152 123.540958083 7.64 12.88 21.586612403 25.79 26.34 32.12 124.859400848 7.58 13.276474348 21.72 25.86 25.89 63.09 124.886104895 7.65 35.63 42.37 51.48 50.356000828 57.330306599 124.324862085 7.57 30.65796783 40.06 46.550539165 45.31 16.65372428 124.080298668 7.61 8.17 10.30 12.00 11.24 16.92 123.640864587 7.65 8.42 10.69 13.653470006 12.37 30.58 124.249778759 7.60 11.746586907 19.94 23.65 23.595971624 31.53 124.383605089 7.62 11.92 19.96 24.28 22.84 59.062692968 124.56455836 7.59 27.06 39.70 45.295483537 46.512187696 58.24 124.588956163 7.67 26.86 40.002159983 47.58 46.73 2571296.09 2782190.43 2467506.49 2435286.83 2594626.13 2326752.52 2300790.75 1475997.62 1963578 2348967 1886028.5 1453979.38 2013854.62 2278568 1454759.12 1818545 1586963.25 1342933.62 1412551.5 1812210.12 1383954.75 177.978 58.0397 6.9528 0.7553 1613018.51 1701125.86 1434216.13 1536206.15 1703287.1 1433374.57 1708785.94 1531801.97 1602907.29 1454839.23 1654936.28 1804067.98 15.791 2.411 4.584 32.984 6.738 4.578 5.499 29.022 14.4 4.85 4.27 5.01 4.41 6.73 2.16 15.34 49.58 12.77 9.86 20.92 26.11 21.23 14.8 171.12 6.11 7.64 3.42 4.18 2.75 3.72 11.34 1.39 9 10.73 4.57 3.04 9.75 9.85 6.96 7.21 164.54 2.6 2.69 2227.9 2 2962.89 1.97 3024.82 165.6 36.2 4.16 1440.68 353.79 16.95 319.27 18.78 29.33 204.45 411.75 29.13 353.99 16.93 9947.11 1.2 12623.71 0.94 869370 58740644 516743 1031481 4514 2559841 1894185 3.1 181386 7.26 70.1 66.9 50.3 319.2 318.8 324.9 542.6421245 452.7463575 419.5751153 310.6637414 16778.8 10159 8.277 8.18 129300000 367.4 150.9 371.7 163.9 394.7 161.5 398.5 174.1 109.8 60.9 656 765 183 293 1416 360 1389 1.851 41.265 73.88 100.512 5.415 105.802 222.512 346.25 87122 88624 468.677 59.61 18.816 311.963 14.644 185.618 10231 10495 12200 169368 334350 173423 341210 201249 397000 2581 2649 3080 40995 88008 42159 90160 49075 104140 102.61 43775.77 11.58 3.60 121.549495183 7.50 4.28 1.77 1.94 1.59 3.86 124.116566334 7.60 4.16 1.95 1.97 1.72 3.97 123.73 7.58 4.56 2.35 2.55 1.93 4.42 123.333619836 7.53 4.98 2.76 3.17 2.51 17.07 124.66 7.58 8.30 10.46 12.28 12.84 16.23 123.910028274 7.55 7.67 10.02 12.09 11.68 31.42 123.678567106 
7.56 12.19 21.62 26.79 25.13 32.07 124.331603099 7.61 12.837489815 21.46 25.93 25.90 63.12 124.119028332 7.57 35.21 42.40 50.07 50.15 57.68 123.674687082 7.62 31.58 39.717702062 46.43 45.42 16.45 124.039439873 7.67 8.05 10.01 11.61 11.422052413 16.98 123.471797282 7.63 8.37 10.67 13.112866616 11.73 29.67 124.89 7.54 11.49 20.17 22.74 22.73 31.25 125.06432936 7.61 12.088378319 20.76 24.13 22.49 58.94 123.600440537 7.55 31.55 40.00 47.06 47.26 59.38 124.383948894 7.54 25.63 40.34 47.02 45.91 2530290.29 2805869.83 2450502.69 2432286.97 2603169.2 2342872.52 1980782.62 1604856.12 1992385 1278828 1733906.5 1496450.5 2024809.38 1386256.75 1479037.62 1797623.38 1586804.12 1357810 1395701.88 1812301.38 1415616.5 178.4348 58.1473 6.9463 0.7568 1511370.28 1651913.58 1408731.93 1492923.5 1587437.46 1471052.19 1705311.59 1465097.24 1629690 1422519.5 1646498.4 1598224.25 15.645 2.404 4.573 32.935 6.811 4.596 5.471 28.637 14.43 4.85 4.26 5.03 4.4 6.73 2.16 15.15 49.18 12.59 9.89 20.94 25.89 21.17 14.78 171.81 5.82 7.66 3.41 4.16 2.76 3.73 11.42 1.4 9.13 10.67 4.55 3.09 9.75 9.88 7.04 7.16 177.97 2.56 2.71 2201.35 2.01 2922.79 2.03 2917.75 166.67 35.98 4.16 1436.03 355.77 16.86 319.98 18.74 29.86 200.8 413.92 28.98 354.88 16.89 10061.67 1.19 12536.44 0.95 856519 55704584 510169 1011586 4420 2495285 1891502 3.3 180779 6.963 70.2 66.5 50.4 319.8 324.6 324.2 544.6950305 450.6252591 422.1865418 309.0855508 16951.6 9980.1 8.267 7.983 122300000 364.6 149.4 368.7 162.9 375.2 153.6 402.7 175 109.8 60.8 658 770 183 294 1424 362 1414 1.858 40.925 73.141 100.766 5.363 106.693 223.216 339.683 86779 88089 468.809 59.525 18.975 313.754 14.7 185.908 10295 10556 12221 169662 334546 173925 341165 201634 397232 2594 2655 3088 41236 88501 42363 90458 49226 104486 103.291 43668.871 11.94 3.48 122.651776772 7.57 4.25 1.75 1.70 1.58 3.80 124.186179993 7.61 4.25 1.93 2.07 1.58 4.21 123.178075774 7.64 4.46 2.19 2.48 2.05 4.30 124.281435093 7.59 5.03 2.76 3.26 2.63 17.35 124.28 7.61 7.88 10.32 12.35 13.16 16.31 124.04 7.60 7.62 10.070926909 11.59 11.01 32.52 124.527683092 7.56 12.46 21.84 25.43 25.57 32.05 124.78 7.64 13.03 20.34 24.16 23.51 62.74 124.687130511 7.63 35.15 42.65 51.67 49.30 57.79 124.77 7.64 31.26 39.50 46.07 44.52 16.25 123.655987591 7.60 7.89 10.30 12.17 11.28 16.9481657 124.53 7.58 8.64 10.55 12.81 11.39 29.67 123.95 7.61 11.49 20.01 22.57 21.93 31.51 124.687862455 7.59 12.92 20.64 24.86 23.58 58.69 124.708485508 7.54 27.23 40.47 44.59 46.22 61.921948894 124.46 7.62 26.09 39.44 44.90 45.90 2549487.54 2763256.98 2444018.25 2423383.45 2611028.59 2329272.97 2029142.25 1574484.25 1952244.88 1380363.25 1745999.38 1568311.12 1993837.12 1390922.88 1406495 1822522.25 1545873.75 1363517.5 1401205.62 1865566.62 1397587.12 178.1141 58.036 6.913 0.7547 1490717.64 1623836.53 1439771.73 1472294.05 1620664.8 1407911.56 1690778.92 1382825.47 1665629.7 1611931.04 15.844 2.457 4.675 33.96 6.899 4.711 5.511 29.137 14.41 4.88 4.27 5.03 4.41 6.77 2.19 15.1 49.1 12.59 9.9 21.04 25.83 21.57 14.8 171.64 6.24 7.63 3.41 4.18 2.75 3.72 11.33 1.4 8.98 10.86 4.57 3.05 9.77 9.88 6.96 6.88 164.31 2.61 2.72 2202.93 2.01 2929.95 1.99 2975.33 170.02 35.26 4.17 1438.62 354.76 16.9 317.41 18.89 29.71 201.86 412.76 29.06 351.53 17.05 9907.84 1.2 12619.85 0.95 873162 55635525 520170 1024005 4504 2485811 1896932 3.2 179642 OpenBenchmarking.org
Unpacking The Linux Kernel linux-5.19.tar.xz OpenBenchmarking.org Seconds, Fewer Is Better Unpacking The Linux Kernel 5.19 linux-5.19.tar.xz A B C 2 4 6 8 10 7.004 7.260 6.963
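For context, this test times how long it takes to decompress and extract the linux-5.19.tar.xz source archive, roughly what a manual tar -xJf linux-5.19.tar.xz would do on the same drive, so the result mostly reflects single-threaded CPU decompression speed and filesystem/disk behaviour (the tar/xz equivalence is an approximation, not necessarily the test profile's exact command).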
GravityMark Resolution: 1920 x 1080 - Renderer: OpenGL OpenBenchmarking.org Frames Per Second, More Is Better GravityMark 1.70 Resolution: 1920 x 1080 - Renderer: OpenGL A B C 16 32 48 64 80 69.6 70.1 70.2
GravityMark Resolution: 1920 x 1080 - Renderer: Vulkan OpenBenchmarking.org Frames Per Second, More Is Better GravityMark 1.70 Resolution: 1920 x 1080 - Renderer: Vulkan A B C 15 30 45 60 75 66.8 66.9 66.5
GravityMark Resolution: 1920 x 1080 - Renderer: OpenGL ES OpenBenchmarking.org Frames Per Second, More Is Better GravityMark 1.70 Resolution: 1920 x 1080 - Renderer: OpenGL ES A B C 11 22 33 44 55 50.3 50.3 50.4
Unvanquished Resolution: 1920 x 1080 - Effects Quality: High OpenBenchmarking.org Frames Per Second, More Is Better Unvanquished 0.53 Resolution: 1920 x 1080 - Effects Quality: High A B C 70 140 210 280 350 320.4 319.2 319.8
Unvanquished Resolution: 1920 x 1080 - Effects Quality: Ultra OpenBenchmarking.org Frames Per Second, More Is Better Unvanquished 0.53 Resolution: 1920 x 1080 - Effects Quality: Ultra A B C 70 140 210 280 350 314.9 318.8 324.6
Unvanquished Resolution: 1920 x 1080 - Effects Quality: Medium OpenBenchmarking.org Frames Per Second, More Is Better Unvanquished 0.53 Resolution: 1920 x 1080 - Effects Quality: Medium A B C 70 140 210 280 350 328.1 324.9 324.2
Xonotic Resolution: 1920 x 1080 - Effects Quality: Low OpenBenchmarking.org Frames Per Second, More Is Better Xonotic 0.8.5 Resolution: 1920 x 1080 - Effects Quality: Low A B C 120 240 360 480 600 536.84 542.64 544.70 MIN: 359 / MAX: 1041 MIN: 350 / MAX: 1016 MIN: 365 / MAX: 1032
Xonotic Resolution: 1920 x 1080 - Effects Quality: High OpenBenchmarking.org Frames Per Second, More Is Better Xonotic 0.8.5 Resolution: 1920 x 1080 - Effects Quality: High A B C 100 200 300 400 500 449.79 452.75 450.63 MIN: 252 / MAX: 843 MIN: 305 / MAX: 841 MIN: 292 / MAX: 838
Xonotic Resolution: 1920 x 1080 - Effects Quality: Ultra OpenBenchmarking.org Frames Per Second, More Is Better Xonotic 0.8.5 Resolution: 1920 x 1080 - Effects Quality: Ultra A B C 90 180 270 360 450 412.50 419.58 422.19 MIN: 240 / MAX: 715 MIN: 261 / MAX: 756 MIN: 257 / MAX: 746
Xonotic Resolution: 1920 x 1080 - Effects Quality: Ultimate OpenBenchmarking.org Frames Per Second, More Is Better Xonotic 0.8.5 Resolution: 1920 x 1080 - Effects Quality: Ultimate A B C 70 140 210 280 350 307.84 310.66 309.09 MIN: 64 / MAX: 679 MIN: 74 / MAX: 686 MIN: 68 / MAX: 667
C-Blosc Test: blosclz shuffle OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.3 Test: blosclz shuffle A B C 4K 8K 12K 16K 20K 16249.2 16778.8 16951.6 1. (CC) gcc options: -std=gnu99 -O3 -pthread -lrt -lm
C-Blosc Test: blosclz bitshuffle OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.3 Test: blosclz bitshuffle A B C 2K 4K 6K 8K 10K 9647.4 10159.0 9980.1 1. (CC) gcc options: -std=gnu99 -O3 -pthread -lrt -lm
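The two C-Blosc entries above measure blosclz compression throughput with the byte-shuffle and bit-shuffle pre-filters respectively. As an illustration of what those settings mean (not the benchmark's own code, which drives the C library directly), the python-blosc bindings expose the same compressor and filter choices; the buffer contents and typesize below are arbitrary assumptions:

    # Compress the same buffer with C-Blosc's blosclz codec using the byte-shuffle
    # and bit-shuffle filters, mirroring the two test configurations above.
    # Illustrative sketch only; buffer size and typesize are arbitrary.
    import numpy as np
    import blosc

    data = np.arange(10_000_000, dtype=np.int64).tobytes()

    shuffled = blosc.compress(data, typesize=8, cname="blosclz", shuffle=blosc.SHUFFLE)
    bitshuffled = blosc.compress(data, typesize=8, cname="blosclz", shuffle=blosc.BITSHUFFLE)

    print(len(data), len(shuffled), len(bitshuffled))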
LAMMPS Molecular Dynamics Simulator Model: 20k Atoms OpenBenchmarking.org ns/day, More Is Better LAMMPS Molecular Dynamics Simulator 23Jun2022 Model: 20k Atoms A B C 2 4 6 8 10 8.291 8.277 8.267 1. (CXX) g++ options: -O3 -pthread -lm -ldl
LAMMPS Molecular Dynamics Simulator Model: Rhodopsin Protein OpenBenchmarking.org ns/day, More Is Better LAMMPS Molecular Dynamics Simulator 23Jun2022 Model: Rhodopsin Protein A B C 2 4 6 8 10 8.285 8.180 7.983 1. (CXX) g++ options: -O3 -pthread -lm -ldl
srsRAN Test: OFDM_Test OpenBenchmarking.org Samples / Second, More Is Better srsRAN 22.04.1 Test: OFDM_Test A B C 30M 60M 90M 120M 150M 129100000 129300000 122300000 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM A B C 80 160 240 320 400 347.3 367.4 364.6 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM A B C 30 60 90 120 150 142.7 150.9 149.4 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM A B C 80 160 240 320 400 368.1 371.7 368.7 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM A B C 40 80 120 160 200 164.6 163.9 162.9 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM A B C 90 180 270 360 450 389.2 394.7 375.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM A B C 40 80 120 160 200 158.6 161.5 153.6 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM A B C 90 180 270 360 450 402.7 398.5 402.7 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM A B C 40 80 120 160 200 175.4 174.1 175.0 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm
srsRAN Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM A B C 20 40 60 80 100 111.1 109.8 109.8 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm
srsRAN Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM A B C 14 28 42 56 70 61.6 60.9 60.8 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm
GraphicsMagick Operation: Swirl OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Swirl A B C 140 280 420 560 700 642 656 658 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick Operation: Rotate OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Rotate A B C 170 340 510 680 850 687 765 770 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick Operation: Sharpen OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Sharpen A B C 40 80 120 160 200 182 183 183 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick Operation: Enhanced OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Enhanced A B C 60 120 180 240 300 291 293 294 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick Operation: Resizing OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Resizing A B C 300 600 900 1200 1500 1379 1416 1424 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick Operation: Noise-Gaussian OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Noise-Gaussian A B C 80 160 240 320 400 352 360 362 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick Operation: HWB Color Space OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: HWB Color Space A B C 300 600 900 1200 1500 1249 1389 1414 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
SVT-AV1 Encoder Mode: Preset 4 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 4 - Input: Bosphorus 4K A B C 0.4181 0.8362 1.2543 1.6724 2.0905 1.845 1.851 1.858 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-AV1 Encoder Mode: Preset 8 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 8 - Input: Bosphorus 4K A B C 9 18 27 36 45 39.72 41.27 40.93 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-AV1 Encoder Mode: Preset 10 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 10 - Input: Bosphorus 4K A B C 16 32 48 64 80 70.77 73.88 73.14 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-AV1 Encoder Mode: Preset 12 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 12 - Input: Bosphorus 4K A B C 20 40 60 80 100 94.64 100.51 100.77 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-AV1 Encoder Mode: Preset 4 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 4 - Input: Bosphorus 1080p A B C 1.2184 2.4368 3.6552 4.8736 6.092 5.339 5.415 5.363 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-AV1 Encoder Mode: Preset 8 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 8 - Input: Bosphorus 1080p A B C 20 40 60 80 100 105.22 105.80 106.69 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-AV1 Encoder Mode: Preset 10 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 10 - Input: Bosphorus 1080p A B C 50 100 150 200 250 220.14 222.51 223.22 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-AV1 Encoder Mode: Preset 12 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 12 - Input: Bosphorus 1080p A B C 80 160 240 320 400 329.43 346.25 339.68 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
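For context on the eight SVT-AV1 entries above: the preset number is the encoder's speed/efficiency trade-off, with lower presets encoding more slowly but compressing better and higher presets encoding faster (presets span 0 through 13 in the 1.x releases), which is why the Preset 12 runs exceed 300 FPS at 1080p while Preset 4 stays near 5 FPS. Bosphorus 4K and Bosphorus 1080p are the test profile's standard input clips.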
7-Zip Compression Test: Compression Rating OpenBenchmarking.org MIPS, More Is Better 7-Zip Compression 22.01 Test: Compression Rating A B C 20K 40K 60K 80K 100K 85631 87122 86779 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
7-Zip Compression Test: Decompression Rating OpenBenchmarking.org MIPS, More Is Better 7-Zip Compression 22.01 Test: Decompression Rating A B C 20K 40K 60K 80K 100K 88576 88624 88089 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Timed Node.js Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed Node.js Compilation 18.8 Time To Compile A B C 100 200 300 400 500 469.42 468.68 468.81
Timed PHP Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed PHP Compilation 8.1.9 Time To Compile A B C 13 26 39 52 65 59.58 59.61 59.53
Timed CPython Compilation Build Configuration: Default OpenBenchmarking.org Seconds, Fewer Is Better Timed CPython Compilation 3.10.6 Build Configuration: Default A B C 5 10 15 20 25 18.83 18.82 18.98
Timed CPython Compilation Build Configuration: Released Build, PGO + LTO Optimized OpenBenchmarking.org Seconds, Fewer Is Better Timed CPython Compilation 3.10.6 Build Configuration: Released Build, PGO + LTO Optimized A B C 70 140 210 280 350 309.97 311.96 313.75
Primesieve Length: 1e12 OpenBenchmarking.org Seconds, Fewer Is Better Primesieve 8.0 Length: 1e12 A B C 4 8 12 16 20 14.67 14.64 14.70 1. (CXX) g++ options: -O3 -lpthread
Primesieve Length: 1e13 OpenBenchmarking.org Seconds, Fewer Is Better Primesieve 8.0 Length: 1e13 A B C 40 80 120 160 200 185.42 185.62 185.91 1. (CXX) g++ options: -O3 -lpthread
OSPRay Studio Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer A B C 2K 4K 6K 8K 10K 10233 10231 10295 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer A B C 2K 4K 6K 8K 10K 10532 10495 10556 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer A B C 3K 6K 9K 12K 15K 12209 12200 12221 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer A B C 40K 80K 120K 160K 200K 169810 169368 169662 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer A B C 70K 140K 210K 280K 350K 333972 334350 334546 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer A B C 40K 80K 120K 160K 200K 174430 173423 173925 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer A B C 70K 140K 210K 280K 350K 342152 341210 341165 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer A B C 40K 80K 120K 160K 200K 201629 201249 201634 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer A B C 90K 180K 270K 360K 450K 397802 397000 397232 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer A B C 600 1200 1800 2400 3000 2583 2581 2594 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer A B C 600 1200 1800 2400 3000 2647 2649 2655 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer A B C 700 1400 2100 2800 3500 3090 3080 3088 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer A B C 9K 18K 27K 36K 45K 41284 40995 41236 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer A B C 20K 40K 60K 80K 100K 88798 88008 88501 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer A B C 9K 18K 27K 36K 45K 42208 42159 42363 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer A B C 20K 40K 60K 80K 100K 90100 90160 90458 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer A B C 11K 22K 33K 44K 55K 49131 49075 49226 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OSPRay Studio Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer A B C 20K 40K 60K 80K 100K 104130 104140 104486 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
Timed Erlang/OTP Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed Erlang/OTP Compilation 25.0 Time To Compile A B C 20 40 60 80 100 100.82 102.61 103.29
Aircrack-ng OpenBenchmarking.org k/s, More Is Better Aircrack-ng 1.7 A B C 9K 18K 27K 36K 45K 43713.72 43775.77 43668.87 1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -lbsd -pthread
Node.js V8 Web Tooling Benchmark OpenBenchmarking.org runs/s, More Is Better Node.js V8 Web Tooling Benchmark A B C 3 6 9 12 15 11.63 11.58 11.94
Apache Spark Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time A B C 0.8325 1.665 2.4975 3.33 4.1625 3.70 3.60 3.48
Apache Spark Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark A B C 30 60 90 120 150 121.40 121.55 122.65
Apache Spark Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe A B C 2 4 6 8 10 7.56 7.50 7.57
Apache Spark Row Count: 1000000 - Partitions: 100 - Group By Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Group By Test Time A B C 0.963 1.926 2.889 3.852 4.815 3.99 4.28 4.25
Apache Spark Row Count: 1000000 - Partitions: 100 - Repartition Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Repartition Test Time A B C 0.414 0.828 1.242 1.656 2.07 1.84 1.77 1.75
Apache Spark Row Count: 1000000 - Partitions: 100 - Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Inner Join Test Time A B C 0.4365 0.873 1.3095 1.746 2.1825 1.85 1.94 1.70
Apache Spark Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time A B C 0.3578 0.7156 1.0734 1.4312 1.789 1.57 1.59 1.58
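The Apache Spark results repeat the same set of timed operations (SHA-512 hashing, Pi calculation, group by, repartition, inner join, broadcast inner join) across different row counts and partition counts. As a rough PySpark sketch of the kind of DataFrame work being timed (this is not the Phoronix test profile's code; the row count, partition count, and column names here are hypothetical):

    # Hypothetical PySpark sketch of the operations named in the Apache Spark results:
    # group by, repartition, inner join, and broadcast inner join over a generated
    # DataFrame. Not the Phoronix Test Suite test-profile code.
    import time
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("spark-ops-sketch").getOrCreate()

    rows, partitions = 1_000_000, 100
    df = (spark.range(rows)
               .repartition(partitions)
               .withColumn("group", F.col("id") % 100)
               .withColumn("value", F.rand()))

    def timed(label, action):
        start = time.time()
        action()  # force execution with an action such as count()
        print(f"{label}: {time.time() - start:.2f}s")

    timed("Group By Test Time", lambda: df.groupBy("group").agg(F.sum("value")).count())
    timed("Repartition Test Time", lambda: df.repartition(partitions * 2).count())
    timed("Inner Join Test Time", lambda: df.join(df.select("id"), "id").count())
    timed("Broadcast Inner Join Test Time", lambda: df.join(F.broadcast(df.select("id")), "id").count())

    spark.stop()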
Apache Spark Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time A B C 0.8978 1.7956 2.6934 3.5912 4.489 3.99 3.86 3.80
Apache Spark Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark A B C 30 60 90 120 150 124.24 124.12 124.19
Apache Spark Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe A B C 2 4 6 8 10 7.61 7.60 7.61
Apache Spark Row Count: 1000000 - Partitions: 500 - Group By Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 500 - Group By Test Time A B C 1.008 2.016 3.024 4.032 5.04 4.48 4.16 4.25
Apache Spark Row Count: 1000000 - Partitions: 500 - Repartition Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 500 - Repartition Test Time A B C 0.4613 0.9226 1.3839 1.8452 2.3065 2.05 1.95 1.93
Apache Spark Row Count: 1000000 - Partitions: 500 - Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 500 - Inner Join Test Time A B C 0.4658 0.9316 1.3974 1.8632 2.329 1.84 1.97 2.07
Apache Spark Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time A B C 0.387 0.774 1.161 1.548 1.935 1.70 1.72 1.58
Apache Spark Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark Time A B C 0.9473 1.8946 2.8419 3.7892 4.7365 4.18 3.97 4.21
Apache Spark Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark A B C 30 60 90 120 150 123.45 123.73 123.18
Apache Spark Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe A B C 2 4 6 8 10 7.66 7.58 7.64
Apache Spark Row Count: 1000000 - Partitions: 1000 - Group By Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 1000 - Group By Test Time A B C 1.026 2.052 3.078 4.104 5.13 4.39 4.56 4.46
Apache Spark Row Count: 1000000 - Partitions: 1000 - Repartition Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 1000 - Repartition Test Time A B C 0.5288 1.0576 1.5864 2.1152 2.644 2.23 2.35 2.19
Apache Spark Row Count: 1000000 - Partitions: 1000 - Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 1000 - Inner Join Test Time A B C 0.5738 1.1476 1.7214 2.2952 2.869 2.55 2.55 2.48
Apache Spark Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test Time A B C 0.4703 0.9406 1.4109 1.8812 2.3515 2.09 1.93 2.05
Apache Spark Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time A B C 0.9945 1.989 2.9835 3.978 4.9725 4.40 4.42 4.30
Apache Spark Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark A B C 30 60 90 120 150 124.29 123.33 124.28
Apache Spark Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe A B C 2 4 6 8 10 7.58 7.53 7.59
Apache Spark Row Count: 1000000 - Partitions: 2000 - Group By Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - Group By Test Time A B C 1.134 2.268 3.402 4.536 5.67 5.04 4.98 5.03
Apache Spark Row Count: 1000000 - Partitions: 2000 - Repartition Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - Repartition Test Time A B C 0.6368 1.2736 1.9104 2.5472 3.184 2.83 2.76 2.76
Apache Spark Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time A B C 0.7335 1.467 2.2005 2.934 3.6675 3.14 3.17 3.26
Apache Spark Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time A B C 0.6525 1.305 1.9575 2.61 3.2625 2.90 2.51 2.63
Apache Spark Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark Time A B C 4 8 12 16 20 17.58 17.07 17.35
Apache Spark Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark A B C 30 60 90 120 150 124.63 124.66 124.28
Apache Spark Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe A B C 2 4 6 8 10 7.60 7.58 7.61
Apache Spark Row Count: 10000000 - Partitions: 100 - Group By Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 100 - Group By Test Time A B C 2 4 6 8 10 8.32 8.30 7.88
Apache Spark Row Count: 10000000 - Partitions: 100 - Repartition Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 100 - Repartition Test Time A B C 3 6 9 12 15 10.60 10.46 10.32
Apache Spark Row Count: 10000000 - Partitions: 100 - Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 100 - Inner Join Test Time A B C 3 6 9 12 15 13.09 12.28 12.35
Apache Spark Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test Time A B C 4 8 12 16 20 13.90 12.84 13.16
Apache Spark Row Count: 10000000 - Partitions: 500 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 500 - SHA-512 Benchmark Time A B C 4 8 12 16 20 16.41 16.23 16.31
Apache Spark Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark A B C 30 60 90 120 150 124.74 123.91 124.04
Apache Spark Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe A B C 2 4 6 8 10 7.56 7.55 7.60
Apache Spark Row Count: 10000000 - Partitions: 500 - Group By Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 500 - Group By Test Time A B C 2 4 6 8 10 7.71 7.67 7.62
Apache Spark Row Count: 10000000 - Partitions: 500 - Repartition Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 500 - Repartition Test Time A B C 3 6 9 12 15 9.97 10.02 10.07
Apache Spark Row Count: 10000000 - Partitions: 500 - Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 500 - Inner Join Test Time A B C 3 6 9 12 15 11.98 12.09 11.59
Apache Spark Row Count: 10000000 - Partitions: 500 - Broadcast Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 500 - Broadcast Inner Join Test Time A B C 3 6 9 12 15 11.53 11.68 11.01
Apache Spark Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark Time A B C 8 16 24 32 40 32.60 31.42 32.52
Apache Spark Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark A B C 30 60 90 120 150 123.54 123.68 124.53
Apache Spark Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe A B C 2 4 6 8 10 7.64 7.56 7.56
Apache Spark Row Count: 20000000 - Partitions: 100 - Group By Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 100 - Group By Test Time A B C 3 6 9 12 15 12.88 12.19 12.46
Apache Spark Row Count: 20000000 - Partitions: 100 - Repartition Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 100 - Repartition Test Time A B C 5 10 15 20 25 21.59 21.62 21.84
Apache Spark Row Count: 20000000 - Partitions: 100 - Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 100 - Inner Join Test Time A B C 6 12 18 24 30 25.79 26.79 25.43
Apache Spark Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test Time A B C 6 12 18 24 30 26.34 25.13 25.57
Apache Spark Row Count: 20000000 - Partitions: 500 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 500 - SHA-512 Benchmark Time A B C 7 14 21 28 35 32.12 32.07 32.05
Apache Spark Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark A B C 30 60 90 120 150 124.86 124.33 124.78
Apache Spark Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe A B C 2 4 6 8 10 7.58 7.61 7.64
Apache Spark 3.3 - Seconds, Fewer Is Better
Row Count: 20000000 - Partitions: 500 - Group By Test Time - A: 13.28, B: 12.84, C: 13.03
Row Count: 20000000 - Partitions: 500 - Repartition Test Time - A: 21.72, B: 21.46, C: 20.34
Row Count: 20000000 - Partitions: 500 - Inner Join Test Time - A: 25.86, B: 25.93, C: 24.16
Row Count: 20000000 - Partitions: 500 - Broadcast Inner Join Test Time - A: 25.89, B: 25.90, C: 23.51
Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time - A: 63.09, B: 63.12, C: 62.74
Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark - A: 124.89, B: 124.12, C: 124.69
Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe - A: 7.65, B: 7.57, C: 7.63
Row Count: 40000000 - Partitions: 100 - Group By Test Time - A: 35.63, B: 35.21, C: 35.15
Row Count: 40000000 - Partitions: 100 - Repartition Test Time - A: 42.37, B: 42.40, C: 42.65
Row Count: 40000000 - Partitions: 100 - Inner Join Test Time - A: 51.48, B: 50.07, C: 51.67
Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time - A: 50.36, B: 50.15, C: 49.30
Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time - A: 57.33, B: 57.68, C: 57.79
Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark - A: 124.32, B: 123.67, C: 124.77
Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe - A: 7.57, B: 7.62, C: 7.64
Row Count: 40000000 - Partitions: 500 - Group By Test Time - A: 30.66, B: 31.58, C: 31.26
Row Count: 40000000 - Partitions: 500 - Repartition Test Time - A: 40.06, B: 39.72, C: 39.50
Row Count: 40000000 - Partitions: 500 - Inner Join Test Time - A: 46.55, B: 46.43, C: 46.07
Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time - A: 45.31, B: 45.42, C: 44.52
Row Count: 10000000 - Partitions: 1000 - SHA-512 Benchmark Time - A: 16.65, B: 16.45, C: 16.25
Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark - A: 124.08, B: 124.04, C: 123.66
Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe - A: 7.61, B: 7.67, C: 7.60
Row Count: 10000000 - Partitions: 1000 - Group By Test Time - A: 8.17, B: 8.05, C: 7.89
Row Count: 10000000 - Partitions: 1000 - Repartition Test Time - A: 10.30, B: 10.01, C: 10.30
Row Count: 10000000 - Partitions: 1000 - Inner Join Test Time - A: 12.00, B: 11.61, C: 12.17
Row Count: 10000000 - Partitions: 1000 - Broadcast Inner Join Test Time - A: 11.24, B: 11.42, C: 11.28
Row Count: 10000000 - Partitions: 2000 - SHA-512 Benchmark Time - A: 16.92, B: 16.98, C: 16.95
Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark - A: 123.64, B: 123.47, C: 124.53
Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe - A: 7.65, B: 7.63, C: 7.58
Row Count: 10000000 - Partitions: 2000 - Group By Test Time - A: 8.42, B: 8.37, C: 8.64
Row Count: 10000000 - Partitions: 2000 - Repartition Test Time - A: 10.69, B: 10.67, C: 10.55
Row Count: 10000000 - Partitions: 2000 - Inner Join Test Time - A: 13.65, B: 13.11, C: 12.81
Row Count: 10000000 - Partitions: 2000 - Broadcast Inner Join Test Time - A: 12.37, B: 11.73, C: 11.39
Row Count: 20000000 - Partitions: 1000 - SHA-512 Benchmark Time - A: 30.58, B: 29.67, C: 29.67
Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark - A: 124.25, B: 124.89, C: 123.95
Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe - A: 7.60, B: 7.54, C: 7.61
Row Count: 20000000 - Partitions: 1000 - Group By Test Time - A: 11.75, B: 11.49, C: 11.49
Row Count: 20000000 - Partitions: 1000 - Repartition Test Time - A: 19.94, B: 20.17, C: 20.01
Row Count: 20000000 - Partitions: 1000 - Inner Join Test Time - A: 23.65, B: 22.74, C: 22.57
Row Count: 20000000 - Partitions: 1000 - Broadcast Inner Join Test Time - A: 23.60, B: 22.73, C: 21.93
Row Count: 20000000 - Partitions: 2000 - SHA-512 Benchmark Time - A: 31.53, B: 31.25, C: 31.51
Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark - A: 124.38, B: 125.06, C: 124.69
Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe - A: 7.62, B: 7.61, C: 7.59
Row Count: 20000000 - Partitions: 2000 - Group By Test Time - A: 11.92, B: 12.09, C: 12.92
Row Count: 20000000 - Partitions: 2000 - Repartition Test Time - A: 19.96, B: 20.76, C: 20.64
Row Count: 20000000 - Partitions: 2000 - Inner Join Test Time - A: 24.28, B: 24.13, C: 24.86
Row Count: 20000000 - Partitions: 2000 - Broadcast Inner Join Test Time - A: 22.84, B: 22.49, C: 23.58
Row Count: 40000000 - Partitions: 1000 - SHA-512 Benchmark Time - A: 59.06, B: 58.94, C: 58.69
Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark - A: 124.56, B: 123.60, C: 124.71
Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe - A: 7.59, B: 7.55, C: 7.54
Row Count: 40000000 - Partitions: 1000 - Group By Test Time - A: 27.06, B: 31.55, C: 27.23
Row Count: 40000000 - Partitions: 1000 - Repartition Test Time - A: 39.70, B: 40.00, C: 40.47
Row Count: 40000000 - Partitions: 1000 - Inner Join Test Time - A: 45.30, B: 47.06, C: 44.59
Row Count: 40000000 - Partitions: 1000 - Broadcast Inner Join Test Time - A: 46.51, B: 47.26, C: 46.22
Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time - A: 58.24, B: 59.38, C: 61.92
Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark - A: 124.59, B: 124.38, C: 124.46
Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe - A: 7.67, B: 7.54, C: 7.62
Row Count: 40000000 - Partitions: 2000 - Group By Test Time - A: 26.86, B: 25.63, C: 26.09
Row Count: 40000000 - Partitions: 2000 - Repartition Test Time - A: 40.00, B: 40.34, C: 39.44
Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time - A: 47.58, B: 47.02, C: 44.90
Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time - A: 46.73, B: 45.91, C: 45.90
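For readers unfamiliar with the workloads behind the Apache Spark timings above, the Group By and Inner Join sub-tests time simple DataFrame aggregations and joins over synthetically generated rows. The PySpark sketch below illustrates what such a group-by measurement roughly looks like; the row count and partition values mirror the configuration labels above, but the key bucketing, aggregation, and timing details are placeholders and not the actual pts/spark test profile.

```python
# Illustrative sketch only: times a DataFrame group-by over synthetic rows,
# loosely mirroring the "Row Count" / "Partitions" parameters reported above.
# Not the actual pts/spark test profile.
import time

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("groupby-sketch").getOrCreate()

row_count = 20_000_000   # "Row Count" parameter
partitions = 500         # "Partitions" parameter

# Synthetic dataset: sequential ids bucketed into 1000 keys.
df = (spark.range(row_count)
      .repartition(partitions)
      .withColumn("key", F.col("id") % 1000))

start = time.time()
df.groupBy("key").agg(F.count("*").alias("n"), F.avg("id").alias("avg_id")).collect()
print(f"Group By Test Time: {time.time() - start:.2f} s")

spark.stop()
```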
Dragonflydb 0.6 - Ops/sec, More Is Better
Clients: 50 - Set To Get Ratio: 1:1 - A: 2571296.09, B: 2530290.29, C: 2549487.54
Clients: 50 - Set To Get Ratio: 1:5 - A: 2782190.43, B: 2805869.83, C: 2763256.98
Clients: 50 - Set To Get Ratio: 5:1 - A: 2467506.49, B: 2450502.69, C: 2444018.25
Clients: 200 - Set To Get Ratio: 1:1 - A: 2435286.83, B: 2432286.97, C: 2423383.45
Clients: 200 - Set To Get Ratio: 1:5 - A: 2594626.13, B: 2603169.20, C: 2611028.59
Clients: 200 - Set To Get Ratio: 5:1 - A: 2326752.52, B: 2342872.52, C: 2329272.97
Compiler flags: (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Redis 7.0.4 - Requests Per Second, More Is Better
Test: GET - Parallel Connections: 50 - A: 2300790.75, B: 1980782.62, C: 2029142.25
Test: SET - Parallel Connections: 50 - A: 1475997.62, B: 1604856.12, C: 1574484.25
Test: GET - Parallel Connections: 500 - A: 1963578.00, B: 1992385.00, C: 1952244.88
Test: LPOP - Parallel Connections: 50 - A: 2348967.00, B: 1278828.00, C: 1380363.25
Test: SADD - Parallel Connections: 50 - A: 1886028.50, B: 1733906.50, C: 1745999.38
Test: SET - Parallel Connections: 500 - A: 1453979.38, B: 1496450.50, C: 1568311.12
Test: GET - Parallel Connections: 1000 - A: 2013854.62, B: 2024809.38, C: 1993837.12
Test: LPOP - Parallel Connections: 500 - A: 2278568.00, B: 1386256.75, C: 1390922.88
Test: LPUSH - Parallel Connections: 50 - A: 1454759.12, B: 1479037.62, C: 1406495.00
Test: SADD - Parallel Connections: 500 - A: 1818545.00, B: 1797623.38, C: 1822522.25
Test: SET - Parallel Connections: 1000 - A: 1586963.25, B: 1586804.12, C: 1545873.75
Test: LPOP - Parallel Connections: 1000 - A: 1342933.62, B: 1357810.00, C: 1363517.50
Test: LPUSH - Parallel Connections: 500 - A: 1412551.50, B: 1395701.88, C: 1401205.62
Test: SADD - Parallel Connections: 1000 - A: 1812210.12, B: 1812301.38, C: 1865566.62
Test: LPUSH - Parallel Connections: 1000 - A: 1383954.75, B: 1415616.50, C: 1397587.12
Compiler flags: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
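The Redis figures above count completed commands per second while the server is driven by many parallel client connections. As a rough client-side illustration of a GET-throughput measurement, here is a small redis-py sketch; it uses a single pipelined client against a local server, and the host, key name, and request count are placeholders rather than the settings of the actual test, which uses the benchmark tooling bundled with Redis.

```python
# Illustrative sketch only: measures GET throughput from one pipelined client
# against a local Redis server. Host, key, and request count are placeholders;
# the real test uses many parallel connections via Redis' bundled benchmarking tool.
import time

import redis

r = redis.Redis(host="localhost", port=6379)
r.set("key:1", "xxx")              # seed a small value to read back

requests = 100_000
batch = 1_000

start = time.time()
pipe = r.pipeline(transaction=False)
for i in range(1, requests + 1):
    pipe.get("key:1")
    if i % batch == 0:             # flush the pipeline in batches
        pipe.execute()
elapsed = time.time() - start
print(f"{requests / elapsed:,.0f} GET requests per second (single client)")
```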
ASTC Encoder 4.0 - MT/s, More Is Better
Preset: Fast - A: 177.98, B: 178.43, C: 178.11
Preset: Medium - A: 58.04, B: 58.15, C: 58.04
Preset: Thorough - A: 6.9528, B: 6.9463, C: 6.9130
Preset: Exhaustive - A: 0.7553, B: 0.7568, C: 0.7547
Compiler flags: (CXX) g++ options: -O3 -flto -pthread
memtier_benchmark 1.4 - Ops/sec, More Is Better
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 - A: 1613018.51, B: 1511370.28, C: 1490717.64
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 - A: 1701125.86, B: 1651913.58, C: 1623836.53
Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1 - A: 1434216.13, B: 1408731.93, C: 1439771.73
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1 - A: 1536206.15, B: 1492923.50, C: 1472294.05
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 - A: 1703287.10, B: 1587437.46, C: 1620664.80
Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1 - A: 1433374.57, B: 1471052.19, C: 1407911.56
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 - A: 1708785.94, B: 1705311.59, C: 1690778.92
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1 - A: 1531801.97, B: 1465097.24 (C not reported)
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 - A: 1602907.29, B: 1629690.00 (C not reported)
Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1 - A: 1454839.23, B: 1422519.50, C: 1382825.47
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 - A: 1654936.28, B: 1646498.40, C: 1665629.70
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 - A: 1804067.98, B: 1598224.25, C: 1611931.04
Compiler flags: (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Mobile Neural Network 2.1 - ms, Fewer Is Better
Model: nasnet - A: 15.79 (min 15.69 / max 18.21), B: 15.65 (min 15.53 / max 18.8), C: 15.84 (min 15.72 / max 23.77)
Model: mobilenetV3 - A: 2.411 (min 2.39 / max 2.99), B: 2.404 (min 2.38 / max 2.46), C: 2.457 (min 2.43 / max 3.09)
Model: squeezenetv1.1 - A: 4.584 (min 4.47 / max 15.99), B: 4.573 (min 4.51 / max 5.21), C: 4.675 (min 4.62 / max 5.23)
Model: resnet-v2-50 - A: 32.98 (min 32.32 / max 61.68), B: 32.94 (min 32.3 / max 43.85), C: 33.96 (min 33.01 / max 51.01)
Model: SqueezeNetV1.0 - A: 6.738 (min 6.67 / max 7.6), B: 6.811 (min 6.74 / max 18.26), C: 6.899 (min 6.83 / max 18.3)
Model: MobileNetV2_224 - A: 4.578 (min 4.52 / max 6.35), B: 4.596 (min 4.54 / max 5.86), C: 4.711 (min 4.67 / max 5.4)
Model: mobilenet-v1-1.0 - A: 5.499 (min 5.39 / max 6.12), B: 5.471 (min 5.38 / max 7.12), C: 5.511 (min 5.4 / max 6.73)
Model: inception-v3 - A: 29.02 (min 28.5 / max 38.97), B: 28.64 (min 28.32 / max 39.21), C: 29.14 (min 28.76 / max 33.48)
Compiler flags: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
NCNN 20220729 - ms, Fewer Is Better
Target: CPU - Model: mobilenet - A: 14.40 (min 14.23 / max 14.75), B: 14.43 (min 14.26 / max 18.36), C: 14.41 (min 14.21 / max 14.9)
Target: CPU-v2-v2 - Model: mobilenet-v2 - A: 4.85 (min 4.78 / max 5.44), B: 4.85 (min 4.78 / max 5.31), C: 4.88 (min 4.8 / max 7.53)
Target: CPU-v3-v3 - Model: mobilenet-v3 - A: 4.27 (min 4.2 / max 4.36), B: 4.26 (min 4.2 / max 4.33), C: 4.27 (min 4.21 / max 4.38)
Target: CPU - Model: shufflenet-v2 - A: 5.01 (min 4.97 / max 5.08), B: 5.03 (min 4.96 / max 5.43), C: 5.03 (min 4.97 / max 5.13)
Target: CPU - Model: mnasnet - A: 4.41 (min 4.34 / max 4.47), B: 4.40 (min 4.35 / max 4.47), C: 4.41 (min 4.35 / max 4.5)
Target: CPU - Model: efficientnet-b0 - A: 6.73 (min 6.68 / max 6.81), B: 6.73 (min 6.69 / max 6.82), C: 6.77 (min 6.72 / max 6.83)
Target: CPU - Model: blazeface - A: 2.16 (min 2.13 / max 2.53), B: 2.16 (min 2.13 / max 2.25), C: 2.19 (min 2.16 / max 2.27)
Target: CPU - Model: googlenet - A: 15.34 (min 14.76 / max 15.96), B: 15.15 (min 14.39 / max 15.92), C: 15.10 (min 14.48 / max 16.21)
Target: CPU - Model: vgg16 - A: 49.58 (min 48.22 / max 56.17), B: 49.18 (min 48.5 / max 50.39), C: 49.10 (min 48.39 / max 64.22)
Target: CPU - Model: resnet18 - A: 12.77 (min 12.45 / max 13.46), B: 12.59 (min 12.28 / max 13.23), C: 12.59 (min 12.26 / max 13.18)
Target: CPU - Model: alexnet - A: 9.86 (min 9.51 / max 10.33), B: 9.89 (min 9.57 / max 10.32), C: 9.90 (min 9.57 / max 10.36)
Target: CPU - Model: resnet50 - A: 20.92 (min 20.71 / max 21.79), B: 20.94 (min 20.69 / max 21.54), C: 21.04 (min 20.78 / max 21.57)
Target: CPU - Model: yolov4-tiny - A: 26.11 (min 24.91 / max 36.36), B: 25.89 (min 24.85 / max 27.28), C: 25.83 (min 24.9 / max 27.2)
Target: CPU - Model: squeezenet_ssd - A: 21.23 (min 18.98 / max 23.87), B: 21.17 (min 19.28 / max 30.13), C: 21.57 (min 19.73 / max 22.39)
Target: CPU - Model: regnety_400m - A: 14.80 (min 14.7 / max 14.89), B: 14.78 (min 14.69 / max 15.38), C: 14.80 (min 14.71 / max 14.88)
Target: CPU - Model: vision_transformer - A: 171.12 (min 169.3 / max 188.47), B: 171.81 (min 169.37 / max 194.03), C: 171.64 (min 169.64 / max 182.63)
Target: CPU - Model: FastestDet - A: 6.11 (min 6.06 / max 6.17), B: 5.82 (min 5.76 / max 6.39), C: 6.24 (min 6.18 / max 6.83)
Target: Vulkan GPU - Model: mobilenet - A: 7.64 (min 7.51 / max 15.72), B: 7.66 (min 7.58 / max 8.86), C: 7.63 (min 7.14 / max 8.9)
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 - A: 3.42 (min 3.21 / max 3.78), B: 3.41 (min 3.23 / max 4), C: 3.41 (min 3.39 / max 3.62)
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 - A: 4.18 (min 3.94 / max 9.39), B: 4.16 (min 3.95 / max 5.68), C: 4.18 (min 3.96 / max 5.59)
Target: Vulkan GPU - Model: shufflenet-v2 - A: 2.75 (min 2.62 / max 3.47), B: 2.76 (min 2.63 / max 3.51), C: 2.75 (min 2.63 / max 3.44)
Target: Vulkan GPU - Model: mnasnet - A: 3.72 (min 3.56 / max 4.91), B: 3.73 (min 3.56 / max 4.85), C: 3.72 (min 3.56 / max 4.99)
Target: Vulkan GPU - Model: efficientnet-b0 - A: 11.34 (min 10.14 / max 20.01), B: 11.42 (min 10.14 / max 19.99), C: 11.33 (min 10.1 / max 15.12)
Target: Vulkan GPU - Model: blazeface - A: 1.39 (min 1.3 / max 1.85), B: 1.40 (min 1.31 / max 1.74), C: 1.40 (min 1.31 / max 1.83)
Target: Vulkan GPU - Model: googlenet - A: 9.00 (min 8.66 / max 12.61), B: 9.13 (min 8.65 / max 20.02), C: 8.98 (min 8.66 / max 15.78)
Target: Vulkan GPU - Model: vgg16 - A: 10.73 (min 10.41 / max 15.72), B: 10.67 (min 10.43 / max 15.84), C: 10.86 (min 10.44 / max 20)
Target: Vulkan GPU - Model: resnet18 - A: 4.57 (min 4.52 / max 4.93), B: 4.55 (min 4.2 / max 5.51), C: 4.57 (min 4.22 / max 5.38)
Target: Vulkan GPU - Model: alexnet - A: 3.04 (min 3 / max 3.58), B: 3.09 (min 3.01 / max 9.01), C: 3.05 (min 3 / max 3.55)
Target: Vulkan GPU - Model: resnet50 - A: 9.75 (min 9.48 / max 14.3), B: 9.75 (min 9.43 / max 13.99), C: 9.77 (min 9.45 / max 17.6)
Target: Vulkan GPU - Model: yolov4-tiny - A: 9.85 (min 9.77 / max 11.22), B: 9.88 (min 9.81 / max 10.1), C: 9.88 (min 9.8 / max 11.1)
Target: Vulkan GPU - Model: squeezenet_ssd - A: 6.96 (min 6.9 / max 7.54), B: 7.04 (min 6.9 / max 14.62), C: 6.96 (min 6.91 / max 7.46)
Target: Vulkan GPU - Model: regnety_400m - A: 7.21 (min 6.82 / max 19.5), B: 7.16 (min 6.84 / max 19.55), C: 6.88 (min 6.82 / max 11.84)
Target: Vulkan GPU - Model: vision_transformer - A: 164.54 (min 159.31 / max 182.17), B: 177.97 (min 164.84 / max 233.11), C: 164.31 (min 160.79 / max 188.81)
Target: Vulkan GPU - Model: FastestDet - A: 2.60 (min 2.41 / max 5.5), B: 2.56 (min 2.41 / max 3.44), C: 2.61 (min 2.43 / max 3.55)
Compiler flags: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread
OpenVINO 2022.2.dev
Model: Face Detection FP16 - Device: CPU - FPS (More Is Better) - A: 2.69, B: 2.71, C: 2.72
Model: Face Detection FP16 - Device: CPU - ms (Fewer Is Better) - A: 2227.90 (min 2121.62 / max 2325.9), B: 2201.35 (min 2028.79 / max 2338.37), C: 2202.93 (min 2053.29 / max 2323.71)
Model: Person Detection FP16 - Device: CPU - FPS (More Is Better) - A: 2.00, B: 2.01, C: 2.01
Model: Person Detection FP16 - Device: CPU - ms (Fewer Is Better) - A: 2962.89 (min 2447.28 / max 3308.08), B: 2922.79 (min 2569.56 / max 3257.06), C: 2929.95 (min 2420.13 / max 3244.53)
Model: Person Detection FP32 - Device: CPU - FPS (More Is Better) - A: 1.97, B: 2.03, C: 1.99
Model: Person Detection FP32 - Device: CPU - ms (Fewer Is Better) - A: 3024.82 (min 2726.93 / max 3236.96), B: 2917.75 (min 2552.62 / max 3254.12), C: 2975.33 (min 2439.39 / max 3287.94)
Model: Vehicle Detection FP16 - Device: CPU - FPS (More Is Better) - A: 165.60, B: 166.67, C: 170.02
Model: Vehicle Detection FP16 - Device: CPU - ms (Fewer Is Better) - A: 36.20 (min 18.48 / max 46.94), B: 35.98 (min 16.02 / max 46.38), C: 35.26 (min 10.8 / max 44.78)
Model: Face Detection FP16-INT8 - Device: CPU - FPS (More Is Better) - A: 4.16, B: 4.16, C: 4.17
Model: Face Detection FP16-INT8 - Device: CPU - ms (Fewer Is Better) - A: 1440.68 (min 1417.17 / max 1479.28), B: 1436.03 (min 1413.44 / max 1478.56), C: 1438.62 (min 1420.26 / max 1471.11)
Model: Vehicle Detection FP16-INT8 - Device: CPU - FPS (More Is Better) - A: 353.79, B: 355.77, C: 354.76
Model: Vehicle Detection FP16-INT8 - Device: CPU - ms (Fewer Is Better) - A: 16.95 (min 13.35 / max 28.89), B: 16.86 (min 11.41 / max 23.49), C: 16.90 (min 15.24 / max 23.32)
Model: Weld Porosity Detection FP16 - Device: CPU - FPS (More Is Better) - A: 319.27, B: 319.98, C: 317.41
Model: Weld Porosity Detection FP16 - Device: CPU - ms (Fewer Is Better) - A: 18.78 (min 17.78 / max 31.42), B: 18.74 (min 17.38 / max 33.43), C: 18.89 (min 9.79 / max 50.94)
Model: Machine Translation EN To DE FP16 - Device: CPU - FPS (More Is Better) - A: 29.33, B: 29.86, C: 29.71
Model: Machine Translation EN To DE FP16 - Device: CPU - ms (Fewer Is Better) - A: 204.45 (min 164.7 / max 238.21), B: 200.80 (min 163.8 / max 284.18), C: 201.86 (min 166.26 / max 232.49)
Model: Weld Porosity Detection FP16-INT8 - Device: CPU - FPS (More Is Better) - A: 411.75, B: 413.92, C: 412.76
Model: Weld Porosity Detection FP16-INT8 - Device: CPU - ms (Fewer Is Better) - A: 29.13 (min 22.66 / max 39.5), B: 28.98 (min 15.13 / max 38.61), C: 29.06 (min 14.79 / max 39.43)
Model: Person Vehicle Bike Detection FP16 - Device: CPU - FPS (More Is Better) - A: 353.99, B: 354.88, C: 351.53
Model: Person Vehicle Bike Detection FP16 - Device: CPU - ms (Fewer Is Better) - A: 16.93 (min 8.39 / max 23.2), B: 16.89 (min 8.39 / max 33.55), C: 17.05 (min 8.4 / max 31.42)
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU - FPS (More Is Better) - A: 9947.11, B: 10061.67, C: 9907.84
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU - ms (Fewer Is Better) - A: 1.20 (min 0.82 / max 16.12), B: 1.19 (min 0.69 / max 16.13), C: 1.20 (min 0.72 / max 3.23)
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU - FPS (More Is Better) - A: 12623.71, B: 12536.44, C: 12619.85
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU - ms (Fewer Is Better) - A: 0.94 (min 0.55 / max 2.84), B: 0.95 (min 0.55 / max 13.65), C: 0.95 (min 0.55 / max 3.68)
Compiler flags: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
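The OpenVINO results above pair a throughput (FPS) figure with an average-latency (ms) figure for each model. For orientation, a minimal synchronous inference loop using the OpenVINO Python API is sketched below; the model path and iteration count are placeholders, and the actual test runs the IR models named in the result titles through OpenVINO's own benchmarking tooling rather than a hand-rolled loop like this.

```python
# Illustrative sketch only: a synchronous OpenVINO inference loop on the CPU
# device, reporting FPS and average latency. "model.xml" and the iteration
# count are placeholders; the real test uses OpenVINO's benchmarking tooling.
import time

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")          # placeholder IR model path
compiled = core.compile_model(model, "CPU")
request = compiled.create_infer_request()

# Dummy input matching the model's first input shape (assumes a static shape).
shape = list(compiled.input(0).shape)
dummy = np.random.rand(*shape).astype(np.float32)

iterations = 100
start = time.time()
for _ in range(iterations):
    request.infer({0: dummy})
elapsed = time.time() - start

print(f"{iterations / elapsed:.2f} FPS")
print(f"{1000 * elapsed / iterations:.2f} ms average latency")
```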
Facebook RocksDB 7.5.3 - Op/s, More Is Better
Test: Random Fill - A: 869370, B: 856519, C: 873162
Test: Random Read - A: 58740644, B: 55704584, C: 55635525
Test: Update Random - A: 516743, B: 510169, C: 520170
Test: Sequential Fill - A: 1031481, B: 1011586, C: 1024005
Test: Random Fill Sync - A: 4514, B: 4420, C: 4504
Test: Read While Writing - A: 2559841, B: 2495285, C: 2485811
Test: Read Random Write Random - A: 1894185, B: 1891502, C: 1896932
Compiler flags: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Natron 2.4.3 - Input: Spaceship - FPS, More Is Better - A: 3.1, B: 3.3, C: 3.2
BRL-CAD 7.32.6 - VGR Performance Metric, More Is Better - A: 181386, B: 180779, C: 179642
Compiler flags: (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -pthread -ldl -lm
Phoronix Test Suite v10.8.5