3900X Seppy: AMD Ryzen 9 3900X 12-Core testing with an ASUS TUF GAMING X570-PLUS (WI-FI) (2203 BIOS) and MSI AMD Radeon RX 580 8GB on Ubuntu 20.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2209074-NE-3900XSEPP02&grr.
3900X Seppy - System Details (identical configuration for runs A, B, and C):
Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads)
Motherboard: ASUS TUF GAMING X570-PLUS (WI-FI) (2203 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: Samsung SSD 970 EVO Plus 250GB
Graphics: MSI AMD Radeon RX 580 8GB (1366/2000MHz)
Audio: AMD Ellesmere HDMI Audio
Monitor: MX279
Network: Realtek RTL8111/8168/8411 + Intel-AC 9260
OS: Ubuntu 20.04
Kernel: 5.11.0-rc1-phx (x86_64) 20201228
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.13
OpenGL: 4.6 Mesa 21.2.6 (LLVM 12.0.0)
Vulkan: 1.2.182
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details - NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details - Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x8701021
Graphics Details - GLAMOR - BAR1 / Visible vRAM Size: 256 MB
Java Details - OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu120.04)
Python Details - Python 3.8.10
Security Details - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Result Overview - 3900X Seppy (runs A, B, C): consolidated summary table spanning Apache Spark, LAMMPS, BRL-CAD, OSPRay Studio, Mobile Neural Network, NCNN, Timed Node.js/CPython/Erlang/PHP compilation, Primesieve, GravityMark, Xonotic, Unvanquished, Node.js web tooling, Redis and memtier-benchmark, Dragonflydb, ASTC Encoder, OpenVINO, RocksDB, GraphicsMagick, srsRAN, Natron, 7-Zip Compression, Aircrack-ng, C-Blosc, unpack-linux, and SVT-AV1; the per-test results are charted individually below. OpenBenchmarking.org
LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, more is better): A 8.291, B 8.277, C 8.267. 1. (CXX) g++ options: -O3 -pthread -lm -ldl

BRL-CAD 7.32.6 - VGR Performance Metric (more is better): A 181386, B 180779, C 179642. 1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -pthread -ldl -lm

Timed Node.js Compilation 18.8 - Time To Compile (Seconds, fewer is better): A 469.42, B 468.68, C 468.81

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): A 397802, B 397000, C 397232. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
Apache Spark 3.3 - Row Count: 40000000 (Seconds, fewer is better)
Partitions: 100 - Broadcast Inner Join Test Time: A 50.36, B 50.15, C 49.30
Partitions: 100 - Inner Join Test Time: A 51.48, B 50.07, C 51.67
Partitions: 100 - Repartition Test Time: A 42.37, B 42.40, C 42.65
Partitions: 100 - Group By Test Time: A 35.63, B 35.21, C 35.15
Partitions: 100 - Calculate Pi Benchmark Using Dataframe: A 7.65, B 7.57, C 7.63
Partitions: 100 - Calculate Pi Benchmark: A 124.89, B 124.12, C 124.69
Partitions: 100 - SHA-512 Benchmark Time: A 63.09, B 63.12, C 62.74
Partitions: 500 - Broadcast Inner Join Test Time: A 45.31, B 45.42, C 44.52
Partitions: 500 - Inner Join Test Time: A 46.55, B 46.43, C 46.07
Partitions: 500 - Repartition Test Time: A 40.06, B 39.72, C 39.50
Partitions: 500 - Group By Test Time: A 30.66, B 31.58, C 31.26
Partitions: 500 - Calculate Pi Benchmark Using Dataframe: A 7.57, B 7.62, C 7.64
Partitions: 500 - Calculate Pi Benchmark: A 124.32, B 123.67, C 124.77
Partitions: 500 - SHA-512 Benchmark Time: A 57.33, B 57.68, C 57.79
Partitions: 1000 - Broadcast Inner Join Test Time: A 46.51, B 47.26, C 46.22
Partitions: 1000 - Inner Join Test Time: A 45.30, B 47.06, C 44.59
Partitions: 1000 - Repartition Test Time: A 39.70, B 40.00, C 40.47
Partitions: 1000 - Group By Test Time: A 27.06, B 31.55, C 27.23
Partitions: 1000 - Calculate Pi Benchmark Using Dataframe: A 7.59, B 7.55, C 7.54
Partitions: 1000 - Calculate Pi Benchmark: A 124.56, B 123.60, C 124.71
Partitions: 1000 - SHA-512 Benchmark Time: A 59.06, B 58.94, C 58.69
Partitions: 2000 - Broadcast Inner Join Test Time: A 46.73, B 45.91, C 45.90
Partitions: 2000 - Inner Join Test Time: A 47.58, B 47.02, C 44.90
Partitions: 2000 - Repartition Test Time: A 40.00, B 40.34, C 39.44
Partitions: 2000 - Group By Test Time: A 26.86, B 25.63, C 26.09
Partitions: 2000 - Calculate Pi Benchmark Using Dataframe: A 7.67, B 7.54, C 7.62
Partitions: 2000 - Calculate Pi Benchmark: A 124.59, B 124.38, C 124.46
Partitions: 2000 - SHA-512 Benchmark Time: A 58.24, B 59.38, C 61.92
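The Spark rows above come from the pts/spark profile, which times a fixed set of DataFrame/RDD operations (broadcast and regular joins, repartition, group-by, SHA-512 hashing, and a Calculate Pi pass) while the Row Count and Partitions options scale the input size and parallelism. As rough orientation only, the following is a minimal PySpark sketch in the style of the classic Monte Carlo "Calculate Pi" job; samples and partitions here are illustrative stand-ins for those axes, not the profile's actual code.

    import random
    from operator import add
    from pyspark.sql import SparkSession

    def in_quarter_circle(_):
        # Draw one random point in the unit square; 1 if it lands inside the quarter circle.
        x, y = random.random(), random.random()
        return 1 if x * x + y * y <= 1.0 else 0

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("pi-sketch").getOrCreate()
        samples = 40000000   # illustrative stand-in for the "Row Count" axis
        partitions = 100     # illustrative stand-in for the "Partitions" axis
        hits = (spark.sparkContext
                     .parallelize(range(samples), partitions)
                     .map(in_quarter_circle)
                     .reduce(add))
        print("Pi is roughly %f" % (4.0 * hits / samples))
        spark.stop()

Run via spark-submit (or pyspark), a job of this shape spreads the sampling across the requested number of partitions and reduces the hit count back to the driver.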
OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): A 342152, B 341210, C 341165. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): A 333972, B 334350, C 334546. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, fewer is better): A 309.97, B 311.96, C 313.75
Apache Spark 3.3 - Row Count: 20000000 (Seconds, fewer is better)
Partitions: 100 - Broadcast Inner Join Test Time: A 26.34, B 25.13, C 25.57
Partitions: 100 - Inner Join Test Time: A 25.79, B 26.79, C 25.43
Partitions: 100 - Repartition Test Time: A 21.59, B 21.62, C 21.84
Partitions: 100 - Group By Test Time: A 12.88, B 12.19, C 12.46
Partitions: 100 - Calculate Pi Benchmark Using Dataframe: A 7.64, B 7.56, C 7.56
Partitions: 100 - Calculate Pi Benchmark: A 123.54, B 123.68, C 124.53
Partitions: 100 - SHA-512 Benchmark Time: A 32.60, B 31.42, C 32.52
Partitions: 500 - Broadcast Inner Join Test Time: A 25.89, B 25.90, C 23.51
Partitions: 500 - Inner Join Test Time: A 25.86, B 25.93, C 24.16
Partitions: 500 - Repartition Test Time: A 21.72, B 21.46, C 20.34
Partitions: 500 - Group By Test Time: A 13.28, B 12.84, C 13.03
Partitions: 500 - Calculate Pi Benchmark Using Dataframe: A 7.58, B 7.61, C 7.64
Partitions: 500 - Calculate Pi Benchmark: A 124.86, B 124.33, C 124.78
Partitions: 500 - SHA-512 Benchmark Time: A 32.12, B 32.07, C 32.05
Partitions: 2000 - Broadcast Inner Join Test Time: A 22.84, B 22.49, C 23.58
Partitions: 2000 - Inner Join Test Time: A 24.28, B 24.13, C 24.86
Partitions: 2000 - Repartition Test Time: A 19.96, B 20.76, C 20.64
Partitions: 2000 - Group By Test Time: A 11.92, B 12.09, C 12.92
Partitions: 2000 - Calculate Pi Benchmark Using Dataframe: A 7.62, B 7.61, C 7.59
Partitions: 2000 - Calculate Pi Benchmark: A 124.38, B 125.06, C 124.69
Partitions: 2000 - SHA-512 Benchmark Time: A 31.53, B 31.25, C 31.51
Partitions: 1000 - Broadcast Inner Join Test Time: A 23.60, B 22.73, C 21.93
Partitions: 1000 - Inner Join Test Time: A 23.65, B 22.74, C 22.57
Partitions: 1000 - Repartition Test Time: A 19.94, B 20.17, C 20.01
Partitions: 1000 - Group By Test Time: A 11.75, B 11.49, C 11.49
Partitions: 1000 - Calculate Pi Benchmark Using Dataframe: A 7.60, B 7.54, C 7.61
Partitions: 1000 - Calculate Pi Benchmark: A 124.25, B 124.89, C 123.95
Partitions: 1000 - SHA-512 Benchmark Time: A 30.58, B 29.67, C 29.67
Mobile Neural Network 2.1 (ms, fewer is better); 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Model: inception-v3: A 29.02 (min 28.5 / max 38.97), B 28.64 (min 28.32 / max 39.21), C 29.14 (min 28.76 / max 33.48)
Model: mobilenet-v1-1.0: A 5.499 (min 5.39 / max 6.12), B 5.471 (min 5.38 / max 7.12), C 5.511 (min 5.4 / max 6.73)
Model: MobileNetV2_224: A 4.578 (min 4.52 / max 6.35), B 4.596 (min 4.54 / max 5.86), C 4.711 (min 4.67 / max 5.4)
Model: SqueezeNetV1.0: A 6.738 (min 6.67 / max 7.6), B 6.811 (min 6.74 / max 18.26), C 6.899 (min 6.83 / max 18.3)
Model: resnet-v2-50: A 32.98 (min 32.32 / max 61.68), B 32.94 (min 32.3 / max 43.85), C 33.96 (min 33.01 / max 51.01)
Model: squeezenetv1.1: A 4.584 (min 4.47 / max 15.99), B 4.573 (min 4.51 / max 5.21), C 4.675 (min 4.62 / max 5.23)
Model: mobilenetV3: A 2.411 (min 2.39 / max 2.99), B 2.404 (min 2.38 / max 2.46), C 2.457 (min 2.43 / max 3.09)
Model: nasnet: A 15.79 (min 15.69 / max 18.21), B 15.65 (min 15.53 / max 18.8), C 15.84 (min 15.72 / max 23.77)

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): A 201629, B 201249, C 201634. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
Apache Spark 3.3 - Row Count: 10000000 (Seconds, fewer is better)
Partitions: 100 - Broadcast Inner Join Test Time: A 13.90, B 12.84, C 13.16
Partitions: 100 - Inner Join Test Time: A 13.09, B 12.28, C 12.35
Partitions: 100 - Repartition Test Time: A 10.60, B 10.46, C 10.32
Partitions: 100 - Group By Test Time: A 8.32, B 8.30, C 7.88
Partitions: 100 - Calculate Pi Benchmark Using Dataframe: A 7.60, B 7.58, C 7.61
Partitions: 100 - Calculate Pi Benchmark: A 124.63, B 124.66, C 124.28
Partitions: 100 - SHA-512 Benchmark Time: A 17.58, B 17.07, C 17.35
Partitions: 2000 - Broadcast Inner Join Test Time: A 12.37, B 11.73, C 11.39
Partitions: 2000 - Inner Join Test Time: A 13.65, B 13.11, C 12.81
Partitions: 2000 - Repartition Test Time: A 10.69, B 10.67, C 10.55
Partitions: 2000 - Group By Test Time: A 8.42, B 8.37, C 8.64
Partitions: 2000 - Calculate Pi Benchmark Using Dataframe: A 7.65, B 7.63, C 7.58
Partitions: 2000 - Calculate Pi Benchmark: A 123.64, B 123.47, C 124.53
Partitions: 2000 - SHA-512 Benchmark Time: A 16.92, B 16.98, C 16.95
Partitions: 1000 - Broadcast Inner Join Test Time: A 11.24, B 11.42, C 11.28
Partitions: 1000 - Inner Join Test Time: A 12.00, B 11.61, C 12.17
Partitions: 1000 - Repartition Test Time: A 10.30, B 10.01, C 10.30
Partitions: 1000 - Group By Test Time: A 8.17, B 8.05, C 7.89
Partitions: 1000 - Calculate Pi Benchmark Using Dataframe: A 7.61, B 7.67, C 7.60
Partitions: 1000 - Calculate Pi Benchmark: A 124.08, B 124.04, C 123.66
Partitions: 1000 - SHA-512 Benchmark Time: A 16.65, B 16.45, C 16.25
Partitions: 500 - Broadcast Inner Join Test Time: A 11.53, B 11.68, C 11.01
Partitions: 500 - Inner Join Test Time: A 11.98, B 12.09, C 11.59
Partitions: 500 - Repartition Test Time: A 9.97, B 10.02, C 10.07
Partitions: 500 - Group By Test Time: A 7.71, B 7.67, C 7.62
Partitions: 500 - Calculate Pi Benchmark Using Dataframe: A 7.56, B 7.55, C 7.60
Partitions: 500 - Calculate Pi Benchmark: A 124.74, B 123.91, C 124.04
Partitions: 500 - SHA-512 Benchmark Time: A 16.41, B 16.23, C 16.31
Primesieve 8.0 - Length: 1e13 (Seconds, fewer is better): A 185.42, B 185.62, C 185.91. 1. (CXX) g++ options: -O3 -lpthread
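Primesieve counts the primes up to the requested length (1e13 above) using a heavily optimized, multi-threaded segmented sieve of Eratosthenes written in C++. Purely as a conceptual illustration of the computation being timed, and not of primesieve's algorithm or performance, a naive Python sieve over a much smaller bound might look like this:

    def count_primes(limit):
        # Naive sieve of Eratosthenes: count primes strictly below `limit`.
        if limit < 2:
            return 0
        sieve = bytearray([1]) * limit
        sieve[0] = sieve[1] = 0
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                # Cross off multiples of p starting at p*p.
                sieve[p * p::p] = bytearray(len(range(p * p, limit, p)))
        return sum(sieve)

    print(count_primes(10 ** 7))  # 664579 primes below 1e7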
OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): A 174430, B 173423, C 173925. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): A 169810, B 169368, C 169662. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

GravityMark 1.70 - Resolution: 1920 x 1080 (Frames Per Second, more is better)
Renderer: OpenGL: A 69.6, B 70.1, C 70.2
Renderer: OpenGL ES: A 50.3, B 50.3, C 50.4
Renderer: Vulkan: A 66.8, B 66.9, C 66.5

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): A 49131, B 49075, C 49226. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): A 10532, B 10495, C 10556. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
Apache Spark 3.3 - Row Count: 1000000 (Seconds, fewer is better)
Partitions: 2000 - Broadcast Inner Join Test Time: A 2.90, B 2.51, C 2.63
Partitions: 2000 - Inner Join Test Time: A 3.14, B 3.17, C 3.26
Partitions: 2000 - Repartition Test Time: A 2.83, B 2.76, C 2.76
Partitions: 2000 - Group By Test Time: A 5.04, B 4.98, C 5.03
Partitions: 2000 - Calculate Pi Benchmark Using Dataframe: A 7.58, B 7.53, C 7.59
Partitions: 2000 - Calculate Pi Benchmark: A 124.29, B 123.33, C 124.28
Partitions: 2000 - SHA-512 Benchmark Time: A 4.40, B 4.42, C 4.30

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): A 10233, B 10231, C 10295. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
Apache Spark 3.3 - Row Count: 1000000 (Seconds, fewer is better)
Partitions: 1000 - Broadcast Inner Join Test Time: A 2.09, B 1.93, C 2.05
Partitions: 1000 - Inner Join Test Time: A 2.55, B 2.55, C 2.48
Partitions: 1000 - Repartition Test Time: A 2.23, B 2.35, C 2.19
Partitions: 1000 - Group By Test Time: A 4.39, B 4.56, C 4.46
Partitions: 1000 - Calculate Pi Benchmark Using Dataframe: A 7.66, B 7.58, C 7.64
Partitions: 1000 - Calculate Pi Benchmark: A 123.45, B 123.73, C 123.18
Partitions: 1000 - SHA-512 Benchmark Time: A 4.18, B 3.97, C 4.21
Partitions: 500 - Broadcast Inner Join Test Time: A 1.70, B 1.72, C 1.58
Partitions: 500 - Inner Join Test Time: A 1.84, B 1.97, C 2.07
Partitions: 500 - Repartition Test Time: A 2.05, B 1.95, C 1.93
Partitions: 500 - Group By Test Time: A 4.48, B 4.16, C 4.25
Partitions: 500 - Calculate Pi Benchmark Using Dataframe: A 7.61, B 7.60, C 7.61
Partitions: 500 - Calculate Pi Benchmark: A 124.24, B 124.12, C 124.19
Partitions: 500 - SHA-512 Benchmark Time: A 3.99, B 3.86, C 3.80
Partitions: 100 - Broadcast Inner Join Test Time: A 1.57, B 1.59, C 1.58
Partitions: 100 - Inner Join Test Time: A 1.85, B 1.94, C 1.70
Partitions: 100 - Repartition Test Time: A 1.84, B 1.77, C 1.75
Partitions: 100 - Group By Test Time: A 3.99, B 4.28, C 4.25
Partitions: 100 - Calculate Pi Benchmark Using Dataframe: A 7.56, B 7.50, C 7.57
Partitions: 100 - Calculate Pi Benchmark: A 121.40, B 121.55, C 122.65
Partitions: 100 - SHA-512 Benchmark Time: A 3.70, B 3.60, C 3.48
OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): A 42208, B 42159, C 42363. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): A 41284, B 40995, C 41236. 1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
NCNN 20220729 - Target: CPU - Model: FastestDet (ms, Fewer Is Better): A: 6.11 (min 6.06 / max 6.17), B: 5.82 (min 5.76 / max 6.39), C: 6.24 (min 6.18 / max 6.83)
NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, Fewer Is Better): A: 171.12 (min 169.3 / max 188.47), B: 171.81 (min 169.37 / max 194.03), C: 171.64 (min 169.64 / max 182.63)
NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better): A: 14.80 (min 14.7 / max 14.89), B: 14.78 (min 14.69 / max 15.38), C: 14.80 (min 14.71 / max 14.88)
NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better): A: 21.23 (min 18.98 / max 23.87), B: 21.17 (min 19.28 / max 30.13), C: 21.57 (min 19.73 / max 22.39)
NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better): A: 26.11 (min 24.91 / max 36.36), B: 25.89 (min 24.85 / max 27.28), C: 25.83 (min 24.9 / max 27.2)
NCNN 20220729 - Target: CPU - Model: resnet50 (ms, Fewer Is Better): A: 20.92 (min 20.71 / max 21.79), B: 20.94 (min 20.69 / max 21.54), C: 21.04 (min 20.78 / max 21.57)
NCNN 20220729 - Target: CPU - Model: alexnet (ms, Fewer Is Better): A: 9.86 (min 9.51 / max 10.33), B: 9.89 (min 9.57 / max 10.32), C: 9.90 (min 9.57 / max 10.36)
NCNN 20220729 - Target: CPU - Model: resnet18 (ms, Fewer Is Better): A: 12.77 (min 12.45 / max 13.46), B: 12.59 (min 12.28 / max 13.23), C: 12.59 (min 12.26 / max 13.18)
NCNN 20220729 - Target: CPU - Model: vgg16 (ms, Fewer Is Better): A: 49.58 (min 48.22 / max 56.17), B: 49.18 (min 48.5 / max 50.39), C: 49.10 (min 48.39 / max 64.22)
NCNN 20220729 - Target: CPU - Model: googlenet (ms, Fewer Is Better): A: 15.34 (min 14.76 / max 15.96), B: 15.15 (min 14.39 / max 15.92), C: 15.10 (min 14.48 / max 16.21)
NCNN 20220729 - Target: CPU - Model: blazeface (ms, Fewer Is Better): A: 2.16 (min 2.13 / max 2.53), B: 2.16 (min 2.13 / max 2.25), C: 2.19 (min 2.16 / max 2.27)
NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better): A: 6.73 (min 6.68 / max 6.81), B: 6.73 (min 6.69 / max 6.82), C: 6.77 (min 6.72 / max 6.83)
NCNN 20220729 - Target: CPU - Model: mnasnet (ms, Fewer Is Better): A: 4.41 (min 4.34 / max 4.47), B: 4.40 (min 4.35 / max 4.47), C: 4.41 (min 4.35 / max 4.5)
NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better): A: 5.01 (min 4.97 / max 5.08), B: 5.03 (min 4.96 / max 5.43), C: 5.03 (min 4.97 / max 5.13)
NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better): A: 4.27 (min 4.2 / max 4.36), B: 4.26 (min 4.2 / max 4.33), C: 4.27 (min 4.21 / max 4.38)
NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): A: 4.85 (min 4.78 / max 5.44), B: 4.85 (min 4.78 / max 5.31), C: 4.88 (min 4.8 / max 7.53)
NCNN 20220729 - Target: CPU - Model: mobilenet (ms, Fewer Is Better): A: 14.40 (min 14.23 / max 14.75), B: 14.43 (min 14.26 / max 18.36), C: 14.41 (min 14.21 / max 14.9)
Build flags for all NCNN results: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread
OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better): A: 3090, B: 3080, C: 3088
OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better): A: 2647, B: 2649, C: 2655
OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better): A: 2583, B: 2581, C: 2594
OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better): A: 104130, B: 104140, C: 104486
Xonotic 0.8.5 - Resolution: 1920 x 1080 - Effects Quality: Ultimate (Frames Per Second, More Is Better): A: 307.84 (min 64 / max 679), B: 310.66 (min 74 / max 686), C: 309.09 (min 68 / max 667)
Timed Erlang/OTP Compilation 25.0 - Time To Compile (Seconds, Fewer Is Better): A: 100.82, B: 102.61, C: 103.29
NCNN 20220729 - Target: Vulkan GPU - Model: FastestDet (ms, Fewer Is Better): A: 2.60 (min 2.41 / max 5.5), B: 2.56 (min 2.41 / max 3.44), C: 2.61 (min 2.43 / max 3.55)
NCNN 20220729 - Target: Vulkan GPU - Model: vision_transformer (ms, Fewer Is Better): A: 164.54 (min 159.31 / max 182.17), B: 177.97 (min 164.84 / max 233.11), C: 164.31 (min 160.79 / max 188.81)
NCNN 20220729 - Target: Vulkan GPU - Model: regnety_400m (ms, Fewer Is Better): A: 7.21 (min 6.82 / max 19.5), B: 7.16 (min 6.84 / max 19.55), C: 6.88 (min 6.82 / max 11.84)
NCNN 20220729 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, Fewer Is Better): A: 6.96 (min 6.9 / max 7.54), B: 7.04 (min 6.9 / max 14.62), C: 6.96 (min 6.91 / max 7.46)
NCNN 20220729 - Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better): A: 9.85 (min 9.77 / max 11.22), B: 9.88 (min 9.81 / max 10.1), C: 9.88 (min 9.8 / max 11.1)
NCNN 20220729 - Target: Vulkan GPU - Model: resnet50 (ms, Fewer Is Better): A: 9.75 (min 9.48 / max 14.3), B: 9.75 (min 9.43 / max 13.99), C: 9.77 (min 9.45 / max 17.6)
NCNN 20220729 - Target: Vulkan GPU - Model: alexnet (ms, Fewer Is Better): A: 3.04 (min 3 / max 3.58), B: 3.09 (min 3.01 / max 9.01), C: 3.05 (min 3 / max 3.55)
NCNN 20220729 - Target: Vulkan GPU - Model: resnet18 (ms, Fewer Is Better): A: 4.57 (min 4.52 / max 4.93), B: 4.55 (min 4.2 / max 5.51), C: 4.57 (min 4.22 / max 5.38)
NCNN 20220729 - Target: Vulkan GPU - Model: vgg16 (ms, Fewer Is Better): A: 10.73 (min 10.41 / max 15.72), B: 10.67 (min 10.43 / max 15.84), C: 10.86 (min 10.44 / max 20)
NCNN 20220729 - Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better): A: 9.00 (min 8.66 / max 12.61), B: 9.13 (min 8.65 / max 20.02), C: 8.98 (min 8.66 / max 15.78)
NCNN 20220729 - Target: Vulkan GPU - Model: blazeface (ms, Fewer Is Better): A: 1.39 (min 1.3 / max 1.85), B: 1.40 (min 1.31 / max 1.74), C: 1.40 (min 1.31 / max 1.83)
NCNN 20220729 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, Fewer Is Better): A: 11.34 (min 10.14 / max 20.01), B: 11.42 (min 10.14 / max 19.99), C: 11.33 (min 10.1 / max 15.12)
NCNN 20220729 - Target: Vulkan GPU - Model: mnasnet (ms, Fewer Is Better): A: 3.72 (min 3.56 / max 4.91), B: 3.73 (min 3.56 / max 4.85), C: 3.72 (min 3.56 / max 4.99)
NCNN 20220729 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, Fewer Is Better): A: 2.75 (min 2.62 / max 3.47), B: 2.76 (min 2.63 / max 3.51), C: 2.75 (min 2.63 / max 3.44)
NCNN 20220729 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better): A: 4.18 (min 3.94 / max 9.39), B: 4.16 (min 3.95 / max 5.68), C: 4.18 (min 3.96 / max 5.59)
NCNN 20220729 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): A: 3.42 (min 3.21 / max 3.78), B: 3.41 (min 3.23 / max 4), C: 3.41 (min 3.39 / max 3.62)
NCNN 20220729 - Target: Vulkan GPU - Model: mobilenet (ms, Fewer Is Better): A: 7.64 (min 7.51 / max 15.72), B: 7.66 (min 7.58 / max 8.86), C: 7.63 (min 7.14 / max 8.9)
OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better): A: 90100, B: 90160, C: 90458
OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better): A: 88798, B: 88008, C: 88501
SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 1.845, B: 1.851, C: 1.858
OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better): A: 12209, B: 12200, C: 12221
Xonotic 0.8.5 - Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, More Is Better): A: 412.50 (min 240 / max 715), B: 419.58 (min 261 / max 756), C: 422.19 (min 257 / max 746)
Xonotic 0.8.5 - Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, More Is Better): A: 449.79 (min 252 / max 843), B: 452.75 (min 305 / max 841), C: 450.63 (min 292 / max 838)
Node.js V8 Web Tooling Benchmark (runs/s, More Is Better): A: 11.63, B: 11.58, C: 11.94
Build flags for all SVT-AV1 results: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better): A: 1804067.98, B: 1598224.25, C: 1611931.04
memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): A: 1454839.23, B: 1422519.50, C: 1382825.47
memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): A: 1703287.10, B: 1587437.46, C: 1620664.80
memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better): A: 1654936.28, B: 1646498.40, C: 1665629.70
memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): A: 1433374.57, B: 1471052.19, C: 1407911.56
memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): A: 1536206.15, B: 1492923.50, C: 1472294.05
memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): A: 1602907.29, B: 1629690.00 (no C result reported)
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): A: 2594626.13, B: 2603169.20, C: 2611028.59
memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): A: 1531801.97, B: 1465097.24 (no C result reported)
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): A: 1701125.86, B: 1651913.58, C: 1623836.53
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better): A: 1708785.94, B: 1705311.59, C: 1690778.92
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): A: 1613018.51, B: 1511370.28, C: 1490717.64
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): A: 1434216.13, B: 1408731.93, C: 1439771.73
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): A: 2326752.52, B: 2342872.52, C: 2329272.97
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): A: 2435286.83, B: 2432286.97, C: 2423383.45
Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): A: 2571296.09, B: 2530290.29, C: 2549487.54
Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): A: 2467506.49, B: 2450502.69, C: 2444018.25
Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): A: 2782190.43, B: 2805869.83, C: 2763256.98
Build flags for all memtier_benchmark and Dragonflydb results: (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
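The memtier_benchmark and Dragonflydb numbers above are mixed SET/GET throughput at the listed client counts and set:get ratios. A rough redis-py sketch of that access pattern is shown below; it is single-threaded and single-connection, so it only illustrates the workload shape, not the benchmark itself, and the host, key space, ratio, and pipeline depth are assumptions.

# Rough sketch of a mixed SET:GET workload like the memtier_benchmark /
# Dragonflydb runs above. Host, key space, ratio and pipeline depth are assumptions.
import random
import time

import redis  # redis-py

r = redis.Redis(host="localhost", port=6379)

set_ratio, get_ratio = 1, 10          # e.g. the "1:10" configurations above
requests, keyspace, pipeline_depth = 100_000, 10_000, 64
value = b"x" * 32

start = time.perf_counter()
pipe = r.pipeline(transaction=False)
for i in range(requests):
    key = f"memtier-{random.randrange(keyspace)}"
    if random.random() < set_ratio / (set_ratio + get_ratio):
        pipe.set(key, value)   # write side of the ratio
    else:
        pipe.get(key)          # read side of the ratio
    if (i + 1) % pipeline_depth == 0:
        pipe.execute()
pipe.execute()
elapsed = time.perf_counter() - start
print(f"{requests / elapsed:,.0f} ops/sec (single client)")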
ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, More Is Better): A: 0.7553, B: 0.7568, C: 0.7547
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better): A: 2227.90 (min 2121.62 / max 2325.9), B: 2201.35 (min 2028.79 / max 2338.37), C: 2202.93 (min 2053.29 / max 2323.71)
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better): A: 2.69, B: 2.71, C: 2.72
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better): A: 3024.82 (min 2726.93 / max 3236.96), B: 2917.75 (min 2552.62 / max 3254.12), C: 2975.33 (min 2439.39 / max 3287.94)
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better): A: 1.97, B: 2.03, C: 1.99
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better): A: 2962.89 (min 2447.28 / max 3308.08), B: 2922.79 (min 2569.56 / max 3257.06), C: 2929.95 (min 2420.13 / max 3244.53)
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better): A: 2.00, B: 2.01, C: 2.01
Xonotic 0.8.5 - Resolution: 1920 x 1080 - Effects Quality: Low (Frames Per Second, More Is Better): A: 536.84 (min 359 / max 1041), B: 542.64 (min 350 / max 1016), C: 544.70 (min 365 / max 1032)
Facebook RocksDB 7.5.3 - Test: Random Fill Sync (Op/s, More Is Better): A: 4514, B: 4420, C: 4504
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): A: 1440.68 (min 1417.17 / max 1479.28), B: 1436.03 (min 1413.44 / max 1478.56), C: 1438.62 (min 1420.26 / max 1471.11)
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better): A: 4.16, B: 4.16, C: 4.17
OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better): A: 204.45 (min 164.7 / max 238.21), B: 200.80 (min 163.8 / max 284.18), C: 201.86 (min 166.26 / max 232.49)
OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better): A: 29.33, B: 29.86, C: 29.71
OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better): A: 16.93 (min 8.39 / max 23.2), B: 16.89 (min 8.39 / max 33.55), C: 17.05 (min 8.4 / max 31.42)
OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better): A: 353.99, B: 354.88, C: 351.53
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better): A: 18.78 (min 17.78 / max 31.42), B: 18.74 (min 17.38 / max 33.43), C: 18.89 (min 9.79 / max 50.94)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better): A: 319.27, B: 319.98, C: 317.41
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): A: 16.95 (min 13.35 / max 28.89), B: 16.86 (min 11.41 / max 23.49), C: 16.90 (min 15.24 / max 23.32)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better): A: 353.79, B: 355.77, C: 354.76
Build flags for all ASTC Encoder results: (CXX) g++ options: -O3 -flto -pthread
Build flags for all OpenVINO results: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
Build flags for all Facebook RocksDB results: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
GraphicsMagick 1.3.38 - Operation: Sharpen (Iterations Per Minute, More Is Better): A: 182, B: 183, C: 183
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better): A: 36.20 (min 18.48 / max 46.94), B: 35.98 (min 16.02 / max 46.38), C: 35.26 (min 10.8 / max 44.78)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better): A: 165.60, B: 166.67, C: 170.02
Facebook RocksDB 7.5.3 - Test: Update Random (Op/s, More Is Better): A: 516743, B: 510169, C: 520170
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): A: 29.13 (min 22.66 / max 39.5), B: 28.98 (min 15.13 / max 38.61), C: 29.06 (min 14.79 / max 39.43)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better): A: 411.75, B: 413.92, C: 412.76
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better): A: 1.20 (min 0.82 / max 16.12), B: 1.19 (min 0.69 / max 16.13), C: 1.20 (min 0.72 / max 3.23)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better): A: 9947.11, B: 10061.67, C: 9907.84
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better): A: 0.94 (min 0.55 / max 2.84), B: 0.95 (min 0.55 / max 13.65), C: 0.95 (min 0.55 / max 3.68)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better): A: 12623.71, B: 12536.44, C: 12619.85
Build flags for all GraphicsMagick results: (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
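The OpenVINO figures above pair per-inference latency (ms) with sustained throughput (FPS) for each model on the CPU plugin. As a rough illustration of how such a measurement loop looks with the 2022.x Python runtime API, here is a minimal sketch; the model path, input shape, and run count are placeholders, and this is not the harness the test profile actually runs.

# Minimal sketch of timing synchronous CPU inference with the OpenVINO 2022.x
# Python runtime. Model path, input shape and run count are placeholder assumptions.
import time

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("face-detection-fp16.xml")   # placeholder model file
compiled = core.compile_model(model, "CPU")
infer_request = compiled.create_infer_request()

# Assumed NCHW input shape for the placeholder model.
dummy = np.random.rand(1, 3, 384, 672).astype(np.float32)

# Warm up once, then time a batch of synchronous inferences.
infer_request.infer({0: dummy})
runs = 100
start = time.perf_counter()
for _ in range(runs):
    infer_request.infer({0: dummy})
elapsed = time.perf_counter() - start
print(f"avg latency: {1000 * elapsed / runs:.2f} ms, "
      f"throughput: {runs / elapsed:.2f} FPS")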
Facebook RocksDB 7.5.3 - Test: Random Fill (Op/s, More Is Better): A: 869370, B: 856519, C: 873162
GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better): A: 352, B: 360, C: 362
GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, More Is Better): A: 291, B: 293, C: 294
Facebook RocksDB 7.5.3 - Test: Read Random Write Random (Op/s, More Is Better): A: 1894185, B: 1891502, C: 1896932
Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, More Is Better): A: 58740644, B: 55704584, C: 55635525
Facebook RocksDB 7.5.3 - Test: Read While Writing (Op/s, More Is Better): A: 2559841, B: 2495285, C: 2485811
GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute, More Is Better): A: 1379, B: 1416, C: 1424
GraphicsMagick 1.3.38 - Operation: Swirl (Iterations Per Minute, More Is Better): A: 642, B: 656, C: 658
GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, More Is Better): A: 687, B: 765, C: 770
GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, More Is Better): A: 1249, B: 1389, C: 1414
Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, Fewer Is Better): A: 59.58, B: 59.61, C: 59.53
Redis 7.0.4 - Test: LPOP - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 1342933.62, B: 1357810.00, C: 1363517.50
Redis 7.0.4 - Test: LPUSH - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 1383954.75, B: 1415616.50, C: 1397587.12
Redis 7.0.4 - Test: LPUSH - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 1412551.50, B: 1395701.88, C: 1401205.62
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, More Is Better): A: 158.6, B: 161.5, C: 153.6
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, More Is Better): A: 389.2, B: 394.7, C: 375.2
Redis 7.0.4 - Test: LPUSH - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 1454759.12, B: 1479037.62, C: 1406495.00
Redis 7.0.4 - Test: SET - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 1453979.38, B: 1496450.50, C: 1568311.12
ASTC Encoder 4.0 - Preset: Thorough (MT/s, More Is Better): A: 6.9528, B: 6.9463, C: 6.9130
Redis 7.0.4 - Test: SET - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 1586963.25, B: 1586804.12, C: 1545873.75
Redis 7.0.4 - Test: SET - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 1475997.62, B: 1604856.12, C: 1574484.25
Redis 7.0.4 - Test: LPOP - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 2348967.00, B: 1278828.00, C: 1380363.25
Redis 7.0.4 - Test: LPOP - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 2278568.00, B: 1386256.75, C: 1390922.88
srsRAN 22.04.1 - Test: OFDM_Test (Samples / Second, More Is Better): A: 129100000, B: 129300000, C: 122300000
Build flags for all Redis results: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Build flags for all srsRAN results: (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm
Redis 7.0.4 - Test: SADD - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 1812210.12, B: 1812301.38, C: 1865566.62
Redis 7.0.4 - Test: SADD - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 1818545.00, B: 1797623.38, C: 1822522.25
Redis 7.0.4 - Test: SADD - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 1886028.50, B: 1733906.50, C: 1745999.38
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, More Is Better): A: 142.7, B: 150.9, C: 149.4
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, More Is Better): A: 347.3, B: 367.4, C: 364.6
Natron 2.4.3 - Input: Spaceship (FPS, More Is Better): A: 3.1, B: 3.3, C: 3.2
Redis 7.0.4 - Test: GET - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 1963578.00, B: 1992385.00, C: 1952244.88
Redis 7.0.4 - Test: GET - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 2013854.62, B: 2024809.38, C: 1993837.12
SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 5.339, B: 5.415, C: 5.363
Redis 7.0.4 - Test: GET - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 2300790.75, B: 1980782.62, C: 2029142.25
7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better): A: 88576, B: 88624, C: 88089
7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better): A: 85631, B: 87122, C: 86779
Build flags for all 7-Zip Compression results: (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Aircrack-ng 1.7 (k/s, More Is Better): A: 43713.72, B: 43775.77, C: 43668.87
Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, More Is Better): A: 320.4, B: 319.2, C: 319.8
Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, More Is Better): A: 314.9, B: 318.8, C: 324.6
Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Medium (Frames Per Second, More Is Better): A: 328.1, B: 324.9, C: 324.2
Facebook RocksDB 7.5.3 - Test: Sequential Fill (Op/s, More Is Better): A: 1031481, B: 1011586, C: 1024005
ASTC Encoder 4.0 - Preset: Fast (MT/s, More Is Better): A: 177.98, B: 178.43, C: 178.11
srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, More Is Better): A: 61.6, B: 60.9, C: 60.8
srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, More Is Better): A: 111.1, B: 109.8, C: 109.8
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, More Is Better): A: 175.4, B: 174.1, C: 175.0
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, More Is Better): A: 402.7, B: 398.5, C: 402.7
Timed CPython Compilation 3.10.6 - Build Configuration: Default (Seconds, Fewer Is Better): A: 18.83, B: 18.82, C: 18.98
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, More Is Better): A: 164.6, B: 163.9, C: 162.9
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, More Is Better): A: 368.1, B: 371.7, C: 368.7
SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 39.72, B: 41.27, C: 40.93
Primesieve 8.0 - Length: 1e12 (Seconds, Fewer Is Better): A: 14.67, B: 14.64, C: 14.70
Build flags for all Aircrack-ng results: (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -lbsd -pthread
Build flags for all Primesieve results: (CXX) g++ options: -O3 -lpthread
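The Primesieve 1e12 run measures how long it takes to count the primes below 10^12 with a heavily optimized, multi-threaded segmented sieve. As a toy illustration of the underlying idea only, here is a plain-Python sieve at a far smaller limit; it is not the benchmark's implementation and would be hopelessly slow at 1e12.

# Toy sketch of what Primesieve computes: counting primes below a limit with a
# sieve of Eratosthenes. Pure Python and a tiny limit, purely illustrative.
def count_primes(limit: int) -> int:
    if limit < 2:
        return 0
    is_prime = bytearray([1]) * limit
    is_prime[0] = is_prime[1] = 0
    p = 2
    while p * p < limit:
        if is_prime[p]:
            # Cross off every multiple of p starting at p*p.
            is_prime[p * p::p] = bytearray(len(is_prime[p * p::p]))
        p += 1
    return sum(is_prime)

print(count_primes(10**7))  # 664579 primes below ten million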
C-Blosc 2.3 - Test: blosclz bitshuffle (MB/s, More Is Better): A: 9647.4, B: 10159.0, C: 9980.1
ASTC Encoder 4.0 - Preset: Medium (MT/s, More Is Better): A: 58.04, B: 58.15, C: 58.04
SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 70.77, B: 73.88, C: 73.14
C-Blosc 2.3 - Test: blosclz shuffle (MB/s, More Is Better): A: 16249.2, B: 16778.8, C: 16951.6
Unpacking The Linux Kernel 5.19 - linux-5.19.tar.xz (Seconds, Fewer Is Better): A: 7.004, B: 7.260, C: 6.963
SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 94.64, B: 100.51, C: 100.77
SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 105.22, B: 105.80, C: 106.69
SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 220.14, B: 222.51, C: 223.22
LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, More Is Better): A: 8.285, B: 8.180, C: 7.983
SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 329.43, B: 346.25, C: 339.68
Build flags for all C-Blosc results: (CC) gcc options: -std=gnu99 -O3 -pthread -lrt -lm
Build flags for all LAMMPS results: (CXX) g++ options: -O3 -pthread -lm -ldl
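The SVT-AV1 results throughout this section sweep encoder presets 4 through 12 on the Bosphorus 4K and 1080p clips, with higher presets trading quality for speed. A small Python sketch of how such a preset sweep could be driven with the standalone SvtAv1EncApp binary is shown below; the binary name, clip filename, and flags are assumptions about that encoder, and the actual measurements above were produced by the Phoronix Test Suite, not by this script.

# Sketch of driving an SVT-AV1 preset sweep from Python. The clip name and the
# SvtAv1EncApp flags (-i input, -b output bitstream, --preset N) are assumptions.
import subprocess
import time

SOURCE = "Bosphorus_1920x1080.y4m"  # placeholder clip name

for preset in (4, 8, 10, 12):
    start = time.perf_counter()
    subprocess.run(
        ["SvtAv1EncApp", "-i", SOURCE,
         "-b", f"bosphorus_preset{preset}.ivf",
         "--preset", str(preset)],
        check=True,
    )
    print(f"preset {preset}: {time.perf_counter() - start:.1f} s")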
Phoronix Test Suite v10.8.5