5950x.822.bukubench

AMD Ryzen 9 5950X 16-Core testing with a MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.I0 BIOS) and eVGA NVIDIA GeForce GTX 1060 6GB on Gentoo 2.8 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command:

    phoronix-test-suite benchmark 2208253-QUAR-5950X8259

Run Management

Result Identifier: 3 x 1000GB Samsung SSD 980 PRO 1TB
Date: August 24 2022
Test Run Duration: 1 Day, 5 Hours, 17 Minutes


5950x.822.bukubench Suite 1.0.0 (System)
Test suite extracted from 5950x.822.bukubench.

pts/openssl-3.0.1 sha256 Algorithm: SHA256
pts/sysbench-1.1.0 cpu run Test: CPU
pts/dav1d-1.12.0 -i chimera_8b_1080p.ivf Video Input: Chimera 1080p
pts/dav1d-1.12.0 -i summer_nature_4k.ivf Video Input: Summer Nature 4K
pts/dav1d-1.12.0 -i summer_nature_1080p.ivf Video Input: Summer Nature 1080p
pts/dav1d-1.12.0 -i chimera_10b_1080p.ivf Video Input: Chimera 1080p 10-bit
pts/svt-av1-2.6.0 --preset 4 -n 160 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 4 - Input: Bosphorus 4K
pts/svt-av1-2.6.0 --preset 8 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 8 - Input: Bosphorus 4K
pts/svt-av1-2.6.0 --preset 10 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 10 - Input: Bosphorus 4K
pts/svt-av1-2.6.0 --preset 12 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 12 - Input: Bosphorus 4K
pts/svt-av1-2.6.0 --preset 4 -n 160 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 4 - Input: Bosphorus 1080p
pts/svt-av1-2.6.0 --preset 8 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 8 - Input: Bosphorus 1080p
pts/svt-av1-2.6.0 --preset 10 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 10 - Input: Bosphorus 1080p
pts/svt-av1-2.6.0 --preset 12 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 12 - Input: Bosphorus 1080p
pts/svt-hevc-1.2.1 -encMode 1 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Tuning: 1 - Input: Bosphorus 4K
pts/svt-hevc-1.2.1 -encMode 7 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Tuning: 7 - Input: Bosphorus 4K
pts/svt-hevc-1.2.1 -encMode 10 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Tuning: 10 - Input: Bosphorus 4K
pts/svt-hevc-1.2.1 -encMode 1 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Tuning: 1 - Input: Bosphorus 1080p
pts/svt-hevc-1.2.1 -encMode 7 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Tuning: 7 - Input: Bosphorus 1080p
pts/svt-hevc-1.2.1 -encMode 10 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Tuning: 10 - Input: Bosphorus 1080p
pts/x264-2.7.0 Bosphorus_3840x2160.y4m Video Input: Bosphorus 4K
pts/x264-2.7.0 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m Video Input: Bosphorus 1080p
pts/x265-1.3.0 Bosphorus_3840x2160.y4m Video Input: Bosphorus 4K
pts/x265-1.3.0 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m Video Input: Bosphorus 1080p
pts/graphics-magick-2.1.0 -swirl 90 Operation: Swirl
pts/graphics-magick-2.1.0 -rotate 90 Operation: Rotate
pts/graphics-magick-2.1.0 -sharpen 0x2.0 Operation: Sharpen
pts/graphics-magick-2.1.0 -enhance Operation: Enhanced
pts/graphics-magick-2.1.0 -resize 50% Operation: Resizing
pts/graphics-magick-2.1.0 -operator all Noise-Gaussian 30% Operation: Noise-Gaussian
pts/graphics-magick-2.1.0 -colorspace HWB Operation: HWB Color Space
pts/coremark-1.0.1 CoreMark Size 666 - Iterations Per Second
pts/aircrack-ng-1.3.0
pts/ramspeed-1.4.3 ADD -b 3 Type: Add - Benchmark: Integer
pts/ramspeed-1.4.3 COPY -b 3 Type: Copy - Benchmark: Integer
pts/ramspeed-1.4.3 SCALE -b 3 Type: Scale - Benchmark: Integer
pts/ramspeed-1.4.3 TRIAD -b 3 Type: Triad - Benchmark: Integer
pts/ramspeed-1.4.3 AVERAGE -b 3 Type: Average - Benchmark: Integer
pts/ramspeed-1.4.3 ADD -b 6 Type: Add - Benchmark: Floating Point
pts/ramspeed-1.4.3 COPY -b 6 Type: Copy - Benchmark: Floating Point
pts/ramspeed-1.4.3 SCALE -b 6 Type: Scale - Benchmark: Floating Point
pts/ramspeed-1.4.3 TRIAD -b 6 Type: Triad - Benchmark: Floating Point
pts/ramspeed-1.4.3 AVERAGE -b 6 Type: Average - Benchmark: Floating Point
pts/blosc-1.2.0 blosclz shuffle Test: blosclz shuffle
pts/blosc-1.2.0 blosclz bitshuffle Test: blosclz bitshuffle
pts/cachebench-1.1.2 -r Test: Read
pts/cachebench-1.1.2 -w Test: Write
pts/cachebench-1.1.2 -b Test: Read / Modify / Write
pts/compress-lz4-1.0.0 -b1 -e1 Compression Level: 1 - Compression Speed
pts/compress-lz4-1.0.0 -b1 -e1 Compression Level: 1 - Decompression Speed
pts/compress-lz4-1.0.0 -b3 -e3 Compression Level: 3 - Compression Speed
pts/compress-lz4-1.0.0 -b3 -e3 Compression Level: 3 - Decompression Speed
pts/compress-lz4-1.0.0 -b9 -e9 Compression Level: 9 - Compression Speed
pts/compress-lz4-1.0.0 -b9 -e9 Compression Level: 9 - Decompression Speed
pts/compress-zstd-1.5.0 -b3 Compression Level: 3 - Compression Speed
pts/compress-zstd-1.5.0 -b8 Compression Level: 8 - Compression Speed
pts/compress-zstd-1.5.0 -b8 Compression Level: 8 - Decompression Speed
pts/compress-zstd-1.5.0 -b19 Compression Level: 19 - Compression Speed
pts/compress-zstd-1.5.0 -b19 Compression Level: 19 - Decompression Speed
pts/compress-zstd-1.5.0 -b3 --long Compression Level: 3, Long Mode - Compression Speed
pts/compress-zstd-1.5.0 -b3 --long Compression Level: 3, Long Mode - Decompression Speed
pts/compress-zstd-1.5.0 -b8 --long Compression Level: 8, Long Mode - Compression Speed
pts/compress-zstd-1.5.0 -b8 --long Compression Level: 8, Long Mode - Decompression Speed
pts/compress-zstd-1.5.0 -b19 --long Compression Level: 19, Long Mode - Compression Speed
pts/compress-zstd-1.5.0 -b19 --long Compression Level: 19, Long Mode - Decompression Speed
pts/botan-1.6.0 KASUMI Test: KASUMI
pts/botan-1.6.0 KASUMI Test: KASUMI - Decrypt
pts/botan-1.6.0 AES-256 Test: AES-256
pts/botan-1.6.0 AES-256 Test: AES-256 - Decrypt
pts/botan-1.6.0 Twofish Test: Twofish
pts/botan-1.6.0 Twofish Test: Twofish - Decrypt
pts/botan-1.6.0 Blowfish Test: Blowfish
pts/botan-1.6.0 Blowfish Test: Blowfish - Decrypt
pts/botan-1.6.0 CAST-256 Test: CAST-256
pts/botan-1.6.0 CAST-256 Test: CAST-256 - Decrypt
pts/botan-1.6.0 ChaCha20Poly1305 Test: ChaCha20Poly1305
pts/botan-1.6.0 ChaCha20Poly1305 Test: ChaCha20Poly1305 - Decrypt
pts/sysbench-1.1.0 memory run Test: RAM / Memory
pts/astcenc-1.4.0 -fast -repeats 120 Preset: Fast
pts/astcenc-1.4.0 -medium -repeats 20 Preset: Medium
pts/astcenc-1.4.0 -thorough -repeats 10 Preset: Thorough
pts/astcenc-1.4.0 -exhaustive -repeats 2 Preset: Exhaustive
pts/crafty-1.4.5 Elapsed Time
pts/stockfish-1.4.0 Total Time
pts/asmfish-1.1.2 1024 Hash Memory, 26 Depth
pts/swet-1.0.0 Average
pts/dragonflydb-1.0.0 -c 50 --ratio=1:1 Clients: 50 - Set To Get Ratio: 1:1
pts/dragonflydb-1.0.0 -c 50 --ratio=1:5 Clients: 50 - Set To Get Ratio: 1:5
pts/dragonflydb-1.0.0 -c 50 --ratio=5:1 Clients: 50 - Set To Get Ratio: 5:1
pts/dragonflydb-1.0.0 -c 200 --ratio=1:1 Clients: 200 - Set To Get Ratio: 1:1
pts/dragonflydb-1.0.0 -c 200 --ratio=1:5 Clients: 200 - Set To Get Ratio: 1:5
pts/dragonflydb-1.0.0 -c 200 --ratio=5:1 Clients: 200 - Set To Get Ratio: 5:1
pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns 50 --clients 100 Test: PUT - Connections: 50 - Clients: 100
pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns 100 --clients 100 Test: PUT - Connections: 100 - Clients: 100
pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns 50 --clients 1000 Test: PUT - Connections: 50 - Clients: 1000
pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns 500 --clients 100 Test: PUT - Connections: 500 - Clients: 100
pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns 100 --clients 1000 Test: PUT - Connections: 100 - Clients: 1000
pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns 500 --clients 1000 Test: PUT - Connections: 500 - Clients: 1000
pts/etcd-1.0.0 range KEY --total=4000000 --conns 50 --clients 100 Test: RANGE - Connections: 50 - Clients: 100
pts/etcd-1.0.0 range KEY --total=4000000 --conns 100 --clients 100 Test: RANGE - Connections: 100 - Clients: 100
pts/etcd-1.0.0 range KEY --total=4000000 --conns 50 --clients 1000 Test: RANGE - Connections: 50 - Clients: 1000
pts/etcd-1.0.0 range KEY --total=4000000 --conns 500 --clients 100 Test: RANGE - Connections: 500 - Clients: 100
pts/etcd-1.0.0 range KEY --total=4000000 --conns 100 --clients 1000 Test: RANGE - Connections: 100 - Clients: 1000
pts/etcd-1.0.0 range KEY --total=4000000 --conns 500 --clients 1000 Test: RANGE - Connections: 500 - Clients: 1000
pts/phpbench-1.1.6 PHP Benchmark Suite
pts/openssl-3.0.1 rsa4096 Algorithm: RSA4096
pts/postmark-1.1.2 Disk Transaction Performance
pts/blake2-1.2.2
pts/pybench-1.1.3 Total For Average Test Times
pts/renaissance-1.3.0 dotty Test: Scala Dotty
pts/renaissance-1.3.0 dec-tree Test: Random Forest
pts/renaissance-1.3.0 movie-lens Test: ALS Movie Lens
pts/renaissance-1.3.0 als Test: Apache Spark ALS
pts/renaissance-1.3.0 naive-bayes Test: Apache Spark Bayes
pts/renaissance-1.3.0 reactors Test: Savina Reactors.IO
pts/renaissance-1.3.0 page-rank Test: Apache Spark PageRank
pts/renaissance-1.3.0 finagle-http Test: Finagle HTTP Requests
pts/renaissance-1.3.0 db-shootout Test: In-Memory Database Shootout
pts/renaissance-1.3.0 akka-uct Test: Akka Unbalanced Cobwebbed Tree
pts/renaissance-1.3.0 future-genetic Test: Genetic Algorithm Using Jenetics + Futures
pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns 50 --clients 100 Test: PUT - Connections: 50 - Clients: 100 - Average Latency
pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns 100 --clients 100 Test: PUT - Connections: 100 - Clients: 100 - Average Latency
pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns 50 --clients 1000 Test: PUT - Connections: 50 - Clients: 1000 - Average Latency
pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns 500 --clients 100 Test: PUT - Connections: 500 - Clients: 100 - Average Latency
pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns 100 --clients 1000 Test: PUT - Connections: 100 - Clients: 1000 - Average Latency
pts/etcd-1.0.0 put --total=4000000 --val-size=256 --key-size=8 --conns 500 --clients 1000 Test: PUT - Connections: 500 - Clients: 1000 - Average Latency
pts/etcd-1.0.0 range KEY --total=4000000 --conns 50 --clients 100 Test: RANGE - Connections: 50 - Clients: 100 - Average Latency
pts/etcd-1.0.0 range KEY --total=4000000 --conns 100 --clients 100 Test: RANGE - Connections: 100 - Clients: 100 - Average Latency
pts/etcd-1.0.0 range KEY --total=4000000 --conns 50 --clients 1000 Test: RANGE - Connections: 50 - Clients: 1000 - Average Latency
pts/etcd-1.0.0 range KEY --total=4000000 --conns 500 --clients 100 Test: RANGE - Connections: 500 - Clients: 100 - Average Latency
pts/etcd-1.0.0 range KEY --total=4000000 --conns 100 --clients 1000 Test: RANGE - Connections: 100 - Clients: 1000 - Average Latency
pts/etcd-1.0.0 range KEY --total=4000000 --conns 500 --clients 1000 Test: RANGE - Connections: 500 - Clients: 1000 - Average Latency
pts/glibc-bench-1.7.2 bench-cos Benchmark: cos
pts/glibc-bench-1.7.2 bench-exp Benchmark: exp
pts/glibc-bench-1.7.2 bench-ffs Benchmark: ffs
pts/glibc-bench-1.7.2 bench-sin Benchmark: sin
pts/glibc-bench-1.7.2 bench-log2 Benchmark: log2
pts/glibc-bench-1.7.2 bench-modf Benchmark: modf
pts/glibc-bench-1.7.2 bench-sinh Benchmark: sinh
pts/glibc-bench-1.7.2 bench-sqrt Benchmark: sqrt
pts/glibc-bench-1.7.2 bench-tanh Benchmark: tanh
pts/glibc-bench-1.7.2 bench-asinh Benchmark: asinh
pts/glibc-bench-1.7.2 bench-atanh Benchmark: atanh
pts/glibc-bench-1.7.2 bench-ffsll Benchmark: ffsll
pts/glibc-bench-1.7.2 bench-sincos Benchmark: sincos
pts/glibc-bench-1.7.2 bench-pthread_once Benchmark: pthread_once
pts/build-gem5-1.0.0 Time To Compile
pts/build-linux-kernel-1.14.0 defconfig Build: defconfig
pts/build-nodejs-1.1.1 Time To Compile
pts/compress-pbzip2-1.6.0 FreeBSD-13.0-RELEASE-amd64-memstick.img Compression
pts/primesieve-1.9.0 1e12 Length: 1e12
pts/primesieve-1.9.0 1e13 Length: 1e13
pts/rust-prime-1.0.0 Prime Number Test To 200,000,000
pts/smallpt-1.2.1 Global Illumination Renderer; 128 Samples
pts/y-cruncher-1.1.0 Calculating 500M Pi Digits
pts/compress-gzip-1.2.0 Linux Source Tree Archiving To .tar.gz
pts/compress-xz-1.1.0 Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9
pts/ffmpeg-2.8.0 H.264 HD To NTSC DV
pts/m-queens-1.1.0 Time To Solve
pts/n-queens-1.2.1 Elapsed Time
pts/tachyon-1.3.0 Total Time
pts/spark-1.0.0 -r 1000000 -p 100 Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 1000000 -p 100 Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 1000000 -p 100 Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 1000000 -p 100 Row Count: 1000000 - Partitions: 100 - Group By Test Time
pts/spark-1.0.0 -r 1000000 -p 100 Row Count: 1000000 - Partitions: 100 - Repartition Test Time
pts/spark-1.0.0 -r 1000000 -p 100 Row Count: 1000000 - Partitions: 100 - Inner Join Test Time
pts/spark-1.0.0 -r 1000000 -p 100 Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 1000000 -p 500 Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 1000000 -p 500 Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 1000000 -p 500 Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 1000000 -p 500 Row Count: 1000000 - Partitions: 500 - Group By Test Time
pts/spark-1.0.0 -r 1000000 -p 500 Row Count: 1000000 - Partitions: 500 - Repartition Test Time
pts/spark-1.0.0 -r 1000000 -p 500 Row Count: 1000000 - Partitions: 500 - Inner Join Test Time
pts/spark-1.0.0 -r 1000000 -p 500 Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 1000000 -p 1000 Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 1000000 -p 1000 Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 1000000 -p 1000 Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 1000000 -p 1000 Row Count: 1000000 - Partitions: 1000 - Group By Test Time
pts/spark-1.0.0 -r 1000000 -p 1000 Row Count: 1000000 - Partitions: 1000 - Repartition Test Time
pts/spark-1.0.0 -r 1000000 -p 1000 Row Count: 1000000 - Partitions: 1000 - Inner Join Test Time
pts/spark-1.0.0 -r 1000000 -p 1000 Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 1000000 -p 2000 Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 1000000 -p 2000 Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 1000000 -p 2000 Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 1000000 -p 2000 Row Count: 1000000 - Partitions: 2000 - Group By Test Time
pts/spark-1.0.0 -r 1000000 -p 2000 Row Count: 1000000 - Partitions: 2000 - Repartition Test Time
pts/spark-1.0.0 -r 1000000 -p 2000 Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time
pts/spark-1.0.0 -r 1000000 -p 2000 Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 10000000 -p 100 Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 10000000 -p 100 Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 10000000 -p 100 Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 10000000 -p 100 Row Count: 10000000 - Partitions: 100 - Group By Test Time
pts/spark-1.0.0 -r 10000000 -p 100 Row Count: 10000000 - Partitions: 100 - Repartition Test Time
pts/spark-1.0.0 -r 10000000 -p 100 Row Count: 10000000 - Partitions: 100 - Inner Join Test Time
pts/spark-1.0.0 -r 10000000 -p 100 Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 10000000 -p 500 Row Count: 10000000 - Partitions: 500 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 10000000 -p 500 Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 10000000 -p 500 Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 10000000 -p 500 Row Count: 10000000 - Partitions: 500 - Group By Test Time
pts/spark-1.0.0 -r 10000000 -p 500 Row Count: 10000000 - Partitions: 500 - Repartition Test Time
pts/spark-1.0.0 -r 10000000 -p 500 Row Count: 10000000 - Partitions: 500 - Inner Join Test Time
pts/spark-1.0.0 -r 10000000 -p 500 Row Count: 10000000 - Partitions: 500 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 20000000 -p 100 Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 20000000 -p 100 Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 20000000 -p 100 Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 20000000 -p 100 Row Count: 20000000 - Partitions: 100 - Group By Test Time
pts/spark-1.0.0 -r 20000000 -p 100 Row Count: 20000000 - Partitions: 100 - Repartition Test Time
pts/spark-1.0.0 -r 20000000 -p 100 Row Count: 20000000 - Partitions: 100 - Inner Join Test Time
pts/spark-1.0.0 -r 20000000 -p 100 Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 20000000 -p 500 Row Count: 20000000 - Partitions: 500 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 20000000 -p 500 Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 20000000 -p 500 Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 20000000 -p 500 Row Count: 20000000 - Partitions: 500 - Group By Test Time
pts/spark-1.0.0 -r 20000000 -p 500 Row Count: 20000000 - Partitions: 500 - Repartition Test Time
pts/spark-1.0.0 -r 20000000 -p 500 Row Count: 20000000 - Partitions: 500 - Inner Join Test Time
pts/spark-1.0.0 -r 20000000 -p 500 Row Count: 20000000 - Partitions: 500 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 40000000 -p 100 Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 40000000 -p 100 Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 40000000 -p 100 Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 40000000 -p 100 Row Count: 40000000 - Partitions: 100 - Group By Test Time
pts/spark-1.0.0 -r 40000000 -p 100 Row Count: 40000000 - Partitions: 100 - Repartition Test Time
pts/spark-1.0.0 -r 40000000 -p 100 Row Count: 40000000 - Partitions: 100 - Inner Join Test Time
pts/spark-1.0.0 -r 40000000 -p 100 Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 40000000 -p 500 Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 40000000 -p 500 Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 40000000 -p 500 Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 40000000 -p 500 Row Count: 40000000 - Partitions: 500 - Group By Test Time
pts/spark-1.0.0 -r 40000000 -p 500 Row Count: 40000000 - Partitions: 500 - Repartition Test Time
pts/spark-1.0.0 -r 40000000 -p 500 Row Count: 40000000 - Partitions: 500 - Inner Join Test Time
pts/spark-1.0.0 -r 40000000 -p 500 Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 10000000 -p 1000 Row Count: 10000000 - Partitions: 1000 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 10000000 -p 1000 Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 10000000 -p 1000 Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 10000000 -p 1000 Row Count: 10000000 - Partitions: 1000 - Group By Test Time
pts/spark-1.0.0 -r 10000000 -p 1000 Row Count: 10000000 - Partitions: 1000 - Repartition Test Time
pts/spark-1.0.0 -r 10000000 -p 1000 Row Count: 10000000 - Partitions: 1000 - Inner Join Test Time
pts/spark-1.0.0 -r 10000000 -p 1000 Row Count: 10000000 - Partitions: 1000 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 10000000 -p 2000 Row Count: 10000000 - Partitions: 2000 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 10000000 -p 2000 Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 10000000 -p 2000 Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 10000000 -p 2000 Row Count: 10000000 - Partitions: 2000 - Group By Test Time
pts/spark-1.0.0 -r 10000000 -p 2000 Row Count: 10000000 - Partitions: 2000 - Repartition Test Time
pts/spark-1.0.0 -r 10000000 -p 2000 Row Count: 10000000 - Partitions: 2000 - Inner Join Test Time
pts/spark-1.0.0 -r 10000000 -p 2000 Row Count: 10000000 - Partitions: 2000 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 20000000 -p 1000 Row Count: 20000000 - Partitions: 1000 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 20000000 -p 1000 Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 20000000 -p 1000 Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 20000000 -p 1000 Row Count: 20000000 - Partitions: 1000 - Group By Test Time
pts/spark-1.0.0 -r 20000000 -p 1000 Row Count: 20000000 - Partitions: 1000 - Repartition Test Time
pts/spark-1.0.0 -r 20000000 -p 1000 Row Count: 20000000 - Partitions: 1000 - Inner Join Test Time
pts/spark-1.0.0 -r 20000000 -p 1000 Row Count: 20000000 - Partitions: 1000 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 20000000 -p 2000 Row Count: 20000000 - Partitions: 2000 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 20000000 -p 2000 Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 20000000 -p 2000 Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 20000000 -p 2000 Row Count: 20000000 - Partitions: 2000 - Group By Test Time
pts/spark-1.0.0 -r 20000000 -p 2000 Row Count: 20000000 - Partitions: 2000 - Repartition Test Time
pts/spark-1.0.0 -r 20000000 -p 2000 Row Count: 20000000 - Partitions: 2000 - Inner Join Test Time
pts/spark-1.0.0 -r 20000000 -p 2000 Row Count: 20000000 - Partitions: 2000 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 40000000 -p 1000 Row Count: 40000000 - Partitions: 1000 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 40000000 -p 1000 Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 40000000 -p 1000 Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 40000000 -p 1000 Row Count: 40000000 - Partitions: 1000 - Group By Test Time
pts/spark-1.0.0 -r 40000000 -p 1000 Row Count: 40000000 - Partitions: 1000 - Repartition Test Time
pts/spark-1.0.0 -r 40000000 -p 1000 Row Count: 40000000 - Partitions: 1000 - Inner Join Test Time
pts/spark-1.0.0 -r 40000000 -p 1000 Row Count: 40000000 - Partitions: 1000 - Broadcast Inner Join Test Time
pts/spark-1.0.0 -r 40000000 -p 2000 Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time
pts/spark-1.0.0 -r 40000000 -p 2000 Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark
pts/spark-1.0.0 -r 40000000 -p 2000 Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe
pts/spark-1.0.0 -r 40000000 -p 2000 Row Count: 40000000 - Partitions: 2000 - Group By Test Time
pts/spark-1.0.0 -r 40000000 -p 2000 Row Count: 40000000 - Partitions: 2000 - Repartition Test Time
pts/spark-1.0.0 -r 40000000 -p 2000 Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time
pts/spark-1.0.0 -r 40000000 -p 2000 Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time
pts/sqlite-speedtest-1.0.1 Timed Time - Size 1,000
pts/blender-3.2.0 -b ../bmw27_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU Blend File: BMW27 - Compute: CPU-Only
pts/blender-3.2.0 -b ../classroom_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU Blend File: Classroom - Compute: CPU-Only
pts/blender-3.2.0 -b ../fishy_cat_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU Blend File: Fishy Cat - Compute: CPU-Only
pts/blender-3.2.0 -b ../pavillon_barcelone_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU Blend File: Pabellon Barcelona - Compute: CPU-Only
pts/compress-rar-1.2.0 Linux Source Tree Archiving To RAR
pts/pmbench-1.0.2 -j 1 -r 50 Concurrent Worker Threads: 1 - Read-Write Ratio: 50%
pts/pmbench-1.0.2 -j 2 -r 50 Concurrent Worker Threads: 2 - Read-Write Ratio: 50%
pts/pmbench-1.0.2 -j 4 -r 50 Concurrent Worker Threads: 4 - Read-Write Ratio: 50%
pts/pmbench-1.0.2 -j 8 -r 50 Concurrent Worker Threads: 8 - Read-Write Ratio: 50%
pts/pmbench-1.0.2 -j 16 -r 50 Concurrent Worker Threads: 16 - Read-Write Ratio: 50%
pts/pmbench-1.0.2 -j 32 -r 50 Concurrent Worker Threads: 32 - Read-Write Ratio: 50%
pts/pmbench-1.0.2 -j 1 -r 100 Concurrent Worker Threads: 1 - Read-Write Ratio: 100% Reads
pts/pmbench-1.0.2 -j 2 -r 100 Concurrent Worker Threads: 2 - Read-Write Ratio: 100% Reads
pts/pmbench-1.0.2 -j 4 -r 100 Concurrent Worker Threads: 4 - Read-Write Ratio: 100% Reads
pts/pmbench-1.0.2 -j 8 -r 100 Concurrent Worker Threads: 8 - Read-Write Ratio: 100% Reads
pts/pmbench-1.0.2 -j 16 -r 100 Concurrent Worker Threads: 16 - Read-Write Ratio: 100% Reads
pts/pmbench-1.0.2 -j 32 -r 100 Concurrent Worker Threads: 32 - Read-Write Ratio: 100% Reads
pts/pmbench-1.0.2 -j 1 -r 0 Concurrent Worker Threads: 1 - Read-Write Ratio: 100% Writes
pts/pmbench-1.0.2 -j 2 -r 0 Concurrent Worker Threads: 2 - Read-Write Ratio: 100% Writes
pts/pmbench-1.0.2 -j 4 -r 0 Concurrent Worker Threads: 4 - Read-Write Ratio: 100% Writes
pts/pmbench-1.0.2 -j 8 -r 0 Concurrent Worker Threads: 8 - Read-Write Ratio: 100% Writes
pts/pmbench-1.0.2 -j 16 -r 0 Concurrent Worker Threads: 16 - Read-Write Ratio: 100% Writes
pts/pmbench-1.0.2 -j 32 -r 0 Concurrent Worker Threads: 32 - Read-Write Ratio: 100% Writes
pts/pmbench-1.0.2 -j 1 -r 80 Concurrent Worker Threads: 1 - Read-Write Ratio: 80% Reads 20% Writes
pts/pmbench-1.0.2 -j 2 -r 80 Concurrent Worker Threads: 2 - Read-Write Ratio: 80% Reads 20% Writes
pts/pmbench-1.0.2 -j 4 -r 80 Concurrent Worker Threads: 4 - Read-Write Ratio: 80% Reads 20% Writes
pts/pmbench-1.0.2 -j 8 -r 80 Concurrent Worker Threads: 8 - Read-Write Ratio: 80% Reads 20% Writes
pts/pmbench-1.0.2 -j 16 -r 80 Concurrent Worker Threads: 16 - Read-Write Ratio: 80% Reads 20% Writes
pts/pmbench-1.0.2 -j 32 -r 80 Concurrent Worker Threads: 32 - Read-Write Ratio: 80% Reads 20% Writes