Tests for a future article. AMD Ryzen 9 5950X 16-Core testing with an ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4402 BIOS) and llvmpipe on Ubuntu 22.10 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2302262-NE-5950XSUND70
5950x sunday
Tests for a future article. AMD Ryzen 9 5950X 16-Core testing with an ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4402 BIOS) and llvmpipe on Ubuntu 22.10 via the Phoronix Test Suite.
a:
Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4402 BIOS), Chipset: AMD Starship/Matisse, Memory: 32GB, Disk: 500GB Western Digital WDS500G3X0C-00SJG0 + 2000GB, Graphics: llvmpipe, Audio: Intel Device 4f92, Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.10, Kernel: 5.19.0-23-generic (x86_64), Desktop: GNOME Shell 43.0, Display Server: X Server 1.21.1.4, OpenGL: 4.5 Mesa 22.2.1 (LLVM 15.0.2 256 bits), Vulkan: 1.3.224, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160
b:
Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4402 BIOS), Chipset: AMD Starship/Matisse, Memory: 32GB, Disk: 500GB Western Digital WDS500G3X0C-00SJG0 + 2000GB, Graphics: llvmpipe, Audio: Intel Device 4f92, Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.10, Kernel: 5.19.0-23-generic (x86_64), Desktop: GNOME Shell 43.0, Display Server: X Server 1.21.1.4, OpenGL: 4.5 Mesa 22.2.1 (LLVM 15.0.2 256 bits), Vulkan: 1.3.224, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 3840x2160
BRL-CAD 7.34
VGR Performance Metric
VGR Performance Metric > Higher Is Better
a . 273690 |===================================================================
b . 274004 |===================================================================
ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 107.98 |===================================================================
b . 107.78 |===================================================================
ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 119.88 |=================================================================
b . 122.78 |===================================================================
ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 8.37549 |==================================================================
b . 8.32324 |==================================================================
ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 7.72199 |=================================================
b . 10.23540 |=================================================================
ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 13.43 |===================================================================
b . 13.63 |====================================================================
ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 16.98 |====================================================================
b . 17.08 |====================================================================
ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 691.24 |===================================================================
b . 671.58 |=================================================================
ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 712.70 |===========================================================
b . 803.47 |===================================================================
ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 1.38126 |==================================================================
b . 1.38396 |==================================================================
ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 1.46417 |=============================================
b . 2.13179 |==================================================================
ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 27.86 |====================================================================
b . 27.70 |====================================================================
ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 26.61 |========================================================
b . 32.33 |====================================================================
ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 233.40 |==================================================================
b . 236.54 |===================================================================
ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 235.49 |==================================================================
b . 240.42 |===================================================================
ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 101.53 |===================================================================
b . 101.35 |===================================================================
ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 166.40 |===================================================================
b . 103.55 |==========================================
ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 36.04 |====================================================================
b . 36.19 |====================================================================
ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 41.35 |====================================================================
b . 41.10 |====================================================================
GROMACS 2023
Implementation: MPI CPU - Input: water_GMX50_bare
Ns Per Day > Higher Is Better
a . 1.270 |====================================================================
b . 1.278 |====================================================================
Zstd Compression 1.5.4
Compression Level: 3 - Compression Speed
MB/s > Higher Is Better
a . 2781.2 |==================================================================
b . 2803.5 |===================================================================
Zstd Compression 1.5.4
Compression Level: 3 - Decompression Speed
MB/s > Higher Is Better
a . 1722.4 |=================================================================
b . 1768.7 |===================================================================
Zstd Compression 1.5.4
Compression Level: 8 - Compression Speed
MB/s > Higher Is Better
a . 432.2 |====================================================================
b . 429.5 |====================================================================
Zstd Compression 1.5.4
Compression Level: 8 - Decompression Speed
MB/s > Higher Is Better
a . 1887.7 |===================================================================
b . 1853.1 |==================================================================
Zstd Compression 1.5.4
Compression Level: 12 - Compression Speed
MB/s > Higher Is Better
a . 128.0 |====================================================================
b . 127.7 |====================================================================
Zstd Compression 1.5.4
Compression Level: 12 - Decompression Speed
MB/s > Higher Is Better
a . 1882.7 |==================================================================
b . 1916.8 |===================================================================
Zstd Compression 1.5.4
Compression Level: 19 - Compression Speed
MB/s > Higher Is Better
a . 18.5 |====================================================================
b . 18.7 |=====================================================================
Zstd Compression 1.5.4
Compression Level: 19 - Decompression Speed
MB/s > Higher Is Better
a . 1701.6 |===================================================================
b . 1672.0 |==================================================================
Zstd Compression 1.5.4
Compression Level: 3, Long Mode - Compression Speed
MB/s > Higher Is Better
a . 722.2 |===================================================================
b . 732.1 |====================================================================
Zstd Compression 1.5.4
Compression Level: 3, Long Mode - Decompression Speed
MB/s > Higher Is Better
a . 1757.8 |==================================================================
b . 1797.9 |===================================================================
Zstd Compression 1.5.4
Compression Level: 8, Long Mode - Compression Speed
MB/s > Higher Is Better
a . 429.5 |===================================================================
b . 434.8 |====================================================================
Zstd Compression 1.5.4
Compression Level: 8, Long Mode - Decompression Speed
MB/s > Higher Is Better
a . 1896.5 |==================================================================
b . 1912.1 |===================================================================
Zstd Compression 1.5.4
Compression Level: 19, Long Mode - Compression Speed
MB/s > Higher Is Better
a . 10 |=======================================================================
b . 10 |=======================================================================
Zstd Compression 1.5.4
Compression Level: 19, Long Mode - Decompression Speed
MB/s > Higher Is Better
a . 1617.9 |==================================================================
b . 1633.3 |===================================================================
Timed Linux Kernel Compilation 6.1
Build: defconfig
Seconds < Lower Is Better
a . 69.97 |===================================================================
b . 71.40 |====================================================================
Timed Linux Kernel Compilation 6.1
Build: allmodconfig
Seconds < Lower Is Better
a . 812.17 |=================================================================
b . 831.22 |===================================================================
Kvazaar 2.2
Video Input: Bosphorus 4K - Video Preset: Slow
Frames Per Second > Higher Is Better
a . 15.16 |====================================================================
b . 15.17 |====================================================================
Kvazaar 2.2
Video Input: Bosphorus 4K - Video Preset: Medium
Frames Per Second > Higher Is Better
a . 15.46 |===================================================================
b . 15.61 |====================================================================
Kvazaar 2.2
Video Input: Bosphorus 1080p - Video Preset: Slow
Frames Per Second > Higher Is Better
a . 62.18 |====================================================================
b . 62.39 |====================================================================
Kvazaar 2.2
Video Input: Bosphorus 1080p - Video Preset: Medium
Frames Per Second > Higher Is Better
a . 64.52 |====================================================================
b . 64.66 |====================================================================
Kvazaar 2.2
Video Input: Bosphorus 4K - Video Preset: Very Fast
Frames Per Second > Higher Is Better
a . 34.26 |====================================================================
b . 34.51 |====================================================================
Kvazaar 2.2
Video Input: Bosphorus 4K - Video Preset: Super Fast
Frames Per Second > Higher Is Better
a . 44.11 |====================================================================
b . 44.13 |====================================================================
Kvazaar 2.2
Video Input: Bosphorus 4K - Video Preset: Ultra Fast
Frames Per Second > Higher Is Better
a . 57.01 |====================================================================
b . 57.02 |====================================================================
Kvazaar 2.2
Video Input: Bosphorus 1080p - Video Preset: Very Fast
Frames Per Second > Higher Is Better
a . 135.84 |===================================================================
b . 136.82 |===================================================================
Kvazaar 2.2
Video Input: Bosphorus 1080p - Video Preset: Super Fast
Frames Per Second > Higher Is Better
a . 175.59 |===================================================================
b . 176.19 |===================================================================
Kvazaar 2.2
Video Input: Bosphorus 1080p - Video Preset: Ultra Fast
Frames Per Second > Higher Is Better
a . 218.35 |===================================================================
b . 218.96 |===================================================================
AOM AV1 3.6
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 0.31 |=====================================================================
b . 0.31 |=====================================================================
AOM AV1 3.6
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 9.71 |====================================================================
b . 9.84 |=====================================================================
AOM AV1 3.6
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 58.91 |===================================================================
b . 59.54 |====================================================================
AOM AV1 3.6
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 16.98 |===================================================================
b . 17.27 |====================================================================
AOM AV1 3.6
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 55.92 |====================================================================
b . 51.37 |==============================================================
AOM AV1 3.6
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 63.21 |===================================================================
b . 64.48 |====================================================================
AOM AV1 3.6
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 63.05 |==================================================================
b . 65.40 |====================================================================
AOM AV1 3.6
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 0.91 |====================================================================
b . 0.92 |=====================================================================
AOM AV1 3.6
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 20.32 |====================================================================
b . 20.46 |====================================================================
AOM AV1 3.6
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 115.45 |===================================================================
b . 110.58 |================================================================
AOM AV1 3.6
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 40.10 |===================================================================
b . 40.55 |====================================================================
AOM AV1 3.6
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 123.88 |===================================================================
b . 122.97 |===================================================================
AOM AV1 3.6
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 104.28 |===================================================================
b . 99.56 |================================================================
AOM AV1 3.6
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 119.21 |===================================================================
b . 116.63 |==================================================================
VP9 libvpx Encoding 1.13
Speed: Speed 0 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 8.62 |=====================================================================
b . 8.55 |====================================================================
VP9 libvpx Encoding 1.13
Speed: Speed 5 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 20.56 |====================================================================
b . 20.67 |====================================================================
VP9 libvpx Encoding 1.13
Speed: Speed 0 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 17.25 |====================================================================
b . 17.07 |===================================================================
VP9 libvpx Encoding 1.13
Speed: Speed 5 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 36.07 |===================================================================
b . 36.61 |====================================================================
uvg266 0.4.1
Video Input: Bosphorus 4K - Video Preset: Slow
Frames Per Second > Higher Is Better
a . 9.72 |=====================================================================
b . 9.74 |=====================================================================
uvg266 0.4.1
Video Input: Bosphorus 4K - Video Preset: Medium
Frames Per Second > Higher Is Better
a . 10.92 |====================================================================
b . 10.90 |====================================================================
uvg266 0.4.1
Video Input: Bosphorus 1080p - Video Preset: Slow
Frames Per Second > Higher Is Better
a . 41.50 |====================================================================
b . 41.17 |===================================================================
uvg266 0.4.1
Video Input: Bosphorus 1080p - Video Preset: Medium
Frames Per Second > Higher Is Better
a . 46.51 |====================================================================
b . 46.30 |====================================================================
uvg266 0.4.1
Video Input: Bosphorus 4K - Video Preset: Very Fast
Frames Per Second > Higher Is Better
a . 30.93 |====================================================================
b . 30.80 |====================================================================
uvg266 0.4.1
Video Input: Bosphorus 4K - Video Preset: Super Fast
Frames Per Second > Higher Is Better
a . 32.98 |====================================================================
b . 32.64 |===================================================================
uvg266 0.4.1
Video Input: Bosphorus 4K - Video Preset: Ultra Fast
Frames Per Second > Higher Is Better
a . 38.68 |====================================================================
b . 38.61 |====================================================================
uvg266 0.4.1
Video Input: Bosphorus 1080p - Video Preset: Very Fast
Frames Per Second > Higher Is Better
a . 123.09 |===================================================================
b . 123.24 |===================================================================
uvg266 0.4.1
Video Input: Bosphorus 1080p - Video Preset: Super Fast
Frames Per Second > Higher Is Better
a . 131.42 |===================================================================
b . 131.34 |===================================================================
uvg266 0.4.1
Video Input: Bosphorus 1080p - Video Preset: Ultra Fast
Frames Per Second > Higher Is Better
a . 151.22 |===================================================================
b . 150.45 |===================================================================
VVenC 1.7
Video Input: Bosphorus 4K - Video Preset: Fast
Frames Per Second > Higher Is Better
a . 4.477 |====================================================================
b . 4.501 |====================================================================
VVenC 1.7
Video Input: Bosphorus 4K - Video Preset: Faster
Frames Per Second > Higher Is Better
a . 9.462 |====================================================================
b . 9.489 |====================================================================
VVenC 1.7
Video Input: Bosphorus 1080p - Video Preset: Fast
Frames Per Second > Higher Is Better
a . 11.05 |===================================================================
b . 11.20 |====================================================================
VVenC 1.7
Video Input: Bosphorus 1080p - Video Preset: Faster
Frames Per Second > Higher Is Better
a . 25.96 |===================================================================
b . 26.31 |====================================================================
Embree 4.0
Binary: Pathtracer - Model: Crown
Frames Per Second > Higher Is Better
a . 23.75 |====================================================================
b . 23.81 |====================================================================
Embree 4.0
Binary: Pathtracer ISPC - Model: Crown
Frames Per Second > Higher Is Better
a . 22.60 |====================================================================
b . 22.73 |====================================================================
Embree 4.0
Binary: Pathtracer - Model: Asian Dragon
Frames Per Second > Higher Is Better
a . 24.32 |==================================================================
b . 24.88 |====================================================================
Embree 4.0
Binary: Pathtracer - Model: Asian Dragon Obj
Frames Per Second > Higher Is Better
a . 21.89 |===================================================================
b . 22.10 |====================================================================
Embree 4.0
Binary: Pathtracer ISPC - Model: Asian Dragon
Frames Per Second > Higher Is Better
a . 24.51 |====================================================================
b . 24.60 |====================================================================
Embree 4.0
Binary: Pathtracer ISPC - Model: Asian Dragon Obj
Frames Per Second > Higher Is Better
a . 21.26 |===================================================================
b . 21.44 |====================================================================
OpenVKL 1.3.1
Benchmark: vklBenchmark ISPC
Items / Sec > Higher Is Better
a . 227 |======================================================================
b . 228 |======================================================================
OpenVKL 1.3.1
Benchmark: vklBenchmark Scalar
Items / Sec > Higher Is Better
a . 142 |======================================================================
b . 143 |======================================================================
Apache Spark 3.3
Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time
Seconds < Lower Is Better
a . 3.20 |=====================================================================
b . 3.17 |====================================================================
Apache Spark 3.3
Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark
Seconds < Lower Is Better
a . 86.52 |====================================================================
b . 86.45 |====================================================================
Apache Spark 3.3
Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe
Seconds < Lower Is Better
a . 5.07 |=====================================================================
b . 5.04 |=====================================================================
Apache Spark 3.3
Row Count: 1000000 - Partitions: 100 - Group By Test Time
Seconds < Lower Is Better
a . 3.48 |=====================================================================
b . 3.43 |====================================================================
Apache Spark 3.3
Row Count: 1000000 - Partitions: 100 - Repartition Test Time
Seconds < Lower Is Better
a . 1.98 |=====================================================================
b . 1.95 |====================================================================
Apache Spark 3.3
Row Count: 1000000 - Partitions: 100 - Inner Join Test Time
Seconds < Lower Is Better
a . 1.649008243 |==============================================================
b . 1.620000000 |=============================================================
Apache Spark 3.3
Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time
Seconds < Lower Is Better
a . 1.40 |==============================================================
b . 1.55 |=====================================================================
Apache Spark 3.3
Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark Time
Seconds < Lower Is Better
a . 15.86 |====================================================================
b . 15.92 |====================================================================
Apache Spark 3.3
Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark
Seconds < Lower Is Better
a . 86.31 |====================================================================
b . 86.63 |====================================================================
Apache Spark 3.3
Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe
Seconds < Lower Is Better
a . 5.05 |=====================================================================
b . 5.06 |=====================================================================
Apache Spark 3.3
Row Count: 10000000 - Partitions: 100 - Group By Test Time
Seconds < Lower Is Better
a . 6.91 |=====================================================================
b . 6.73 |===================================================================
Apache Spark 3.3
Row Count: 10000000 - Partitions: 100 - Repartition Test Time
Seconds < Lower Is Better
a . 10.05 |====================================================================
b . 10.06 |====================================================================
Apache Spark 3.3
Row Count: 10000000 - Partitions: 100 - Inner Join Test Time
Seconds < Lower Is Better
a . 11.98 |====================================================================
b . 11.27 |================================================================
Apache Spark 3.3
Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test Time
Seconds < Lower Is Better
a . 11.43 |====================================================================
b . 11.05 |==================================================================
Apache Spark 3.3
Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark Time
Seconds < Lower Is Better
a . 31.53 |====================================================================
b . 29.92 |=================================================================
Apache Spark 3.3
Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark
Seconds < Lower Is Better
a . 86.47 |====================================================================
b . 86.27 |====================================================================
Apache Spark 3.3
Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe
Seconds < Lower Is Better
a . 5.09 |=====================================================================
b . 5.05 |====================================================================
Apache Spark 3.3
Row Count: 20000000 - Partitions: 100 - Group By Test Time
Seconds < Lower Is Better
a . 10.34 |===================================================================
b . 10.46 |====================================================================
Apache Spark 3.3
Row Count: 20000000 - Partitions: 100 - Repartition Test Time
Seconds < Lower Is Better
a . 19.79 |==================================================================
b . 20.49 |====================================================================
Apache Spark 3.3
Row Count: 20000000 - Partitions: 100 - Inner Join Test Time
Seconds < Lower Is Better
a . 21.64 |==================================================================
b . 22.33 |====================================================================
Apache Spark 3.3
Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test Time
Seconds < Lower Is Better
a . 22.02 |====================================================================
b . 22.00 |====================================================================
Apache Spark 3.3
Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time
Seconds < Lower Is Better
a . 58.63 |====================================================================
b . 58.76 |====================================================================
Apache Spark 3.3
Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark
Seconds < Lower Is Better
a . 87.08 |====================================================================
b . 86.18 |===================================================================
Apache Spark 3.3
Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe
Seconds < Lower Is Better
a . 5.09 |=====================================================================
b . 5.07 |=====================================================================
Apache Spark 3.3
Row Count: 40000000 - Partitions: 100 - Group By Test Time
Seconds < Lower Is Better
a . 30.31 |====================================================================
b . 30.19 |====================================================================
Apache Spark 3.3
Row Count: 40000000 - Partitions: 100 - Repartition Test Time
Seconds < Lower Is Better
a . 40.28 |====================================================================
b . 39.59 |===================================================================
Apache Spark 3.3
Row Count: 40000000 - Partitions: 100 - Inner Join Test Time
Seconds < Lower Is Better
a . 45.15 |====================================================================
b . 43.05 |=================================================================
Apache Spark 3.3
Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time
Seconds < Lower Is Better
a . 44.50 |====================================================================
b . 44.09 |===================================================================
ClickHouse 22.12.3.5
100M Rows Hits Dataset, First Run / Cold Cache
Queries Per Minute, Geo Mean > Higher Is Better
a . 118.22 |=================================================================
b . 122.32 |===================================================================
ClickHouse 22.12.3.5
100M Rows Hits Dataset, Second Run
Queries Per Minute, Geo Mean > Higher Is Better
a . 130.72 |==================================================================
b . 132.24 |===================================================================
ClickHouse 22.12.3.5
100M Rows Hits Dataset, Third Run
Queries Per Minute, Geo Mean > Higher Is Better
a . 134.32 |==================================================================
b . 136.05 |===================================================================
CockroachDB 22.2
Workload: MoVR - Concurrency: 128
ops/s > Higher Is Better
a . 729.5 |===================================================================
b . 737.4 |====================================================================
CockroachDB 22.2
Workload: MoVR - Concurrency: 1024
ops/s > Higher Is Better
a . 737.4 |====================================================================
b . 737.4 |====================================================================
CockroachDB 22.2
Workload: KV, 10% Reads - Concurrency: 128
ops/s > Higher Is Better
a . 50930.0 |==============================================================
b . 54375.8 |==================================================================
CockroachDB 22.2
Workload: KV, 50% Reads - Concurrency: 128
ops/s > Higher Is Better
a . 66349.7 |=================================================================
b . 67666.0 |==================================================================
CockroachDB 22.2
Workload: KV, 60% Reads - Concurrency: 128
ops/s > Higher Is Better
a . 71625.1 |================================================================
b . 73703.7 |==================================================================
CockroachDB 22.2
Workload: KV, 95% Reads - Concurrency: 128
ops/s > Higher Is Better
a . 95581.2 |==================================================================
b . 93880.3 |=================================================================
CockroachDB 22.2
Workload: KV, 10% Reads - Concurrency: 1024
ops/s > Higher Is Better
a . 41319.9 |==================================================================
b . 41158.6 |==================================================================
CockroachDB 22.2
Workload: KV, 50% Reads - Concurrency: 1024
ops/s > Higher Is Better
a . 49144.6 |==================================================================
b . 48708.5 |=================================================================
CockroachDB 22.2
Workload: KV, 60% Reads - Concurrency: 1024
ops/s > Higher Is Better
a . 51077.9 |===============================================================
b . 53184.2 |==================================================================
CockroachDB 22.2
Workload: KV, 95% Reads - Concurrency: 1024
ops/s > Higher Is Better
a . 71285.1 |==================================================================
b . 70079.9 |=================================================================
RocksDB 7.9.2
Test: Random Fill
Op/s > Higher Is Better
a . 1273257 |==================================================================
b . 1278893 |==================================================================
RocksDB 7.9.2
Test: Random Read
Op/s > Higher Is Better
a . 94785075 |=================================================================
b . 94703211 |=================================================================
RocksDB 7.9.2
Test: Update Random
Op/s > Higher Is Better
a . 733550 |===================================================================
b . 737184 |===================================================================
RocksDB 7.9.2
Test: Sequential Fill
Op/s > Higher Is Better
a . 1454965 |==================================================================
b . 1441186 |=================================================================
RocksDB 7.9.2
Test: Random Fill Sync
Op/s > Higher Is Better
a . 27798 |====================================================================
b . 27749 |====================================================================
RocksDB 7.9.2
Test: Read While Writing
Op/s > Higher Is Better
a . 3594042 |=================================================================
b . 3623493 |==================================================================
RocksDB 7.9.2
Test: Read Random Write Random
Op/s > Higher Is Better
a . 2510683 |==================================================================
b . 2501578 |==================================================================
ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 9.25704 |==================================================================
b . 9.27425 |==================================================================
ONNX Runtime 1.14
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 8.33864 |==================================================================
b . 8.14204 |================================================================
ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 119.39 |===================================================================
b . 120.14 |===================================================================
ONNX Runtime 1.14
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 129.50 |===================================================================
b . 97.70 |===================================================
ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 74.48 |====================================================================
b . 73.34 |===================================================================
ONNX Runtime 1.14
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 58.89 |====================================================================
b . 58.56 |====================================================================
ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 1.44574 |================================================================
b . 1.48798 |==================================================================
ONNX Runtime 1.14
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 1.40262 |==================================================================
b . 1.24424 |===========================================================
ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 723.97 |===================================================================
b . 722.56 |===================================================================
ONNX Runtime 1.14
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 682.98 |===================================================================
b . 469.09 |==============================================
ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 35.89 |====================================================================
b . 36.10 |====================================================================
ONNX Runtime 1.14
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 37.57 |====================================================================
b . 30.93 |========================================================
ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 4.28386 |==================================================================
b . 4.22687 |=================================================================
ONNX Runtime 1.14
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 4.24607 |==================================================================
b . 4.15881 |=================================================================
ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 9.84858 |==================================================================
b . 9.86564 |==================================================================
ONNX Runtime 1.14
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 6.00923 |=========================================
b . 9.65702 |==================================================================
ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 27.74 |====================================================================
b . 27.63 |====================================================================
ONNX Runtime 1.14
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 24.18 |====================================================================
b . 24.33 |====================================================================