ra

Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1402 BIOS) and Intel Arc A770 DG2 16GB on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2401078-PTS-RA95128640
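
For reference, here is a minimal sketch of how such a comparison is typically carried out on an Ubuntu system, assuming the phoronix-test-suite package is available from the distribution repositories (only the result ID comes from this file; the package source is an assumption):

    # Install the Phoronix Test Suite (package name assumed; installers are also published on phoronix-test-suite.com)
    sudo apt install phoronix-test-suite

    # Run the same test selection locally and compare your numbers against runs a/b/c/d from this file
    phoronix-test-suite benchmark 2401078-PTS-RA95128640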

Result Identifier | Date Run   | Test Duration
a                 | January 07 | 41 Minutes
b                 | January 08 | 42 Minutes
c                 | January 08 | 42 Minutes
d                 | January 08 | 42 Minutes

HTML result view exported from: https://openbenchmarking.org/result/2401078-PTS-RA95128640&grw&rdt&rro.

System Details (common to runs a, b, c, d)

Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (1402 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: Western Digital WD_BLACK SN850X 1000GB
Graphics: Intel Arc A770 DG2 16GB (2400MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
OS: Ubuntu 23.10
Kernel: 6.7.0-060700rc7daily20231224-generic (x86_64)
Desktop: GNOME Shell 45.1
Display Server: X Server 1.21.1.7
OpenGL: 4.6 Mesa 24.0~git2312240600.c05261~oibaf~m (git-c05261a 2023-12-24 mantic-oibaf-ppa)
OpenCL: OpenCL 3.0
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: performance) - CPU Microcode: 0x11d - Thermald 2.5.4
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Results Overview

Benchmarks included in this comparison (per-test results for runs a, b, c, d follow below):
Quicksilver 20230818: CTS2, CORAL2 P1, CORAL2 P2
ProjectPhysX OpenCL-Benchmark 1.2: Memory Bandwidth Coalesced Read, Memory Bandwidth Coalesced Write, INT16/INT8/INT64/INT32 Compute, FP16/FP32 Compute
TensorFlow 2.12 (CPU, batch sizes 1 and 16): VGG-16, AlexNet, GoogLeNet, ResNet-50
Y-Cruncher 0.8.3: Pi digits to calculate - 500M, 1B, 5B
PyTorch 2.1 (CPU, batch sizes 1, 16 and 32): ResNet-50, ResNet-152, Efficientnet_v2_l
Speedb 2.7: Random Fill, Random Read, Update Random, Sequential Fill, Random Fill Sync, Read While Writing, Read Random Write Random
GpuOwl (57885161): listed in the result file but without per-run values

Quicksilver

Input: CTS2

Quicksilver 20230818 - Figure Of Merit, More Is Better
a: 16370000
b: 17160000
c: 15220000
d: 16520000
1. (CXX) g++ options: -fopenmp -O3 -march=native

ProjectPhysX OpenCL-Benchmark

Operation: Memory Bandwidth Coalesced Read

ProjectPhysX OpenCL-Benchmark 1.2 - GB/s, More Is Better
a: 221.35
b: 224.17
c: 219.56
d: 221.78
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: Memory Bandwidth Coalesced Write

ProjectPhysX OpenCL-Benchmark 1.2 - GB/s, More Is Better
a: 445.78
b: 447.53
c: 438.89
d: 442.59
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: INT16 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - TIOPs/s, More Is Better
a: 30.26
b: 29.75
c: 29.77
d: 29.72
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: INT8 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - TIOPs/s, More Is Better
a: 11.38
b: 11.38
c: 11.39
d: 11.39
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: INT64 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - TIOPs/s, More Is Better
a: 1.280
b: 1.260
c: 1.282
d: 1.277
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: INT32 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - TIOPs/s, More Is Better
a: 5.465
b: 5.465
c: 5.466
d: 5.466
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: FP16 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - TFLOPs/s, More Is Better
a: 18.31
b: 17.79
c: 18.29
d: 18.31
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: FP32 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - TFLOPs/s, More Is Better
a: 12.00
b: 12.06
c: 11.99
d: 12.07
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 - Figure Of Merit, More Is Better
a: 18660000
b: 21210000
c: 18630000
d: 21180000
1. (CXX) g++ options: -fopenmp -O3 -march=native

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 - Figure Of Merit, More Is Better
a: 17310000
b: 17040000
c: 17220000
d: 17190000
1. (CXX) g++ options: -fopenmp -O3 -march=native

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.12 - images/sec, More Is Better
a: 5.77
b: 5.77
c: 5.78
d: 5.78

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

TensorFlow 2.12 - images/sec, More Is Better
a: 10.35
b: 10.33
c: 10.36
d: 10.36

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
a: 159.94
b: 159.35
c: 159.35
d: 159.65

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
a: 67.62
b: 67.78
c: 67.34
d: 68.05

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better
a: 19.79
b: 19.84
c: 19.87
d: 19.87

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
a: 115.24
b: 117.81
c: 118.39
d: 114.45

Y-Cruncher

Pi Digits To Calculate: 500M

Y-Cruncher 0.8.3 - Seconds, Fewer Is Better
a: 9.632
b: 9.894
c: 9.759
d: 9.865

Y-Cruncher

Pi Digits To Calculate: 5B

Y-Cruncher 0.8.3 - Seconds, Fewer Is Better
a: 137.77
b: 137.50
c: 137.12
d: 137.70

Y-Cruncher

Pi Digits To Calculate: 1B

Y-Cruncher 0.8.3 - Seconds, Fewer Is Better
a: 22.64
b: 22.53
c: 22.53
d: 22.55

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
a: 21.80
b: 21.77
c: 21.72
d: 21.76

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better
a: 36.03
b: 36.15
c: 35.65
d: 35.69

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 73.64 (MIN: 72.2 / MAX: 73.92)
b: 71.64 (MIN: 18.09 / MAX: 72.77)
c: 58.93 (MIN: 58.53 / MAX: 70.01)
d: 73.13 (MIN: 17.49 / MAX: 74.29)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 28.51 (MIN: 8.48 / MAX: 28.98)
b: 22.11 (MIN: 8.15 / MAX: 26.78)
c: 27.88 (MIN: 8.7 / MAX: 28.91)
d: 27.25 (MIN: 8.44 / MAX: 28.21)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 44.10 (MIN: 13.24 / MAX: 45.85)
b: 43.54 (MIN: 10.62 / MAX: 45.46)
c: 43.83 (MIN: 15.12 / MAX: 45.84)
d: 38.12 (MIN: 9.95 / MAX: 42.77)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 44.00 (MIN: 11.96 / MAX: 45.72)
b: 43.36 (MIN: 13.12 / MAX: 45.54)
c: 43.76 (MIN: 15 / MAX: 45.94)
d: 37.94 (MIN: 9.49 / MAX: 39.47)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 17.00 (MIN: 6.54 / MAX: 17.74)
b: 17.13 (MIN: 10.27 / MAX: 17.85)
c: 17.08 (MIN: 8.2 / MAX: 17.84)
d: 17.00 (MIN: 7.61 / MAX: 17.69)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 16.99 (MIN: 7 / MAX: 17.7)
b: 15.77 (MIN: 6.2 / MAX: 17.91)
c: 17.04 (MIN: 11.16 / MAX: 17.77)
d: 17.00 (MIN: 9.03 / MAX: 17.71)

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 13.29 (MIN: 3.93 / MAX: 17.56)
b: 12.89 (MIN: 3.58 / MAX: 17.58)
c: 13.19 (MIN: 3.75 / MAX: 17.57)
d: 12.95 (MIN: 4.12 / MAX: 17.48)

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 11.57 (MIN: 5.95 / MAX: 12.1)
b: 8.70 (MIN: 4.37 / MAX: 9.01)
c: 8.81 (MIN: 4.44 / MAX: 9.1)
d: 11.67 (MIN: 5.2 / MAX: 12.21)

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 11.59 (MIN: 5.99 / MAX: 12.15)
b: 11.63 (MIN: 5.33 / MAX: 12.18)
c: 8.72 (MIN: 4.42 / MAX: 9)
d: 8.70 (MIN: 4.32 / MAX: 9.45)

Speedb

Test: Random Fill

Speedb 2.7 - Op/s, More Is Better
a: 1072435
b: 1072134
c: 1083754
d: 1081261
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Read

Speedb 2.7 - Op/s, More Is Better
a: 161862046
b: 153376335
c: 161805380
d: 161052165
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Update Random

Speedb 2.7 - Op/s, More Is Better
a: 587707
b: 586402
c: 585805
d: 592787
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Sequential Fill

Speedb 2.7 - Op/s, More Is Better
a: 1173524
b: 1158616
c: 1189205
d: 1151534
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Fill Sync

Speedb 2.7 - Op/s, More Is Better
a: 45735
b: 45930
c: 46257
d: 45697
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read While Writing

Speedb 2.7 - Op/s, More Is Better
a: 6719085
b: 5231830
c: 7297328
d: 5236657
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read Random Write Random

Speedb 2.7 - Op/s, More Is Better
a: 3284249
b: 3274485
c: 3293583
d: 3282528
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread


Phoronix Test Suite v10.8.4