ra

Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1402 BIOS) and Intel Arc A770 DG2 16GB on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2401078-PTS-RA95128640&grr.

Configurations a, b, c, and d all ran on the same system:

Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (1402 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: Western Digital WD_BLACK SN850X 1000GB
Graphics: Intel Arc A770 DG2 16GB (2400MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
OS: Ubuntu 23.10
Kernel: 6.7.0-060700rc7daily20231224-generic (x86_64)
Desktop: GNOME Shell 45.1
Display Server: X Server 1.21.1.7
OpenGL: 4.6 Mesa 24.0~git2312240600.c05261~oibaf~m (git-c05261a 2023-12-24 mantic-oibaf-ppa)
OpenCL: OpenCL 3.0
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave (EPP: performance) - CPU Microcode: 0x11d - Thermald 2.5.4

Python Details: Python 3.11.6

Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result overview for configurations a, b, c, and d, covering Quicksilver, PyTorch, TensorFlow, Y-Cruncher, Speedb, ProjectPhysX OpenCL-Benchmark, and GpuOwl tests; detailed per-test results follow below.

Quicksilver

Input: CTS2

Quicksilver 20230818 - Input: CTS2 (Figure Of Merit, More Is Better)
a: 16370000
b: 17160000
c: 15220000
d: 16520000
1. (CXX) g++ options: -fopenmp -O3 -march=native
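The four configurations spread noticeably on this input. A quick sketch of the best-to-worst gap, using the figures of merit copied from the table above:

```python
# Quicksilver CTS2 figures of merit for configs a-d (values from the result above)
cts2 = {"a": 16370000, "b": 17160000, "c": 15220000, "d": 16520000}

best = max(cts2.values())   # config b
worst = min(cts2.values())  # config c
spread_pct = (best - worst) / worst * 100
print(f"best-to-worst spread: {spread_pct:.1f}%")  # ~12.7%
```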

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 - Input: CORAL2 P2 (Figure Of Merit, More Is Better)
a: 17310000
b: 17040000
c: 17220000
d: 17190000
1. (CXX) g++ options: -fopenmp -O3 -march=native

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
a: 11.59 (MIN: 5.99 / MAX: 12.15)
b: 11.63 (MIN: 5.33 / MAX: 12.18)
c: 8.72 (MIN: 4.42 / MAX: 9)
d: 8.70 (MIN: 4.32 / MAX: 9.45)

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
a: 11.57 (MIN: 5.95 / MAX: 12.1)
b: 8.70 (MIN: 4.37 / MAX: 9.01)
c: 8.81 (MIN: 4.44 / MAX: 9.1)
d: 11.67 (MIN: 5.2 / MAX: 12.21)

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: VGG-16 (images/sec, More Is Better)
a: 10.35
b: 10.33
c: 10.36
d: 10.36

Y-Cruncher

Pi Digits To Calculate: 5B

Y-Cruncher 0.8.3 - Pi Digits To Calculate: 5B (Seconds, Fewer Is Better)
a: 137.77
b: 137.50
c: 137.12
d: 137.70
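Y-Cruncher reports wall time. Converting that to a digits-per-second rate (an illustrative derived metric, not something the test suite itself reports), using the config-a time above:

```python
digits = 5_000_000_000   # 5B pi digits computed
seconds = 137.77         # config a result from the table above
rate = digits / seconds
print(f"{rate / 1e6:.1f} million digits/sec")  # ~36.3
```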

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec, More Is Better)
a: 16.99 (MIN: 7 / MAX: 17.7)
b: 15.77 (MIN: 6.2 / MAX: 17.91)
c: 17.04 (MIN: 11.16 / MAX: 17.77)
d: 17.00 (MIN: 9.03 / MAX: 17.71)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec, More Is Better)
a: 17.00 (MIN: 6.54 / MAX: 17.74)
b: 17.13 (MIN: 10.27 / MAX: 17.85)
c: 17.08 (MIN: 8.2 / MAX: 17.84)
d: 17.00 (MIN: 7.61 / MAX: 17.69)

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
a: 13.29 (MIN: 3.93 / MAX: 17.56)
b: 12.89 (MIN: 3.58 / MAX: 17.58)
c: 13.19 (MIN: 3.75 / MAX: 17.57)
d: 12.95 (MIN: 4.12 / MAX: 17.48)

Speedb

Test: Random Fill Sync

Speedb 2.7 - Test: Random Fill Sync (Op/s, More Is Better)
a: 45735
b: 45930
c: 46257
d: 45697
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Fill

Speedb 2.7 - Test: Random Fill (Op/s, More Is Better)
a: 1072435
b: 1072134
c: 1083754
d: 1081261
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Update Random

Speedb 2.7 - Test: Update Random (Op/s, More Is Better)
a: 587707
b: 586402
c: 585805
d: 592787
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read While Writing

Speedb 2.7 - Test: Read While Writing (Op/s, More Is Better)
a: 6719085
b: 5231830
c: 7297328
d: 5236657
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read Random Write Random

Speedb 2.7 - Test: Read Random Write Random (Op/s, More Is Better)
a: 3284249
b: 3274485
c: 3293583
d: 3282528
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Read

Speedb 2.7 - Test: Random Read (Op/s, More Is Better)
a: 161862046
b: 153376335
c: 161805380
d: 161052165
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 - Input: CORAL2 P1 (Figure Of Merit, More Is Better)
a: 18660000
b: 21210000
c: 18630000
d: 21180000
1. (CXX) g++ options: -fopenmp -O3 -march=native
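On CORAL2 P1 the results split into two clear groups, with configs b and d well ahead of a and c. A quick check of that gap, using the values from the table above:

```python
# Quicksilver CORAL2 P1 figures of merit (from the result above)
p1 = {"a": 18660000, "b": 21210000, "c": 18630000, "d": 21180000}

fast = (p1["b"] + p1["d"]) / 2  # faster pair
slow = (p1["a"] + p1["c"]) / 2  # slower pair
gap_pct = (fast - slow) / slow * 100
print(f"b/d over a/c: {gap_pct:.1f}%")  # ~13.7%
```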

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better)
a: 36.03
b: 36.15
c: 35.65
d: 35.69

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec, More Is Better)
a: 44.10 (MIN: 13.24 / MAX: 45.85)
b: 43.54 (MIN: 10.62 / MAX: 45.46)
c: 43.83 (MIN: 15.12 / MAX: 45.84)
d: 38.12 (MIN: 9.95 / MAX: 42.77)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, More Is Better)
a: 28.51 (MIN: 8.48 / MAX: 28.98)
b: 22.11 (MIN: 8.15 / MAX: 26.78)
c: 27.88 (MIN: 8.7 / MAX: 28.91)
d: 27.25 (MIN: 8.44 / MAX: 28.21)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec, More Is Better)
a: 44.00 (MIN: 11.96 / MAX: 45.72)
b: 43.36 (MIN: 13.12 / MAX: 45.54)
c: 43.76 (MIN: 15 / MAX: 45.94)
d: 37.94 (MIN: 9.49 / MAX: 39.47)

Speedb

Test: Sequential Fill

Speedb 2.7 - Test: Sequential Fill (Op/s, More Is Better)
a: 1173524
b: 1158616
c: 1189205
d: 1151534
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Y-Cruncher

Pi Digits To Calculate: 1B

Y-Cruncher 0.8.3 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better)
a: 22.64
b: 22.53
c: 22.53
d: 22.55

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: VGG-16 (images/sec, More Is Better)
a: 5.77
b: 5.77
c: 5.78
d: 5.78

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, More Is Better)
a: 73.64 (MIN: 72.2 / MAX: 73.92)
b: 71.64 (MIN: 18.09 / MAX: 72.77)
c: 58.93 (MIN: 58.53 / MAX: 70.01)
d: 73.13 (MIN: 17.49 / MAX: 74.29)

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better)
a: 115.24
b: 117.81
c: 118.39
d: 114.45

ProjectPhysX OpenCL-Benchmark

Operation: Memory Bandwidth Coalesced Write

ProjectPhysX OpenCL-Benchmark 1.2 - Operation: Memory Bandwidth Coalesced Write (GB/s, More Is Better)
a: 445.78
b: 447.53
c: 438.89
d: 442.59
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: Memory Bandwidth Coalesced Read

ProjectPhysX OpenCL-Benchmark 1.2 - Operation: Memory Bandwidth Coalesced Read (GB/s, More Is Better)
a: 221.35
b: 224.17
c: 219.56
d: 221.78
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL
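On this Arc A770 the coalesced write bandwidth measures roughly twice the coalesced read bandwidth. A sketch of that ratio using the config-a numbers from the two tables above:

```python
# ProjectPhysX OpenCL-Benchmark, config a (values from the tables above)
write_gbs = 445.78  # Memory Bandwidth Coalesced Write
read_gbs = 221.35   # Memory Bandwidth Coalesced Read

ratio = write_gbs / read_gbs
print(f"write/read bandwidth ratio: {ratio:.2f}")  # ~2.01
```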

ProjectPhysX OpenCL-Benchmark

Operation: INT8 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - Operation: INT8 Compute (TIOPs/s, More Is Better)
a: 11.38
b: 11.38
c: 11.39
d: 11.39
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: INT16 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - Operation: INT16 Compute (TIOPs/s, More Is Better)
a: 30.26
b: 29.75
c: 29.77
d: 29.72
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: INT32 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - Operation: INT32 Compute (TIOPs/s, More Is Better)
a: 5.465
b: 5.465
c: 5.466
d: 5.466
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: INT64 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - Operation: INT64 Compute (TIOPs/s, More Is Better)
a: 1.280
b: 1.260
c: 1.282
d: 1.277
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: FP16 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - Operation: FP16 Compute (TFLOPs/s, More Is Better)
a: 18.31
b: 17.79
c: 18.29
d: 18.31
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: FP32 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - Operation: FP32 Compute (TFLOPs/s, More Is Better)
a: 12.00
b: 12.06
c: 11.99
d: 12.07
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL
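FP16 throughput comes in at roughly 1.5x FP32 on this GPU. A quick computation from the config-a values in the FP16 and FP32 tables above:

```python
# ProjectPhysX OpenCL-Benchmark, config a (values from the tables above)
fp16_tflops = 18.31
fp32_tflops = 12.00

ratio = fp16_tflops / fp32_tflops
print(f"FP16/FP32 throughput ratio: {ratio:.2f}")  # ~1.53
```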

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, More Is Better)
a: 159.94
b: 159.35
c: 159.35
d: 159.65

Y-Cruncher

Pi Digits To Calculate: 500M

Y-Cruncher 0.8.3 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better)
a: 9.632
b: 9.894
c: 9.759
d: 9.865

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (images/sec, More Is Better)
a: 19.79
b: 19.84
c: 19.87
d: 19.87

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: AlexNet (images/sec, More Is Better)
a: 21.80
b: 21.77
c: 21.72
d: 21.76

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: GoogLeNet (images/sec, More Is Better)
a: 67.62
b: 67.78
c: 67.34
d: 68.05


Phoronix Test Suite v10.8.4