ra

Intel Core i9-14900K testing with a ASUS PRIME Z790-P WIFI (1402 BIOS) and Intel Arc A770 DG2 16GB on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2401078-PTS-RA95128640&grs&sor.

All four runs (a, b, c, d) used an identical system configuration:

Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (1402 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: Western Digital WD_BLACK SN850X 1000GB
Graphics: Intel Arc A770 DG2 16GB (2400MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
OS: Ubuntu 23.10
Kernel: 6.7.0-060700rc7daily20231224-generic (x86_64)
Desktop: GNOME Shell 45.1
Display Server: X Server 1.21.1.7
OpenGL: 4.6 Mesa 24.0~git2312240600.c05261~oibaf~m (git-c05261a 2023-12-24 mantic-oibaf-ppa)
OpenCL: OpenCL 3.0
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave (EPP: performance) - CPU Microcode: 0x11d - Thermald 2.5.4

Python Details: Python 3.11.6

Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

[Result overview chart for runs a, b, c, and d; the individual per-test results follow below.]
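The export URL's `&grs&sor` parameters request the runs sorted by a geometric-mean composite of the normalized per-test results. Below is a minimal sketch of how such a composite ranking can be computed; this is an illustrative assumption about the method, not OpenBenchmarking.org's exact implementation, using two results taken from this file:

```python
from math import prod

# Sample of results from this file; the bool marks "more is better".
results = {
    "speedb-read-while-writing": ({"a": 6719085, "b": 5231830,
                                   "c": 7297328, "d": 5236657}, True),
    "y-cruncher-1b":             ({"a": 22.64, "b": 22.53,
                                   "c": 22.53, "d": 22.55}, False),
}

def geo_mean(xs):
    """Geometric mean of a sequence of positive ratios."""
    return prod(xs) ** (1 / len(xs))

def composite(run):
    """Normalize each test to the best run (1.0 = best), then combine."""
    ratios = []
    for values, higher_better in results.values():
        best = max(values.values()) if higher_better else min(values.values())
        v = values[run]
        ratios.append(v / best if higher_better else best / v)
    return geo_mean(ratios)

scores = {r: composite(r) for r in "abcd"}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

Normalizing to the best run before taking the geometric mean keeps tests with very different magnitudes (Op/s in the millions vs. seconds) from dominating the composite.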

Speedb

Test: Read While Writing

Speedb 2.7 - Op/s, more is better:
c: 7297328
a: 6719085
d: 5236657
b: 5231830
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, more is better:
d: 11.67 (MIN: 5.2 / MAX: 12.21)
a: 11.57 (MIN: 5.95 / MAX: 12.1)
c: 8.81 (MIN: 4.44 / MAX: 9.1)
b: 8.70 (MIN: 4.37 / MAX: 9.01)

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, more is better:
b: 11.63 (MIN: 5.33 / MAX: 12.18)
a: 11.59 (MIN: 5.99 / MAX: 12.15)
c: 8.72 (MIN: 4.42 / MAX: 9)
d: 8.70 (MIN: 4.32 / MAX: 9.45)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 - batches/sec, more is better:
a: 28.51 (MIN: 8.48 / MAX: 28.98)
c: 27.88 (MIN: 8.7 / MAX: 28.91)
d: 27.25 (MIN: 8.44 / MAX: 28.21)
b: 22.11 (MIN: 8.15 / MAX: 26.78)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 - batches/sec, more is better:
a: 73.64 (MIN: 72.2 / MAX: 73.92)
d: 73.13 (MIN: 17.49 / MAX: 74.29)
b: 71.64 (MIN: 18.09 / MAX: 72.77)
c: 58.93 (MIN: 58.53 / MAX: 70.01)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.1 - batches/sec, more is better:
a: 44.00 (MIN: 11.96 / MAX: 45.72)
c: 43.76 (MIN: 15 / MAX: 45.94)
b: 43.36 (MIN: 13.12 / MAX: 45.54)
d: 37.94 (MIN: 9.49 / MAX: 39.47)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 - batches/sec, more is better:
a: 44.10 (MIN: 13.24 / MAX: 45.85)
c: 43.83 (MIN: 15.12 / MAX: 45.84)
b: 43.54 (MIN: 10.62 / MAX: 45.46)
d: 38.12 (MIN: 9.95 / MAX: 42.77)

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 - Figure Of Merit, more is better:
b: 21210000
d: 21180000
a: 18660000
c: 18630000
1. (CXX) g++ options: -fopenmp -O3 -march=native

Quicksilver

Input: CTS2

Quicksilver 20230818 - Figure Of Merit, more is better:
b: 17160000
d: 16520000
a: 16370000
c: 15220000
1. (CXX) g++ options: -fopenmp -O3 -march=native

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.1 - batches/sec, more is better:
c: 17.04 (MIN: 11.16 / MAX: 17.77)
d: 17.00 (MIN: 9.03 / MAX: 17.71)
a: 16.99 (MIN: 7 / MAX: 17.7)
b: 15.77 (MIN: 6.2 / MAX: 17.91)

Speedb

Test: Random Read

Speedb 2.7 - Op/s, more is better:
a: 161862046
c: 161805380
d: 161052165
b: 153376335
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, more is better:
c: 118.39
b: 117.81
a: 115.24
d: 114.45

Speedb

Test: Sequential Fill

Speedb 2.7 - Op/s, more is better:
c: 1189205
a: 1173524
b: 1158616
d: 1151534
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, more is better:
a: 13.29 (MIN: 3.93 / MAX: 17.56)
c: 13.19 (MIN: 3.75 / MAX: 17.57)
d: 12.95 (MIN: 4.12 / MAX: 17.48)
b: 12.89 (MIN: 3.58 / MAX: 17.58)

ProjectPhysX OpenCL-Benchmark

Operation: FP16 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - TFLOPs/s, more is better:
a: 18.32
d: 18.31
c: 18.29
b: 17.79
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

Y-Cruncher

Pi Digits To Calculate: 500M

Y-Cruncher 0.8.3 - Seconds, fewer is better:
a: 9.632
c: 9.759
d: 9.865
b: 9.894

ProjectPhysX OpenCL-Benchmark

Operation: Memory Bandwidth Coalesced Read

ProjectPhysX OpenCL-Benchmark 1.2 - GB/s, more is better:
b: 224.17
d: 221.78
a: 221.35
c: 219.56
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: Memory Bandwidth Coalesced Write

ProjectPhysX OpenCL-Benchmark 1.2 - GB/s, more is better:
a: 447.57
b: 447.53
d: 442.59
c: 438.89
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: INT64 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - TIOPs/s, more is better:
c: 1.282
a: 1.280
d: 1.277
b: 1.260
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: INT16 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - TIOPs/s, more is better:
a: 30.26
c: 29.77
b: 29.75
d: 29.72
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 - Figure Of Merit, more is better:
a: 17310000
c: 17220000
d: 17190000
b: 17040000
1. (CXX) g++ options: -fopenmp -O3 -march=native

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - images/sec, more is better:
b: 36.15
a: 36.03
d: 35.69
c: 35.65

Speedb

Test: Random Fill Sync

Speedb 2.7 - Op/s, more is better:
c: 46257
b: 45930
a: 45735
d: 45697
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Update Random

Speedb 2.7 - Op/s, more is better:
d: 592787
a: 587707
b: 586402
c: 585805
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Fill

Speedb 2.7 - Op/s, more is better:
c: 1083754
d: 1081261
a: 1072435
b: 1072134
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, more is better:
d: 68.05
b: 67.78
a: 67.62
c: 67.34

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 - batches/sec, more is better:
b: 17.13 (MIN: 10.27 / MAX: 17.85)
c: 17.08 (MIN: 8.2 / MAX: 17.84)
d: 17.00 (MIN: 7.61 / MAX: 17.69)
a: 17.00 (MIN: 6.54 / MAX: 17.74)

ProjectPhysX OpenCL-Benchmark

Operation: FP32 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - TFLOPs/s, more is better:
d: 12.07
b: 12.06
a: 12.00
c: 11.99
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

Speedb

Test: Read Random Write Random

Speedb 2.7 - Op/s, more is better:
c: 3293583
a: 3284249
d: 3282528
b: 3274485
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Y-Cruncher

Pi Digits To Calculate: 1B

Y-Cruncher 0.8.3 - Seconds, fewer is better:
b: 22.53
c: 22.53
d: 22.55
a: 22.64

Y-Cruncher

Pi Digits To Calculate: 5B

Y-Cruncher 0.8.3 - Seconds, fewer is better:
c: 137.12
b: 137.50
d: 137.70
a: 137.77

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

TensorFlow 2.12 - images/sec, more is better:
d: 19.87
c: 19.87
b: 19.84
a: 19.79

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - images/sec, more is better:
a: 159.94
d: 159.65
c: 159.35
b: 159.35

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

TensorFlow 2.12 - images/sec, more is better:
a: 21.80
b: 21.77
d: 21.76
c: 21.72

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

TensorFlow 2.12 - images/sec, more is better:
d: 10.36
c: 10.36
a: 10.35
b: 10.33

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.12 - images/sec, more is better:
d: 5.78
c: 5.78
b: 5.77
a: 5.77

ProjectPhysX OpenCL-Benchmark

Operation: INT8 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - TIOPs/s, more is better:
d: 11.39
c: 11.39
a: 11.38
b: 11.38
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL

ProjectPhysX OpenCL-Benchmark

Operation: INT32 Compute

ProjectPhysX OpenCL-Benchmark 1.2 - TIOPs/s, more is better:
d: 5.466
c: 5.466
a: 5.466
b: 5.465
1. (CXX) g++ options: -std=c++17 -pthread -lOpenCL


Phoronix Test Suite v10.8.5