new tests eo nov

Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1402 BIOS) and AMD Radeon 15GB on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2311285-PTS-NEWTESTS44&grs&sro.

new tests eo nov - System Details (all five runs, a-e, used the same configuration)

Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (1402 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: Western Digital WD_BLACK SN850X 1000GB
Graphics: AMD Radeon 15GB (1617/1124MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
OS: Ubuntu 23.10
Kernel: 6.5.0-10-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server 1.21.1.7 + Wayland
OpenGL: 4.6 Mesa 24.0~git2311100600.05fb6b~oibaf~m (git-05fb6b9 2023-11-10 mantic-oibaf-ppa) (LLVM 16.0.6 DRM 3.54)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: performance) - CPU Microcode: 0x11d - Thermald 2.5.4
Java Details: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu1)
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
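
The scaling governor and energy-performance preference noted under Processor Details can be read back from sysfs on the test system. A minimal sketch in Python, assuming a Linux machine using the intel_pstate driver (the paths are the standard cpufreq sysfs locations):

    from pathlib import Path

    # Standard Linux cpufreq sysfs attributes for the first CPU core.
    cpufreq = Path("/sys/devices/system/cpu/cpu0/cpufreq")
    governor = (cpufreq / "scaling_governor").read_text().strip()
    epp = (cpufreq / "energy_performance_preference").read_text().strip()
    # On this system the expected output is: governor=powersave  epp=performance
    print(f"governor={governor}  epp={epp}")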

new tests eo nov - Result Overview (runs a-e)

pytorch: CPU - 32 - Efficientnet_v2_l (batches/sec): a=10.09, b=8.82, c=8.89, d=11.72, e=12.09
pytorch: CPU - 256 - Efficientnet_v2_l (batches/sec): a=8.94, b=11.98, c=8.95, d=11.59, e=8.78
pytorch: CPU - 16 - Efficientnet_v2_l (batches/sec): a=8.79, b=11.65, c=8.88, d=8.80, e=11.86
pytorch: CPU - 512 - Efficientnet_v2_l (batches/sec): a=8.90, b=11.67, c=10.50, d=8.96, e=11.69
pytorch: CPU - 64 - Efficientnet_v2_l (batches/sec): a=8.85, b=10.51, c=11.60, d=8.92, e=8.87
webp2: Quality 100, Compression Effort 5 (MP/s): a=8.83, b=8.96, c=9.93, d=7.65, e=9.98
pytorch: CPU - 1 - ResNet-152 (batches/sec): a=28.71, b=28.97, c=22.35, d=22.73, e=22.45
pytorch: CPU - 1 - ResNet-50 (batches/sec): a=58.84, b=60.11, c=74.54, d=75.67, e=74.03
pytorch: CPU - 64 - ResNet-152 (batches/sec): a=14.89, b=18.42, c=14.65, d=18.08, e=18.09
pytorch: CPU - 256 - ResNet-152 (batches/sec): a=18.08, b=17.71, c=18.04, d=14.88, e=18.30
pytorch: CPU - 256 - ResNet-50 (batches/sec): a=39.05, b=39.34, c=47.00, d=47.83, e=39.29
pytorch: CPU - 64 - ResNet-50 (batches/sec): a=46.82, b=38.93, c=46.95, d=44.52, e=46.67
pytorch: CPU - 16 - ResNet-50 (batches/sec): a=46.32, b=44.47, c=46.47, d=44.14, e=38.68
pytorch: CPU - 16 - ResNet-152 (batches/sec): a=17.17, b=17.63, c=17.99, d=15.14, e=18.17
pytorch: CPU - 32 - ResNet-152 (batches/sec): a=18.04, b=17.12, c=16.90, d=18.14, e=18.24
webp2: Quality 75, Compression Effort 7 (MP/s): a=0.35, b=0.33, c=n/a, d=0.34, e=0.34
pytorch: CPU - 512 - ResNet-152 (batches/sec): a=17.13, b=18.15, c=17.97, d=18.08, e=18.07
webp2: Default (MP/s): a=15.78, b=15.89, c=15.40, d=16.24, e=15.65
openssl: RSA4096 (verify/s): a=347145.5, b=355954.3, c=359762.4, d=351474.2, e=352828.9
openssl: RSA4096 (sign/s): a=5360, b=5476, c=5536, d=5410.1, e=5429.5
java-scimark2: Dense LU Matrix Factorization (Mflops): a=13059.89, b=13387.72, c=13341.67, d=13337.5, e=13354.2
openssl: SHA512 (byte/s): a=10849481210, b=11062578760, c=10975727400, d=10815857230, e=11006330870
openssl: SHA256 (byte/s): a=35474953010, b=35998666020, c=35762336880, d=35625391340, e=35567121540
java-scimark2: Composite (Mflops): a=4716.01, b=4785.17, c=4773.58, d=4772.9, e=4779.52
pytorch: CPU - 512 - ResNet-50 (batches/sec): a=46.88, b=46.76, c=46.32, d=46.40, e=46.58
java-scimark2: Fast Fourier Transform (Mflops): a=1219.73, b=1232.02, c=1232.91, d=1230.69, e=1231.13
embree: Pathtracer - Crown (FPS): a=30.0713, b=30.0598, c=30.307, d=29.9923, e=30.0964
pytorch: CPU - 32 - ResNet-50 (batches/sec): a=46.48, b=46.73, c=46.76, d=46.30, e=46.49
embree: Pathtracer ISPC - Crown (FPS): a=30.4783, b=30.2471, c=30.2013, d=30.1913, e=30.2731
java-scimark2: Monte Carlo (Mflops): a=1567.51, b=1567.51, c=1556.15, d=1567.51, e=1568.08
embree: Pathtracer ISPC - Asian Dragon Obj (FPS): a=31.858, b=31.7473, c=31.723, d=31.6549, e=31.8813
pytorch: CPU - 1 - Efficientnet_v2_l (batches/sec): a=13.50, b=13.42, c=13.43, d=13.48, e=13.48
embree: Pathtracer ISPC - Asian Dragon (FPS): a=36.2124, b=36.4086, c=36.2479, d=36.4217, e=36.2227
embree: Pathtracer - Asian Dragon Obj (FPS): a=31.1516, b=31.1224, c=31.0889, d=31.2663, e=31.1294
embree: Pathtracer - Asian Dragon (FPS): a=34.446, b=34.4424, c=34.4794, d=34.3776, e=34.5032
java-scimark2: Sparse Matrix Multiply (Mflops): a=4792.04, b=4790.64, c=4789.24, d=4780.86, e=4794.85
java-scimark2: Jacobi Successive Over-Relaxation (Mflops): a=2940.87, b=2947.94, c=2947.94, d=2947.94, e=2949.36
webp2: Quality 100, Lossless Compression (MP/s): a=0.04, b=0.04, c=0.04, d=0.04, e=0.04
webp2: Quality 95, Compression Effort 7 (MP/s): a=0.16, b=0.16, c=0.16, d=0.16, e=0.16
openssl: ChaCha20: no results recorded in this view
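
A common way to compare runs like a-e across many unrelated tests is a geometric mean of results normalized to one run. The sketch below illustrates that idea only; it is not the summary OpenBenchmarking.org itself computes, and it uses a hand-copied subset of the overview values above (all three metrics are higher-is-better):

    from math import prod

    # Subset of the overview table; keys are test names, values are per-run results.
    results = {
        "pytorch CPU-32 Efficientnet_v2_l": {"a": 10.09, "b": 8.82, "c": 8.89, "d": 11.72, "e": 12.09},
        "openssl RSA4096 verify/s": {"a": 347145.5, "b": 355954.3, "c": 359762.4, "d": 351474.2, "e": 352828.9},
        "embree Pathtracer Crown": {"a": 30.0713, "b": 30.0598, "c": 30.307, "d": 29.9923, "e": 30.0964},
    }

    for run in "abcde":
        # Normalize each result to run "a", then take the geometric mean of the ratios.
        ratios = [vals[run] / vals["a"] for vals in results.values()]
        geomean = prod(ratios) ** (1 / len(ratios))
        print(f"run {run}: relative geometric mean {geomean:.3f}")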

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 10.09  (MIN: 4.54 / MAX: 12.07)
b: 8.82   (MIN: 5.03 / MAX: 9.05)
c: 8.89   (MIN: 3.85 / MAX: 9.08)
d: 11.72  (MIN: 5.22 / MAX: 12.2)
e: 12.09  (MIN: 5.97 / MAX: 12.57)
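
For context on what a batches/sec figure measures, the sketch below times repeated forward passes of a model on the CPU. It is not the pts/pytorch harness; the model constructor, input shape (32 x 3 x 224 x 224), and iteration counts are assumptions made for illustration:

    import time

    import torch
    import torchvision.models as models

    # Untrained weights are fine here; only forward-pass throughput is measured.
    model = models.efficientnet_v2_l(weights=None).eval()
    batch = torch.randn(32, 3, 224, 224)  # batch size 32, assumed 224x224 input

    with torch.no_grad():
        for _ in range(3):          # warm-up passes
            model(batch)
        iters = 10
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        elapsed = time.perf_counter() - start

    print(f"{iters / elapsed:.2f} batches/sec")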

PyTorch

Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 8.94   (MIN: 5 / MAX: 9.09)
b: 11.98  (MIN: 5.05 / MAX: 12.49)
c: 8.95   (MIN: 4.32 / MAX: 9)
d: 11.59  (MIN: 5.55 / MAX: 12.11)
e: 8.78   (MIN: 4.28 / MAX: 9.7)

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 8.79   (MIN: 4.35 / MAX: 9.75)
b: 11.65  (MIN: 5.27 / MAX: 12.15)
c: 8.88   (MIN: 4.94 / MAX: 9.08)
d: 8.80   (MIN: 5.17 / MAX: 8.94)
e: 11.86  (MIN: 5.09 / MAX: 12.28)

PyTorch

Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 8.90   (MIN: 4.78 / MAX: 9.1)
b: 11.67  (MIN: 5.41 / MAX: 12.17)
c: 10.50  (MIN: 4.78 / MAX: 10.98)
d: 8.96   (MIN: 4.16 / MAX: 9.35)
e: 11.69  (MIN: 5.58 / MAX: 12.18)

PyTorch

Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 8.85   (MIN: 4.16 / MAX: 9.06)
b: 10.51  (MIN: 4.81 / MAX: 11.29)
c: 11.60  (MIN: 5.57 / MAX: 12.14)
d: 8.92   (MIN: 4.97 / MAX: 9.08)
e: 8.87   (MIN: 4.6 / MAX: 9.03)

WebP2 Image Encode

Encode Settings: Quality 100, Compression Effort 5

WebP2 Image Encode 20220823 - MP/s, More Is Better
a: 8.83, b: 8.96, c: 9.93, d: 7.65, e: 9.98
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
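
The WebP2 results are reported in MP/s, taken here to mean megapixels of input processed per second of encode time. A small illustrative helper; the image size and timing below are made-up numbers, not measurements from this result file:

    def megapixels_per_second(width: int, height: int, encode_seconds: float) -> float:
        # MP/s = (pixels encoded / 1e6) / wall-clock encode time
        return (width * height) / 1e6 / encode_seconds

    # A hypothetical 4096x4096 input encoded in 1.9 s works out to roughly 8.8 MP/s.
    print(f"{megapixels_per_second(4096, 4096, 1.9):.2f} MP/s")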

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 28.71  (MIN: 8.87 / MAX: 29.47)
b: 28.97  (MIN: 7.95 / MAX: 29.71)
c: 22.35  (MIN: 22.12 / MAX: 27.32)
d: 22.73  (MIN: 22.46 / MAX: 27.6)
e: 22.45  (MIN: 22.19 / MAX: 27.15)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 58.84  (MIN: 57.25 / MAX: 68.79)
b: 60.11  (MIN: 59.29 / MAX: 71.98)
c: 74.54  (MIN: 71.89 / MAX: 75.12)
d: 75.67  (MIN: 72.62 / MAX: 75.95)
e: 74.03  (MIN: 71.54 / MAX: 75.27)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 14.89  (MIN: 6.18 / MAX: 17.49)
b: 18.42  (MIN: 9.4 / MAX: 19.35)
c: 14.65  (MIN: 6.03 / MAX: 16.53)
d: 18.08  (MIN: 8.98 / MAX: 18.85)
e: 18.09  (MIN: 8.97 / MAX: 18.85)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 18.08  (MIN: 10.66 / MAX: 18.86)
b: 17.71  (MIN: 6.7 / MAX: 18.52)
c: 18.04  (MIN: 8.26 / MAX: 18.82)
d: 14.88  (MIN: 6.24 / MAX: 18.19)
e: 18.30  (MIN: 10.64 / MAX: 19.07)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 39.05  (MIN: 10.5 / MAX: 40.73)
b: 39.34  (MIN: 10.03 / MAX: 46.87)
c: 47.00  (MIN: 11.89 / MAX: 48.99)
d: 47.83  (MIN: 16.97 / MAX: 49.88)
e: 39.29  (MIN: 10.75 / MAX: 42.24)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 46.82  (MIN: 12.21 / MAX: 48.82)
b: 38.93  (MIN: 10.46 / MAX: 47.12)
c: 46.95  (MIN: 11.8 / MAX: 48.85)
d: 44.52  (MIN: 13.09 / MAX: 46.68)
e: 46.67  (MIN: 15.18 / MAX: 48.54)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 46.32  (MIN: 12.67 / MAX: 49.3)
b: 44.47  (MIN: 12.19 / MAX: 46.31)
c: 46.47  (MIN: 11.72 / MAX: 48.37)
d: 44.14  (MIN: 11.59 / MAX: 46.03)
e: 38.68  (MIN: 9.93 / MAX: 46.76)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 17.17  (MIN: 7.36 / MAX: 17.91)
b: 17.63  (MIN: 8.26 / MAX: 18.7)
c: 17.99  (MIN: 7.44 / MAX: 18.86)
d: 15.14  (MIN: 5.99 / MAX: 17.74)
e: 18.17  (MIN: 9.41 / MAX: 18.96)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 18.04  (MIN: 9.24 / MAX: 18.83)
b: 17.12  (MIN: 6.99 / MAX: 17.92)
c: 16.90  (MIN: 6.08 / MAX: 17.69)
d: 18.14  (MIN: 9.88 / MAX: 18.9)
e: 18.24  (MIN: 8.5 / MAX: 19.02)

WebP2 Image Encode

Encode Settings: Quality 75, Compression Effort 7

WebP2 Image Encode 20220823 - MP/s, More Is Better
a: 0.35, b: 0.33, d: 0.34, e: 0.34 (no result recorded for run c)
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 17.13  (MIN: 6.62 / MAX: 17.92)
b: 18.15  (MIN: 6.25 / MAX: 18.92)
c: 17.97  (MIN: 8.82 / MAX: 18.73)
d: 18.08  (MIN: 11.59 / MAX: 18.87)
e: 18.07  (MIN: 7.25 / MAX: 18.84)

WebP2 Image Encode

Encode Settings: Default

WebP2 Image Encode 20220823 - MP/s, More Is Better
a: 15.78, b: 15.89, c: 15.40, d: 16.24, e: 15.65
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

OpenSSL

Algorithm: RSA4096

OpenSSL - verify/s, More Is Better
a: 347145.5, b: 355954.3, c: 359762.4, d: 351474.2, e: 352828.9
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)

OpenSSL

Algorithm: RSA4096

OpenSSL - sign/s, More Is Better
a: 5360.0, b: 5476.0, c: 5536.0, d: 5410.1, e: 5429.5
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)
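
The sign/s and verify/s figures above come from OpenSSL's own speed harness (see the footnote). As a rough, non-equivalent illustration of what an operations-per-second rate for RSA-4096 looks like, here is a Python sketch using the cryptography package; the padding choice and timing window are assumptions, and the absolute numbers will not match OpenSSL's:

    import time

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    message = b"benchmark payload"
    signature = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    def ops_per_second(fn, seconds=2.0):
        # Run fn repeatedly for a fixed wall-clock window and report the rate.
        count = 0
        start = time.perf_counter()
        while time.perf_counter() - start < seconds:
            fn()
            count += 1
        return count / (time.perf_counter() - start)

    sign_rate = ops_per_second(lambda: key.sign(message, padding.PKCS1v15(), hashes.SHA256()))
    verify_rate = ops_per_second(
        lambda: key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256()))
    print(f"sign/s: {sign_rate:.1f}  verify/s: {verify_rate:.1f}")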

Java SciMark

Computational Test: Dense LU Matrix Factorization

Java SciMark 2.2 - Mflops, More Is Better
a: 13059.89, b: 13387.72, c: 13341.67, d: 13337.50, e: 13354.20

OpenSSL

Algorithm: SHA512

OpenSSL - byte/s, More Is Better
a: 10849481210, b: 11062578760, c: 10975727400, d: 10815857230, e: 11006330870
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)

OpenSSL

Algorithm: SHA256

OpenSSL - byte/s, More Is Better
a: 35474953010, b: 35998666020, c: 35762336880, d: 35625391340, e: 35567121540
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)
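
The SHA256 and SHA512 results are hashing throughput in byte/s as measured by OpenSSL. A loosely analogous single-threaded sketch with hashlib, shown only to illustrate how a byte/s figure is derived; the block size and measurement window are arbitrary, and the absolute rate will be far below the OpenSSL figures above:

    import hashlib
    import time

    chunk = b"\0" * (16 * 1024)   # 16 KiB blocks (arbitrary choice)
    digest = hashlib.sha256()
    processed = 0
    start = time.perf_counter()
    while time.perf_counter() - start < 2.0:
        digest.update(chunk)
        processed += len(chunk)

    rate = processed / (time.perf_counter() - start)
    print(f"SHA-256: {rate:,.0f} byte/s (single thread)")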

Java SciMark

Computational Test: Composite

Java SciMark 2.2 - Mflops, More Is Better
a: 4716.01, b: 4785.17, c: 4773.58, d: 4772.90, e: 4779.52
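
SciMark 2's Composite score is the arithmetic mean of its five kernel scores (Fast Fourier Transform, Jacobi Successive Over-Relaxation, Monte Carlo, Sparse Matrix Multiply, Dense LU Matrix Factorization), which the numbers in this result file bear out. A quick check against run a:

    # Run a's kernel scores: FFT, Jacobi SOR, Monte Carlo, Sparse Matrix Multiply, Dense LU.
    kernels = [1219.73, 2940.87, 1567.51, 4792.04, 13059.89]
    print(sum(kernels) / len(kernels))  # 4716.008, matching the reported Composite of 4716.01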

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 46.88  (MIN: 15.71 / MAX: 48.91)
b: 46.76  (MIN: 16.58 / MAX: 48.66)
c: 46.32  (MIN: 12.42 / MAX: 48.73)
d: 46.40  (MIN: 11.75 / MAX: 48.73)
e: 46.58  (MIN: 12.98 / MAX: 49.17)

Java SciMark

Computational Test: Fast Fourier Transform

Java SciMark 2.2 - Mflops, More Is Better
a: 1219.73, b: 1232.02, c: 1232.91, d: 1230.69, e: 1231.13

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better
a: 30.07  (MIN: 29.59 / MAX: 31.69)
b: 30.06  (MIN: 29.51 / MAX: 31.67)
c: 30.31  (MIN: 29.74 / MAX: 31.91)
d: 29.99  (MIN: 29.36 / MAX: 31.72)
e: 30.10  (MIN: 29.54 / MAX: 31.88)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 46.48  (MIN: 14.21 / MAX: 48.5)
b: 46.73  (MIN: 12.71 / MAX: 49.06)
c: 46.76  (MIN: 12.51 / MAX: 48.77)
d: 46.30  (MIN: 12.41 / MAX: 48.18)
e: 46.49  (MIN: 11.98 / MAX: 48.65)

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better
a: 30.48  (MIN: 29.84 / MAX: 32.11)
b: 30.25  (MIN: 29.79 / MAX: 32.07)
c: 30.20  (MIN: 29.65 / MAX: 31.81)
d: 30.19  (MIN: 29.61 / MAX: 31.75)
e: 30.27  (MIN: 29.65 / MAX: 32.08)

Java SciMark

Computational Test: Monte Carlo

Java SciMark 2.2 - Mflops, More Is Better
a: 1567.51, b: 1567.51, c: 1556.15, d: 1567.51, e: 1568.08

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.3 - Frames Per Second, More Is Better
a: 31.86  (MIN: 31.53 / MAX: 32.96)
b: 31.75  (MIN: 31.42 / MAX: 32.35)
c: 31.72  (MIN: 31.39 / MAX: 32.38)
d: 31.65  (MIN: 31.25 / MAX: 33.03)
e: 31.88  (MIN: 31.49 / MAX: 33.08)

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 13.50  (MIN: 11.22 / MAX: 17.95)
b: 13.42  (MIN: 10.94 / MAX: 18.06)
c: 13.43  (MIN: 10.97 / MAX: 17.86)
d: 13.48  (MIN: 11.3 / MAX: 17.93)
e: 13.48  (MIN: 10.66 / MAX: 17.95)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better
a: 36.21  (MIN: 35.74 / MAX: 37.79)
b: 36.41  (MIN: 35.88 / MAX: 37.89)
c: 36.25  (MIN: 35.88 / MAX: 36.94)
d: 36.42  (MIN: 35.89 / MAX: 38.27)
e: 36.22  (MIN: 35.75 / MAX: 37.76)

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.3 - Frames Per Second, More Is Better
a: 31.15  (MIN: 30.34 / MAX: 32.27)
b: 31.12  (MIN: 30.41 / MAX: 32.05)
c: 31.09  (MIN: 30.45 / MAX: 32.14)
d: 31.27  (MIN: 30.85 / MAX: 31.86)
e: 31.13  (MIN: 30.37 / MAX: 32.25)

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better
a: 34.45  (MIN: 33.88 / MAX: 35.95)
b: 34.44  (MIN: 33.88 / MAX: 35.6)
c: 34.48  (MIN: 33.82 / MAX: 35.74)
d: 34.38  (MIN: 33.76 / MAX: 35.61)
e: 34.50  (MIN: 33.99 / MAX: 35.71)

Java SciMark

Computational Test: Sparse Matrix Multiply

Java SciMark 2.2 - Mflops, More Is Better
a: 4792.04, b: 4790.64, c: 4789.24, d: 4780.86, e: 4794.85

Java SciMark

Computational Test: Jacobi Successive Over-Relaxation

Java SciMark 2.2 - Mflops, More Is Better
a: 2940.87, b: 2947.94, c: 2947.94, d: 2947.94, e: 2949.36

WebP2 Image Encode

Encode Settings: Quality 100, Lossless Compression

WebP2 Image Encode 20220823 - MP/s, More Is Better
a: 0.04, b: 0.04, c: 0.04, d: 0.04, e: 0.04
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 95, Compression Effort 7

WebP2 Image Encode 20220823 - MP/s, More Is Better
a: 0.16, b: 0.16, c: 0.16, d: 0.16, e: 0.16
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl


Phoronix Test Suite v10.8.5