new tests eo nov

Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1402 BIOS) and AMD Radeon 15GB on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2311285-PTS-NEWTESTS44&sor&grs.

Test system (identical for configurations a, b, c, d, and e):

Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (1402 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: Western Digital WD_BLACK SN850X 1000GB
Graphics: AMD Radeon 15GB (1617/1124MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
OS: Ubuntu 23.10
Kernel: 6.5.0-10-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server 1.21.1.7 + Wayland
OpenGL: 4.6 Mesa 24.0~git2311100600.05fb6b~oibaf~m (git-05fb6b9 2023-11-10 mantic-oibaf-ppa) (LLVM 16.0.6 DRM 3.54)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: performance) - CPU Microcode: 0x11d - Thermald 2.5.4
Java Details: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu1)
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result overview (OpenBenchmarking.org): five runs (a, b, c, d, e) covering PyTorch 2.1 (ResNet-50, ResNet-152, and Efficientnet_v2_l at batch sizes 1 through 512), WebP2 Image Encode, OpenSSL (RSA4096, SHA256, SHA512, ChaCha20), Java SciMark 2.2, and Embree 4.3. The individual per-test results follow below.

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
e: 12.09  (MIN: 5.97 / MAX: 12.57)
d: 11.72  (MIN: 5.22 / MAX: 12.2)
a: 10.09  (MIN: 4.54 / MAX: 12.07)
c: 8.89  (MIN: 3.85 / MAX: 9.08)
b: 8.82  (MIN: 5.03 / MAX: 9.05)
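The batches/sec figures here come from the Phoronix Test Suite's PyTorch test profile. As a rough illustration of how such a CPU inference throughput number is produced, the following minimal Python sketch (not the actual test profile; the model choice, input size, and iteration counts are arbitrary assumptions) times repeated forward passes of a torchvision model and reports batches per second:

```python
# Minimal sketch, not the Phoronix Test Suite pytorch profile:
# estimate CPU inference throughput in batches/sec for a torchvision model.
import time

import torch
import torchvision.models as models


def batches_per_sec(model, batch_size=32, iters=20, warmup=5):
    model.eval()
    x = torch.randn(batch_size, 3, 224, 224)  # dummy ImageNet-sized input
    with torch.no_grad():
        for _ in range(warmup):        # warm up allocator and thread pools
            model(x)
        start = time.time()
        for _ in range(iters):
            model(x)
        elapsed = time.time() - start
    return iters / elapsed             # batches processed per second


if __name__ == "__main__":
    # resnet50 stands in here; the report also covers ResNet-152 and
    # Efficientnet_v2_l at various batch sizes.
    print(f"{batches_per_sec(models.resnet50()):.2f} batches/sec")
```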

PyTorch

Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
b: 11.98  (MIN: 5.05 / MAX: 12.49)
d: 11.59  (MIN: 5.55 / MAX: 12.11)
c: 8.95  (MIN: 4.32 / MAX: 9)
a: 8.94  (MIN: 5 / MAX: 9.09)
e: 8.78  (MIN: 4.28 / MAX: 9.7)

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
e: 11.86  (MIN: 5.09 / MAX: 12.28)
b: 11.65  (MIN: 5.27 / MAX: 12.15)
c: 8.88  (MIN: 4.94 / MAX: 9.08)
d: 8.80  (MIN: 5.17 / MAX: 8.94)
a: 8.79  (MIN: 4.35 / MAX: 9.75)

PyTorch

Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
e: 11.69  (MIN: 5.58 / MAX: 12.18)
b: 11.67  (MIN: 5.41 / MAX: 12.17)
c: 10.50  (MIN: 4.78 / MAX: 10.98)
d: 8.96  (MIN: 4.16 / MAX: 9.35)
a: 8.90  (MIN: 4.78 / MAX: 9.1)

PyTorch

Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
c: 11.60  (MIN: 5.57 / MAX: 12.14)
b: 10.51  (MIN: 4.81 / MAX: 11.29)
d: 8.92  (MIN: 4.97 / MAX: 9.08)
e: 8.87  (MIN: 4.6 / MAX: 9.03)
a: 8.85  (MIN: 4.16 / MAX: 9.06)

WebP2 Image Encode

Encode Settings: Quality 100, Compression Effort 5

WebP2 Image Encode 20220823 (OpenBenchmarking.org) - MP/s, More Is Better:
e: 9.98
c: 9.93
b: 8.96
a: 8.83
d: 7.65
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
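The MP/s unit in these WebP2 results is megapixels of input encoded per second. The WebP2 encoder (cwp2) is an experimental codec without mainstream Python bindings, so the sketch below uses Pillow's classic WebP encoder purely to illustrate how an MP/s figure can be derived; the image size, quality, and method values are arbitrary assumptions and the resulting numbers are not comparable to the WebP2 figures above.

```python
# Illustrative only: classic WebP via Pillow, standing in for WebP2 (cwp2),
# just to show how an MP/s (megapixels encoded per second) rate is computed.
import io
import os
import time

from PIL import Image


def encode_mp_per_sec(width=1920, height=1080, quality=100, method=5, iters=5):
    # Random pixels so the encoder has non-trivial work to do.
    img = Image.frombytes("RGB", (width, height), os.urandom(width * height * 3))
    megapixels = width * height / 1e6
    start = time.time()
    for _ in range(iters):
        buf = io.BytesIO()
        img.save(buf, format="WEBP", quality=quality, method=method)
    elapsed = time.time() - start
    return megapixels * iters / elapsed


if __name__ == "__main__":
    print(f"{encode_mp_per_sec():.2f} MP/s")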

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
b: 28.97  (MIN: 7.95 / MAX: 29.71)
a: 28.71  (MIN: 8.87 / MAX: 29.47)
d: 22.73  (MIN: 22.46 / MAX: 27.6)
e: 22.45  (MIN: 22.19 / MAX: 27.15)
c: 22.35  (MIN: 22.12 / MAX: 27.32)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
d: 75.67  (MIN: 72.62 / MAX: 75.95)
c: 74.54  (MIN: 71.89 / MAX: 75.12)
e: 74.03  (MIN: 71.54 / MAX: 75.27)
b: 60.11  (MIN: 59.29 / MAX: 71.98)
a: 58.84  (MIN: 57.25 / MAX: 68.79)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-152

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
b: 18.42  (MIN: 9.4 / MAX: 19.35)
e: 18.09  (MIN: 8.97 / MAX: 18.85)
d: 18.08  (MIN: 8.98 / MAX: 18.85)
a: 14.89  (MIN: 6.18 / MAX: 17.49)
c: 14.65  (MIN: 6.03 / MAX: 16.53)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
e: 18.30  (MIN: 10.64 / MAX: 19.07)
a: 18.08  (MIN: 10.66 / MAX: 18.86)
c: 18.04  (MIN: 8.26 / MAX: 18.82)
b: 17.71  (MIN: 6.7 / MAX: 18.52)
d: 14.88  (MIN: 6.24 / MAX: 18.19)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
d: 47.83  (MIN: 16.97 / MAX: 49.88)
c: 47.00  (MIN: 11.89 / MAX: 48.99)
b: 39.34  (MIN: 10.03 / MAX: 46.87)
e: 39.29  (MIN: 10.75 / MAX: 42.24)
a: 39.05  (MIN: 10.5 / MAX: 40.73)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-50

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
c: 46.95  (MIN: 11.8 / MAX: 48.85)
a: 46.82  (MIN: 12.21 / MAX: 48.82)
e: 46.67  (MIN: 15.18 / MAX: 48.54)
d: 44.52  (MIN: 13.09 / MAX: 46.68)
b: 38.93  (MIN: 10.46 / MAX: 47.12)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
c: 46.47  (MIN: 11.72 / MAX: 48.37)
a: 46.32  (MIN: 12.67 / MAX: 49.3)
b: 44.47  (MIN: 12.19 / MAX: 46.31)
d: 44.14  (MIN: 11.59 / MAX: 46.03)
e: 38.68  (MIN: 9.93 / MAX: 46.76)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
e: 18.17  (MIN: 9.41 / MAX: 18.96)
c: 17.99  (MIN: 7.44 / MAX: 18.86)
b: 17.63  (MIN: 8.26 / MAX: 18.7)
a: 17.17  (MIN: 7.36 / MAX: 17.91)
d: 15.14  (MIN: 5.99 / MAX: 17.74)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
e: 18.24  (MIN: 8.5 / MAX: 19.02)
d: 18.14  (MIN: 9.88 / MAX: 18.9)
a: 18.04  (MIN: 9.24 / MAX: 18.83)
b: 17.12  (MIN: 6.99 / MAX: 17.92)
c: 16.90  (MIN: 6.08 / MAX: 17.69)

WebP2 Image Encode

Encode Settings: Quality 75, Compression Effort 7

WebP2 Image Encode 20220823 (OpenBenchmarking.org) - MP/s, More Is Better (no result for configuration c):
a: 0.35
e: 0.34
d: 0.34
b: 0.33
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-152

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
b: 18.15  (MIN: 6.25 / MAX: 18.92)
d: 18.08  (MIN: 11.59 / MAX: 18.87)
e: 18.07  (MIN: 7.25 / MAX: 18.84)
c: 17.97  (MIN: 8.82 / MAX: 18.73)
a: 17.13  (MIN: 6.62 / MAX: 17.92)

WebP2 Image Encode

Encode Settings: Default

WebP2 Image Encode 20220823 (OpenBenchmarking.org) - MP/s, More Is Better:
d: 16.24
b: 15.89
a: 15.78
e: 15.65
c: 15.40
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

OpenSSL

Algorithm: RSA4096

OpenSSL (OpenBenchmarking.org) - verify/s, More Is Better:
c: 359762.4
b: 355954.3
e: 352828.9
d: 351474.2
a: 347145.5
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)

OpenSSL

Algorithm: RSA4096

OpenSSL (OpenBenchmarking.org) - sign/s, More Is Better:
c: 5536.0
b: 5476.0
e: 5429.5
d: 5410.1
a: 5360.0
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)
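The verify/s and sign/s figures above come from OpenSSL's own multi-threaded speed benchmark. As a rough, single-threaded analogue of what those two rates mean, the sketch below (using the third-party cryptography package; the iteration count and padding choice are assumptions, and this is not the OpenSSL benchmark harness) times RSA-4096 signing and verification:

```python
# Rough single-threaded illustration of RSA-4096 sign/s and verify/s,
# not the "openssl speed" style harness used for the results above.
import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa


def rsa4096_ops_per_sec(iters=50):
    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    pub = key.public_key()
    msg = b"benchmark payload"
    pad = padding.PKCS1v15()

    start = time.time()
    sigs = [key.sign(msg, pad, hashes.SHA256()) for _ in range(iters)]
    sign_rate = iters / (time.time() - start)

    start = time.time()
    for sig in sigs:
        pub.verify(sig, msg, pad, hashes.SHA256())  # raises if invalid
    verify_rate = iters / (time.time() - start)

    return sign_rate, verify_rate


if __name__ == "__main__":
    s, v = rsa4096_ops_per_sec()
    print(f"sign/s: {s:.1f}  verify/s: {v:.1f}")
```

As in the results above, verification is orders of magnitude faster than signing because the public exponent is small while the private-key operation works with the full 4096-bit modulus.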

Java SciMark

Computational Test: Dense LU Matrix Factorization

Java SciMark 2.2 (OpenBenchmarking.org) - Mflops, More Is Better:
b: 13387.72
e: 13354.20
c: 13341.67
d: 13337.50
a: 13059.89

OpenSSL

Algorithm: SHA512

OpenSSL (OpenBenchmarking.org) - byte/s, More Is Better:
b: 11062578760
e: 11006330870
c: 10975727400
a: 10849481210
d: 10815857230
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)

OpenSSL

Algorithm: SHA256

OpenSSL (OpenBenchmarking.org) - byte/s, More Is Better:
b: 35998666020
c: 35762336880
d: 35625391340
e: 35567121540
a: 35474953010
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)

Java SciMark

Computational Test: Composite

Java SciMark 2.2 (OpenBenchmarking.org) - Mflops, More Is Better:
b: 4785.17
e: 4779.52
c: 4773.58
d: 4772.90
a: 4716.01

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-50

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
a: 46.88  (MIN: 15.71 / MAX: 48.91)
b: 46.76  (MIN: 16.58 / MAX: 48.66)
e: 46.58  (MIN: 12.98 / MAX: 49.17)
d: 46.40  (MIN: 11.75 / MAX: 48.73)
c: 46.32  (MIN: 12.42 / MAX: 48.73)

Java SciMark

Computational Test: Fast Fourier Transform

Java SciMark 2.2 (OpenBenchmarking.org) - Mflops, More Is Better:
c: 1232.91
b: 1232.02
e: 1231.13
d: 1230.69
a: 1219.73

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3 (OpenBenchmarking.org) - Frames Per Second, More Is Better:
c: 30.31  (MIN: 29.74 / MAX: 31.91)
e: 30.10  (MIN: 29.54 / MAX: 31.88)
a: 30.07  (MIN: 29.59 / MAX: 31.69)
b: 30.06  (MIN: 29.51 / MAX: 31.67)
d: 29.99  (MIN: 29.36 / MAX: 31.72)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
c: 46.76  (MIN: 12.51 / MAX: 48.77)
b: 46.73  (MIN: 12.71 / MAX: 49.06)
e: 46.49  (MIN: 11.98 / MAX: 48.65)
a: 46.48  (MIN: 14.21 / MAX: 48.5)
d: 46.30  (MIN: 12.41 / MAX: 48.18)

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3 (OpenBenchmarking.org) - Frames Per Second, More Is Better:
a: 30.48  (MIN: 29.84 / MAX: 32.11)
e: 30.27  (MIN: 29.65 / MAX: 32.08)
b: 30.25  (MIN: 29.79 / MAX: 32.07)
c: 30.20  (MIN: 29.65 / MAX: 31.81)
d: 30.19  (MIN: 29.61 / MAX: 31.75)

Java SciMark

Computational Test: Monte Carlo

Java SciMark 2.2 (OpenBenchmarking.org) - Mflops, More Is Better:
e: 1568.08
d: 1567.51
b: 1567.51
a: 1567.51
c: 1556.15

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.3 (OpenBenchmarking.org) - Frames Per Second, More Is Better:
e: 31.88  (MIN: 31.49 / MAX: 33.08)
a: 31.86  (MIN: 31.53 / MAX: 32.96)
b: 31.75  (MIN: 31.42 / MAX: 32.35)
c: 31.72  (MIN: 31.39 / MAX: 32.38)
d: 31.65  (MIN: 31.25 / MAX: 33.03)

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 (OpenBenchmarking.org) - batches/sec, More Is Better:
a: 13.50  (MIN: 11.22 / MAX: 17.95)
e: 13.48  (MIN: 10.66 / MAX: 17.95)
d: 13.48  (MIN: 11.3 / MAX: 17.93)
c: 13.43  (MIN: 10.97 / MAX: 17.86)
b: 13.42  (MIN: 10.94 / MAX: 18.06)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3 (OpenBenchmarking.org) - Frames Per Second, More Is Better:
d: 36.42  (MIN: 35.89 / MAX: 38.27)
b: 36.41  (MIN: 35.88 / MAX: 37.89)
c: 36.25  (MIN: 35.88 / MAX: 36.94)
e: 36.22  (MIN: 35.75 / MAX: 37.76)
a: 36.21  (MIN: 35.74 / MAX: 37.79)

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.3 (OpenBenchmarking.org) - Frames Per Second, More Is Better:
d: 31.27  (MIN: 30.85 / MAX: 31.86)
a: 31.15  (MIN: 30.34 / MAX: 32.27)
e: 31.13  (MIN: 30.37 / MAX: 32.25)
b: 31.12  (MIN: 30.41 / MAX: 32.05)
c: 31.09  (MIN: 30.45 / MAX: 32.14)

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3 (OpenBenchmarking.org) - Frames Per Second, More Is Better:
e: 34.50  (MIN: 33.99 / MAX: 35.71)
c: 34.48  (MIN: 33.82 / MAX: 35.74)
a: 34.45  (MIN: 33.88 / MAX: 35.95)
b: 34.44  (MIN: 33.88 / MAX: 35.6)
d: 34.38  (MIN: 33.76 / MAX: 35.61)

Java SciMark

Computational Test: Sparse Matrix Multiply

Java SciMark 2.2 (OpenBenchmarking.org) - Mflops, More Is Better:
e: 4794.85
a: 4792.04
b: 4790.64
c: 4789.24
d: 4780.86

Java SciMark

Computational Test: Jacobi Successive Over-Relaxation

Java SciMark 2.2 (OpenBenchmarking.org) - Mflops, More Is Better:
e: 2949.36
d: 2947.94
c: 2947.94
b: 2947.94
a: 2940.87

WebP2 Image Encode

Encode Settings: Quality 100, Lossless Compression

WebP2 Image Encode 20220823 (OpenBenchmarking.org) - MP/s, More Is Better:
e: 0.04
d: 0.04
c: 0.04
b: 0.04
a: 0.04
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 95, Compression Effort 7

WebP2 Image Encode 20220823 (OpenBenchmarking.org) - MP/s, More Is Better:
e: 0.16
d: 0.16
c: 0.16
b: 0.16
a: 0.16
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl


Phoronix Test Suite v10.8.5