Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1402 BIOS) and AMD Radeon 15GB on Ubuntu 23.10 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2311285-PTS-NEWTESTS44
HTML result view exported from: https://openbenchmarking.org/result/2311285-PTS-NEWTESTS44&sro&grt .
new tests eo nov

Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (1402 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: Western Digital WD_BLACK SN850X 1000GB
Graphics: AMD Radeon 15GB (1617/1124MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
OS: Ubuntu 23.10
Kernel: 6.5.0-10-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server 1.21.1.7 + Wayland
OpenGL: 4.6 Mesa 24.0~git2311100600.05fb6b~oibaf~m (git-05fb6b9 2023-11-10 mantic-oibaf-ppa) (LLVM 16.0.6 DRM 3.54)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

All five runs (a, b, c, d, e) used this same configuration.

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave (EPP: performance); CPU Microcode: 0x11d; Thermald 2.5.4

Java Details: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu1)

Python Details: Python 3.11.6

Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: SW sequence; srbds: Not affected; tsx_async_abort: Not affected
new tests eo nov - results overview (runs a-e; units vary by test, see the detailed results below)

| Test | a | b | c | d | e |
|------|---|---|---|---|---|
| embree: Pathtracer - Crown | 30.0713 | 30.0598 | 30.307 | 29.9923 | 30.0964 |
| embree: Pathtracer ISPC - Crown | 30.4783 | 30.2471 | 30.2013 | 30.1913 | 30.2731 |
| embree: Pathtracer - Asian Dragon | 34.446 | 34.4424 | 34.4794 | 34.3776 | 34.5032 |
| embree: Pathtracer - Asian Dragon Obj | 31.1516 | 31.1224 | 31.0889 | 31.2663 | 31.1294 |
| embree: Pathtracer ISPC - Asian Dragon | 36.2124 | 36.4086 | 36.2479 | 36.4217 | 36.2227 |
| embree: Pathtracer ISPC - Asian Dragon Obj | 31.858 | 31.7473 | 31.723 | 31.6549 | 31.8813 |
| java-scimark2: Composite | 4716.01 | 4785.17 | 4773.58 | 4772.90 | 4779.52 |
| java-scimark2: Monte Carlo | 1567.51 | 1567.51 | 1556.15 | 1567.51 | 1568.08 |
| java-scimark2: Fast Fourier Transform | 1219.73 | 1232.02 | 1232.91 | 1230.69 | 1231.13 |
| java-scimark2: Sparse Matrix Multiply | 4792.04 | 4790.64 | 4789.24 | 4780.86 | 4794.85 |
| java-scimark2: Dense LU Matrix Factorization | 13059.89 | 13387.72 | 13341.67 | 13337.50 | 13354.20 |
| java-scimark2: Jacobi Successive Over-Relaxation | 2940.87 | 2947.94 | 2947.94 | 2947.94 | 2949.36 |
| openssl: SHA256 (byte/s) | 35474953010 | 35998666020 | 35762336880 | 35625391340 | 35567121540 |
| openssl: SHA512 (byte/s) | 10849481210 | 11062578760 | 10975727400 | 10815857230 | 11006330870 |
| openssl: RSA4096 (sign/s) | 5360.0 | 5476.0 | 5536.0 | 5410.1 | 5429.5 |
| openssl: RSA4096 (verify/s) | 347145.5 | 355954.3 | 359762.4 | 351474.2 | 352828.9 |
| pytorch: CPU - 1 - ResNet-50 | 58.84 | 60.11 | 74.54 | 75.67 | 74.03 |
| pytorch: CPU - 1 - ResNet-152 | 28.71 | 28.97 | 22.35 | 22.73 | 22.45 |
| pytorch: CPU - 16 - ResNet-50 | 46.32 | 44.47 | 46.47 | 44.14 | 38.68 |
| pytorch: CPU - 32 - ResNet-50 | 46.48 | 46.73 | 46.76 | 46.30 | 46.49 |
| pytorch: CPU - 64 - ResNet-50 | 46.82 | 38.93 | 46.95 | 44.52 | 46.67 |
| pytorch: CPU - 16 - ResNet-152 | 17.17 | 17.63 | 17.99 | 15.14 | 18.17 |
| pytorch: CPU - 256 - ResNet-50 | 39.05 | 39.34 | 47.00 | 47.83 | 39.29 |
| pytorch: CPU - 32 - ResNet-152 | 18.04 | 17.12 | 16.90 | 18.14 | 18.24 |
| pytorch: CPU - 512 - ResNet-50 | 46.88 | 46.76 | 46.32 | 46.40 | 46.58 |
| pytorch: CPU - 64 - ResNet-152 | 14.89 | 18.42 | 14.65 | 18.08 | 18.09 |
| pytorch: CPU - 256 - ResNet-152 | 18.08 | 17.71 | 18.04 | 14.88 | 18.30 |
| pytorch: CPU - 512 - ResNet-152 | 17.13 | 18.15 | 17.97 | 18.08 | 18.07 |
| pytorch: CPU - 1 - Efficientnet_v2_l | 13.50 | 13.42 | 13.43 | 13.48 | 13.48 |
| pytorch: CPU - 16 - Efficientnet_v2_l | 8.79 | 11.65 | 8.88 | 8.80 | 11.86 |
| pytorch: CPU - 32 - Efficientnet_v2_l | 10.09 | 8.82 | 8.89 | 11.72 | 12.09 |
| pytorch: CPU - 64 - Efficientnet_v2_l | 8.85 | 10.51 | 11.60 | 8.92 | 8.87 |
| pytorch: CPU - 256 - Efficientnet_v2_l | 8.94 | 11.98 | 8.95 | 11.59 | 8.78 |
| pytorch: CPU - 512 - Efficientnet_v2_l | 8.90 | 11.67 | 10.50 | 8.96 | 11.69 |
| webp2: Default | 15.78 | 15.89 | 15.40 | 16.24 | 15.65 |
| webp2: Quality 75, Compression Effort 7 | 0.35 | 0.33 | - | 0.34 | 0.34 |
| webp2: Quality 95, Compression Effort 7 | 0.16 | 0.16 | 0.16 | 0.16 | 0.16 |
| webp2: Quality 100, Compression Effort 5 | 8.83 | 8.96 | 9.93 | 7.65 | 9.98 |
| webp2: Quality 100, Lossless Compression | 0.04 | 0.04 | 0.04 | 0.04 | 0.04 |

No c value was recorded for webp2 Quality 75, Compression Effort 7.
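Because the tests use different units (FPS, Mflops, byte/s, batches/sec, MP/s), runs are best compared by normalizing each test to a reference run and taking the geometric mean of the ratios. A minimal sketch using three results from the table above (the subset is illustrative, not part of the original report):

```python
from math import prod

# Selected results from the overview table: {test: (run a, run c)}.
results = {
    "embree Pathtracer - Crown (FPS)": (30.0713, 30.307),
    "openssl SHA256 (byte/s)": (35474953010, 35762336880),
    "pytorch bs1 ResNet-50 (batches/sec)": (58.84, 74.54),
}

# Ratio > 1 means run c was faster on that test.
ratios = [c / a for a, c in results.values()]

# Geometric mean of the ratios: overall speed of run c relative to run a.
geomean = prod(ratios) ** (1 / len(ratios))
print(f"run c vs run a: {geomean:.3f}x")
```

On this subset, run c comes out roughly 9% ahead of run a, driven almost entirely by the batch-size-1 ResNet-50 result.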
Embree 4.3 (Frames Per Second, more is better; parentheses give per-run min-max):
  Pathtracer - Crown: a 30.07 (29.59-31.69), b 30.06 (29.51-31.67), c 30.31 (29.74-31.91), d 29.99 (29.36-31.72), e 30.10 (29.54-31.88)
  Pathtracer ISPC - Crown: a 30.48 (29.84-32.11), b 30.25 (29.79-32.07), c 30.20 (29.65-31.81), d 30.19 (29.61-31.75), e 30.27 (29.65-32.08)
  Pathtracer - Asian Dragon: a 34.45 (33.88-35.95), b 34.44 (33.88-35.6), c 34.48 (33.82-35.74), d 34.38 (33.76-35.61), e 34.50 (33.99-35.71)
  Pathtracer - Asian Dragon Obj: a 31.15 (30.34-32.27), b 31.12 (30.41-32.05), c 31.09 (30.45-32.14), d 31.27 (30.85-31.86), e 31.13 (30.37-32.25)
  Pathtracer ISPC - Asian Dragon: a 36.21 (35.74-37.79), b 36.41 (35.88-37.89), c 36.25 (35.88-36.94), d 36.42 (35.89-38.27), e 36.22 (35.75-37.76)
  Pathtracer ISPC - Asian Dragon Obj: a 31.86 (31.53-32.96), b 31.75 (31.42-32.35), c 31.72 (31.39-32.38), d 31.65 (31.25-33.03), e 31.88 (31.49-33.08)
Java SciMark 2.2 (Mflops, more is better):
  Composite: a 4716.01, b 4785.17, c 4773.58, d 4772.90, e 4779.52
  Monte Carlo: a 1567.51, b 1567.51, c 1556.15, d 1567.51, e 1568.08
  Fast Fourier Transform: a 1219.73, b 1232.02, c 1232.91, d 1230.69, e 1231.13
  Sparse Matrix Multiply: a 4792.04, b 4790.64, c 4789.24, d 4780.86, e 4794.85
  Dense LU Matrix Factorization: a 13059.89, b 13387.72, c 13341.67, d 13337.50, e 13354.20
  Jacobi Successive Over-Relaxation: a 2940.87, b 2947.94, c 2947.94, d 2947.94, e 2949.36
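The SciMark Composite score is simply the arithmetic mean of the five kernel scores, which the run-a numbers above confirm:

```python
# Run a kernel scores (Mflops) from the SciMark results above.
kernels = {
    "Monte Carlo": 1567.51,
    "Fast Fourier Transform": 1219.73,
    "Sparse Matrix Multiply": 4792.04,
    "Dense LU Matrix Factorization": 13059.89,
    "Jacobi Successive Over-Relaxation": 2940.87,
}

# Composite = arithmetic mean of the five kernels.
composite = sum(kernels.values()) / len(kernels)
print(f"computed composite: {composite:.2f}")  # matches the reported 4716.01
```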
OpenSSL (more is better):
  SHA256 (byte/s): a 35474953010, b 35998666020, c 35762336880, d 35625391340, e 35567121540
  SHA512 (byte/s): a 10849481210, b 11062578760, c 10975727400, d 10815857230, e 11006330870
  RSA4096 (sign/s): a 5360.0, b 5476.0, c 5536.0, d 5410.1, e 5429.5
  RSA4096 (verify/s): a 347145.5, b 355954.3, c 359762.4, d 351474.2, e 352828.9
  1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)
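The raw byte/s figures read more naturally in GB/s, and the RSA4096 numbers show the usual asymmetry between signing (private-key operation) and verification (cheap thanks to the small public exponent). A quick unit conversion on run a (plain arithmetic, not part of the PTS output):

```python
sha256_bps = 35474953010   # run a, SHA256 throughput in byte/s
rsa_sign = 5360.0          # run a, RSA4096 sign/s
rsa_verify = 347145.5      # run a, RSA4096 verify/s

# byte/s -> decimal GB/s
print(f"SHA256: {sha256_bps / 1e9:.2f} GB/s")
# Verification outpaces signing by more than 60x on this CPU.
print(f"verify/sign ratio: {rsa_verify / rsa_sign:.1f}x")
```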
PyTorch 2.1 - Device: CPU (batches/sec, more is better; parentheses give per-run min-max):
  Batch 1, ResNet-50: a 58.84 (57.25-68.79), b 60.11 (59.29-71.98), c 74.54 (71.89-75.12), d 75.67 (72.62-75.95), e 74.03 (71.54-75.27)
  Batch 1, ResNet-152: a 28.71 (8.87-29.47), b 28.97 (7.95-29.71), c 22.35 (22.12-27.32), d 22.73 (22.46-27.6), e 22.45 (22.19-27.15)
  Batch 16, ResNet-50: a 46.32 (12.67-49.3), b 44.47 (12.19-46.31), c 46.47 (11.72-48.37), d 44.14 (11.59-46.03), e 38.68 (9.93-46.76)
  Batch 32, ResNet-50: a 46.48 (14.21-48.5), b 46.73 (12.71-49.06), c 46.76 (12.51-48.77), d 46.30 (12.41-48.18), e 46.49 (11.98-48.65)
  Batch 64, ResNet-50: a 46.82 (12.21-48.82), b 38.93 (10.46-47.12), c 46.95 (11.8-48.85), d 44.52 (13.09-46.68), e 46.67 (15.18-48.54)
  Batch 16, ResNet-152: a 17.17 (7.36-17.91), b 17.63 (8.26-18.7), c 17.99 (7.44-18.86), d 15.14 (5.99-17.74), e 18.17 (9.41-18.96)
  Batch 256, ResNet-50: a 39.05 (10.5-40.73), b 39.34 (10.03-46.87), c 47.00 (11.89-48.99), d 47.83 (16.97-49.88), e 39.29 (10.75-42.24)
  Batch 32, ResNet-152: a 18.04 (9.24-18.83), b 17.12 (6.99-17.92), c 16.90 (6.08-17.69), d 18.14 (9.88-18.9), e 18.24 (8.5-19.02)
  Batch 512, ResNet-50: a 46.88 (15.71-48.91), b 46.76 (16.58-48.66), c 46.32 (12.42-48.73), d 46.40 (11.75-48.73), e 46.58 (12.98-49.17)
  Batch 64, ResNet-152: a 14.89 (6.18-17.49), b 18.42 (9.4-19.35), c 14.65 (6.03-16.53), d 18.08 (8.98-18.85), e 18.09 (8.97-18.85)
  Batch 256, ResNet-152: a 18.08 (10.66-18.86), b 17.71 (6.7-18.52), c 18.04 (8.26-18.82), d 14.88 (6.24-18.19), e 18.30 (10.64-19.07)
  Batch 512, ResNet-152: a 17.13 (6.62-17.92), b 18.15 (6.25-18.92), c 17.97 (8.82-18.73), d 18.08 (11.59-18.87), e 18.07 (7.25-18.84)
  Batch 1, Efficientnet_v2_l: a 13.50 (11.22-17.95), b 13.42 (10.94-18.06), c 13.43 (10.97-17.86), d 13.48 (11.3-17.93), e 13.48 (10.66-17.95)
  Batch 16, Efficientnet_v2_l: a 8.79 (4.35-9.75), b 11.65 (5.27-12.15), c 8.88 (4.94-9.08), d 8.80 (5.17-8.94), e 11.86 (5.09-12.28)
  Batch 32, Efficientnet_v2_l: a 10.09 (4.54-12.07), b 8.82 (5.03-9.05), c 8.89 (3.85-9.08), d 11.72 (5.22-12.2), e 12.09 (5.97-12.57)
  Batch 64, Efficientnet_v2_l: a 8.85 (4.16-9.06), b 10.51 (4.81-11.29), c 11.60 (5.57-12.14), d 8.92 (4.97-9.08), e 8.87 (4.6-9.03)
  Batch 256, Efficientnet_v2_l: a 8.94 (5-9.09), b 11.98 (5.05-12.49), c 8.95 (4.32-9), d 11.59 (5.55-12.11), e 8.78 (4.28-9.7)
  Batch 512, Efficientnet_v2_l: a 8.90 (4.78-9.1), b 11.67 (5.41-12.17), c 10.50 (4.78-10.98), d 8.96 (4.16-9.35), e 11.69 (5.58-12.18)
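The PyTorch results are notably noisier than the other tests: batch-size-1 ResNet-50, for example, ranges from 58.84 to 75.67 batches/sec across five otherwise identical runs. The result file does not record a cause; on a hybrid P-core/E-core CPU like the i9-14900K, thread placement is one plausible factor. A small sketch quantifying the run-to-run spread from the numbers above:

```python
from statistics import mean

# Batch size 1, ResNet-50, runs a-e (batches/sec) from the results above.
runs = [58.84, 60.11, 74.54, 75.67, 74.03]

avg = mean(runs)
spread = (max(runs) - min(runs)) / avg  # relative min-to-max spread
print(f"mean: {avg:.2f} batches/sec, spread: {spread:.1%}")
```

The spread works out to roughly a quarter of the mean, large enough that single-run PyTorch comparisons on this setup should be treated with caution.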
WebP2 Image Encode 20220823 (MP/s, more is better):
  Default: a 15.78, b 15.89, c 15.40, d 16.24, e 15.65
  Quality 75, Compression Effort 7: a 0.35, b 0.33, d 0.34, e 0.34 (no c result recorded)
  Quality 95, Compression Effort 7: a 0.16, b 0.16, c 0.16, d 0.16, e 0.16
  Quality 100, Compression Effort 5: a 8.83, b 8.96, c 9.93, d 7.65, e 9.98
  Quality 100, Lossless Compression: a 0.04, b 0.04, c 0.04, d 0.04, e 0.04
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
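Megapixels-per-second throughput translates directly into wall-clock encode time per image. As an illustration (the actual test image is not specified in this file, so a 3840x2160 frame, matching the test system's screen resolution, is assumed here):

```python
megapixels = 3840 * 2160 / 1e6   # about 8.29 MP per 4K frame

# Encode time = image size / throughput: MP / (MP/s) = seconds.
for label, mps in [("Default", 15.78), ("Quality 100, Lossless", 0.04)]:
    print(f"{label}: {megapixels / mps:.1f} s per 4K frame")
```

At the Default setting a 4K frame encodes in about half a second, while lossless compression at 0.04 MP/s takes over three minutes per frame.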
Phoronix Test Suite v10.8.4