Desktop machine learning: AMD Ryzen 9 3900X 12-Core testing with an MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS) motherboard and an NVIDIA GeForce RTX 3060 12GB on Ubuntu 23.10, via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2405015-VPA1-DESKTOP46&grw&sor&rro
Test configurations: mantic, mantic-no-omit-framepointer, noble.

System Details (mantic and mantic-no-omit-framepointer; noble differences listed below):
Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads)
Motherboard: MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS)
Chipset: AMD Starship/Matisse
Memory: 2 x 16GB DDR4-3200MT/s F4-3200C16-16GVK
Disk: 2000GB Seagate ST2000DM006-2DM1 + 2000GB Western Digital WD20EZAZ-00G + 500GB Samsung SSD 860 + 8002GB Seagate ST8000DM004-2CX1 + 1000GB CT1000BX500SSD1 + 512GB TS512GESD310C
Graphics: NVIDIA GeForce RTX 3060 12GB
Audio: NVIDIA GA104 HD Audio
Monitor: DELL P2314H
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 23.10
Kernel: 6.5.0-9-generic (x86_64)
Display Server: X Server 1.21.1.7
Display Driver: NVIDIA
OpenCL: OpenCL 3.0 CUDA 12.2.146
Compiler: GCC 13.2.0 + CUDA 12.2
File-System: ext4
Screen Resolution: 1920x1080

noble (differing values): Graphics: NVIDIA GeForce RTX 3060; Monitor: DELL P2314H + U32J59x; Network: Realtek RTL8111/8168/8211/8411; OS: Ubuntu 24.04; Kernel: 6.8.0-31-generic (x86_64); Compiler: GCC 13.2.0

Kernel Details: Transparent Huge Pages: madvise

Compiler Details:
mantic: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
mantic-no-omit-framepointer: identical to mantic except for the build paths (/build/gcc-13-b9QCDx/gcc-13-13.2.0/debian/tmp-nvptx/usr and /build/gcc-13-b9QCDx/gcc-13-13.2.0/debian/tmp-gcn/usr)
noble: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x8701013

Python Details: mantic: Python 3.11.6; mantic-no-omit-framepointer: Python 3.11.6; noble: Python 3.12.3

Security Details:
mantic: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
mantic-no-omit-framepointer: identical to mantic
noble: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Environment Details:
mantic-no-omit-framepointer: CXXFLAGS=-fno-omit-frame-pointer QMAKE_CFLAGS=-fno-omit-frame-pointer CFLAGS=-fno-omit-frame-pointer CFLAGS_OVERRIDE=-fno-omit-frame-pointer QMAKE_CXXFLAGS=-fno-omit-frame-pointer FFLAGS=-fno-omit-frame-pointer
noble: CXXFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" QMAKE_CFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" CFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" CFLAGS_OVERRIDE="-fno-omit-frame-pointer -frecord-gcc-switches -O2" QMAKE_CXXFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" FFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2"
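The mantic-no-omit-framepointer configuration differs from mantic only through the environment flags above. A minimal sketch of recreating that environment before launching a run; the phoronix-test-suite invocation is illustrative and commented out, only the flag setup is shown:

```python
# Sketch: set the mantic-no-omit-framepointer build environment, as listed
# in the Environment Details, before spawning a benchmark run.
import os
import subprocess

FLAGS = "-fno-omit-frame-pointer"
for var in ("CFLAGS", "CXXFLAGS", "FFLAGS",
            "CFLAGS_OVERRIDE", "QMAKE_CFLAGS", "QMAKE_CXXFLAGS"):
    os.environ[var] = FLAGS  # matches the Environment Details above

# subprocess.run(["phoronix-test-suite", "benchmark", "pts/scikit-learn"])
print(os.environ["CFLAGS"])  # -fno-omit-frame-pointer
```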
Desktop machine learning summary table. Tests, in order:
scikit-learn: GLM; SAGA; Tree; Lasso; Sparsify; Plot Ward; MNIST Dataset; Plot Neighbors; SGD Regression; SGDOneClassSVM; Isolation Forest; Text Vectorizers; Plot Hierarchical; Plot OMP vs. LARS; Feature Expansions; LocalOutlierFactor; TSNE MNIST Dataset; Isotonic / Logistic; Plot Incremental PCA; Hist Gradient Boosting; Sample Without Replacement; Covertype Dataset Benchmark; Hist Gradient Boosting Adult; Isotonic / Perturbed Logarithm; Hist Gradient Boosting Threading; 20 Newsgroups / Logistic Regression; Plot Polynomial Kernel Approximation; Hist Gradient Boosting Categorical Only; Kernel PCA Solvers / Time vs. N Samples; Kernel PCA Solvers / Time vs. N Components; Sparse Rand Projections / 100 Iterations
numpy: (default benchmark)
pytorch: CPU - 1 - ResNet-50; CPU - 1 - ResNet-152; CPU - 16 - ResNet-50; CPU - 32 - ResNet-50; CPU - 64 - ResNet-50; CPU - 16 - ResNet-152; CPU - 256 - ResNet-50; CPU - 32 - ResNet-152; CPU - 512 - ResNet-50; CPU - 64 - ResNet-152; CPU - 256 - ResNet-152; CPU - 512 - ResNet-152; CPU - 1 - Efficientnet_v2_l; CPU - 16 - Efficientnet_v2_l; CPU - 32 - Efficientnet_v2_l; CPU - 64 - Efficientnet_v2_l; CPU - 256 - Efficientnet_v2_l; CPU - 512 - Efficientnet_v2_l; NVIDIA CUDA GPU - 1 - ResNet-50; NVIDIA CUDA GPU - 1 - ResNet-152; NVIDIA CUDA GPU - 16 - ResNet-50; NVIDIA CUDA GPU - 32 - ResNet-50; NVIDIA CUDA GPU - 64 - ResNet-50; NVIDIA CUDA GPU - 16 - ResNet-152; NVIDIA CUDA GPU - 256 - ResNet-50; NVIDIA CUDA GPU - 32 - ResNet-152; NVIDIA CUDA GPU - 512 - ResNet-50; NVIDIA CUDA GPU - 64 - ResNet-152; NVIDIA CUDA GPU - 256 - ResNet-152; NVIDIA CUDA GPU - 512 - ResNet-152; NVIDIA CUDA GPU - 1 - Efficientnet_v2_l; NVIDIA CUDA GPU - 16 - Efficientnet_v2_l; NVIDIA CUDA GPU - 32 - Efficientnet_v2_l; NVIDIA CUDA GPU - 64 - Efficientnet_v2_l; NVIDIA CUDA GPU - 256 - Efficientnet_v2_l; NVIDIA CUDA GPU - 512 - Efficientnet_v2_l
pyhpc: CPU - Numpy - 16384 - Equation of State; CPU - Numpy - 16384 - Isoneutral Mixing; CPU - Numpy - 65536 - Equation of State; CPU - Numpy - 65536 - Isoneutral Mixing; GPU - Numpy - 16384 - Equation of State; GPU - Numpy - 16384 - Isoneutral Mixing; GPU - Numpy - 65536 - Equation of State; GPU - Numpy - 65536 - Isoneutral Mixing; CPU - Numpy - 262144 - Equation of State; CPU - Numpy - 262144 - Isoneutral Mixing; GPU - Numpy - 262144 - Equation of State; GPU - Numpy - 262144 - Isoneutral Mixing; CPU - Numpy - 1048576 - Equation of State; CPU - Numpy - 1048576 - Isoneutral Mixing; CPU - Numpy - 4194304 - Equation of State; CPU - Numpy - 4194304 - Isoneutral Mixing; GPU - Numpy - 1048576 - Equation of State; GPU - Numpy - 1048576 - Isoneutral Mixing; GPU - Numpy - 4194304 - Equation of State; GPU - Numpy - 4194304 - Isoneutral Mixing
pyperformance: go; 2to3; chaos; float; nbody; pathlib; raytrace; json_loads; crypto_pyaes; regex_compile; python_startup; django_template; pickle_pure_python
pybench: Total For Average Test Times

Results, in the test order above:

mantic: 293.598 868.018 48.338 511.848 127.282 57.824 65.763 147.752 106.315 379.739 289.371 60.814 211.286 91.499 131.277 53.464 236.865 1470.806 31.006 109.984 158.262 376.145 103.497 1788.259 110.215 41.519 150.732 18.579 72.541 37.242 613.547 426.28 32.36 12.72 24.28 24.29 24.24 9.88 24.42 9.84 24.13 9.88 9.77 9.87 7.31 5.63 5.63 5.62 5.61 5.61 210.88 73.91 200.30 199.46 201.41 73.01 202.72 74.15 203.18 71.81 71.74 72.31 39.35 38.95 37.71 37.88 37.36 37.43 0.003 0.009 0.015 0.032 0.003 0.009 0.015 0.033 0.061 0.131 0.062 0.131 0.263 0.619 1.402 2.670 0.263 0.631 1.422 2.662 129 221 62.8 67.4 76.2 19.7 262 19.5 65.1 116 7.61 28.5 259 774

mantic-no-omit-framepointer: 295.096 873.822 52.969 509.537 125.442 57.545 65.877 142.451 107.527 382.611 336.372 63.875 208.391 92.582 133.092 56.754 236.786 1471.834 31.057 111.255 161.460 370.694 105.647 1828.300 110.374 41.728 150.376 18.865 72.909 37.889 631.071 428.61 32.54 12.78 24.38 24.35 24.40 9.93 24.37 10.00 24.28 9.91 9.91 9.80 7.32 5.64 5.64 5.65 5.64 5.65 211.46 72.27 200.17 202.68 205.95 72.24 203.22 73.36 201.14 73.65 72.91 73.75 37.29 36.10 37.16 37.24 36.60 37.22 0.003 0.008 0.015 0.032 0.002 0.008 0.015 0.033 0.058 0.132 0.058 0.128 0.262 0.618 1.405 2.626 0.260 0.622 1.411 2.620 131 224 63.6 66.9 77.1 20.2 274 20.8 66.6 120 7.64 29.5 263 790

noble: 269.806 869.369 47.033 345.400 125.069 56.132 65.416 142.159 78.880 385.383 314.034 66.393 207.104 68.172 133.144 54.288 285.823 1684.546 30.617 117.407 179.638 381.447 112.713 1963.772 111.554 41.914 145.363 19.932 70.022 37.107 663.953 430.83 32.34 12.89 24.43 24.12 24.19 9.88 24.33 9.81 24.30 9.87 9.86 9.87 7.31 5.59 5.59 5.60 5.61 5.60 0.003 0.009 0.016 0.033 0.003 0.008 0.015 0.034 0.060 0.133 0.061 0.136 0.261 0.631 1.436 2.720 0.262 0.630 1.446 2.668 121 22.8 8.76 839 (noble has no NVIDIA CUDA GPU results and only partial pyperformance results in this export)
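To read the summary numbers in relative terms, a small helper (the function name is ours; the example values are taken from the mantic and noble rows above) can express one configuration's runtime as a percentage improvement over another:

```python
def pct_faster(baseline_seconds: float, candidate_seconds: float) -> float:
    """Percent reduction in runtime of candidate relative to baseline."""
    return (baseline_seconds - candidate_seconds) / baseline_seconds * 100.0

# scikit-learn GLM: mantic 293.598 s vs noble 269.806 s
print(round(pct_faster(293.598, 269.806), 1))  # 8.1 -> noble is ~8.1% faster
```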
Scikit-Learn 1.2.2 benchmark results (Seconds, Fewer Is Better). Notes shown on every chart: -O2; 1. (F9X) gfortran options: -O0.

GLM: mantic-no-omit-framepointer 295.10 (SE +/- 1.07, N = 3); mantic 293.60 (SE +/- 1.06, N = 3); noble 269.81 (SE +/- 0.93, N = 3)
SAGA: mantic-no-omit-framepointer 873.82 (SE +/- 5.60, N = 3); noble 869.37 (SE +/- 10.35, N = 3); mantic 868.02 (SE +/- 8.69, N = 6)
Tree: mantic-no-omit-framepointer 52.97 (SE +/- 0.48, N = 15); mantic 48.34 (SE +/- 0.59, N = 4); noble 47.03 (SE +/- 0.52, N = 3)
Lasso: mantic 511.85 (SE +/- 3.22, N = 3); mantic-no-omit-framepointer 509.54 (SE +/- 3.50, N = 3); noble 345.40 (SE +/- 1.37, N = 3)
Sparsify: mantic 127.28 (SE +/- 1.36, N = 5); mantic-no-omit-framepointer 125.44 (SE +/- 1.28, N = 5); noble 125.07 (SE +/- 0.65, N = 3)
Plot Ward: mantic 57.82 (SE +/- 0.21, N = 3); mantic-no-omit-framepointer 57.55 (SE +/- 0.22, N = 3); noble 56.13 (SE +/- 0.20, N = 3)
MNIST Dataset: mantic-no-omit-framepointer 65.88 (SE +/- 0.47, N = 3); mantic 65.76 (SE +/- 0.82, N = 4); noble 65.42 (SE +/- 0.67, N = 3)
Plot Neighbors: mantic 147.75 (SE +/- 1.34, N = 7); mantic-no-omit-framepointer 142.45 (SE +/- 0.59, N = 3); noble 142.16 (SE +/- 1.09, N = 3)
SGD Regression: mantic-no-omit-framepointer 107.53 (SE +/- 0.49, N = 3); mantic 106.32 (SE +/- 1.06, N = 6); noble 78.88 (SE +/- 0.05, N = 3)
SGDOneClassSVM: noble 385.38 (SE +/- 3.55, N = 3); mantic-no-omit-framepointer 382.61 (SE +/- 3.48, N = 7); mantic 379.74 (SE +/- 4.18, N = 3)
Isolation Forest: mantic-no-omit-framepointer 336.37 (SE +/- 51.04, N = 9); noble 314.03 (SE +/- 2.83, N = 3); mantic 289.37 (SE +/- 1.30, N = 3)
Text Vectorizers: noble 66.39 (SE +/- 0.32, N = 3); mantic-no-omit-framepointer 63.88 (SE +/- 0.08, N = 3); mantic 60.81 (SE +/- 0.19, N = 3)
Plot Hierarchical: mantic 211.29 (SE +/- 0.75, N = 3); mantic-no-omit-framepointer 208.39 (SE +/- 0.42, N = 3); noble 207.10 (SE +/- 2.35, N = 3)
Plot OMP vs. LARS: mantic-no-omit-framepointer 92.58 (SE +/- 0.44, N = 3); mantic 91.50 (SE +/- 0.08, N = 3); noble 68.17 (SE +/- 0.03, N = 3)
Feature Expansions: noble 133.14 (SE +/- 1.21, N = 3); mantic-no-omit-framepointer 133.09 (SE +/- 1.22, N = 3); mantic 131.28 (SE +/- 0.86, N = 3)
LocalOutlierFactor: mantic-no-omit-framepointer 56.75 (SE +/- 0.74, N = 15); noble 54.29 (SE +/- 0.02, N = 3); mantic 53.46 (SE +/- 0.18, N = 3)
TSNE MNIST Dataset: noble 285.82 (SE +/- 0.91, N = 3); mantic 236.87 (SE +/- 0.44, N = 3); mantic-no-omit-framepointer 236.79 (SE +/- 0.54, N = 3)
Isotonic / Logistic: noble 1684.55 (SE +/- 9.43, N = 3); mantic-no-omit-framepointer 1471.83 (SE +/- 14.46, N = 3); mantic 1470.81 (SE +/- 12.29, N = 3)
Plot Incremental PCA: mantic-no-omit-framepointer 31.06 (SE +/- 0.07, N = 3); mantic 31.01 (SE +/- 0.03, N = 3); noble 30.62 (SE +/- 0.06, N = 3)
Hist Gradient Boosting: noble 117.41 (SE +/- 0.17, N = 3); mantic-no-omit-framepointer 111.26 (SE +/- 0.25, N = 3); mantic 109.98 (SE +/- 0.22, N = 3)
Sample Without Replacement: noble 179.64 (SE +/- 2.21, N = 3); mantic-no-omit-framepointer 161.46 (SE +/- 0.62, N = 3); mantic 158.26 (SE +/- 0.60, N = 3)
Covertype Dataset Benchmark: noble 381.45 (SE +/- 2.58, N = 3); mantic 376.15 (SE +/- 4.88, N = 3); mantic-no-omit-framepointer 370.69 (SE +/- 3.40, N = 3)
Hist Gradient Boosting Adult: noble 112.71 (SE +/- 0.52, N = 3); mantic-no-omit-framepointer 105.65 (SE +/- 0.59, N = 3); mantic 103.50 (SE +/- 0.70, N = 3)
Isotonic / Perturbed Logarithm: noble 1963.77 (SE +/- 1.48, N = 3); mantic-no-omit-framepointer 1828.30 (SE +/- 16.46, N = 3); mantic 1788.26 (SE +/- 24.41, N = 3)
Hist Gradient Boosting Threading: noble 111.55 (SE +/- 0.13, N = 3); mantic-no-omit-framepointer 110.37 (SE +/- 0.15, N = 3); mantic 110.22 (SE +/- 0.13, N = 3)
20 Newsgroups / Logistic Regression: noble 41.91 (SE +/- 0.12, N = 3); mantic-no-omit-framepointer 41.73 (SE +/- 0.24, N = 3); mantic 41.52 (SE +/- 0.19, N = 3)
Plot Polynomial Kernel Approximation: mantic 150.73 (SE +/- 1.22, N = 3); mantic-no-omit-framepointer 150.38 (SE +/- 1.20, N = 3); noble 145.36 (SE +/- 1.46, N = 3)
Hist Gradient Boosting Categorical Only: noble 19.93 (SE +/- 0.10, N = 3); mantic-no-omit-framepointer 18.87 (SE +/- 0.12, N = 3); mantic 18.58 (SE +/- 0.06, N = 3)
Kernel PCA Solvers / Time vs. N Samples: mantic-no-omit-framepointer 72.91 (SE +/- 0.16, N = 3); mantic 72.54 (SE +/- 0.05, N = 3); noble 70.02 (SE +/- 0.44, N = 3)
Kernel PCA Solvers / Time vs. N Components: mantic-no-omit-framepointer 37.89 (SE +/- 0.36, N = 3); mantic 37.24 (SE +/- 0.21, N = 3); noble 37.11 (SE +/- 0.43, N = 3)
Sparse Random Projections / 100 Iterations: noble 663.95 (SE +/- 4.34, N = 3); mantic-no-omit-framepointer 631.07 (SE +/- 7.06, N = 4); mantic 613.55 (SE +/- 3.80, N = 3)
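Each result above is reported as a mean with "SE +/- x, N = y", i.e. the standard error of the mean over N runs (sample standard deviation divided by sqrt(N)). A quick sketch with made-up per-run times:

```python
import math
import statistics

def standard_error(samples: list[float]) -> float:
    """Standard error of the mean: sample stdev / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

runs = [295.0, 293.5, 292.3]  # hypothetical per-run times in seconds
print(f"mean {statistics.mean(runs):.2f}, "
      f"SE +/- {standard_error(runs):.2f}, N = {len(runs)}")
# prints: mean 293.60, SE +/- 0.78, N = 3
```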
Numpy Benchmark results (Score, More Is Better).

mantic 426.28 (SE +/- 1.20, N = 3); mantic-no-omit-framepointer 428.61 (SE +/- 0.90, N = 3); noble 430.83 (SE +/- 1.01, N = 3)
PyTorch 2.1 CPU results (batches/sec, More Is Better).

CPU, batch 1, ResNet-50: noble 32.34 (SE +/- 0.17, N = 3; MIN 28.9 / MAX 32.83); mantic 32.36 (SE +/- 0.11, N = 3; MIN 31.89 / MAX 32.7); mantic-no-omit-framepointer 32.54 (SE +/- 0.16, N = 3; MIN 31.64 / MAX 32.94)
CPU, batch 1, ResNet-152: mantic 12.72 (SE +/- 0.03, N = 3; MIN 11.99 / MAX 12.8); mantic-no-omit-framepointer 12.78 (SE +/- 0.04, N = 3; MIN 11.9 / MAX 12.9); noble 12.89 (SE +/- 0.05, N = 3; MIN 12.36 / MAX 13.05)
CPU, batch 16, ResNet-50: mantic 24.28 (SE +/- 0.05, N = 3; MIN 20.22 / MAX 24.56); mantic-no-omit-framepointer 24.38 (SE +/- 0.16, N = 3; MIN 22.2 / MAX 24.87); noble 24.43 (SE +/- 0.01, N = 3; MIN 22.57 / MAX 24.72)
CPU, batch 32, ResNet-50: noble 24.12 (SE +/- 0.06, N = 3; MIN 22.33 / MAX 24.46); mantic 24.29 (SE +/- 0.10, N = 3; MIN 22.24 / MAX 24.66); mantic-no-omit-framepointer 24.35 (SE +/- 0.16, N = 3; MIN 23.67 / MAX 24.87)
CPU, batch 64, ResNet-50: noble 24.19 (SE +/- 0.11, N = 3; MIN 22.75 / MAX 24.73); mantic 24.24 (SE +/- 0.04, N = 3; MIN 23.59 / MAX 24.49); mantic-no-omit-framepointer 24.40 (SE +/- 0.15, N = 3; MIN 21.6 / MAX 24.8)
CPU, batch 16, ResNet-152: noble 9.88 (SE +/- 0.02, N = 3; MIN 9.15 / MAX 9.98); mantic 9.88 (SE +/- 0.04, N = 3; MIN 9.31 / MAX 10.01); mantic-no-omit-framepointer 9.93 (SE +/- 0.01, N = 3; MIN 9.39 / MAX 10.01)
CPU, batch 256, ResNet-50: noble 24.33 (SE +/- 0.06, N = 3; MIN 22.79 / MAX 24.66); mantic-no-omit-framepointer 24.37 (SE +/- 0.11, N = 3; MIN 23.76 / MAX 24.81); mantic 24.42 (SE +/- 0.03, N = 3; MIN 20.15 / MAX 24.74)
CPU, batch 32, ResNet-152: noble 9.81 (SE +/- 0.04, N = 3; MIN 9.42 / MAX 9.93); mantic 9.84 (SE +/- 0.05, N = 3; MIN 9.6 / MAX 9.98); mantic-no-omit-framepointer 10.00 (SE +/- 0.09, N = 3; MIN 8.09 / MAX 10.27)
CPU, batch 512, ResNet-50: mantic 24.13 (SE +/- 0.02, N = 3; MIN 23.58 / MAX 24.41); mantic-no-omit-framepointer 24.28 (SE +/- 0.08, N = 3; MIN 22.31 / MAX 24.53); noble 24.30 (SE +/- 0.14, N = 3; MIN 22.45 / MAX 24.75)
CPU, batch 64, ResNet-152: noble 9.87 (SE +/- 0.01, N = 3; MIN 8.61 / MAX 9.96); mantic 9.88 (SE +/- 0.03, N = 3; MIN 8.8 / MAX 9.98); mantic-no-omit-framepointer 9.91 (SE +/- 0.02, N = 3; MIN 8.69 / MAX 10.08)
CPU, batch 256, ResNet-152: mantic 9.77 (SE +/- 0.07, N = 3; MIN 9.17 / MAX 10); noble 9.86 (SE +/- 0.03, N = 3; MIN 8.69 / MAX 9.99); mantic-no-omit-framepointer 9.91 (SE +/- 0.04, N = 3; MIN 9.19 / MAX 10.05)
CPU, batch 512, ResNet-152: mantic-no-omit-framepointer 9.80 (SE +/- 0.07, N = 3; MIN 9.12 / MAX 9.98); noble 9.87 (SE +/- 0.03, N = 3; MIN 9.21 / MAX 10); mantic 9.87 (SE +/- 0.02, N = 3; MIN 9.09 / MAX 9.96)
CPU, batch 1, Efficientnet_v2_l: noble 7.31 (SE +/- 0.00, N = 3; MIN 7.07 / MAX 7.36); mantic 7.31 (SE +/- 0.00, N = 3; MIN 7.16 / MAX 7.34); mantic-no-omit-framepointer 7.32 (SE +/- 0.02, N = 3; MIN 7.23 / MAX 7.38)
CPU, batch 16, Efficientnet_v2_l: noble 5.59 (SE +/- 0.02, N = 3; MIN 5.31 / MAX 5.65); mantic 5.63 (SE +/- 0.02, N = 3; MIN 5.39 / MAX 5.71); mantic-no-omit-framepointer 5.64 (SE +/- 0.01, N = 3; MIN 5.45 / MAX 5.68)
CPU, batch 32, Efficientnet_v2_l: noble 5.59 (SE +/- 0.00, N = 3; MIN 5.46 / MAX 5.64); mantic 5.63 (SE +/- 0.01, N = 3; MIN 5.31 / MAX 5.68); mantic-no-omit-framepointer 5.64 (SE +/- 0.01, N = 3; MIN 5.52 / MAX 5.69)
CPU, batch 64, Efficientnet_v2_l: noble 5.60 (SE +/- 0.01, N = 3; MIN 5.32 / MAX 5.64); mantic 5.62 (SE +/- 0.01, N = 3; MIN 5.35 / MAX 5.66); mantic-no-omit-framepointer 5.65 (SE +/- 0.01, N = 3; MIN 5.45 / MAX 5.7)
CPU, batch 256, Efficientnet_v2_l: noble 5.61 (SE +/- 0.02, N = 3; MIN 5.46 / MAX 5.67); mantic 5.61 (SE +/- 0.02, N = 3; MIN 5.44 / MAX 5.65); mantic-no-omit-framepointer 5.64 (SE +/- 0.01, N = 3; MIN 5.29 / MAX 5.68)
CPU, batch 512, Efficientnet_v2_l: noble 5.60 (SE +/- 0.01, N = 3; MIN 5.37 / MAX 5.66); mantic 5.61 (SE +/- 0.01, N = 3; MIN 5.45 / MAX 5.66); mantic-no-omit-framepointer 5.65 (SE +/- 0.02, N = 3; MIN 5.36 / MAX 5.93)
PyTorch 2.1 NVIDIA CUDA GPU results (batches/sec, More Is Better; available for mantic and mantic-no-omit-framepointer only).

CUDA GPU, batch 1, ResNet-50: mantic 210.88 (SE +/- 2.67, N = 3; MIN 195.21 / MAX 218.16); mantic-no-omit-framepointer 211.46 (SE +/- 1.46, N = 15; MIN 192.13 / MAX 223.01)
CUDA GPU, batch 1, ResNet-152: mantic-no-omit-framepointer 72.27 (SE +/- 0.96, N = 3; MIN 68.86 / MAX 76.62); mantic 73.91 (SE +/- 0.56, N = 3; MIN 68.9 / MAX 75.9)
CUDA GPU, batch 16, ResNet-50: mantic-no-omit-framepointer 200.17 (SE +/- 0.96, N = 3; MIN 183.43 / MAX 203.55); mantic 200.30 (SE +/- 0.25, N = 3; MIN 182.88 / MAX 202.36)
CUDA GPU, batch 32, ResNet-50: mantic 199.46 (SE +/- 1.06, N = 3; MIN 182.77 / MAX 206.03); mantic-no-omit-framepointer 202.68 (SE +/- 2.52, N = 4; MIN 182.69 / MAX 211.53)
CUDA GPU, batch 64, ResNet-50: mantic 201.41 (SE +/- 0.58, N = 3; MIN 184.02 / MAX 203.68); mantic-no-omit-framepointer 205.95 (SE +/- 1.98, N = 3; MIN 186.96 / MAX 210.21)
CUDA GPU, batch 16, ResNet-152: mantic-no-omit-framepointer 72.24 (SE +/- 0.20, N = 3; MIN 68.36 / MAX 73.14); mantic 73.01 (SE +/- 0.96, N = 3; MIN 68.06 / MAX 75.3)
CUDA GPU, batch 256, ResNet-50: mantic 202.72 (SE +/- 1.76, N = 3; MIN 183.1 / MAX 207.93); mantic-no-omit-framepointer 203.22 (SE +/- 1.21, N = 3; MIN 185.88 / MAX 206.71)
CUDA GPU, batch 32, ResNet-152: mantic-no-omit-framepointer 73.36 (SE +/- 0.74, N = 3; MIN 68.19 / MAX 74.63); mantic 74.15 (SE +/- 0.96, N = 3; MIN 68.27 / MAX 75.61)
CUDA GPU, batch 512, ResNet-50: mantic-no-omit-framepointer 201.14 (SE +/- 0.33, N = 3; MIN 183.61 / MAX 202.73); mantic 203.18 (SE +/- 1.69, N = 3; MIN 183.76 / MAX 207.98)
CUDA GPU, batch 64, ResNet-152: mantic 71.81 (SE +/- 0.44, N = 3; MIN 67.31 / MAX 72.89); mantic-no-omit-framepointer 73.65 (SE +/- 0.66, N = 3; MIN 68.88 / MAX 75.03)
CUDA GPU, batch 256, ResNet-152: mantic 71.74 (SE +/- 0.24, N = 3; MIN 67.87 / MAX 72.6); mantic-no-omit-framepointer 72.91 (SE +/- 0.83, N = 3; MIN 68 / MAX 75.45)
CUDA GPU, batch 512, ResNet-152: mantic 72.31 (SE +/- 0.94, N = 3; MIN 67.38 / MAX 74.62); mantic-no-omit-framepointer 73.75 (SE +/- 0.50, N = 3; MIN 68.91 / MAX 75.15)
CUDA GPU, batch 1, Efficientnet_v2_l: mantic-no-omit-framepointer 37.29 (SE +/- 0.26, N = 3; MIN 35.83 / MAX 39.17); mantic 39.35 (SE +/- 0.47, N = 3; MIN 36.65 / MAX 40.42)
CUDA GPU, batch 16, Efficientnet_v2_l: mantic-no-omit-framepointer 36.10 (SE +/- 0.02, N = 3; MIN 34.25 / MAX 38.01); mantic 38.95 (SE +/- 0.08, N = 3; MIN 37.12 / MAX 39.27)
PyTorch Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: Efficientnet_v2_l OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: Efficientnet_v2_l mantic-no-omit-framepointer mantic 9 18 27 36 45 SE +/- 0.30, N = 15 SE +/- 0.24, N = 3 37.16 37.71 MIN: 34.12 / MAX: 39.48 MIN: 35.52 / MAX: 38.25
PyTorch Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: Efficientnet_v2_l OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: Efficientnet_v2_l mantic-no-omit-framepointer mantic 9 18 27 36 45 SE +/- 0.31, N = 15 SE +/- 0.30, N = 9 37.24 37.88 MIN: 33.97 / MAX: 39.43 MIN: 35.67 / MAX: 39.63
PyTorch Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: Efficientnet_v2_l OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: Efficientnet_v2_l mantic-no-omit-framepointer mantic 9 18 27 36 45 SE +/- 0.30, N = 15 SE +/- 0.15, N = 3 36.60 37.36 MIN: 33.07 / MAX: 39.53 MIN: 35.47 / MAX: 37.85
PyTorch Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: Efficientnet_v2_l OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: Efficientnet_v2_l mantic-no-omit-framepointer mantic 9 18 27 36 45 SE +/- 0.33, N = 8 SE +/- 0.03, N = 3 37.22 37.43 MIN: 34.99 / MAX: 39.08 MIN: 35.81 / MAX: 38.02
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State (seconds, fewer is better)
  noble: 0.003 (SE +/- 0.000, N = 3)
  mantic-no-omit-framepointer: 0.003 (SE +/- 0.000, N = 3)
  mantic: 0.003 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing (seconds, fewer is better)
  noble: 0.009 (SE +/- 0.000, N = 3)
  mantic: 0.009 (SE +/- 0.000, N = 3)
  mantic-no-omit-framepointer: 0.008 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of State (seconds, fewer is better)
  noble: 0.016 (SE +/- 0.000, N = 15)
  mantic-no-omit-framepointer: 0.015 (SE +/- 0.000, N = 3)
  mantic: 0.015 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixing (seconds, fewer is better)
  noble: 0.033 (SE +/- 0.000, N = 3)
  mantic-no-omit-framepointer: 0.032 (SE +/- 0.000, N = 3)
  mantic: 0.032 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State (seconds, fewer is better)
  noble: 0.003 (SE +/- 0.000, N = 12)
  mantic: 0.003 (SE +/- 0.000, N = 3)
  mantic-no-omit-framepointer: 0.002 (SE +/- 0.000, N = 15)

PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing (seconds, fewer is better)
  mantic: 0.009 (SE +/- 0.000, N = 3)
  noble: 0.008 (SE +/- 0.000, N = 3)
  mantic-no-omit-framepointer: 0.008 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of State (seconds, fewer is better)
  noble: 0.015 (SE +/- 0.000, N = 7)
  mantic-no-omit-framepointer: 0.015 (SE +/- 0.000, N = 3)
  mantic: 0.015 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixing (seconds, fewer is better)
  noble: 0.034 (SE +/- 0.000, N = 3)
  mantic-no-omit-framepointer: 0.033 (SE +/- 0.000, N = 3)
  mantic: 0.033 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of State (seconds, fewer is better)
  mantic: 0.061 (SE +/- 0.001, N = 3)
  noble: 0.060 (SE +/- 0.000, N = 3)
  mantic-no-omit-framepointer: 0.058 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixing (seconds, fewer is better)
  noble: 0.133 (SE +/- 0.002, N = 3)
  mantic-no-omit-framepointer: 0.132 (SE +/- 0.000, N = 3)
  mantic: 0.131 (SE +/- 0.001, N = 3)

PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of State (seconds, fewer is better)
  mantic: 0.062 (SE +/- 0.001, N = 3)
  noble: 0.061 (SE +/- 0.000, N = 3)
  mantic-no-omit-framepointer: 0.058 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixing (seconds, fewer is better)
  noble: 0.136 (SE +/- 0.001, N = 3)
  mantic: 0.131 (SE +/- 0.000, N = 3)
  mantic-no-omit-framepointer: 0.128 (SE +/- 0.001, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State (seconds, fewer is better)
  mantic: 0.263 (SE +/- 0.002, N = 3)
  mantic-no-omit-framepointer: 0.262 (SE +/- 0.000, N = 3)
  noble: 0.261 (SE +/- 0.002, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing (seconds, fewer is better)
  noble: 0.631 (SE +/- 0.006, N = 3)
  mantic: 0.619 (SE +/- 0.001, N = 3)
  mantic-no-omit-framepointer: 0.618 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (seconds, fewer is better)
  noble: 1.436 (SE +/- 0.003, N = 3)
  mantic-no-omit-framepointer: 1.405 (SE +/- 0.004, N = 3)
  mantic: 1.402 (SE +/- 0.003, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing (seconds, fewer is better)
  noble: 2.720 (SE +/- 0.010, N = 3)
  mantic: 2.670 (SE +/- 0.010, N = 3)
  mantic-no-omit-framepointer: 2.626 (SE +/- 0.002, N = 3)

PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State (seconds, fewer is better)
  mantic: 0.263 (SE +/- 0.002, N = 3)
  noble: 0.262 (SE +/- 0.002, N = 3)
  mantic-no-omit-framepointer: 0.260 (SE +/- 0.001, N = 3)

PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing (seconds, fewer is better)
  mantic: 0.631 (SE +/- 0.002, N = 3)
  noble: 0.630 (SE +/- 0.004, N = 3)
  mantic-no-omit-framepointer: 0.622 (SE +/- 0.007, N = 3)

PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (seconds, fewer is better)
  noble: 1.446 (SE +/- 0.006, N = 3)
  mantic: 1.422 (SE +/- 0.004, N = 3)
  mantic-no-omit-framepointer: 1.411 (SE +/- 0.001, N = 3)

PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing (seconds, fewer is better)
  noble: 2.668 (SE +/- 0.006, N = 3)
  mantic: 2.662 (SE +/- 0.006, N = 3)
  mantic-no-omit-framepointer: 2.620 (SE +/- 0.006, N = 3)
PyPerformance 1.0.0 - Benchmark: go (milliseconds, fewer is better)
  mantic-no-omit-framepointer: 131 (SE +/- 0.33, N = 3)
  mantic: 129 (SE +/- 0.00, N = 3)
  noble: 121 (SE +/- 0.00, N = 3)

PyPerformance 1.0.0 - Benchmark: 2to3 (milliseconds, fewer is better)
  mantic-no-omit-framepointer: 224 (SE +/- 0.33, N = 3)
  mantic: 221 (SE +/- 0.00, N = 3)

PyPerformance 1.0.0 - Benchmark: chaos (milliseconds, fewer is better)
  mantic-no-omit-framepointer: 63.6 (SE +/- 0.20, N = 3)
  mantic: 62.8 (SE +/- 0.03, N = 3)

PyPerformance 1.0.0 - Benchmark: float (milliseconds, fewer is better)
  mantic: 67.4 (SE +/- 0.03, N = 3)
  mantic-no-omit-framepointer: 66.9 (SE +/- 0.10, N = 3)

PyPerformance 1.0.0 - Benchmark: nbody (milliseconds, fewer is better)
  mantic-no-omit-framepointer: 77.1 (SE +/- 0.07, N = 3)
  mantic: 76.2 (SE +/- 0.06, N = 3)

PyPerformance 1.0.0 - Benchmark: pathlib (milliseconds, fewer is better)
  mantic-no-omit-framepointer: 20.2 (SE +/- 0.00, N = 3)
  mantic: 19.7 (SE +/- 0.00, N = 3)

PyPerformance 1.0.0 - Benchmark: raytrace (milliseconds, fewer is better)
  mantic-no-omit-framepointer: 274 (SE +/- 0.33, N = 3)
  mantic: 262 (SE +/- 0.33, N = 3)

PyPerformance 1.0.0 - Benchmark: json_loads (milliseconds, fewer is better)
  noble: 22.8 (SE +/- 0.03, N = 3)
  mantic-no-omit-framepointer: 20.8 (SE +/- 0.03, N = 3)
  mantic: 19.5 (SE +/- 0.06, N = 3)

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (milliseconds, fewer is better)
  mantic-no-omit-framepointer: 66.6 (SE +/- 0.00, N = 3)
  mantic: 65.1 (SE +/- 0.06, N = 3)

PyPerformance 1.0.0 - Benchmark: regex_compile (milliseconds, fewer is better)
  mantic-no-omit-framepointer: 120 (SE +/- 0.33, N = 3)
  mantic: 116 (SE +/- 0.00, N = 3)

PyPerformance 1.0.0 - Benchmark: python_startup (milliseconds, fewer is better)
  noble: 8.76 (SE +/- 0.01, N = 3)
  mantic-no-omit-framepointer: 7.64 (SE +/- 0.01, N = 3)
  mantic: 7.61 (SE +/- 0.01, N = 3)

PyPerformance 1.0.0 - Benchmark: django_template (milliseconds, fewer is better)
  mantic-no-omit-framepointer: 29.5 (SE +/- 0.06, N = 3)
  mantic: 28.5 (SE +/- 0.03, N = 3)

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (milliseconds, fewer is better)
  mantic-no-omit-framepointer: 263 (SE +/- 0.58, N = 3)
  mantic: 259 (SE +/- 0.33, N = 3)
PyBench 2018-02-16 - Total For Average Test Times (milliseconds, fewer is better)
  noble: 839 (SE +/- 8.70, N = 4)
  mantic-no-omit-framepointer: 790 (SE +/- 1.20, N = 3)
  mantic: 774 (SE +/- 1.00, N = 3)
Phoronix Test Suite v10.8.5