Machine Learning
AMD Ryzen 9 3900X 12-Core testing with a MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS) and NVIDIA GeForce RTX 3060 12GB on Ubuntu 23.10 via the Phoronix Test Suite.

mantic:
  Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads), Motherboard: MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS), Chipset: AMD Starship/Matisse, Memory: 2 x 16GB DDR4-3200MT/s F4-3200C16-16GVK, Disk: 2000GB Seagate ST2000DM006-2DM1 + 2000GB Western Digital WD20EZAZ-00G + 500GB Samsung SSD 860 + 8002GB Seagate ST8000DM004-2CX1 + 1000GB CT1000BX500SSD1 + 512GB TS512GESD310C, Graphics: NVIDIA GeForce RTX 3060 12GB, Audio: NVIDIA GA104 HD Audio, Monitor: DELL P2314H, Network: Realtek RTL8111/8168/8411
  OS: Ubuntu 23.10, Kernel: 6.5.0-9-generic (x86_64), Display Server: X Server 1.21.1.7, Display Driver: NVIDIA, OpenCL: OpenCL 3.0 CUDA 12.2.146, Compiler: GCC 13.2.0 + CUDA 12.2, File-System: ext4, Screen Resolution: 1920x1080

mantic-no-omit-framepointer:
  Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads), Motherboard: MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS), Chipset: AMD Starship/Matisse, Memory: 2 x 16GB DDR4-3200MT/s F4-3200C16-16GVK, Disk: 2000GB Seagate ST2000DM006-2DM1 + 2000GB Western Digital WD20EZAZ-00G + 500GB Samsung SSD 860 + 8002GB Seagate ST8000DM004-2CX1 + 1000GB CT1000BX500SSD1 + 512GB TS512GESD310C, Graphics: NVIDIA GeForce RTX 3060, Audio: NVIDIA GA104 HD Audio, Monitor: DELL P2314H, Network: Realtek RTL8111/8168/8411
  OS: Ubuntu 23.10, Kernel: 6.5.0-9-generic (x86_64), Display Server: X Server 1.21.1.7, Compiler: GCC 13.2.0 + CUDA 12.2, File-System: ext4, Screen Resolution: 1920x1080

Scikit-Learn 1.2.2 - Benchmark: SAGA (Seconds < Lower Is Better)
  mantic: 868.02   mantic-no-omit-framepointer: 873.82
Scikit-Learn 1.2.2 - Benchmark: Isotonic / Perturbed Logarithm (Seconds < Lower Is Better)
  mantic: 1788.26   mantic-no-omit-framepointer: 1828.30
Scikit-Learn 1.2.2 - Benchmark: Isotonic / Logistic (Seconds < Lower Is Better)
  mantic: 1470.81   mantic-no-omit-framepointer: 1471.83
Scikit-Learn 1.2.2 - Benchmark: Isolation Forest (Seconds < Lower Is Better)
  mantic: 289.37   mantic-no-omit-framepointer: 336.37
Scikit-Learn 1.2.2 - Benchmark: Sparse Random Projections / 100 Iterations (Seconds < Lower Is Better)
  mantic: 613.55   mantic-no-omit-framepointer: 631.07
Scikit-Learn 1.2.2 - Benchmark: SGDOneClassSVM (Seconds < Lower Is Better)
  mantic: 379.74   mantic-no-omit-framepointer: 382.61
Scikit-Learn 1.2.2 - Benchmark: Lasso (Seconds < Lower Is Better)
  mantic: 511.85   mantic-no-omit-framepointer: 509.54
Scikit-Learn 1.2.2 - Benchmark: Isotonic / Pathological (Seconds < Lower Is Better)
  (no result reported)
Scikit-Learn 1.2.2 - Benchmark: Covertype Dataset Benchmark (Seconds < Lower Is Better)
  mantic: 376.15   mantic-no-omit-framepointer: 370.69
Scikit-Learn 1.2.2 - Benchmark: GLM (Seconds < Lower Is Better)
  mantic: 293.60   mantic-no-omit-framepointer: 295.10
PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l (batches/sec > Higher Is Better)
  mantic: 5.61   mantic-no-omit-framepointer: 5.64
PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec > Higher Is Better)
  mantic: 5.63   mantic-no-omit-framepointer: 5.64
PyTorch 2.1 - Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l (batches/sec > Higher Is Better)
  mantic: 5.61   mantic-no-omit-framepointer: 5.65
PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec > Higher Is Better)
  mantic: 5.63   mantic-no-omit-framepointer: 5.64
PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l (batches/sec > Higher Is Better)
  mantic: 5.62   mantic-no-omit-framepointer: 5.65
Scikit-Learn 1.2.2 - Benchmark: TSNE MNIST Dataset (Seconds < Lower Is Better)
  mantic: 236.87   mantic-no-omit-framepointer: 236.79
Scikit-Learn 1.2.2 - Benchmark: Plot Neighbors (Seconds < Lower Is Better)
  mantic: 147.75   mantic-no-omit-framepointer: 142.45
Scikit-Learn 1.2.2 - Benchmark: Plot Hierarchical (Seconds < Lower Is Better)
  mantic: 211.29   mantic-no-omit-framepointer: 208.39
Scikit-Learn 1.2.2 - Benchmark: Sparsify (Seconds < Lower Is Better)
  mantic: 127.28   mantic-no-omit-framepointer: 125.44
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: Efficientnet_v2_l (batches/sec > Higher Is Better)
  mantic: 37.88   mantic-no-omit-framepointer: 37.24
Scikit-Learn 1.2.2 - Benchmark: Plot Incremental PCA (Seconds < Lower Is Better)
  mantic: 31.01   mantic-no-omit-framepointer: 31.06
Scikit-Learn 1.2.2 - Benchmark: Sample Without Replacement (Seconds < Lower Is Better)
  mantic: 158.26   mantic-no-omit-framepointer: 161.46
PyTorch 2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-152 (batches/sec > Higher Is Better)
  mantic: 9.87   mantic-no-omit-framepointer: 9.80
PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-152 (batches/sec > Higher Is Better)
  mantic: 9.77   mantic-no-omit-framepointer: 9.91
PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec > Higher Is Better)
  mantic: 9.88   mantic-no-omit-framepointer: 9.93
PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec > Higher Is Better)
  mantic: 9.84   mantic-no-omit-framepointer: 10.00
PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-152 (batches/sec > Higher Is Better)
  mantic: 9.88   mantic-no-omit-framepointer: 9.91
Scikit-Learn 1.2.2 - Benchmark: Plot Polynomial Kernel Approximation (Seconds < Lower Is Better)
  mantic: 150.73   mantic-no-omit-framepointer: 150.38
Scikit-Learn 1.2.2 - Benchmark: SGD Regression (Seconds < Lower Is Better)
  mantic: 106.32   mantic-no-omit-framepointer: 107.53
Scikit-Learn 1.2.2 - Benchmark: LocalOutlierFactor (Seconds < Lower Is Better)
  mantic: 53.46   mantic-no-omit-framepointer: 56.75
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: Efficientnet_v2_l (batches/sec > Higher Is Better)
  mantic: 37.36   mantic-no-omit-framepointer: 36.60
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec > Higher Is Better)
  mantic: 37.71   mantic-no-omit-framepointer: 37.16
Scikit-Learn 1.2.2 - Benchmark: Tree (Seconds < Lower Is Better)
  mantic: 48.34   mantic-no-omit-framepointer: 52.97
Numpy Benchmark (Score > Higher Is Better)
  mantic: 426.28   mantic-no-omit-framepointer: 428.61
Scikit-Learn 1.2.2 - Benchmark: Feature Expansions (Seconds < Lower Is Better)
  mantic: 131.28   mantic-no-omit-framepointer: 133.09
PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec > Higher Is Better)
  mantic: 7.31   mantic-no-omit-framepointer: 7.32
Scikit-Learn 1.2.2 - Benchmark: Hist Gradient Boosting (Seconds < Lower Is Better)
  mantic: 109.98   mantic-no-omit-framepointer: 111.26
Scikit-Learn 1.2.2 - Benchmark: Hist Gradient Boosting Threading (Seconds < Lower Is Better)
  mantic: 110.22   mantic-no-omit-framepointer: 110.37
Scikit-Learn 1.2.2 - Benchmark: Hist Gradient Boosting Adult (Seconds < Lower Is Better)
  mantic: 103.50   mantic-no-omit-framepointer: 105.65
Scikit-Learn 1.2.2 - Benchmark: Plot OMP vs. LARS (Seconds < Lower Is Better)
  mantic: 91.50   mantic-no-omit-framepointer: 92.58
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds < Lower Is Better)
  mantic: 2.670   mantic-no-omit-framepointer: 2.626
PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds < Lower Is Better)
  mantic: 2.662   mantic-no-omit-framepointer: 2.620
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: Efficientnet_v2_l (batches/sec > Higher Is Better)
  mantic: 37.43   mantic-no-omit-framepointer: 37.22
Scikit-Learn 1.2.2 - Benchmark: MNIST Dataset (Seconds < Lower Is Better)
  mantic: 65.76   mantic-no-omit-framepointer: 65.88
Scikit-Learn 1.2.2 - Benchmark: Kernel PCA Solvers / Time vs. N Samples (Seconds < Lower Is Better)
  mantic: 72.54   mantic-no-omit-framepointer: 72.91
PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec > Higher Is Better)
  mantic: 12.72   mantic-no-omit-framepointer: 12.78
Scikit-Learn 1.2.2 - Benchmark: Text Vectorizers (Seconds < Lower Is Better)
  mantic: 60.81   mantic-no-omit-framepointer: 63.88
PyTorch 2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-50 (batches/sec > Higher Is Better)
  mantic: 24.13   mantic-no-omit-framepointer: 24.28
PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (batches/sec > Higher Is Better)
  mantic: 24.24   mantic-no-omit-framepointer: 24.40
PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec > Higher Is Better)
  mantic:
24.28   mantic-no-omit-framepointer: 24.38
PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec > Higher Is Better)
  mantic: 24.29   mantic-no-omit-framepointer: 24.35
PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (batches/sec > Higher Is Better)
  mantic: 24.42   mantic-no-omit-framepointer: 24.37
Scikit-Learn 1.2.2 - Benchmark: Plot Ward (Seconds < Lower Is Better)
  mantic: 57.82   mantic-no-omit-framepointer: 57.55
PyPerformance 1.0.0 - Benchmark: python_startup (Milliseconds < Lower Is Better)
  mantic: 7.61   mantic-no-omit-framepointer: 7.64
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec > Higher Is Better)
  mantic: 38.95   mantic-no-omit-framepointer: 36.10
Scikit-Learn 1.2.2 - Benchmark: 20 Newsgroups / Logistic Regression (Seconds < Lower Is Better)
  mantic: 41.52   mantic-no-omit-framepointer: 41.73
Scikit-Learn 1.2.2 - Benchmark: Kernel PCA Solvers / Time vs. N Components (Seconds < Lower Is Better)
  mantic: 37.24   mantic-no-omit-framepointer: 37.89
PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (Seconds < Lower Is Better)
  mantic: 1.422   mantic-no-omit-framepointer: 1.411
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (Seconds < Lower Is Better)
  mantic: 1.402   mantic-no-omit-framepointer: 1.405
PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds < Lower Is Better)
  mantic: 262   mantic-no-omit-framepointer: 274
PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds < Lower Is Better)
  mantic: 221   mantic-no-omit-framepointer: 224
PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec > Higher Is Better)
  mantic: 32.36   mantic-no-omit-framepointer: 32.54
PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds < Lower Is Better)
  mantic: 19.7   mantic-no-omit-framepointer: 20.2
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-152 (batches/sec > Higher Is Better)
  mantic: 71.74   mantic-no-omit-framepointer: 72.91
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-152 (batches/sec > Higher Is Better)
  mantic: 73.01   mantic-no-omit-framepointer: 72.24
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-152 (batches/sec > Higher Is Better)
  mantic: 71.81   mantic-no-omit-framepointer: 73.65
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-152 (batches/sec > Higher Is Better)
  mantic: 72.31   mantic-no-omit-framepointer: 73.75
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-152 (batches/sec > Higher Is Better)
  mantic: 74.15   mantic-no-omit-framepointer: 73.36
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec > Higher Is Better)
  mantic: 39.35   mantic-no-omit-framepointer: 37.29
PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds < Lower Is Better)
  mantic: 259   mantic-no-omit-framepointer: 263
PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing (Seconds < Lower Is Better)
  mantic: 0.631   mantic-no-omit-framepointer: 0.622
PyPerformance 1.0.0 - Benchmark: go (Milliseconds < Lower Is Better)
  mantic: 129   mantic-no-omit-framepointer: 131
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing (Seconds < Lower Is Better)
  mantic: 0.619   mantic-no-omit-framepointer: 0.618
PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds < Lower Is Better)
  mantic: 76.2   mantic-no-omit-framepointer: 77.1
PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds < Lower Is Better)
  mantic: 28.5   mantic-no-omit-framepointer: 29.5
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-50 (batches/sec > Higher Is Better)
  mantic: 210.88   mantic-no-omit-framepointer: 211.46
PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds < Lower Is Better)
  mantic: 19.5   mantic-no-omit-framepointer: 20.8
Scikit-Learn 1.2.2 - Benchmark: Hist Gradient Boosting Categorical Only (Seconds < Lower Is Better)
  mantic: 18.58   mantic-no-omit-framepointer: 18.87
PyPerformance 1.0.0 - Benchmark: float (Milliseconds < Lower Is Better)
  mantic: 67.4   mantic-no-omit-framepointer: 66.9
PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds < Lower Is Better)
  mantic: 116   mantic-no-omit-framepointer: 120
PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds < Lower Is Better)
  mantic: 65.1   mantic-no-omit-framepointer: 66.6
PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds < Lower Is Better)
  mantic: 62.8   mantic-no-omit-framepointer: 63.6
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixing (Seconds < Lower Is Better)
  mantic: 0.131   mantic-no-omit-framepointer: 0.132
PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixing (Seconds < Lower Is Better)
  mantic: 0.131   mantic-no-omit-framepointer: 0.128
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-152 (batches/sec > Higher Is Better)
  mantic: 73.91   mantic-no-omit-framepointer: 72.27
PyBench 2018-02-16 - Total For Average Test Times (Milliseconds < Lower Is Better)
  mantic: 774   mantic-no-omit-framepointer: 790
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-50 (batches/sec > Higher Is Better)
  mantic: 199.46   mantic-no-omit-framepointer: 202.68
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-50 (batches/sec > Higher Is Better)
  mantic: 200.30   mantic-no-omit-framepointer: 200.17
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-50 (batches/sec > Higher Is Better)
  mantic: 203.18   mantic-no-omit-framepointer: 201.14
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-50 (batches/sec > Higher Is Better)
  mantic: 202.72   mantic-no-omit-framepointer: 203.22
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-50 (batches/sec > Higher Is Better)
  mantic: 201.41   mantic-no-omit-framepointer: 205.95
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing (Seconds < Lower Is Better)
  mantic: 0.009   mantic-no-omit-framepointer: 0.008
PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing (Seconds < Lower Is Better)
  mantic: 0.009   mantic-no-omit-framepointer: 0.008
Scikit-Learn 1.2.2 - Benchmark: RCV1 Logreg Convergence (Seconds < Lower Is Better)
  (no result reported)
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State (Seconds < Lower Is Better)
  mantic: 0.263   mantic-no-omit-framepointer: 0.262
PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State (Seconds < Lower Is Better)
  mantic: 0.263   mantic-no-omit-framepointer: 0.260
PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State (Seconds < Lower Is Better)
  mantic: 0.003   mantic-no-omit-framepointer: 0.002
PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of State (Seconds < Lower Is Better)
  mantic: 0.062   mantic-no-omit-framepointer: 0.058
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of State (Seconds < Lower Is Better)
  mantic: 0.061   mantic-no-omit-framepointer: 0.058
PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixing (Seconds < Lower Is Better)
  mantic: 0.033   mantic-no-omit-framepointer: 0.033
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixing (Seconds < Lower Is Better)
  mantic: 0.032   mantic-no-omit-framepointer: 0.032
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State (Seconds < Lower Is Better)
  mantic:
0.003   mantic-no-omit-framepointer: 0.003
PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of State (Seconds < Lower Is Better)
  mantic: 0.015   mantic-no-omit-framepointer: 0.015
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of State (Seconds < Lower Is Better)
  mantic: 0.015   mantic-no-omit-framepointer: 0.015

No results were reported for the following tests:
  Scikit-Learn 1.2.2: Plot Parallel Pairwise, Plot Fast KMeans, Plot Lasso Path, Plot Singular Value Decomposition, Glmnet
  PyHPC Benchmarks 3.0: the listed runs with the TensorFlow, PyTorch, Aesara, Numba, and JAX backends (Equation of State and Isoneutral Mixing; CPU and GPU; project sizes 16384 through 4194304)
Backend: Aesara - Project Size: 262144 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: Numba - Project Size: 262144 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: Numba - Project Size: 262144 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: Aesara - Project Size: 65536 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: Aesara - Project Size: 65536 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: Aesara - Project Size: 16384 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: Aesara - Project Size: 16384 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Numba - Project Size: 262144 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Aesara - Project Size: 65536 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Aesara - Project Size: 65536 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Aesara - Project Size: 16384 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Aesara - Project Size: 16384 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: Numba - Project Size: 65536 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: Numba - Project Size: 65536 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: Numba - Project Size: 16384 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: Numba - Project Size: 16384 - Benchmark: 
Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: JAX - Project Size: 4194304 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: JAX - Project Size: 1048576 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: JAX - Project Size: 1048576 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Numba - Project Size: 65536 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Numba - Project Size: 16384 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Numba - Project Size: 16384 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: JAX - Project Size: 1048576 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: JAX - Project Size: 1048576 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: JAX - Project Size: 262144 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: JAX - Project Size: 262144 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: JAX - Project Size: 262144 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: JAX - Project Size: 65536 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: JAX - Project Size: 65536 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: GPU - Backend: JAX - Project Size: 16384 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: JAX - Project Size: 65536 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: 
CPU - Backend: JAX - Project Size: 65536 - Benchmark: Equation of State Seconds < Lower Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: JAX - Project Size: 16384 - Benchmark: Isoneutral Mixing Seconds < Lower Is Better