Desktop machine learning: AMD Ryzen 9 3900X 12-Core testing with a MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS) and NVIDIA GeForce RTX 3060 12GB on Ubuntu 23.10 via the Phoronix Test Suite.

mantic:
  Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads)
  Motherboard: MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 2 x 16GB DDR4-3200MT/s F4-3200C16-16GVK
  Disk: 2000GB Seagate ST2000DM006-2DM1 + 2000GB Western Digital WD20EZAZ-00G + 500GB Samsung SSD 860 + 8002GB Seagate ST8000DM004-2CX1 + 1000GB CT1000BX500SSD1 + 512GB TS512GESD310C
  Graphics: NVIDIA GeForce RTX 3060 12GB
  Audio: NVIDIA GA104 HD Audio
  Monitor: DELL P2314H
  Network: Realtek RTL8111/8168/8411
  OS: Ubuntu 23.10
  Kernel: 6.5.0-9-generic (x86_64)
  Display Server: X Server 1.21.1.7
  Display Driver: NVIDIA
  OpenCL: OpenCL 3.0 CUDA 12.2.146
  Compiler: GCC 13.2.0 + CUDA 12.2
  File-System: ext4
  Screen Resolution: 1920x1080

mantic-no-omit-framepointer:
  Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads)
  Motherboard: MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 2 x 16GB DDR4-3200MT/s F4-3200C16-16GVK
  Disk: 2000GB Seagate ST2000DM006-2DM1 + 2000GB Western Digital WD20EZAZ-00G + 500GB Samsung SSD 860 + 8002GB Seagate ST8000DM004-2CX1 + 1000GB CT1000BX500SSD1 + 512GB TS512GESD310C
  Graphics: NVIDIA GeForce RTX 3060
  Audio: NVIDIA GA104 HD Audio
  Monitor: DELL P2314H
  Network: Realtek RTL8111/8168/8411
  OS: Ubuntu 23.10
  Kernel: 6.5.0-9-generic (x86_64)
  Display Server: X Server 1.21.1.7
  Compiler: GCC 13.2.0 + CUDA 12.2
  File-System: ext4
  Screen Resolution: 1920x1080

noble:
  Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads)
  Motherboard: MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 2 x 16GB DDR4-3200MT/s F4-3200C16-16GVK
  Disk: 2000GB Seagate ST2000DM006-2DM1 + 2000GB Western Digital WD20EZAZ-00G + 500GB Samsung SSD 860 + 8002GB Seagate ST8000DM004-2CX1 + 1000GB CT1000BX500SSD1 + 512GB TS512GESD310C
  Graphics: NVIDIA GeForce RTX 3060
  Audio: NVIDIA GA104 HD Audio
  Monitor: DELL P2314H + U32J59x
  Network: Realtek RTL8111/8168/8211/8411
  OS: Ubuntu 24.04
  Kernel: 6.8.0-31-generic (x86_64)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

Numpy Benchmark - Score (higher is better)
  mantic 426.28, mantic-no-omit-framepointer 428.61, noble 430.83
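The Numpy Benchmark score above is a composite of timed NumPy kernels. As a rough illustration only (a hedged sketch: the kernels, array sizes, and best-of-five timing below are arbitrary assumptions, not the actual Numpy Benchmark test profile), a local micro-timing could look like this:

    # Hedged sketch: time a few NumPy kernels; illustrative choices only,
    # not the Numpy Benchmark test profile reported above.
    import time
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((2000, 2000))
    b = rng.standard_normal((2000, 2000))

    def best_of(fn, repeats=5):
        # best-of-N wall-clock time for one callable
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            times.append(time.perf_counter() - start)
        return min(times)

    for name, fn in [("matmul", lambda: a @ b),
                     ("fft2", lambda: np.fft.fft2(a)),
                     ("svd", lambda: np.linalg.svd(a[:500, :500]))]:
        print(f"{name}: {best_of(fn):.3f} s (best of 5)")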

PyTorch 2.1 - Device: CPU - batches/sec (higher is better)
  Batch Size 1 / ResNet-50:          noble 32.34, mantic 32.36, mantic-no-omit-framepointer 32.54
  Batch Size 1 / ResNet-152:         noble 12.89, mantic 12.72, mantic-no-omit-framepointer 12.78
  Batch Size 16 / ResNet-50:         noble 24.43, mantic 24.28, mantic-no-omit-framepointer 24.38
  Batch Size 32 / ResNet-50:         noble 24.12, mantic 24.29, mantic-no-omit-framepointer 24.35
  Batch Size 64 / ResNet-50:         noble 24.19, mantic 24.24, mantic-no-omit-framepointer 24.40
  Batch Size 16 / ResNet-152:        noble 9.88, mantic 9.88, mantic-no-omit-framepointer 9.93
  Batch Size 256 / ResNet-50:        noble 24.33, mantic 24.42, mantic-no-omit-framepointer 24.37
  Batch Size 32 / ResNet-152:        noble 9.81, mantic 9.84, mantic-no-omit-framepointer 10.00
  Batch Size 512 / ResNet-50:        noble 24.30, mantic 24.13, mantic-no-omit-framepointer 24.28
  Batch Size 64 / ResNet-152:        noble 9.87, mantic 9.88, mantic-no-omit-framepointer 9.91
  Batch Size 256 / ResNet-152:       noble 9.86, mantic 9.77, mantic-no-omit-framepointer 9.91
  Batch Size 512 / ResNet-152:       noble 9.87, mantic 9.87, mantic-no-omit-framepointer 9.80
  Batch Size 1 / Efficientnet_v2_l:  noble 7.31, mantic 7.31, mantic-no-omit-framepointer 7.32
  Batch Size 16 / Efficientnet_v2_l:   noble 5.59, mantic 5.63, mantic-no-omit-framepointer 5.64
  Batch Size 32 / Efficientnet_v2_l:   noble 5.59, mantic 5.63, mantic-no-omit-framepointer 5.64
  Batch Size 64 / Efficientnet_v2_l:   noble 5.60, mantic 5.62, mantic-no-omit-framepointer 5.65
  Batch Size 256 / Efficientnet_v2_l:  noble 5.61, mantic 5.61, mantic-no-omit-framepointer 5.64
  Batch Size 512 / Efficientnet_v2_l:  noble 5.60, mantic 5.61, mantic-no-omit-framepointer 5.65
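The CPU rows above are inference throughput figures. A minimal sketch of how batches per second can be measured on CPU follows; it assumes a recent torchvision is installed, and the warmup count, iteration count, and weights=None model construction are illustrative assumptions rather than what the pinned PyTorch 2.1 test profile does.

    # Hedged sketch of a CPU inference batches/sec measurement (illustrative only).
    import time
    import torch
    import torchvision

    model = torchvision.models.resnet50(weights=None).eval()
    batch = torch.randn(16, 3, 224, 224)      # batch size 16, ImageNet-shaped input

    with torch.no_grad():
        for _ in range(3):                    # warmup iterations
            model(batch)
        iters = 20
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        elapsed = time.perf_counter() - start

    print(f"{iters / elapsed:.2f} batches/sec")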

PyTorch 2.1 - Device: NVIDIA CUDA GPU - batches/sec (higher is better; results reported for mantic and mantic-no-omit-framepointer only)
  Batch Size 1 / ResNet-50:            mantic 210.88, mantic-no-omit-framepointer 211.46
  Batch Size 1 / ResNet-152:           mantic 73.91, mantic-no-omit-framepointer 72.27
  Batch Size 16 / ResNet-50:           mantic 200.30, mantic-no-omit-framepointer 200.17
  Batch Size 32 / ResNet-50:           mantic 199.46, mantic-no-omit-framepointer 202.68
  Batch Size 64 / ResNet-50:           mantic 201.41, mantic-no-omit-framepointer 205.95
  Batch Size 16 / ResNet-152:          mantic 73.01, mantic-no-omit-framepointer 72.24
  Batch Size 256 / ResNet-50:          mantic 202.72, mantic-no-omit-framepointer 203.22
  Batch Size 32 / ResNet-152:          mantic 74.15, mantic-no-omit-framepointer 73.36
  Batch Size 512 / ResNet-50:          mantic 203.18, mantic-no-omit-framepointer 201.14
  Batch Size 64 / ResNet-152:          mantic 71.81, mantic-no-omit-framepointer 73.65
  Batch Size 256 / ResNet-152:         mantic 71.74, mantic-no-omit-framepointer 72.91
  Batch Size 512 / ResNet-152:         mantic 72.31, mantic-no-omit-framepointer 73.75
  Batch Size 1 / Efficientnet_v2_l:    mantic 39.35, mantic-no-omit-framepointer 37.29
  Batch Size 16 / Efficientnet_v2_l:   mantic 38.95, mantic-no-omit-framepointer 36.10
  Batch Size 32 / Efficientnet_v2_l:   mantic 37.71, mantic-no-omit-framepointer 37.16
  Batch Size 64 / Efficientnet_v2_l:   mantic 37.88, mantic-no-omit-framepointer 37.24
  Batch Size 256 / Efficientnet_v2_l:  mantic 37.36, mantic-no-omit-framepointer 36.60
  Batch Size 512 / Efficientnet_v2_l:  mantic 37.43, mantic-no-omit-framepointer 37.22
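For the GPU rows, the same kind of measurement needs explicit synchronization because CUDA kernel launches return asynchronously; without it the timer mostly measures queueing. A hedged sketch, assuming a CUDA-enabled PyTorch build and the same illustrative parameters as the CPU example:

    # Hedged sketch of GPU batches/sec timing; torch.cuda.synchronize() brackets
    # the timed region so queued kernels are fully drained before reading the clock.
    import time
    import torch
    import torchvision

    device = torch.device("cuda")
    model = torchvision.models.resnet50(weights=None).eval().to(device)
    batch = torch.randn(16, 3, 224, 224, device=device)

    with torch.no_grad():
        for _ in range(3):                    # warmup
            model(batch)
        torch.cuda.synchronize()
        iters = 50
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        torch.cuda.synchronize()
        elapsed = time.perf_counter() - start

    print(f"{iters / elapsed:.2f} batches/sec")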

PyBench 2018-02-16 - Total For Average Test Times - Milliseconds (lower is better)
  mantic 774, mantic-no-omit-framepointer 790, noble 839

PyPerformance 1.0.0 - Milliseconds (lower is better)
  go:                  mantic 129, mantic-no-omit-framepointer 131, noble 121
  2to3:                mantic 221, mantic-no-omit-framepointer 224
  chaos:               mantic 62.8, mantic-no-omit-framepointer 63.6
  float:               mantic 67.4, mantic-no-omit-framepointer 66.9
  nbody:               mantic 76.2, mantic-no-omit-framepointer 77.1
  pathlib:             mantic 19.7, mantic-no-omit-framepointer 20.2
  raytrace:            mantic 262, mantic-no-omit-framepointer 274
  json_loads:          mantic 19.5, mantic-no-omit-framepointer 20.8, noble 22.8
  crypto_pyaes:        mantic 65.1, mantic-no-omit-framepointer 66.6
  regex_compile:       mantic 116, mantic-no-omit-framepointer 120
  python_startup:      mantic 7.61, mantic-no-omit-framepointer 7.64, noble 8.76
  django_template:     mantic 28.5, mantic-no-omit-framepointer 29.5
  pickle_pure_python:  mantic 259, mantic-no-omit-framepointer 263
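The PyPerformance figures are per-benchmark times produced by the suite's own runner, which calibrates loop counts and runs isolated worker processes. A rough standalone approximation of one case (json_loads) with the standard-library timeit module is sketched below; the payload shape and loop counts are arbitrary assumptions, so absolute numbers will not match the table above.

    # Hedged sketch approximating a json_loads-style microbenchmark with timeit.
    import json
    import timeit

    payload = json.dumps({"name": "benchmark", "values": list(range(1000)),
                          "nested": {"a": 1, "b": [True, None, 2.5]}})

    per_call = min(timeit.repeat(lambda: json.loads(payload),
                                 number=1000, repeat=5)) / 1000
    print(f"json.loads: {per_call * 1e6:.1f} microseconds per call")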

PyHPC Benchmarks 3.0 - Backend: Numpy - Seconds (lower is better); rows are Device / Project Size / Benchmark
  CPU / 16384 / Equation of State:    mantic 0.003, mantic-no-omit-framepointer 0.003, noble 0.003
  CPU / 16384 / Isoneutral Mixing:    mantic 0.009, mantic-no-omit-framepointer 0.008, noble 0.009
  CPU / 65536 / Equation of State:    mantic 0.015, mantic-no-omit-framepointer 0.015, noble 0.016
  CPU / 65536 / Isoneutral Mixing:    mantic 0.032, mantic-no-omit-framepointer 0.032, noble 0.033
  GPU / 16384 / Equation of State:    mantic 0.003, mantic-no-omit-framepointer 0.002, noble 0.003
  GPU / 16384 / Isoneutral Mixing:    mantic 0.009, mantic-no-omit-framepointer 0.008, noble 0.008
  GPU / 65536 / Equation of State:    mantic 0.015, mantic-no-omit-framepointer 0.015, noble 0.015
  GPU / 65536 / Isoneutral Mixing:    mantic 0.033, mantic-no-omit-framepointer 0.033, noble 0.034
  CPU / 262144 / Equation of State:   mantic 0.061, mantic-no-omit-framepointer 0.058, noble 0.060
  CPU / 262144 / Isoneutral Mixing:   mantic 0.131, mantic-no-omit-framepointer 0.132, noble 0.133
  GPU / 262144 / Equation of State:   mantic 0.062, mantic-no-omit-framepointer 0.058, noble 0.061
  GPU / 262144 / Isoneutral Mixing:   mantic 0.131, mantic-no-omit-framepointer 0.128, noble 0.136
  CPU / 1048576 / Equation of State:  mantic 0.263, mantic-no-omit-framepointer 0.262, noble 0.261
  CPU / 1048576 / Isoneutral Mixing:  mantic 0.619, mantic-no-omit-framepointer 0.618, noble 0.631
  CPU / 4194304 / Equation of State:  mantic 1.402, mantic-no-omit-framepointer 1.405, noble 1.436
  CPU / 4194304 / Isoneutral Mixing:  mantic 2.670, mantic-no-omit-framepointer 2.626, noble 2.720
  GPU / 1048576 / Equation of State:  mantic 0.263, mantic-no-omit-framepointer 0.260, noble 0.262
  GPU / 1048576 / Isoneutral Mixing:  mantic 0.631, mantic-no-omit-framepointer 0.622, noble 0.630
  GPU / 4194304 / Equation of State:  mantic 1.422, mantic-no-omit-framepointer 1.411, noble 1.446
  GPU / 4194304 / Isoneutral Mixing:  mantic 2.662, mantic-no-omit-framepointer 2.620, noble 2.668
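The Equation of State and Isoneutral Mixing cases with the Numpy backend are element-wise array kernels evaluated at the listed project sizes. The sketch below only mimics that structure; the placeholder polynomial is an assumption for illustration, not the benchmark's actual oceanographic kernels.

    # Hedged sketch: an element-wise NumPy kernel timed at one project size;
    # the formula is a placeholder, not the real Equation of State kernel.
    import time
    import numpy as np

    n = 262144
    rng = np.random.default_rng(0)
    temp = rng.uniform(-2.0, 30.0, n)
    salt = rng.uniform(30.0, 40.0, n)
    pres = rng.uniform(0.0, 5000.0, n)

    def placeholder_state(t, s, p):
        # purely element-wise arithmetic, mirroring the benchmark's shape of work
        return 999.8 + 0.8 * s - 0.2 * t + 1e-4 * t * t + 4.5e-3 * p - 1e-7 * p * t

    reps = 100
    start = time.perf_counter()
    for _ in range(reps):
        rho = placeholder_state(temp, salt, pres)
    print(f"{(time.perf_counter() - start) / reps:.5f} s per evaluation at size {n}")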

Scikit-Learn 1.2.2 - Seconds (lower is better)
  GLM:                mantic 293.60, mantic-no-omit-framepointer 295.10, noble 269.81
  SAGA:               mantic 868.02, mantic-no-omit-framepointer 873.82, noble 869.37
  Tree:               mantic 48.34, mantic-no-omit-framepointer 52.97, noble 47.03
  Lasso:              mantic 511.85, mantic-no-omit-framepointer 509.54, noble 345.40
  Sparsify:           mantic 127.28, mantic-no-omit-framepointer 125.44, noble 125.07
  Plot Ward:          mantic 57.82, mantic-no-omit-framepointer 57.55, noble 56.13
  MNIST Dataset:      mantic 65.76, mantic-no-omit-framepointer 65.88, noble 65.42
  Plot Neighbors:     mantic 147.75, mantic-no-omit-framepointer 142.45, noble 142.16
  SGD Regression:     mantic 106.32, mantic-no-omit-framepointer 107.53, noble 78.88
  SGDOneClassSVM:     mantic 379.74, mantic-no-omit-framepointer 382.61, noble 385.38
  Isolation Forest:   mantic 289.37, mantic-no-omit-framepointer 336.37, noble 314.03
  Text Vectorizers:   mantic 60.81, mantic-no-omit-framepointer 63.88, noble 66.39
  Plot Hierarchical:  mantic 211.29, mantic-no-omit-framepointer 208.39, noble 207.10
  Plot OMP vs. LARS:              mantic 91.50, mantic-no-omit-framepointer 92.58, noble 68.17
  Feature Expansions:             mantic 131.28, mantic-no-omit-framepointer 133.09, noble 133.14
  LocalOutlierFactor:             mantic 53.46, mantic-no-omit-framepointer 56.75, noble 54.29
  TSNE MNIST Dataset:             mantic 236.87, mantic-no-omit-framepointer 236.79, noble 285.82
  Isotonic / Logistic:            mantic 1470.81, mantic-no-omit-framepointer 1471.83, noble 1684.55
  Plot Incremental PCA:           mantic 31.01, mantic-no-omit-framepointer 31.06, noble 30.62
  Hist Gradient Boosting:         mantic 109.98, mantic-no-omit-framepointer 111.26, noble 117.41
  Sample Without Replacement:     mantic 158.26, mantic-no-omit-framepointer 161.46, noble 179.64
  Covertype Dataset Benchmark:    mantic 376.15, mantic-no-omit-framepointer 370.69, noble 381.45
  Hist Gradient Boosting Adult:   mantic 103.50, mantic-no-omit-framepointer 105.65, noble 112.71
  Isotonic / Perturbed Logarithm: mantic 1788.26, mantic-no-omit-framepointer 1828.30, noble 1963.77
  Hist Gradient Boosting Threading:            mantic 110.22, mantic-no-omit-framepointer 110.37, noble 111.55
  20 Newsgroups / Logistic Regression:         mantic 41.52, mantic-no-omit-framepointer 41.73, noble 41.91
  Plot Polynomial Kernel Approximation:        mantic 150.73, mantic-no-omit-framepointer 150.38, noble 145.36
  Hist Gradient Boosting Categorical Only:     mantic 18.58, mantic-no-omit-framepointer 18.87, noble 19.93
  Kernel PCA Solvers / Time vs. N Samples:     mantic 72.54, mantic-no-omit-framepointer 72.91, noble 70.02
  Kernel PCA Solvers / Time vs. N Components:  mantic 37.24, mantic-no-omit-framepointer 37.89, noble 37.11
  Sparse Random Projections / 100 Iterations:  mantic 613.55, mantic-no-omit-framepointer 631.07, noble 663.95
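The Scikit-Learn entries report wall-clock seconds for individual benchmark scenarios run against scikit-learn 1.2.2. As a rough illustration of what one timed fit looks like, a hedged sketch in the spirit of the Lasso entry follows; the synthetic data shape and hyperparameters are assumptions and do not reproduce the pinned benchmark's configuration.

    # Hedged sketch: time one scikit-learn estimator fit on synthetic data;
    # sizes and hyperparameters are illustrative only.
    import time
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso

    X, y = make_regression(n_samples=50_000, n_features=200, noise=0.1, random_state=0)

    start = time.perf_counter()
    Lasso(alpha=0.01, max_iter=5_000).fit(X, y)
    print(f"Lasso fit: {time.perf_counter() - start:.2f} s")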

The following test cases were listed without any reported result for any of the three configurations:

PyHPC Benchmarks 3.0 - Seconds (lower is better):
  Backend: JAX - Device: CPU and GPU, Project Sizes 16384 / 65536 / 262144 / 1048576 / 4194304, Equation of State and Isoneutral Mixing
  Backend: Numba - Device: CPU and GPU, Project Sizes 16384 / 65536 / 262144 / 1048576 / 4194304, Equation of State and Isoneutral Mixing
  Backend: Aesara - Device: CPU and GPU, Project Sizes 16384 / 65536 / 262144 / 1048576 / 4194304, Equation of State and Isoneutral Mixing
  Backend: PyTorch - Device: CPU and GPU, Project Sizes 16384 / 65536 / 262144 / 1048576 / 4194304, Equation of State and Isoneutral Mixing
  Backend: TensorFlow - Device: CPU and GPU, Project Sizes 16384 / 65536 / 262144 / 1048576 / 4194304, Equation of State and Isoneutral Mixing

Scikit-Learn 1.2.2 - Seconds (lower is better):
  Glmnet, Plot Lasso Path, Plot Fast KMeans, Plot Parallel Pairwise, Isotonic / Pathological, RCV1 Logreg Convergencet, Plot Singular Value Decomposition