Desktop machine learning

AMD Ryzen 9 3900X 12-Core testing with an MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS) and an NVIDIA GeForce RTX 3060 12GB on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2405015-VPA1-DESKTOP46

The tests in this comparison fall under the following categories: CPU Massive (2 tests), HPC - High Performance Computing (4 tests), Machine Learning (3 tests), Programmer / Developer System Benchmarks (2 tests), Python (5 tests), Server CPU Tests (3 tests), and Single-Threaded (2 tests).

Run overview:

  Result Identifier              Date Run       Test Duration
  mantic                         February 23    15 Hours, 54 Minutes
  mantic-no-omit-framepointer    February 24    19 Hours, 11 Minutes
  noble                          April 30       14 Hours, 21 Minutes
  Average                                       16 Hours, 28 Minutes



Desktop machine learning - System Details

mantic / mantic-no-omit-framepointer:
  Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads)
  Motherboard: MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 2 x 16GB DDR4-3200MT/s F4-3200C16-16GVK
  Disk: 2000GB Seagate ST2000DM006-2DM1 + 2000GB Western Digital WD20EZAZ-00G + 500GB Samsung SSD 860 + 8002GB Seagate ST8000DM004-2CX1 + 1000GB CT1000BX500SSD1 + 512GB TS512GESD310C
  Graphics: NVIDIA GeForce RTX 3060 12GB
  Audio: NVIDIA GA104 HD Audio
  Monitor: DELL P2314H
  Network: Realtek RTL8111/8168/8411
  OS: Ubuntu 23.10
  Kernel: 6.5.0-9-generic (x86_64)
  Display Server: X Server 1.21.1.7
  Display Driver: NVIDIA
  OpenCL: OpenCL 3.0 CUDA 12.2.146
  Compiler: GCC 13.2.0 + CUDA 12.2
  File-System: ext4
  Screen Resolution: 1920x1080

noble (fields reported differently for this run):
  Graphics: NVIDIA GeForce RTX 3060
  Monitor: DELL P2314H + U32J59x
  Network: Realtek RTL8111/8168/8211/8411
  OS: Ubuntu 24.04
  Kernel: 6.8.0-31-generic (x86_64)
  Compiler: GCC 13.2.0

Kernel Details
  Transparent Huge Pages: madvise

Compiler Details
  mantic: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  mantic-no-omit-framepointer: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-b9QCDx/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-b9QCDx/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  noble: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details
  Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8701013

Python Details
  mantic: Python 3.11.6
  mantic-no-omit-framepointer: Python 3.11.6
  noble: Python 3.12.3

Security Details
  mantic: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
  mantic-no-omit-framepointer: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
  noble: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Environment Details
  mantic-no-omit-framepointer: CXXFLAGS=-fno-omit-frame-pointer QMAKE_CFLAGS=-fno-omit-frame-pointer CFLAGS=-fno-omit-frame-pointer CFLAGS_OVERRIDE=-fno-omit-frame-pointer QMAKE_CXXFLAGS=-fno-omit-frame-pointer FFLAGS=-fno-omit-frame-pointer
  noble: CXXFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" QMAKE_CFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" CFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" CFLAGS_OVERRIDE="-fno-omit-frame-pointer -frecord-gcc-switches -O2" QMAKE_CXXFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" FFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2"

Result Overview (Phoronix Test Suite): relative performance of mantic, mantic-no-omit-framepointer, and noble across PyPerformance, PyBench, PyHPC Benchmarks, Scikit-Learn, Numpy Benchmark, and PyTorch.

Desktop machine learningnumpy: pytorch: CPU - 1 - ResNet-50pytorch: CPU - 1 - ResNet-152pytorch: CPU - 16 - ResNet-50pytorch: CPU - 32 - ResNet-50pytorch: CPU - 64 - ResNet-50pytorch: CPU - 16 - ResNet-152pytorch: CPU - 256 - ResNet-50pytorch: CPU - 32 - ResNet-152pytorch: CPU - 512 - ResNet-50pytorch: CPU - 64 - ResNet-152pytorch: CPU - 256 - ResNet-152pytorch: CPU - 512 - ResNet-152pytorch: CPU - 1 - Efficientnet_v2_lpytorch: CPU - 16 - Efficientnet_v2_lpytorch: CPU - 32 - Efficientnet_v2_lpytorch: CPU - 64 - Efficientnet_v2_lpytorch: CPU - 256 - Efficientnet_v2_lpytorch: CPU - 512 - Efficientnet_v2_lpytorch: NVIDIA CUDA GPU - 1 - ResNet-50pytorch: NVIDIA CUDA GPU - 1 - ResNet-152pytorch: NVIDIA CUDA GPU - 16 - ResNet-50pytorch: NVIDIA CUDA GPU - 32 - ResNet-50pytorch: NVIDIA CUDA GPU - 64 - ResNet-50pytorch: NVIDIA CUDA GPU - 16 - ResNet-152pytorch: NVIDIA CUDA GPU - 256 - ResNet-50pytorch: NVIDIA CUDA GPU - 32 - ResNet-152pytorch: NVIDIA CUDA GPU - 512 - ResNet-50pytorch: NVIDIA CUDA GPU - 64 - ResNet-152pytorch: NVIDIA CUDA GPU - 256 - ResNet-152pytorch: NVIDIA CUDA GPU - 512 - ResNet-152pytorch: NVIDIA CUDA GPU - 1 - Efficientnet_v2_lpytorch: NVIDIA CUDA GPU - 16 - Efficientnet_v2_lpytorch: NVIDIA CUDA GPU - 32 - Efficientnet_v2_lpytorch: NVIDIA CUDA GPU - 64 - Efficientnet_v2_lpytorch: NVIDIA CUDA GPU - 256 - Efficientnet_v2_lpytorch: NVIDIA CUDA GPU - 512 - Efficientnet_v2_lpybench: Total For Average Test Timespyperformance: gopyperformance: 2to3pyperformance: chaospyperformance: floatpyperformance: nbodypyperformance: pathlibpyperformance: raytracepyperformance: json_loadspyperformance: crypto_pyaespyperformance: regex_compilepyperformance: python_startuppyperformance: django_templatepyperformance: pickle_pure_pythonpyhpc: CPU - Numpy - 16384 - Equation of Statepyhpc: CPU - Numpy - 16384 - Isoneutral Mixingpyhpc: CPU - Numpy - 65536 - Equation of Statepyhpc: CPU - Numpy - 65536 - Isoneutral Mixingpyhpc: GPU - Numpy - 16384 - Equation of Statepyhpc: GPU - Numpy - 16384 - Isoneutral Mixingpyhpc: GPU - Numpy - 65536 - Equation of Statepyhpc: GPU - Numpy - 65536 - Isoneutral Mixingpyhpc: CPU - Numpy - 262144 - Equation of Statepyhpc: CPU - Numpy - 262144 - Isoneutral Mixingpyhpc: GPU - Numpy - 262144 - Equation of Statepyhpc: GPU - Numpy - 262144 - Isoneutral Mixingpyhpc: CPU - Numpy - 1048576 - Equation of Statepyhpc: CPU - Numpy - 1048576 - Isoneutral Mixingpyhpc: CPU - Numpy - 4194304 - Equation of Statepyhpc: CPU - Numpy - 4194304 - Isoneutral Mixingpyhpc: GPU - Numpy - 1048576 - Equation of Statepyhpc: GPU - Numpy - 1048576 - Isoneutral Mixingpyhpc: GPU - Numpy - 4194304 - Equation of Statepyhpc: GPU - Numpy - 4194304 - Isoneutral Mixingscikit-learn: GLMscikit-learn: SAGAscikit-learn: Treescikit-learn: Lassoscikit-learn: Sparsifyscikit-learn: Plot Wardscikit-learn: MNIST Datasetscikit-learn: Plot Neighborsscikit-learn: SGD Regressionscikit-learn: SGDOneClassSVMscikit-learn: Isolation Forestscikit-learn: Text Vectorizersscikit-learn: Plot Hierarchicalscikit-learn: Plot OMP vs. 
LARSscikit-learn: Feature Expansionsscikit-learn: LocalOutlierFactorscikit-learn: TSNE MNIST Datasetscikit-learn: Isotonic / Logisticscikit-learn: Plot Incremental PCAscikit-learn: Hist Gradient Boostingscikit-learn: Sample Without Replacementscikit-learn: Covertype Dataset Benchmarkscikit-learn: Hist Gradient Boosting Adultscikit-learn: Isotonic / Perturbed Logarithmscikit-learn: Hist Gradient Boosting Threadingscikit-learn: 20 Newsgroups / Logistic Regressionscikit-learn: Plot Polynomial Kernel Approximationscikit-learn: Hist Gradient Boosting Categorical Onlyscikit-learn: Kernel PCA Solvers / Time vs. N Samplesscikit-learn: Kernel PCA Solvers / Time vs. N Componentsscikit-learn: Sparse Rand Projections / 100 Iterationsscikit-learn: Plot Singular Value Decompositionmanticmantic-no-omit-framepointernoble426.2832.3612.7224.2824.2924.249.8824.429.8424.139.889.779.877.315.635.635.625.615.61210.8873.91200.30199.46201.4173.01202.7274.15203.1871.8171.7472.3139.3538.9537.7137.8837.3637.4377412922162.867.476.219.726219.565.11167.6128.52590.0030.0090.0150.0320.0030.0090.0150.0330.0610.1310.0620.1310.2630.6191.4022.6700.2630.6311.4222.662293.598868.01848.338511.848127.28257.82465.763147.752106.315379.739289.37160.814211.28691.499131.27753.464236.8651470.80631.006109.984158.262376.145103.4971788.259110.21541.519150.73218.57972.54137.242613.547428.6132.5412.7824.3824.3524.409.9324.3710.0024.289.919.919.807.325.645.645.655.645.65211.4672.27200.17202.68205.9572.24203.2273.36201.1473.6572.9173.7537.2936.1037.1637.2436.6037.2279013122463.666.977.120.227420.866.61207.6429.52630.0030.0080.0150.0320.0020.0080.0150.0330.0580.1320.0580.1280.2620.6181.4052.6260.2600.6221.4112.620295.096873.82252.969509.537125.44257.54565.877142.451107.527382.611336.37263.875208.39192.582133.09256.754236.7861471.83431.057111.255161.460370.694105.6471828.300110.37441.728150.37618.86572.90937.889631.071430.8332.3412.8924.4324.1224.199.8824.339.8124.309.879.869.877.315.595.595.605.615.6083912122.88.760.0030.0090.0160.0330.0030.0080.0150.0340.0600.1330.0610.1360.2610.6311.4362.7200.2620.6301.4462.668269.806869.36947.033345.400125.06956.13265.416142.15978.880385.383314.03466.393207.10468.172133.14454.288285.8231684.54630.617117.407179.638381.447112.7131963.772111.55441.914145.36319.93270.02237.107663.953OpenBenchmarking.org

Numpy Benchmark

This test measures general NumPy performance. Learn more via the OpenBenchmarking.org test page.
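
For context on what a score like this reflects, below is a minimal sketch of timing a few vectorized NumPy kernels. It is not the actual Numpy Benchmark test profile; the array size, the chosen operations, and the repeat count are illustrative assumptions.

    # Minimal sketch of NumPy kernel timing (not the actual Numpy Benchmark
    # test profile; sizes, kernels, and repeat counts are assumptions).
    import time
    import numpy as np

    def best_time(fn, repeats=5):
        # Best wall-clock time over several repeats.
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            best = min(best, time.perf_counter() - start)
        return best

    n = 2048
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    print("matmul:", best_time(lambda: a @ b), "s")
    print("svd   :", best_time(lambda: np.linalg.svd(a, compute_uv=False)), "s")
    print("fft2  :", best_time(lambda: np.fft.fft2(a)), "s")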

OpenBenchmarking.orgScore, More Is BetterNumpy Benchmarkmanticmantic-no-omit-framepointernoble90180270360450SE +/- 1.20, N = 3SE +/- 0.90, N = 3SE +/- 1.01, N = 3426.28428.61430.83
OpenBenchmarking.orgScore, More Is BetterNumpy Benchmarkmanticmantic-no-omit-framepointernoble80160240320400Min: 424.24 / Avg: 426.28 / Max: 428.4Min: 427.08 / Avg: 428.61 / Max: 430.2Min: 428.85 / Avg: 430.83 / Max: 432.17

PyTorch

This is a benchmark of PyTorch built on pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. The test profile is currently geared toward CPU-based testing, although NVIDIA CUDA GPU results are also reported in this comparison. Learn more via the OpenBenchmarking.org test page.
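
As a rough illustration of the batches/sec metric reported below, here is a standalone sketch that times ResNet-50 inference on the CPU with torchvision. The actual test profile drives pytorch-benchmark; the batch size, warm-up count, and iteration count here are assumptions.

    # Rough sketch of a batches/sec measurement for CPU ResNet-50 inference.
    # The actual test profile uses pytorch-benchmark; this loop only
    # approximates the idea, and the counts below are assumptions.
    import time
    import torch
    import torchvision.models as models

    def batches_per_sec(model, batch_size=16, warmup=5, iters=20):
        x = torch.randn(batch_size, 3, 224, 224)
        model.eval()
        with torch.no_grad():
            for _ in range(warmup):      # discard warm-up iterations
                model(x)
            start = time.perf_counter()
            for _ in range(iters):
                model(x)
            elapsed = time.perf_counter() - start
        return iters / elapsed

    resnet50 = models.resnet50(weights=None)  # random weights suffice for timing
    print(f"CPU - Batch Size: 16 - ResNet-50: {batches_per_sec(resnet50):.2f} batches/sec")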

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 1 - Model: ResNet-50noblemanticmantic-no-omit-framepointer816243240SE +/- 0.17, N = 3SE +/- 0.11, N = 3SE +/- 0.16, N = 332.3432.3632.54MIN: 28.9 / MAX: 32.83MIN: 31.89 / MAX: 32.7MIN: 31.64 / MAX: 32.94
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 1 - Model: ResNet-50noblemanticmantic-no-omit-framepointer714212835Min: 32.09 / Avg: 32.34 / Max: 32.67Min: 32.15 / Avg: 32.36 / Max: 32.53Min: 32.24 / Avg: 32.54 / Max: 32.79

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 1 - Model: ResNet-152noblemanticmantic-no-omit-framepointer3691215SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 312.8912.7212.78MIN: 12.36 / MAX: 13.05MIN: 11.99 / MAX: 12.8MIN: 11.9 / MAX: 12.9
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 1 - Model: ResNet-152noblemanticmantic-no-omit-framepointer48121620Min: 12.8 / Avg: 12.89 / Max: 12.98Min: 12.67 / Avg: 12.72 / Max: 12.75Min: 12.7 / Avg: 12.78 / Max: 12.85

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: ResNet-50noblemanticmantic-no-omit-framepointer612182430SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.16, N = 324.4324.2824.38MIN: 22.57 / MAX: 24.72MIN: 20.22 / MAX: 24.56MIN: 22.2 / MAX: 24.87
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: ResNet-50noblemanticmantic-no-omit-framepointer612182430Min: 24.42 / Avg: 24.43 / Max: 24.44Min: 24.21 / Avg: 24.28 / Max: 24.38Min: 24.17 / Avg: 24.38 / Max: 24.69

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 32 - Model: ResNet-50noblemanticmantic-no-omit-framepointer612182430SE +/- 0.06, N = 3SE +/- 0.10, N = 3SE +/- 0.16, N = 324.1224.2924.35MIN: 22.33 / MAX: 24.46MIN: 22.24 / MAX: 24.66MIN: 23.67 / MAX: 24.87
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 32 - Model: ResNet-50noblemanticmantic-no-omit-framepointer612182430Min: 24.05 / Avg: 24.12 / Max: 24.24Min: 24.16 / Avg: 24.29 / Max: 24.49Min: 24.12 / Avg: 24.35 / Max: 24.65

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 64 - Model: ResNet-50noblemanticmantic-no-omit-framepointer612182430SE +/- 0.11, N = 3SE +/- 0.04, N = 3SE +/- 0.15, N = 324.1924.2424.40MIN: 22.75 / MAX: 24.73MIN: 23.59 / MAX: 24.49MIN: 21.6 / MAX: 24.8
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 64 - Model: ResNet-50noblemanticmantic-no-omit-framepointer612182430Min: 23.97 / Avg: 24.19 / Max: 24.34Min: 24.18 / Avg: 24.24 / Max: 24.32Min: 24.13 / Avg: 24.4 / Max: 24.63

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: ResNet-152noblemanticmantic-no-omit-framepointer3691215SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.01, N = 39.889.889.93MIN: 9.15 / MAX: 9.98MIN: 9.31 / MAX: 10.01MIN: 9.39 / MAX: 10.01
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: ResNet-152noblemanticmantic-no-omit-framepointer3691215Min: 9.85 / Avg: 9.88 / Max: 9.9Min: 9.81 / Avg: 9.88 / Max: 9.94Min: 9.91 / Avg: 9.93 / Max: 9.95

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 256 - Model: ResNet-50noblemanticmantic-no-omit-framepointer612182430SE +/- 0.06, N = 3SE +/- 0.03, N = 3SE +/- 0.11, N = 324.3324.4224.37MIN: 22.79 / MAX: 24.66MIN: 20.15 / MAX: 24.74MIN: 23.76 / MAX: 24.81
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 256 - Model: ResNet-50noblemanticmantic-no-omit-framepointer612182430Min: 24.21 / Avg: 24.33 / Max: 24.43Min: 24.38 / Avg: 24.42 / Max: 24.48Min: 24.16 / Avg: 24.37 / Max: 24.55

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 32 - Model: ResNet-152noblemanticmantic-no-omit-framepointer3691215SE +/- 0.04, N = 3SE +/- 0.05, N = 3SE +/- 0.09, N = 39.819.8410.00MIN: 9.42 / MAX: 9.93MIN: 9.6 / MAX: 9.98MIN: 8.09 / MAX: 10.27
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 32 - Model: ResNet-152noblemanticmantic-no-omit-framepointer3691215Min: 9.75 / Avg: 9.81 / Max: 9.87Min: 9.75 / Avg: 9.84 / Max: 9.92Min: 9.89 / Avg: 10 / Max: 10.19

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 512 - Model: ResNet-50noblemanticmantic-no-omit-framepointer612182430SE +/- 0.14, N = 3SE +/- 0.02, N = 3SE +/- 0.08, N = 324.3024.1324.28MIN: 22.45 / MAX: 24.75MIN: 23.58 / MAX: 24.41MIN: 22.31 / MAX: 24.53
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 512 - Model: ResNet-50noblemanticmantic-no-omit-framepointer612182430Min: 24.04 / Avg: 24.3 / Max: 24.52Min: 24.11 / Avg: 24.13 / Max: 24.16Min: 24.13 / Avg: 24.28 / Max: 24.38

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 64 - Model: ResNet-152noblemanticmantic-no-omit-framepointer3691215SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 39.879.889.91MIN: 8.61 / MAX: 9.96MIN: 8.8 / MAX: 9.98MIN: 8.69 / MAX: 10.08
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 64 - Model: ResNet-152noblemanticmantic-no-omit-framepointer3691215Min: 9.84 / Avg: 9.87 / Max: 9.89Min: 9.81 / Avg: 9.88 / Max: 9.91Min: 9.86 / Avg: 9.91 / Max: 9.95

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 256 - Model: ResNet-152noblemanticmantic-no-omit-framepointer3691215SE +/- 0.03, N = 3SE +/- 0.07, N = 3SE +/- 0.04, N = 39.869.779.91MIN: 8.69 / MAX: 9.99MIN: 9.17 / MAX: 10MIN: 9.19 / MAX: 10.05
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 256 - Model: ResNet-152noblemanticmantic-no-omit-framepointer3691215Min: 9.81 / Avg: 9.86 / Max: 9.91Min: 9.66 / Avg: 9.77 / Max: 9.9Min: 9.85 / Avg: 9.91 / Max: 9.97

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 512 - Model: ResNet-152noblemanticmantic-no-omit-framepointer3691215SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.07, N = 39.879.879.80MIN: 9.21 / MAX: 10MIN: 9.09 / MAX: 9.96MIN: 9.12 / MAX: 9.98
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 512 - Model: ResNet-152noblemanticmantic-no-omit-framepointer3691215Min: 9.81 / Avg: 9.87 / Max: 9.92Min: 9.83 / Avg: 9.87 / Max: 9.89Min: 9.66 / Avg: 9.8 / Max: 9.88

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_lnoblemanticmantic-no-omit-framepointer246810SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 37.317.317.32MIN: 7.07 / MAX: 7.36MIN: 7.16 / MAX: 7.34MIN: 7.23 / MAX: 7.38
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_lnoblemanticmantic-no-omit-framepointer3691215Min: 7.3 / Avg: 7.31 / Max: 7.32Min: 7.3 / Avg: 7.31 / Max: 7.31Min: 7.28 / Avg: 7.32 / Max: 7.35

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_lnoblemanticmantic-no-omit-framepointer1.2692.5383.8075.0766.345SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 35.595.635.64MIN: 5.31 / MAX: 5.65MIN: 5.39 / MAX: 5.71MIN: 5.45 / MAX: 5.68
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_lnoblemanticmantic-no-omit-framepointer246810Min: 5.56 / Avg: 5.59 / Max: 5.62Min: 5.6 / Avg: 5.63 / Max: 5.67Min: 5.62 / Avg: 5.64 / Max: 5.65

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_lnoblemanticmantic-no-omit-framepointer1.2692.5383.8075.0766.345SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 35.595.635.64MIN: 5.46 / MAX: 5.64MIN: 5.31 / MAX: 5.68MIN: 5.52 / MAX: 5.69
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_lnoblemanticmantic-no-omit-framepointer246810Min: 5.58 / Avg: 5.59 / Max: 5.6Min: 5.61 / Avg: 5.63 / Max: 5.65Min: 5.62 / Avg: 5.64 / Max: 5.66

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_lnoblemanticmantic-no-omit-framepointer1.27132.54263.81395.08526.3565SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 35.605.625.65MIN: 5.32 / MAX: 5.64MIN: 5.35 / MAX: 5.66MIN: 5.45 / MAX: 5.7
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_lnoblemanticmantic-no-omit-framepointer246810Min: 5.59 / Avg: 5.6 / Max: 5.61Min: 5.61 / Avg: 5.62 / Max: 5.64Min: 5.64 / Avg: 5.65 / Max: 5.67

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_lnoblemanticmantic-no-omit-framepointer1.2692.5383.8075.0766.345SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 35.615.615.64MIN: 5.46 / MAX: 5.67MIN: 5.44 / MAX: 5.65MIN: 5.29 / MAX: 5.68
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_lnoblemanticmantic-no-omit-framepointer246810Min: 5.58 / Avg: 5.61 / Max: 5.63Min: 5.57 / Avg: 5.61 / Max: 5.62Min: 5.62 / Avg: 5.64 / Max: 5.66

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_lnoblemanticmantic-no-omit-framepointer1.27132.54263.81395.08526.3565SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 35.605.615.65MIN: 5.37 / MAX: 5.66MIN: 5.45 / MAX: 5.66MIN: 5.36 / MAX: 5.93
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_lnoblemanticmantic-no-omit-framepointer246810Min: 5.59 / Avg: 5.6 / Max: 5.62Min: 5.59 / Avg: 5.61 / Max: 5.64Min: 5.61 / Avg: 5.65 / Max: 5.68

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-50manticmantic-no-omit-framepointer50100150200250SE +/- 2.67, N = 3SE +/- 1.46, N = 15210.88211.46MIN: 195.21 / MAX: 218.16MIN: 192.13 / MAX: 223.01
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-50manticmantic-no-omit-framepointer4080120160200Min: 208.16 / Avg: 210.88 / Max: 216.23Min: 201.42 / Avg: 211.46 / Max: 221.25

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-152manticmantic-no-omit-framepointer1632486480SE +/- 0.56, N = 3SE +/- 0.96, N = 373.9172.27MIN: 68.9 / MAX: 75.9MIN: 68.86 / MAX: 76.62
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-152manticmantic-no-omit-framepointer1428425670Min: 72.86 / Avg: 73.91 / Max: 74.77Min: 70.94 / Avg: 72.27 / Max: 74.14

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-50manticmantic-no-omit-framepointer4080120160200SE +/- 0.25, N = 3SE +/- 0.96, N = 3200.30200.17MIN: 182.88 / MAX: 202.36MIN: 183.43 / MAX: 203.55
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-50manticmantic-no-omit-framepointer4080120160200Min: 199.88 / Avg: 200.3 / Max: 200.74Min: 198.47 / Avg: 200.17 / Max: 201.79

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-50manticmantic-no-omit-framepointer4080120160200SE +/- 1.06, N = 3SE +/- 2.52, N = 4199.46202.68MIN: 182.77 / MAX: 206.03MIN: 182.69 / MAX: 211.53
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-50manticmantic-no-omit-framepointer4080120160200Min: 198.05 / Avg: 199.46 / Max: 201.54Min: 199.57 / Avg: 202.68 / Max: 210.19

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-50manticmantic-no-omit-framepointer50100150200250SE +/- 0.58, N = 3SE +/- 1.98, N = 3201.41205.95MIN: 184.02 / MAX: 203.68MIN: 186.96 / MAX: 210.21
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-50manticmantic-no-omit-framepointer4080120160200Min: 200.37 / Avg: 201.41 / Max: 202.39Min: 202.2 / Avg: 205.95 / Max: 208.9

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-152manticmantic-no-omit-framepointer1632486480SE +/- 0.96, N = 3SE +/- 0.20, N = 373.0172.24MIN: 68.06 / MAX: 75.3MIN: 68.36 / MAX: 73.14
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-152manticmantic-no-omit-framepointer1428425670Min: 72.04 / Avg: 73.01 / Max: 74.94Min: 71.97 / Avg: 72.24 / Max: 72.62

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-50manticmantic-no-omit-framepointer4080120160200SE +/- 1.76, N = 3SE +/- 1.21, N = 3202.72203.22MIN: 183.1 / MAX: 207.93MIN: 185.88 / MAX: 206.71
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-50manticmantic-no-omit-framepointer4080120160200Min: 200.2 / Avg: 202.72 / Max: 206.12Min: 201.47 / Avg: 203.22 / Max: 205.54

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-152manticmantic-no-omit-framepointer1632486480SE +/- 0.96, N = 3SE +/- 0.74, N = 374.1573.36MIN: 68.27 / MAX: 75.61MIN: 68.19 / MAX: 74.63
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-152manticmantic-no-omit-framepointer1428425670Min: 72.24 / Avg: 74.15 / Max: 75.21Min: 71.88 / Avg: 73.36 / Max: 74.25

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-50manticmantic-no-omit-framepointer4080120160200SE +/- 1.69, N = 3SE +/- 0.33, N = 3203.18201.14MIN: 183.76 / MAX: 207.98MIN: 183.61 / MAX: 202.73
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-50manticmantic-no-omit-framepointer4080120160200Min: 200.87 / Avg: 203.18 / Max: 206.47Min: 200.57 / Avg: 201.14 / Max: 201.72

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-152manticmantic-no-omit-framepointer1632486480SE +/- 0.44, N = 3SE +/- 0.66, N = 371.8173.65MIN: 67.31 / MAX: 72.89MIN: 68.88 / MAX: 75.03
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-152manticmantic-no-omit-framepointer1428425670Min: 70.98 / Avg: 71.81 / Max: 72.49Min: 72.4 / Avg: 73.65 / Max: 74.65

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-152manticmantic-no-omit-framepointer1632486480SE +/- 0.24, N = 3SE +/- 0.83, N = 371.7472.91MIN: 67.87 / MAX: 72.6MIN: 68 / MAX: 75.45
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-152manticmantic-no-omit-framepointer1428425670Min: 71.3 / Avg: 71.74 / Max: 72.15Min: 71.91 / Avg: 72.91 / Max: 74.56

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-152manticmantic-no-omit-framepointer1632486480SE +/- 0.94, N = 3SE +/- 0.50, N = 372.3173.75MIN: 67.38 / MAX: 74.62MIN: 68.91 / MAX: 75.15
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-152manticmantic-no-omit-framepointer1428425670Min: 71.37 / Avg: 72.31 / Max: 74.18Min: 72.74 / Avg: 73.75 / Max: 74.27

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: Efficientnet_v2_lmanticmantic-no-omit-framepointer918273645SE +/- 0.47, N = 3SE +/- 0.26, N = 339.3537.29MIN: 36.65 / MAX: 40.42MIN: 35.83 / MAX: 39.17
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: Efficientnet_v2_lmanticmantic-no-omit-framepointer816243240Min: 38.62 / Avg: 39.35 / Max: 40.24Min: 36.87 / Avg: 37.29 / Max: 37.75

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: Efficientnet_v2_lmanticmantic-no-omit-framepointer918273645SE +/- 0.08, N = 3SE +/- 0.02, N = 338.9536.10MIN: 37.12 / MAX: 39.27MIN: 34.25 / MAX: 38.01
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: Efficientnet_v2_lmanticmantic-no-omit-framepointer816243240Min: 38.8 / Avg: 38.95 / Max: 39.06Min: 36.07 / Avg: 36.1 / Max: 36.13

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: Efficientnet_v2_lmanticmantic-no-omit-framepointer918273645SE +/- 0.24, N = 3SE +/- 0.30, N = 1537.7137.16MIN: 35.52 / MAX: 38.25MIN: 34.12 / MAX: 39.48
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: Efficientnet_v2_lmanticmantic-no-omit-framepointer816243240Min: 37.27 / Avg: 37.71 / Max: 38.07Min: 35.42 / Avg: 37.16 / Max: 39.21

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: Efficientnet_v2_lmanticmantic-no-omit-framepointer918273645SE +/- 0.30, N = 9SE +/- 0.31, N = 1537.8837.24MIN: 35.67 / MAX: 39.63MIN: 33.97 / MAX: 39.43
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: Efficientnet_v2_lmanticmantic-no-omit-framepointer816243240Min: 36.42 / Avg: 37.88 / Max: 39.19Min: 35.39 / Avg: 37.24 / Max: 39.02

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: Efficientnet_v2_lmanticmantic-no-omit-framepointer918273645SE +/- 0.15, N = 3SE +/- 0.30, N = 1537.3636.60MIN: 35.47 / MAX: 37.85MIN: 33.07 / MAX: 39.53
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: Efficientnet_v2_lmanticmantic-no-omit-framepointer816243240Min: 37.07 / Avg: 37.36 / Max: 37.53Min: 35.12 / Avg: 36.6 / Max: 38.83

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: Efficientnet_v2_lmanticmantic-no-omit-framepointer918273645SE +/- 0.03, N = 3SE +/- 0.33, N = 837.4337.22MIN: 35.81 / MAX: 38.02MIN: 34.99 / MAX: 39.08
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: Efficientnet_v2_lmanticmantic-no-omit-framepointer816243240Min: 37.39 / Avg: 37.43 / Max: 37.48Min: 35.84 / Avg: 37.22 / Max: 38.81

PyBench

This test profile reports the total of the average timed test results from PyBench. PyBench reports average times for operations such as BuiltinFunctionCalls and NestedForLoops, and the total provides a rough estimate of Python's overall performance on a given system. This test profile runs PyBench for 20 rounds each time. Learn more via the OpenBenchmarking.org test page.
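
To give a feel for what these averaged microbenchmarks look like, the sketch below times two PyBench-style operations (builtin function calls and nested for-loops) with the standard timeit module. It is not PyBench itself, and the iteration counts are assumptions.

    # PyBench-style microbenchmarks timed with the standard timeit module
    # (not PyBench itself; iteration counts are assumptions).
    import timeit

    def builtin_function_calls():
        for _ in range(1000):
            len("abc"); abs(-1); min(1, 2)

    def nested_for_loops():
        total = 0
        for _ in range(100):
            for _ in range(100):
                total += 1
        return total

    ROUNDS = 20  # 20 rounds, mirroring how the test profile runs PyBench
    for name, fn in [("BuiltinFunctionCalls", builtin_function_calls),
                     ("NestedForLoops", nested_for_loops)]:
        avg_ms = min(timeit.repeat(fn, number=50, repeat=ROUNDS)) / 50 * 1000
        print(f"{name}: {avg_ms:.3f} ms per pass")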

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test Timesmanticmantic-no-omit-framepointernoble2004006008001000SE +/- 1.00, N = 3SE +/- 1.20, N = 3SE +/- 8.70, N = 4774790839
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test Timesmanticmantic-no-omit-framepointernoble150300450600750Min: 772 / Avg: 774 / Max: 775Min: 788 / Avg: 790.33 / Max: 792Min: 813 / Avg: 838.5 / Max: 852

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
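
Several of the PyPerformance workloads reported below (for example json_loads) boil down to timing a small, self-contained task. The sketch below is a standard-library stand-in for the json_loads workload rather than the pyperf harness the real suite uses; the payload size and repeat counts are assumptions.

    # Stand-in for the spirit of the json_loads workload, timed with the
    # standard library instead of the pyperf harness PyPerformance uses.
    # Payload size and repeat counts are illustrative assumptions.
    import json
    import timeit

    payload = json.dumps({f"key_{i}": [i, str(i), i * 0.5] for i in range(1000)})

    def json_loads_workload():
        json.loads(payload)

    runs = timeit.repeat(json_loads_workload, number=200, repeat=5)
    print(f"json.loads: {min(runs) / 200 * 1e3:.3f} ms per call (best of 5)")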

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: gomanticmantic-no-omit-framepointernoble306090120150SE +/- 0.00, N = 3SE +/- 0.33, N = 3SE +/- 0.00, N = 3129131121
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: gomanticmantic-no-omit-framepointernoble20406080100Min: 129 / Avg: 129 / Max: 129Min: 131 / Avg: 131.33 / Max: 132Min: 121 / Avg: 121 / Max: 121

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to3manticmantic-no-omit-framepointer50100150200250SE +/- 0.00, N = 3SE +/- 0.33, N = 3221224
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to3manticmantic-no-omit-framepointer4080120160200Min: 221 / Avg: 221 / Max: 221Min: 223 / Avg: 223.67 / Max: 224

Benchmark: 2to3

noble: The test quit with a non-zero exit status. E: ERROR: No benchmark was run

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: chaosmanticmantic-no-omit-framepointer1428425670SE +/- 0.03, N = 3SE +/- 0.20, N = 362.863.6
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: chaosmanticmantic-no-omit-framepointer1224364860Min: 62.8 / Avg: 62.83 / Max: 62.9Min: 63.2 / Avg: 63.57 / Max: 63.9

Benchmark: chaos

noble: The test quit with a non-zero exit status. E: ERROR: No benchmark was run

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: floatmanticmantic-no-omit-framepointer1530456075SE +/- 0.03, N = 3SE +/- 0.10, N = 367.466.9
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: floatmanticmantic-no-omit-framepointer1326395265Min: 67.4 / Avg: 67.43 / Max: 67.5Min: 66.8 / Avg: 66.9 / Max: 67.1

Benchmark: float

noble: The test quit with a non-zero exit status. E: ERROR: No benchmark was run

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbodymanticmantic-no-omit-framepointer20406080100SE +/- 0.06, N = 3SE +/- 0.07, N = 376.277.1
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbodymanticmantic-no-omit-framepointer1530456075Min: 76.1 / Avg: 76.2 / Max: 76.3Min: 77 / Avg: 77.07 / Max: 77.2

Benchmark: nbody

noble: The test quit with a non-zero exit status. E: ERROR: No benchmark was run

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlibmanticmantic-no-omit-framepointer510152025SE +/- 0.00, N = 3SE +/- 0.00, N = 319.720.2
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlibmanticmantic-no-omit-framepointer510152025Min: 19.7 / Avg: 19.7 / Max: 19.7Min: 20.2 / Avg: 20.2 / Max: 20.2

Benchmark: pathlib

noble: The test quit with a non-zero exit status. E: ERROR: No benchmark was run

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: raytracemanticmantic-no-omit-framepointer60120180240300SE +/- 0.33, N = 3SE +/- 0.33, N = 3262274
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: raytracemanticmantic-no-omit-framepointer50100150200250Min: 261 / Avg: 261.67 / Max: 262Min: 273 / Avg: 273.67 / Max: 274

Benchmark: raytrace

noble: The test quit with a non-zero exit status. E: ERROR: No benchmark was run

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: json_loadsmanticmantic-no-omit-framepointernoble510152025SE +/- 0.06, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 319.520.822.8
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: json_loadsmanticmantic-no-omit-framepointernoble510152025Min: 19.4 / Avg: 19.5 / Max: 19.6Min: 20.7 / Avg: 20.77 / Max: 20.8Min: 22.7 / Avg: 22.77 / Max: 22.8

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesmanticmantic-no-omit-framepointer1530456075SE +/- 0.06, N = 3SE +/- 0.00, N = 365.166.6
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesmanticmantic-no-omit-framepointer1326395265Min: 65 / Avg: 65.1 / Max: 65.2Min: 66.6 / Avg: 66.6 / Max: 66.6

Benchmark: crypto_pyaes

noble: The test quit with a non-zero exit status. E: ERROR: No benchmark was run

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compilemanticmantic-no-omit-framepointer306090120150SE +/- 0.00, N = 3SE +/- 0.33, N = 3116120
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compilemanticmantic-no-omit-framepointer20406080100Min: 116 / Avg: 116 / Max: 116Min: 119 / Avg: 119.67 / Max: 120

Benchmark: regex_compile

noble: The test quit with a non-zero exit status. E: ERROR: No benchmark was run

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startupmanticmantic-no-omit-framepointernoble246810SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 37.617.648.76
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startupmanticmantic-no-omit-framepointernoble3691215Min: 7.6 / Avg: 7.61 / Max: 7.62Min: 7.63 / Avg: 7.64 / Max: 7.65Min: 8.74 / Avg: 8.76 / Max: 8.77

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_templatemanticmantic-no-omit-framepointer714212835SE +/- 0.03, N = 3SE +/- 0.06, N = 328.529.5
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_templatemanticmantic-no-omit-framepointer714212835Min: 28.4 / Avg: 28.47 / Max: 28.5Min: 29.4 / Avg: 29.5 / Max: 29.6

Benchmark: django_template

noble: The test quit with a non-zero exit status. E: ERROR: No benchmark was run

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pickle_pure_pythonmanticmantic-no-omit-framepointer60120180240300SE +/- 0.33, N = 3SE +/- 0.58, N = 3259263
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pickle_pure_pythonmanticmantic-no-omit-framepointer50100150200250Min: 258 / Avg: 258.67 / Max: 259Min: 262 / Avg: 263 / Max: 264

Benchmark: pickle_pure_python

noble: The test quit with a non-zero exit status. E: ERROR: No benchmark was run

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high-performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
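
The Equation of State and Isoneutral Mixing cases evaluate vectorized array expressions over a given project size. The sketch below times a generic element-wise NumPy kernel over the same project sizes used in this comparison; the formula is a placeholder, not the actual PyHPC kernel.

    # Illustrative element-wise NumPy kernel in the style of PyHPC's
    # "Equation of State" case. The formula is a placeholder, not the
    # actual PyHPC kernel; only the project sizes match this comparison.
    import time
    import numpy as np

    def placeholder_state_kernel(size):
        rng = np.random.default_rng(0)
        temperature = rng.random(size)
        salinity = rng.random(size)
        pressure = rng.random(size)
        start = time.perf_counter()
        density = (1000.0 + 0.8 * salinity - 0.2 * temperature
                   + 0.01 * temperature ** 2 + 4e-3 * pressure)
        density.sum()  # consume the result
        return time.perf_counter() - start

    for size in (16384, 65536, 262144, 1048576, 4194304):
        print(f"project size {size:>7}: {placeholder_state_kernel(size):.6f} s")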

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble0.00070.00140.00210.00280.0035SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 30.0030.0030.003
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble12345Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble0.0020.0040.0060.0080.01SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 30.0090.0080.009
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble12345Min: 0.01 / Avg: 0.01 / Max: 0.01Min: 0.01 / Avg: 0.01 / Max: 0.01Min: 0.01 / Avg: 0.01 / Max: 0.01

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble0.00360.00720.01080.01440.018SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 150.0150.0150.016
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble12345Min: 0.02 / Avg: 0.02 / Max: 0.02Min: 0.02 / Avg: 0.02 / Max: 0.02Min: 0.02 / Avg: 0.02 / Max: 0.02

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble0.00740.01480.02220.02960.037SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 30.0320.0320.033
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble12345Min: 0.03 / Avg: 0.03 / Max: 0.03Min: 0.03 / Avg: 0.03 / Max: 0.03Min: 0.03 / Avg: 0.03 / Max: 0.03

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble0.00070.00140.00210.00280.0035SE +/- 0.000, N = 3SE +/- 0.000, N = 15SE +/- 0.000, N = 120.0030.0020.003
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble12345Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble0.0020.0040.0060.0080.01SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 30.0090.0080.008
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble12345Min: 0.01 / Avg: 0.01 / Max: 0.01Min: 0.01 / Avg: 0.01 / Max: 0.01Min: 0.01 / Avg: 0.01 / Max: 0.01

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble0.00340.00680.01020.01360.017SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 70.0150.0150.015
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble12345Min: 0.02 / Avg: 0.02 / Max: 0.02Min: 0.02 / Avg: 0.02 / Max: 0.02Min: 0.02 / Avg: 0.02 / Max: 0.02

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble0.00770.01540.02310.03080.0385SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 30.0330.0330.034
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble12345Min: 0.03 / Avg: 0.03 / Max: 0.03Min: 0.03 / Avg: 0.03 / Max: 0.03Min: 0.03 / Avg: 0.03 / Max: 0.03

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble0.01370.02740.04110.05480.0685SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 30.0610.0580.060
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble12345Min: 0.06 / Avg: 0.06 / Max: 0.06Min: 0.06 / Avg: 0.06 / Max: 0.06Min: 0.06 / Avg: 0.06 / Max: 0.06

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble0.02990.05980.08970.11960.1495SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.002, N = 30.1310.1320.133
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble12345Min: 0.13 / Avg: 0.13 / Max: 0.13Min: 0.13 / Avg: 0.13 / Max: 0.13Min: 0.13 / Avg: 0.13 / Max: 0.14

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble0.0140.0280.0420.0560.07SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 30.0620.0580.061
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble12345Min: 0.06 / Avg: 0.06 / Max: 0.06Min: 0.06 / Avg: 0.06 / Max: 0.06Min: 0.06 / Avg: 0.06 / Max: 0.06

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble0.03060.06120.09180.12240.153SE +/- 0.000, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 30.1310.1280.136
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble12345Min: 0.13 / Avg: 0.13 / Max: 0.13Min: 0.13 / Avg: 0.13 / Max: 0.13Min: 0.14 / Avg: 0.14 / Max: 0.14

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble0.05920.11840.17760.23680.296SE +/- 0.002, N = 3SE +/- 0.000, N = 3SE +/- 0.002, N = 30.2630.2620.261
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble12345Min: 0.26 / Avg: 0.26 / Max: 0.27Min: 0.26 / Avg: 0.26 / Max: 0.26Min: 0.26 / Avg: 0.26 / Max: 0.26

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble0.1420.2840.4260.5680.71SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.006, N = 30.6190.6180.631
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble246810Min: 0.62 / Avg: 0.62 / Max: 0.62Min: 0.62 / Avg: 0.62 / Max: 0.62Min: 0.62 / Avg: 0.63 / Max: 0.64

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble0.32310.64620.96931.29241.6155SE +/- 0.003, N = 3SE +/- 0.004, N = 3SE +/- 0.003, N = 31.4021.4051.436
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble246810Min: 1.4 / Avg: 1.4 / Max: 1.41Min: 1.4 / Avg: 1.41 / Max: 1.41Min: 1.43 / Avg: 1.44 / Max: 1.44

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble0.6121.2241.8362.4483.06SE +/- 0.010, N = 3SE +/- 0.002, N = 3SE +/- 0.010, N = 32.6702.6262.720
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble246810Min: 2.65 / Avg: 2.67 / Max: 2.68Min: 2.62 / Avg: 2.63 / Max: 2.63Min: 2.7 / Avg: 2.72 / Max: 2.74

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble0.05920.11840.17760.23680.296SE +/- 0.002, N = 3SE +/- 0.001, N = 3SE +/- 0.002, N = 30.2630.2600.262
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble12345Min: 0.26 / Avg: 0.26 / Max: 0.27Min: 0.26 / Avg: 0.26 / Max: 0.26Min: 0.26 / Avg: 0.26 / Max: 0.27

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble0.1420.2840.4260.5680.71SE +/- 0.002, N = 3SE +/- 0.007, N = 3SE +/- 0.004, N = 30.6310.6220.630
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble246810Min: 0.63 / Avg: 0.63 / Max: 0.64Min: 0.61 / Avg: 0.62 / Max: 0.64Min: 0.63 / Avg: 0.63 / Max: 0.64

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble0.32540.65080.97621.30161.627SE +/- 0.004, N = 3SE +/- 0.001, N = 3SE +/- 0.006, N = 31.4221.4111.446
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of Statemanticmantic-no-omit-framepointernoble246810Min: 1.41 / Avg: 1.42 / Max: 1.43Min: 1.41 / Avg: 1.41 / Max: 1.41Min: 1.43 / Avg: 1.45 / Max: 1.45

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble0.60031.20061.80092.40123.0015SE +/- 0.006, N = 3SE +/- 0.006, N = 3SE +/- 0.006, N = 32.6622.6202.668
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixingmanticmantic-no-omit-framepointernoble246810Min: 2.65 / Avg: 2.66 / Max: 2.67Min: 2.61 / Avg: 2.62 / Max: 2.63Min: 2.66 / Avg: 2.67 / Max: 2.68

Scikit-Learn

Scikit-learn is a BSD-licensed Python module for machine learning built on NumPy and SciPy. Learn more via the OpenBenchmarking.org test page.
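
Each entry below times one scikit-learn workload end to end. As a minimal sketch of what such a timing looks like, the following fits a Lasso model on synthetic data, loosely mirroring the Lasso entry; the dataset shape and hyperparameters are assumptions, and the actual test profile runs the named benchmarks below rather than this synthetic case.

    # Minimal sketch of timing a scikit-learn workload, loosely mirroring
    # the "Lasso" entry below. Dataset shape and hyperparameters are
    # assumptions, not those of the actual benchmark.
    import time
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso

    X, y = make_regression(n_samples=50_000, n_features=200, noise=0.1,
                           random_state=0)

    start = time.perf_counter()
    Lasso(alpha=0.1, max_iter=1000).fit(X, y)
    print(f"Lasso fit: {time.perf_counter() - start:.2f} seconds")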

OpenBenchmarking.orgSeconds, Fewer Is BetterScikit-Learn 1.2.2Benchmark: GLMmanticmantic-no-omit-framepointernoble60120180240300SE +/- 1.06, N = 3SE +/- 1.07, N = 3SE +/- 0.93, N = 3293.60295.10269.81-O21. (F9X) gfortran options: -O0
OpenBenchmarking.orgSeconds, Fewer Is BetterScikit-Learn 1.2.2Benchmark: GLMmanticmantic-no-omit-framepointernoble50100150200250Min: 291.59 / Avg: 293.6 / Max: 295.16Min: 293.86 / Avg: 295.1 / Max: 297.23Min: 268.21 / Avg: 269.81 / Max: 271.441. (F9X) gfortran options: -O0

OpenBenchmarking.orgSeconds, Fewer Is BetterScikit-Learn 1.2.2Benchmark: SAGAmanticmantic-no-omit-framepointernoble2004006008001000SE +/- 8.69, N = 6SE +/- 5.60, N = 3SE +/- 10.35, N = 3868.02873.82869.37-O21. (F9X) gfortran options: -O0
Scikit-Learn 1.2.2 (OpenBenchmarking.org; Seconds, Fewer Is Better)
Values are the average ± standard error over N = 3 runs unless noted otherwise, listed as mantic | mantic-no-omit-framepointer | noble.

SAGA: 868.02 | 873.82 | 869.37
Tree: 48.34 ±0.59 (N=4) | 52.97 ±0.48 (N=15) | 47.03 ±0.52
Lasso: 511.85 ±3.22 | 509.54 ±3.50 | 345.40 ±1.37
Sparsify: 127.28 ±1.36 (N=5) | 125.44 ±1.28 (N=5) | 125.07 ±0.65
Plot Ward: 57.82 ±0.21 | 57.55 ±0.22 | 56.13 ±0.20
MNIST Dataset: 65.76 ±0.82 (N=4) | 65.88 ±0.47 | 65.42 ±0.67
Plot Neighbors: 147.75 ±1.34 (N=7) | 142.45 ±0.59 | 142.16 ±1.09
SGD Regression: 106.32 ±1.06 (N=6) | 107.53 ±0.49 | 78.88 ±0.05
SGDOneClassSVM: 379.74 ±4.18 | 382.61 ±3.48 (N=7) | 385.38 ±3.55
Isolation Forest: 289.37 ±1.30 | 336.37 ±51.04 (N=9) | 314.03 ±2.83
Text Vectorizers: 60.81 ±0.19 | 63.88 ±0.08 | 66.39 ±0.32
Plot Hierarchical: 211.29 ±0.75 | 208.39 ±0.42 | 207.10 ±2.35
Plot OMP vs. LARS: 91.50 ±0.08 | 92.58 ±0.44 | 68.17 ±0.03
Feature Expansions: 131.28 ±0.86 | 133.09 ±1.22 | 133.14 ±1.21
LocalOutlierFactor: 53.46 ±0.18 | 56.75 ±0.74 (N=15) | 54.29 ±0.02
TSNE MNIST Dataset: 236.87 ±0.44 | 236.79 ±0.54 | 285.82 ±0.91
Isotonic / Logistic: 1470.81 ±12.29 | 1471.83 ±14.46 | 1684.55 ±9.43
Plot Incremental PCA: 31.01 ±0.03 | 31.06 ±0.07 | 30.62 ±0.06
Hist Gradient Boosting: 109.98 ±0.22 | 111.26 ±0.25 | 117.41 ±0.17
Sample Without Replacement: 158.26 ±0.60 | 161.46 ±0.62 | 179.64 ±2.21
Covertype Dataset Benchmark: 376.15 ±4.88 | 370.69 ±3.40 | 381.45 ±2.58
Hist Gradient Boosting Adult: 103.50 ±0.70 | 105.65 ±0.59 | 112.71 ±0.52
Isotonic / Perturbed Logarithm: 1788.26 ±24.41 | 1828.30 ±16.46 | 1963.77 ±1.48
Hist Gradient Boosting Threading: 110.22 ±0.13 | 110.37 ±0.15 | 111.55 ±0.13
20 Newsgroups / Logistic Regression: 41.52 ±0.19 | 41.73 ±0.24 | 41.91 ±0.12
Plot Polynomial Kernel Approximation: 150.73 ±1.22 | 150.38 ±1.20 | 145.36 ±1.46
Hist Gradient Boosting Categorical Only: 18.58 ±0.06 | 18.87 ±0.12 | 19.93 ±0.10
Kernel PCA Solvers / Time vs. N Samples: 72.54 ±0.05 | 72.91 ±0.16 | 70.02 ±0.44
Kernel PCA Solvers / Time vs. N Components: 37.24 ±0.21 | 37.89 ±0.36 | 37.11 ±0.43
Sparse Random Projections / 100 Iterations: 613.55 ±3.80 | 631.07 ±7.06 (N=4) | 663.95 ±4.34

1. (F9X) gfortran options: -O0; the noble result in each graph carries an additional -O2 annotation.

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
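The suite's figures are wall-clock times for small, array-oriented numerical kernels run at fixed problem sizes. As a rough, hypothetical illustration of that measurement pattern (not the suite's actual harness or kernels; the function and sizes below are assumptions for illustration), a NumPy routine can be timed like this:

    import time
    import numpy as np

    def equation_of_state_like(temp, salt):
        # Stand-in polynomial; the real benchmark evaluates a full
        # oceanographic equation-of-state formulation.
        return 999.8 + 0.07 * temp - 0.005 * temp**2 + 0.8 * salt

    def best_time(fn, *args, repeats=5):
        # Report the best of several repeats to reduce run-to-run noise.
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            fn(*args)
            best = min(best, time.perf_counter() - start)
        return best

    rng = np.random.default_rng(0)
    n = 1_048_576  # one of the project sizes appearing in this report
    temp, salt = rng.random(n), rng.random(n)
    print(f"best of 5: {best_time(equation_of_state_like, temp, salt):.4f} s")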

For every tested combination of device (CPU or GPU), backend (JAX, Numba, Aesara, PyTorch, or TensorFlow), project size (16384, 65536, 262144, 1048576, or 4194304), and benchmark (Equation of State or Isoneutral Mixing), the mantic-no-omit-framepointer and noble test runs did not produce a result.

Scikit-Learn

Scikit-learn is a BSD-licensed Python module for machine learning built on top of NumPy and SciPy. Learn more via the OpenBenchmarking.org test page.
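The Scikit-Learn results earlier in this report time the fitting of standard estimators on synthetic or bundled datasets. A minimal, hypothetical sketch of that pattern for the Lasso case (illustrative problem sizes only, not the benchmark suite's own script):

    import time
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso

    # Synthetic regression problem; dimensions are assumptions for illustration.
    X, y = make_regression(n_samples=5000, n_features=500, noise=0.1, random_state=0)

    start = time.perf_counter()
    Lasso(alpha=0.1, max_iter=5000).fit(X, y)
    print(f"Lasso fit time: {time.perf_counter() - start:.2f} s")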

Benchmark: Glmnet

mantic-no-omit-framepointer: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'glmnet'

noble: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'glmnet'

Benchmark: Plot Lasso Path

mantic-no-omit-framepointer: The test quit with a non-zero exit status. E: AttributeError: type object 'Axis' has no attribute '_set_ticklabels'. Did you mean: 'set_ticklabels'?

noble: The test quit with a non-zero exit status. E: AttributeError: type object 'Axis' has no attribute '_set_ticklabels'. Did you mean: 'set_ticklabels'?

Benchmark: Plot Fast KMeans

mantic-no-omit-framepointer: The test quit with a non-zero exit status. E: AttributeError: type object 'Axis' has no attribute '_set_ticklabels'. Did you mean: 'set_ticklabels'?

noble: The test quit with a non-zero exit status. E: AttributeError: type object 'Axis' has no attribute '_set_ticklabels'. Did you mean: 'set_ticklabels'?

Benchmark: Plot Parallel Pairwise

mantic-no-omit-framepointer: The test quit with a non-zero exit status. E: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 74.5 GiB for an array with shape (100000, 100000) and data type float64

noble: The test quit with a non-zero exit status. E: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 74.5 GiB for an array with shape (100000, 100000) and data type float64
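The reported allocation size follows directly from the array's dimensions: 100,000 x 100,000 float64 values at 8 bytes each come to 8 x 10^10 bytes, roughly 74.5 GiB, which is well beyond the installed system memory. A quick check:

    # 100,000 * 100,000 elements * 8 bytes per float64, expressed in GiB
    print(100_000 * 100_000 * 8 / 2**30)  # ~74.5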

Benchmark: Isotonic / Pathological

mantic-no-omit-framepointer: The test quit with a non-zero exit status.

noble: The test quit with a non-zero exit status.

Benchmark: RCV1 Logreg Convergencet

mantic-no-omit-framepointer: The test quit with a non-zero exit status. E: IndexError: list index out of range

noble: The test quit with a non-zero exit status. E: IndexError: list index out of range

Benchmark: Plot Singular Value Decomposition

mantic-no-omit-framepointer: The test quit with a non-zero exit status. E: AttributeError: type object 'Axis' has no attribute '_set_ticklabels'. Did you mean: 'set_ticklabels'?

noble: The test quit with a non-zero exit status. E: AttributeError: type object 'Axis' has no attribute '_set_ticklabels'. Did you mean: 'set_ticklabels'?
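The repeated AttributeError suggests these plotting benchmarks call a private matplotlib method (_set_ticklabels) that the installed matplotlib release does not provide; as the error message itself notes, the public set_ticklabels API remains available. A hypothetical sketch of the public call (not the benchmark's own code):

    import matplotlib
    matplotlib.use("Agg")  # headless backend, as on a benchmark system
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
    ax.xaxis.set_ticks([0, 1, 2])
    ax.xaxis.set_ticklabels(["a", "b", "c"])  # public API, no underscore prefix
    fig.savefig("ticklabels_example.png")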

102 Results Shown

Numpy Benchmark
PyTorch:
  CPU - 1 - ResNet-50
  CPU - 1 - ResNet-152
  CPU - 16 - ResNet-50
  CPU - 32 - ResNet-50
  CPU - 64 - ResNet-50
  CPU - 16 - ResNet-152
  CPU - 256 - ResNet-50
  CPU - 32 - ResNet-152
  CPU - 512 - ResNet-50
  CPU - 64 - ResNet-152
  CPU - 256 - ResNet-152
  CPU - 512 - ResNet-152
  CPU - 1 - Efficientnet_v2_l
  CPU - 16 - Efficientnet_v2_l
  CPU - 32 - Efficientnet_v2_l
  CPU - 64 - Efficientnet_v2_l
  CPU - 256 - Efficientnet_v2_l
  CPU - 512 - Efficientnet_v2_l
  NVIDIA CUDA GPU - 1 - ResNet-50
  NVIDIA CUDA GPU - 1 - ResNet-152
  NVIDIA CUDA GPU - 16 - ResNet-50
  NVIDIA CUDA GPU - 32 - ResNet-50
  NVIDIA CUDA GPU - 64 - ResNet-50
  NVIDIA CUDA GPU - 16 - ResNet-152
  NVIDIA CUDA GPU - 256 - ResNet-50
  NVIDIA CUDA GPU - 32 - ResNet-152
  NVIDIA CUDA GPU - 512 - ResNet-50
  NVIDIA CUDA GPU - 64 - ResNet-152
  NVIDIA CUDA GPU - 256 - ResNet-152
  NVIDIA CUDA GPU - 512 - ResNet-152
  NVIDIA CUDA GPU - 1 - Efficientnet_v2_l
  NVIDIA CUDA GPU - 16 - Efficientnet_v2_l
  NVIDIA CUDA GPU - 32 - Efficientnet_v2_l
  NVIDIA CUDA GPU - 64 - Efficientnet_v2_l
  NVIDIA CUDA GPU - 256 - Efficientnet_v2_l
  NVIDIA CUDA GPU - 512 - Efficientnet_v2_l
PyBench
PyPerformance:
  go
  2to3
  chaos
  float
  nbody
  pathlib
  raytrace
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
  django_template
  pickle_pure_python
PyHPC Benchmarks:
  CPU - Numpy - 16384 - Equation of State
  CPU - Numpy - 16384 - Isoneutral Mixing
  CPU - Numpy - 65536 - Equation of State
  CPU - Numpy - 65536 - Isoneutral Mixing
  GPU - Numpy - 16384 - Equation of State
  GPU - Numpy - 16384 - Isoneutral Mixing
  GPU - Numpy - 65536 - Equation of State
  GPU - Numpy - 65536 - Isoneutral Mixing
  CPU - Numpy - 262144 - Equation of State
  CPU - Numpy - 262144 - Isoneutral Mixing
  GPU - Numpy - 262144 - Equation of State
  GPU - Numpy - 262144 - Isoneutral Mixing
  CPU - Numpy - 1048576 - Equation of State
  CPU - Numpy - 1048576 - Isoneutral Mixing
  CPU - Numpy - 4194304 - Equation of State
  CPU - Numpy - 4194304 - Isoneutral Mixing
  GPU - Numpy - 1048576 - Equation of State
  GPU - Numpy - 1048576 - Isoneutral Mixing
  GPU - Numpy - 4194304 - Equation of State
  GPU - Numpy - 4194304 - Isoneutral Mixing
Scikit-Learn:
  GLM
  SAGA
  Tree
  Lasso
  Sparsify
  Plot Ward
  MNIST Dataset
  Plot Neighbors
  SGD Regression
  SGDOneClassSVM
  Isolation Forest
  Text Vectorizers
  Plot Hierarchical
  Plot OMP vs. LARS
  Feature Expansions
  LocalOutlierFactor
  TSNE MNIST Dataset
  Isotonic / Logistic
  Plot Incremental PCA
  Hist Gradient Boosting
  Sample Without Replacement
  Covertype Dataset Benchmark
  Hist Gradient Boosting Adult
  Isotonic / Perturbed Logarithm
  Hist Gradient Boosting Threading
  20 Newsgroups / Logistic Regression
  Plot Polynomial Kernel Approximation
  Hist Gradient Boosting Categorical Only
  Kernel PCA Solvers / Time vs. N Samples
  Kernel PCA Solvers / Time vs. N Components
  Sparse Random Projections / 100 Iterations