Desktop machine learning

AMD Ryzen 9 3900X 12-Core testing with an MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS) and NVIDIA GeForce RTX 3060 12GB on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2405015-VPA1-DESKTOP46

Tests in this result file by category: CPU Massive (2 tests); HPC - High Performance Computing (4 tests); Machine Learning (3 tests); Programmer / Developer System Benchmarks (2 tests); Python (5 tests); Server CPU Tests (3 tests); Single-Threaded (2 tests).

Result Identifier            Date Run     Test Duration
mantic                       February 23  15 Hours, 54 Minutes
mantic-no-omit-framepointer  February 24  19 Hours, 11 Minutes
noble                        April 30     14 Hours, 21 Minutes
Average                                   16 Hours, 28 Minutes



HTML result view exported from: https://openbenchmarking.org/result/2405015-VPA1-DESKTOP46&grr&sor&rro.

Test System (mantic / mantic-no-omit-framepointer / noble)

Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads)
Motherboard: MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS)
Chipset: AMD Starship/Matisse
Memory: 2 x 16GB DDR4-3200MT/s F4-3200C16-16GVK
Disk: 2000GB Seagate ST2000DM006-2DM1 + 2000GB Western Digital WD20EZAZ-00G + 500GB Samsung SSD 860 + 8002GB Seagate ST8000DM004-2CX1 + 1000GB CT1000BX500SSD1 + 512GB TS512GESD310C
Graphics: NVIDIA GeForce RTX 3060 12GB (mantic, mantic-no-omit-framepointer); NVIDIA GeForce RTX 3060 (noble)
Audio: NVIDIA GA104 HD Audio
Monitor: DELL P2314H (mantic, mantic-no-omit-framepointer); DELL P2314H + U32J59x (noble)
Network: Realtek RTL8111/8168/8411 (mantic, mantic-no-omit-framepointer); Realtek RTL8111/8168/8211/8411 (noble)
OS: Ubuntu 23.10 (mantic, mantic-no-omit-framepointer); Ubuntu 24.04 (noble)
Kernel: 6.5.0-9-generic (x86_64) (mantic, mantic-no-omit-framepointer); 6.8.0-31-generic (x86_64) (noble)
Display Server: X Server 1.21.1.7
Display Driver: NVIDIA
OpenCL: OpenCL 3.0 CUDA 12.2.146
Compiler: GCC 13.2.0 + CUDA 12.2 (mantic, mantic-no-omit-framepointer); GCC 13.2.0 (noble)
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise

Compiler Details:
- mantic: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- mantic-no-omit-framepointer: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-b9QCDx/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-b9QCDx/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- noble: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8701013

Python Details:
- mantic: Python 3.11.6
- mantic-no-omit-framepointer: Python 3.11.6
- noble: Python 3.12.3

Security Details:
- mantic, mantic-no-omit-framepointer: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- noble: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Environment Details:
- mantic-no-omit-framepointer: CXXFLAGS=-fno-omit-frame-pointer QMAKE_CFLAGS=-fno-omit-frame-pointer CFLAGS=-fno-omit-frame-pointer CFLAGS_OVERRIDE=-fno-omit-frame-pointer QMAKE_CXXFLAGS=-fno-omit-frame-pointer FFLAGS=-fno-omit-frame-pointer
- noble: CXXFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" QMAKE_CFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" CFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" CFLAGS_OVERRIDE="-fno-omit-frame-pointer -frecord-gcc-switches -O2" QMAKE_CXXFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" FFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2"

Results overview: the exported summary table lists every test result (scikit-learn, PyTorch, Numpy Benchmark, PyHPC Benchmarks, PyPerformance, PyBench) side by side for mantic, mantic-no-omit-framepointer, and noble; the same values appear in the per-test results below.

Scikit-Learn

Benchmark: Isotonic / Perturbed Logarithm

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 1963.77 (SE +/- 1.48, N = 3) | mantic-no-omit-framepointer: 1828.30 (SE +/- 16.46, N = 3) | mantic: 1788.26 (SE +/- 24.41, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0
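
For orientation, a minimal sketch of the kind of workload this result represents follows: fitting scikit-learn's IsotonicRegression to a large, noisy logarithmic curve and timing the fit. The dataset size and noise level are assumptions chosen for illustration; this is not the harness used by the actual test profile.

    # Illustrative sketch (assumed problem size and noise level); not the
    # OpenBenchmarking scikit-learn test profile itself.
    import time

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(0)
    n = 1_000_000                                   # assumed problem size
    x = np.arange(1, n + 1, dtype=np.float64)
    y = np.log(x) + rng.normal(scale=0.5, size=n)   # "perturbed logarithm" target

    start = time.perf_counter()
    IsotonicRegression().fit(x, y)                  # monotone (isotonic) fit on CPU
    print(f"fit time: {time.perf_counter() - start:.2f} s")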

Scikit-Learn

Benchmark: SAGA

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic-no-omit-framepointer: 873.82 (SE +/- 5.60, N = 3) | noble: 869.37 (SE +/- 10.35, N = 3) | mantic: 868.02 (SE +/- 8.69, N = 6)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Isotonic / Logistic

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 1684.55 (SE +/- 9.43, N = 3) | mantic-no-omit-framepointer: 1471.83 (SE +/- 14.46, N = 3) | mantic: 1470.81 (SE +/- 12.29, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Isolation Forest

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic-no-omit-framepointer: 336.37 (SE +/- 51.04, N = 9) | noble: 314.03 (SE +/- 2.83, N = 3) | mantic: 289.37 (SE +/- 1.30, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Sparse Random Projections / 100 Iterations

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 663.95 (SE +/- 4.34, N = 3) | mantic-no-omit-framepointer: 631.07 (SE +/- 7.06, N = 4) | mantic: 613.55 (SE +/- 3.80, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: SGDOneClassSVM

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 385.38 (SE +/- 3.55, N = 3) | mantic-no-omit-framepointer: 382.61 (SE +/- 3.48, N = 7) | mantic: 379.74 (SE +/- 4.18, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Lasso

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic: 511.85 (SE +/- 3.22, N = 3) | mantic-no-omit-framepointer: 509.54 (SE +/- 3.50, N = 3) | noble: 345.40 (SE +/- 1.37, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Covertype Dataset Benchmark

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 381.45 (SE +/- 2.58, N = 3) | mantic: 376.15 (SE +/- 4.88, N = 3) | mantic-no-omit-framepointer: 370.69 (SE +/- 3.40, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: GLM

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic-no-omit-framepointer: 295.10 (SE +/- 1.07, N = 3) | mantic: 293.60 (SE +/- 1.06, N = 3) | noble: 269.81 (SE +/- 0.93, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
noble: 5.59 (SE +/- 0.02, N = 3; MIN: 5.31 / MAX: 5.65) | mantic: 5.63 (SE +/- 0.02, N = 3; MIN: 5.39 / MAX: 5.71) | mantic-no-omit-framepointer: 5.64 (SE +/- 0.01, N = 3; MIN: 5.45 / MAX: 5.68)
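
The batches/sec figures above come from CPU inference. A minimal sketch of how such a throughput number could be measured with torchvision's efficientnet_v2_l is shown below; the input resolution, warm-up count, and iteration count are assumptions, and this is not the actual PyTorch test profile used by the suite.

    # Illustrative sketch (assumed input size and iteration counts); not the
    # OpenBenchmarking PyTorch test profile itself.
    import time

    import torch
    import torchvision.models as models

    model = models.efficientnet_v2_l(weights=None).eval()  # random weights, no download
    batch = torch.randn(16, 3, 224, 224)                    # batch size 16, as in this graph

    with torch.no_grad():
        for _ in range(3):                                  # warm-up passes
            model(batch)
        iters = 10
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        elapsed = time.perf_counter() - start

    print(f"{iters / elapsed:.2f} batches/sec")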

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
noble: 5.59 (SE +/- 0.00, N = 3; MIN: 5.46 / MAX: 5.64) | mantic: 5.63 (SE +/- 0.01, N = 3; MIN: 5.31 / MAX: 5.68) | mantic-no-omit-framepointer: 5.64 (SE +/- 0.01, N = 3; MIN: 5.52 / MAX: 5.69)

PyTorch

Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
noble: 5.61 (SE +/- 0.02, N = 3; MIN: 5.46 / MAX: 5.67) | mantic: 5.61 (SE +/- 0.02, N = 3; MIN: 5.44 / MAX: 5.65) | mantic-no-omit-framepointer: 5.64 (SE +/- 0.01, N = 3; MIN: 5.29 / MAX: 5.68)

PyTorch

Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
noble: 5.60 (SE +/- 0.01, N = 3; MIN: 5.37 / MAX: 5.66) | mantic: 5.61 (SE +/- 0.01, N = 3; MIN: 5.45 / MAX: 5.66) | mantic-no-omit-framepointer: 5.65 (SE +/- 0.02, N = 3; MIN: 5.36 / MAX: 5.93)

PyTorch

Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
noble: 5.60 (SE +/- 0.01, N = 3; MIN: 5.32 / MAX: 5.64) | mantic: 5.62 (SE +/- 0.01, N = 3; MIN: 5.35 / MAX: 5.66) | mantic-no-omit-framepointer: 5.65 (SE +/- 0.01, N = 3; MIN: 5.45 / MAX: 5.7)

Scikit-Learn

Benchmark: TSNE MNIST Dataset

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 285.82 (SE +/- 0.91, N = 3) | mantic: 236.87 (SE +/- 0.44, N = 3) | mantic-no-omit-framepointer: 236.79 (SE +/- 0.54, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Hierarchical

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic: 211.29 (SE +/- 0.75, N = 3) | mantic-no-omit-framepointer: 208.39 (SE +/- 0.42, N = 3) | noble: 207.10 (SE +/- 2.35, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Neighbors

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic: 147.75 (SE +/- 1.34, N = 7) | mantic-no-omit-framepointer: 142.45 (SE +/- 0.59, N = 3) | noble: 142.16 (SE +/- 1.09, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
mantic-no-omit-framepointer: 37.24 (SE +/- 0.31, N = 15; MIN: 33.97 / MAX: 39.43) | mantic: 37.88 (SE +/- 0.30, N = 9; MIN: 35.67 / MAX: 39.63)

Scikit-Learn

Benchmark: Sparsify

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic: 127.28 (SE +/- 1.36, N = 5) | mantic-no-omit-framepointer: 125.44 (SE +/- 1.28, N = 5) | noble: 125.07 (SE +/- 0.65, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Sample Without Replacement

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 179.64 (SE +/- 2.21, N = 3) | mantic-no-omit-framepointer: 161.46 (SE +/- 0.62, N = 3) | mantic: 158.26 (SE +/- 0.60, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
mantic-no-omit-framepointer: 9.80 (SE +/- 0.07, N = 3; MIN: 9.12 / MAX: 9.98) | noble: 9.87 (SE +/- 0.03, N = 3; MIN: 9.21 / MAX: 10) | mantic: 9.87 (SE +/- 0.02, N = 3; MIN: 9.09 / MAX: 9.96)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
mantic: 9.77 (SE +/- 0.07, N = 3; MIN: 9.17 / MAX: 10) | noble: 9.86 (SE +/- 0.03, N = 3; MIN: 8.69 / MAX: 9.99) | mantic-no-omit-framepointer: 9.91 (SE +/- 0.04, N = 3; MIN: 9.19 / MAX: 10.05)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
noble: 9.81 (SE +/- 0.04, N = 3; MIN: 9.42 / MAX: 9.93) | mantic: 9.84 (SE +/- 0.05, N = 3; MIN: 9.6 / MAX: 9.98) | mantic-no-omit-framepointer: 10.00 (SE +/- 0.09, N = 3; MIN: 8.09 / MAX: 10.27)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
noble: 9.87 (SE +/- 0.01, N = 3; MIN: 8.61 / MAX: 9.96) | mantic: 9.88 (SE +/- 0.03, N = 3; MIN: 8.8 / MAX: 9.98) | mantic-no-omit-framepointer: 9.91 (SE +/- 0.02, N = 3; MIN: 8.69 / MAX: 10.08)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
noble: 9.88 (SE +/- 0.02, N = 3; MIN: 9.15 / MAX: 9.98) | mantic: 9.88 (SE +/- 0.04, N = 3; MIN: 9.31 / MAX: 10.01) | mantic-no-omit-framepointer: 9.93 (SE +/- 0.01, N = 3; MIN: 9.39 / MAX: 10.01)

Scikit-Learn

Benchmark: Plot Polynomial Kernel Approximation

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic: 150.73 (SE +/- 1.22, N = 3) | mantic-no-omit-framepointer: 150.38 (SE +/- 1.20, N = 3) | noble: 145.36 (SE +/- 1.46, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
mantic-no-omit-framepointer: 36.60 (SE +/- 0.30, N = 15; MIN: 33.07 / MAX: 39.53) | mantic: 37.36 (SE +/- 0.15, N = 3; MIN: 35.47 / MAX: 37.85)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
mantic-no-omit-framepointer: 37.16 (SE +/- 0.30, N = 15; MIN: 34.12 / MAX: 39.48) | mantic: 37.71 (SE +/- 0.24, N = 3; MIN: 35.52 / MAX: 38.25)

Numpy Benchmark

Numpy Benchmark - Score, More Is Better
mantic: 426.28 (SE +/- 1.20, N = 3) | mantic-no-omit-framepointer: 428.61 (SE +/- 0.90, N = 3) | noble: 430.83 (SE +/- 1.01, N = 3)
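
The Numpy Benchmark score is a composite across several array kernels. A rough sketch of micro-timing a few representative NumPy operations follows; the chosen kernels, array sizes, and the absence of any scoring formula are assumptions, so it does not reproduce the score above.

    # Illustrative sketch (assumed kernels and sizes); the real Numpy Benchmark's
    # operation mix and scoring are not reproduced here.
    import time

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.random((2000, 2000))
    b = rng.random((2000, 2000))

    for name, fn in [("matmul", lambda: a @ b),
                     ("svd", lambda: np.linalg.svd(a[:500, :500])),
                     ("fft2", lambda: np.fft.fft2(a))]:
        start = time.perf_counter()
        fn()
        print(f"{name}: {time.perf_counter() - start:.3f} s")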

Scikit-Learn

Benchmark: Feature Expansions

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 133.14 (SE +/- 1.21, N = 3) | mantic-no-omit-framepointer: 133.09 (SE +/- 1.22, N = 3) | mantic: 131.28 (SE +/- 0.86, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: SGD Regression

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic-no-omit-framepointer: 107.53 (SE +/- 0.49, N = 3) | mantic: 106.32 (SE +/- 1.06, N = 6) | noble: 78.88 (SE +/- 0.05, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Incremental PCA

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic-no-omit-framepointer: 31.06 (SE +/- 0.07, N = 3) | mantic: 31.01 (SE +/- 0.03, N = 3) | noble: 30.62 (SE +/- 0.06, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
noble: 7.31 (SE +/- 0.00, N = 3; MIN: 7.07 / MAX: 7.36) | mantic: 7.31 (SE +/- 0.00, N = 3; MIN: 7.16 / MAX: 7.34) | mantic-no-omit-framepointer: 7.32 (SE +/- 0.02, N = 3; MIN: 7.23 / MAX: 7.38)

Scikit-Learn

Benchmark: LocalOutlierFactor

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic-no-omit-framepointer: 56.75 (SE +/- 0.74, N = 15) | noble: 54.29 (SE +/- 0.02, N = 3) | mantic: 53.46 (SE +/- 0.18, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 117.41 (SE +/- 0.17, N = 3) | mantic-no-omit-framepointer: 111.26 (SE +/- 0.25, N = 3) | mantic: 109.98 (SE +/- 0.22, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting Threading

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 111.55 (SE +/- 0.13, N = 3) | mantic-no-omit-framepointer: 110.37 (SE +/- 0.15, N = 3) | mantic: 110.22 (SE +/- 0.13, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting Adult

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 112.71 (SE +/- 0.52, N = 3) | mantic-no-omit-framepointer: 105.65 (SE +/- 0.59, N = 3) | mantic: 103.50 (SE +/- 0.70, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Tree

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic-no-omit-framepointer: 52.97 (SE +/- 0.48, N = 15) | mantic: 48.34 (SE +/- 0.59, N = 4) | noble: 47.03 (SE +/- 0.52, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 2.720 (SE +/- 0.010, N = 3) | mantic: 2.670 (SE +/- 0.010, N = 3) | mantic-no-omit-framepointer: 2.626 (SE +/- 0.002, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 2.668 (SE +/- 0.006, N = 3) | mantic: 2.662 (SE +/- 0.006, N = 3) | mantic-no-omit-framepointer: 2.620 (SE +/- 0.006, N = 3)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
mantic-no-omit-framepointer: 37.22 (SE +/- 0.33, N = 8; MIN: 34.99 / MAX: 39.08) | mantic: 37.43 (SE +/- 0.03, N = 3; MIN: 35.81 / MAX: 38.02)

Scikit-Learn

Benchmark: Plot OMP vs. LARS

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic-no-omit-framepointer: 92.58 (SE +/- 0.44, N = 3) | mantic: 91.50 (SE +/- 0.08, N = 3) | noble: 68.17 (SE +/- 0.03, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: MNIST Dataset

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic-no-omit-framepointer: 65.88 (SE +/- 0.47, N = 3) | mantic: 65.76 (SE +/- 0.82, N = 4) | noble: 65.42 (SE +/- 0.67, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Kernel PCA Solvers / Time vs. N Samples

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic-no-omit-framepointer: 72.91 (SE +/- 0.16, N = 3) | mantic: 72.54 (SE +/- 0.05, N = 3) | noble: 70.02 (SE +/- 0.44, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
mantic: 12.72 (SE +/- 0.03, N = 3; MIN: 11.99 / MAX: 12.8) | mantic-no-omit-framepointer: 12.78 (SE +/- 0.04, N = 3; MIN: 11.9 / MAX: 12.9) | noble: 12.89 (SE +/- 0.05, N = 3; MIN: 12.36 / MAX: 13.05)

Scikit-Learn

Benchmark: Text Vectorizers

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 66.39 (SE +/- 0.32, N = 3) | mantic-no-omit-framepointer: 63.88 (SE +/- 0.08, N = 3) | mantic: 60.81 (SE +/- 0.19, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
noble: 24.12 (SE +/- 0.06, N = 3; MIN: 22.33 / MAX: 24.46) | mantic: 24.29 (SE +/- 0.10, N = 3; MIN: 22.24 / MAX: 24.66) | mantic-no-omit-framepointer: 24.35 (SE +/- 0.16, N = 3; MIN: 23.67 / MAX: 24.87)

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
mantic: 24.13 (SE +/- 0.02, N = 3; MIN: 23.58 / MAX: 24.41) | mantic-no-omit-framepointer: 24.28 (SE +/- 0.08, N = 3; MIN: 22.31 / MAX: 24.53) | noble: 24.30 (SE +/- 0.14, N = 3; MIN: 22.45 / MAX: 24.75)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
noble: 24.19 (SE +/- 0.11, N = 3; MIN: 22.75 / MAX: 24.73) | mantic: 24.24 (SE +/- 0.04, N = 3; MIN: 23.59 / MAX: 24.49) | mantic-no-omit-framepointer: 24.40 (SE +/- 0.15, N = 3; MIN: 21.6 / MAX: 24.8)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
noble: 24.33 (SE +/- 0.06, N = 3; MIN: 22.79 / MAX: 24.66) | mantic-no-omit-framepointer: 24.37 (SE +/- 0.11, N = 3; MIN: 23.76 / MAX: 24.81) | mantic: 24.42 (SE +/- 0.03, N = 3; MIN: 20.15 / MAX: 24.74)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
mantic: 24.28 (SE +/- 0.05, N = 3; MIN: 20.22 / MAX: 24.56) | mantic-no-omit-framepointer: 24.38 (SE +/- 0.16, N = 3; MIN: 22.2 / MAX: 24.87) | noble: 24.43 (SE +/- 0.01, N = 3; MIN: 22.57 / MAX: 24.72)

Scikit-Learn

Benchmark: Plot Ward

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic: 57.82 (SE +/- 0.21, N = 3) | mantic-no-omit-framepointer: 57.55 (SE +/- 0.22, N = 3) | noble: 56.13 (SE +/- 0.20, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

PyPerformance

Benchmark: python_startup

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
noble: 8.76 (SE +/- 0.01, N = 3) | mantic-no-omit-framepointer: 7.64 (SE +/- 0.01, N = 3) | mantic: 7.61 (SE +/- 0.01, N = 3)
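
python_startup measures how quickly a fresh interpreter comes up. A simple sketch of the idea follows, spawning "python -c pass" repeatedly and averaging the wall-clock time; the run count is an assumption and process-spawn overhead is included, so it is only in the spirit of the pyperformance harness rather than a reproduction of it.

    # Illustrative sketch (assumed run count; includes fork/exec overhead); not the
    # pyperformance python_startup harness itself.
    import subprocess
    import sys
    import time

    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run([sys.executable, "-c", "pass"], check=True)
    elapsed = time.perf_counter() - start

    print(f"{elapsed / runs * 1000:.2f} ms per interpreter start-up (wall clock)")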

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
mantic-no-omit-framepointer: 36.10 (SE +/- 0.02, N = 3; MIN: 34.25 / MAX: 38.01) | mantic: 38.95 (SE +/- 0.08, N = 3; MIN: 37.12 / MAX: 39.27)

Scikit-Learn

Benchmark: 20 Newsgroups / Logistic Regression

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 41.91 (SE +/- 0.12, N = 3) | mantic-no-omit-framepointer: 41.73 (SE +/- 0.24, N = 3) | mantic: 41.52 (SE +/- 0.19, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Kernel PCA Solvers / Time vs. N Components

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
mantic-no-omit-framepointer: 37.89 (SE +/- 0.36, N = 3) | mantic: 37.24 (SE +/- 0.21, N = 3) | noble: 37.11 (SE +/- 0.43, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 1.446 (SE +/- 0.006, N = 3) | mantic: 1.422 (SE +/- 0.004, N = 3) | mantic-no-omit-framepointer: 1.411 (SE +/- 0.001, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 1.436 (SE +/- 0.003, N = 3) | mantic-no-omit-framepointer: 1.405 (SE +/- 0.004, N = 3) | mantic: 1.402 (SE +/- 0.003, N = 3)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
noble: 32.34 (SE +/- 0.17, N = 3; MIN: 28.9 / MAX: 32.83) | mantic: 32.36 (SE +/- 0.11, N = 3; MIN: 31.89 / MAX: 32.7) | mantic-no-omit-framepointer: 32.54 (SE +/- 0.16, N = 3; MIN: 31.64 / MAX: 32.94)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
mantic: 71.74 (SE +/- 0.24, N = 3; MIN: 67.87 / MAX: 72.6) | mantic-no-omit-framepointer: 72.91 (SE +/- 0.83, N = 3; MIN: 68 / MAX: 75.45)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
mantic-no-omit-framepointer: 72.24 (SE +/- 0.20, N = 3; MIN: 68.36 / MAX: 73.14) | mantic: 73.01 (SE +/- 0.96, N = 3; MIN: 68.06 / MAX: 75.3)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
mantic: 71.81 (SE +/- 0.44, N = 3; MIN: 67.31 / MAX: 72.89) | mantic-no-omit-framepointer: 73.65 (SE +/- 0.66, N = 3; MIN: 68.88 / MAX: 75.03)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
mantic: 72.31 (SE +/- 0.94, N = 3; MIN: 67.38 / MAX: 74.62) | mantic-no-omit-framepointer: 73.75 (SE +/- 0.50, N = 3; MIN: 68.91 / MAX: 75.15)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
mantic-no-omit-framepointer: 73.36 (SE +/- 0.74, N = 3; MIN: 68.19 / MAX: 74.63) | mantic: 74.15 (SE +/- 0.96, N = 3; MIN: 68.27 / MAX: 75.61)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
mantic-no-omit-framepointer: 37.29 (SE +/- 0.26, N = 3; MIN: 35.83 / MAX: 39.17) | mantic: 39.35 (SE +/- 0.47, N = 3; MIN: 36.65 / MAX: 40.42)

PyPerformance

Benchmark: raytrace

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
mantic-no-omit-framepointer: 274 (SE +/- 0.33, N = 3) | mantic: 262 (SE +/- 0.33, N = 3)

PyPerformance

Benchmark: go

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
mantic-no-omit-framepointer: 131 (SE +/- 0.33, N = 3) | mantic: 129 (SE +/- 0.00, N = 3) | noble: 121 (SE +/- 0.00, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
mantic: 0.631 (SE +/- 0.002, N = 3) | noble: 0.630 (SE +/- 0.004, N = 3) | mantic-no-omit-framepointer: 0.622 (SE +/- 0.007, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 0.631 (SE +/- 0.006, N = 3) | mantic: 0.619 (SE +/- 0.001, N = 3) | mantic-no-omit-framepointer: 0.618 (SE +/- 0.000, N = 3)

PyPerformance

Benchmark: 2to3

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
mantic-no-omit-framepointer: 224 (SE +/- 0.33, N = 3) | mantic: 221 (SE +/- 0.00, N = 3)

Scikit-Learn

Benchmark: Hist Gradient Boosting Categorical Only

Scikit-Learn 1.2.2 - Seconds, Fewer Is Better
noble: 19.93 (SE +/- 0.10, N = 3) | mantic-no-omit-framepointer: 18.87 (SE +/- 0.12, N = 3) | mantic: 18.58 (SE +/- 0.06, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
mantic: 210.88 (SE +/- 2.67, N = 3; MIN: 195.21 / MAX: 218.16) | mantic-no-omit-framepointer: 211.46 (SE +/- 1.46, N = 15; MIN: 192.13 / MAX: 223.01)

PyPerformance

Benchmark: json_loads

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
noble: 22.8 (SE +/- 0.03, N = 3) | mantic-no-omit-framepointer: 20.8 (SE +/- 0.03, N = 3) | mantic: 19.5 (SE +/- 0.06, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 0.133 (SE +/- 0.002, N = 3) | mantic-no-omit-framepointer: 0.132 (SE +/- 0.000, N = 3) | mantic: 0.131 (SE +/- 0.001, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 0.136 (SE +/- 0.001, N = 3) | mantic: 0.131 (SE +/- 0.000, N = 3) | mantic-no-omit-framepointer: 0.128 (SE +/- 0.001, N = 3)

PyPerformance

Benchmark: pathlib

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
mantic-no-omit-framepointer: 20.2 (SE +/- 0.00, N = 3) | mantic: 19.7 (SE +/- 0.00, N = 3)

PyBench

Total For Average Test Times

PyBench 2018-02-16 - Milliseconds, Fewer Is Better
noble: 839 (SE +/- 8.70, N = 4) | mantic-no-omit-framepointer: 790 (SE +/- 1.20, N = 3) | mantic: 774 (SE +/- 1.00, N = 3)

PyPerformance

Benchmark: pickle_pure_python

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
mantic-no-omit-framepointer: 263 (SE +/- 0.58, N = 3) | mantic: 259 (SE +/- 0.33, N = 3)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
mantic-no-omit-framepointer: 72.27 (SE +/- 0.96, N = 3; MIN: 68.86 / MAX: 76.62) | mantic: 73.91 (SE +/- 0.56, N = 3; MIN: 68.9 / MAX: 75.9)

PyPerformance

Benchmark: nbody

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
mantic-no-omit-framepointer: 77.1 (SE +/- 0.07, N = 3) | mantic: 76.2 (SE +/- 0.06, N = 3)

PyPerformance

Benchmark: django_template

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
mantic-no-omit-framepointer: 29.5 (SE +/- 0.06, N = 3) | mantic: 28.5 (SE +/- 0.03, N = 3)

PyPerformance

Benchmark: float

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
mantic: 67.4 (SE +/- 0.03, N = 3) | mantic-no-omit-framepointer: 66.9 (SE +/- 0.10, N = 3)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
mantic: 199.46 (SE +/- 1.06, N = 3; MIN: 182.77 / MAX: 206.03) | mantic-no-omit-framepointer: 202.68 (SE +/- 2.52, N = 4; MIN: 182.69 / MAX: 211.53)

PyPerformance

Benchmark: regex_compile

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
mantic-no-omit-framepointer: 120 (SE +/- 0.33, N = 3) | mantic: 116 (SE +/- 0.00, N = 3)

PyPerformance

Benchmark: crypto_pyaes

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
mantic-no-omit-framepointer: 66.6 (SE +/- 0.00, N = 3) | mantic: 65.1 (SE +/- 0.06, N = 3)

PyPerformance

Benchmark: chaos

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
mantic-no-omit-framepointer: 63.6 (SE +/- 0.20, N = 3) | mantic: 62.8 (SE +/- 0.03, N = 3)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
mantic-no-omit-framepointer: 200.17 (SE +/- 0.96, N = 3; MIN: 183.43 / MAX: 203.55) | mantic: 200.30 (SE +/- 0.25, N = 3; MIN: 182.88 / MAX: 202.36)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
mantic-no-omit-framepointer: 201.14 (SE +/- 0.33, N = 3; MIN: 183.61 / MAX: 202.73) | mantic: 203.18 (SE +/- 1.69, N = 3; MIN: 183.76 / MAX: 207.98)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
mantic: 202.72 (SE +/- 1.76, N = 3; MIN: 183.1 / MAX: 207.93) | mantic-no-omit-framepointer: 203.22 (SE +/- 1.21, N = 3; MIN: 185.88 / MAX: 206.71)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
mantic: 201.41 (SE +/- 0.58, N = 3; MIN: 184.02 / MAX: 203.68) | mantic-no-omit-framepointer: 205.95 (SE +/- 1.98, N = 3; MIN: 186.96 / MAX: 210.21)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 0.009 (SE +/- 0.000, N = 3) | mantic: 0.009 (SE +/- 0.000, N = 3) | mantic-no-omit-framepointer: 0.008 (SE +/- 0.000, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
mantic: 0.009 (SE +/- 0.000, N = 3) | noble: 0.008 (SE +/- 0.000, N = 3) | mantic-no-omit-framepointer: 0.008 (SE +/- 0.000, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 0.003 (SE +/- 0.000, N = 12) | mantic: 0.003 (SE +/- 0.000, N = 3) | mantic-no-omit-framepointer: 0.002 (SE +/- 0.000, N = 15)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
mantic: 0.263 (SE +/- 0.002, N = 3) | mantic-no-omit-framepointer: 0.262 (SE +/- 0.000, N = 3) | noble: 0.261 (SE +/- 0.002, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
mantic: 0.263 (SE +/- 0.002, N = 3) | noble: 0.262 (SE +/- 0.002, N = 3) | mantic-no-omit-framepointer: 0.260 (SE +/- 0.001, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
mantic: 0.062 (SE +/- 0.001, N = 3) | noble: 0.061 (SE +/- 0.000, N = 3) | mantic-no-omit-framepointer: 0.058 (SE +/- 0.000, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
mantic: 0.061 (SE +/- 0.001, N = 3) | noble: 0.060 (SE +/- 0.000, N = 3) | mantic-no-omit-framepointer: 0.058 (SE +/- 0.000, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 0.034 (SE +/- 0.000, N = 3) | mantic-no-omit-framepointer: 0.033 (SE +/- 0.000, N = 3) | mantic: 0.033 (SE +/- 0.000, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 0.033 (SE +/- 0.000, N = 3) | mantic-no-omit-framepointer: 0.032 (SE +/- 0.000, N = 3) | mantic: 0.032 (SE +/- 0.000, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 0.016 (SE +/- 0.000, N = 15) | mantic-no-omit-framepointer: 0.015 (SE +/- 0.000, N = 3) | mantic: 0.015 (SE +/- 0.000, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 0.015 (SE +/- 0.000, N = 7) | mantic-no-omit-framepointer: 0.015 (SE +/- 0.000, N = 3) | mantic: 0.015 (SE +/- 0.000, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Seconds, Fewer Is Better
noble: 0.003 (SE +/- 0.000, N = 3) | mantic-no-omit-framepointer: 0.003 (SE +/- 0.000, N = 3) | mantic: 0.003 (SE +/- 0.000, N = 3)


Phoronix Test Suite v10.8.4