Desktop machine learning

AMD Ryzen 9 3900X 12-Core testing with a MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS) and NVIDIA GeForce RTX 3060 12GB on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2405015-VPA1-DESKTOP46
Test categories represented: CPU Massive (2 tests), HPC - High Performance Computing (4 tests), Machine Learning (3 tests), Programmer / Developer System Benchmarks (2 tests), Python (5 tests), Server CPU Tests (3 tests), Single-Threaded (2 tests).

Test Runs

Result Identifier            Date         Test Duration
mantic                       February 23  15 Hours, 54 Minutes
mantic-no-omit-framepointer  February 24  19 Hours, 11 Minutes
noble                        April 30     14 Hours, 21 Minutes
Average                                   16 Hours, 28 Minutes




HTML result view exported from: https://openbenchmarking.org/result/2405015-VPA1-DESKTOP46&grr&sor.

System Details

The mantic and mantic-no-omit-framepointer runs share the configuration below; values that differ on the noble run are noted in parentheses.

Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads)
Motherboard: MSI X570-A PRO (MS-7C37) v3.0 (H.70 BIOS)
Chipset: AMD Starship/Matisse
Memory: 2 x 16GB DDR4-3200MT/s F4-3200C16-16GVK
Disk: 2000GB Seagate ST2000DM006-2DM1 + 2000GB Western Digital WD20EZAZ-00G + 500GB Samsung SSD 860 + 8002GB Seagate ST8000DM004-2CX1 + 1000GB CT1000BX500SSD1 + 512GB TS512GESD310C
Graphics: NVIDIA GeForce RTX 3060 12GB (noble: NVIDIA GeForce RTX 3060)
Audio: NVIDIA GA104 HD Audio
Monitor: DELL P2314H (noble: DELL P2314H + U32J59x)
Network: Realtek RTL8111/8168/8411 (noble: Realtek RTL8111/8168/8211/8411)
OS: Ubuntu 23.10 (noble: Ubuntu 24.04)
Kernel: 6.5.0-9-generic (x86_64) (noble: 6.8.0-31-generic (x86_64))
Display Server: X Server 1.21.1.7
Display Driver: NVIDIA
OpenCL: OpenCL 3.0 CUDA 12.2.146
Compiler: GCC 13.2.0 + CUDA 12.2 (noble: GCC 13.2.0)
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details
- Transparent Huge Pages: madvise

Compiler Details
- mantic: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- mantic-no-omit-framepointer: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-b9QCDx/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-b9QCDx/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- noble: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0x8701013

Python Details
- mantic: Python 3.11.6
- mantic-no-omit-framepointer: Python 3.11.6
- noble: Python 3.12.3

Security Details
- mantic and mantic-no-omit-framepointer: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- noble: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Environment Details
- mantic-no-omit-framepointer: CXXFLAGS=-fno-omit-frame-pointer QMAKE_CFLAGS=-fno-omit-frame-pointer CFLAGS=-fno-omit-frame-pointer CFLAGS_OVERRIDE=-fno-omit-frame-pointer QMAKE_CXXFLAGS=-fno-omit-frame-pointer FFLAGS=-fno-omit-frame-pointer
- noble: CXXFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" QMAKE_CFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" CFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" CFLAGS_OVERRIDE="-fno-omit-frame-pointer -frecord-gcc-switches -O2" QMAKE_CXXFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2" FFLAGS="-fno-omit-frame-pointer -frecord-gcc-switches -O2"
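The only intended difference between the mantic and mantic-no-omit-framepointer runs is the set of build-environment flags above. A minimal sketch of exporting such an environment before a build; the values are taken from the Environment Details, but the exact export mechanism shown here is an assumption (the Phoronix Test Suite picks these up from the calling environment):

```shell
# Frame-pointer comparison environment (values from Environment Details above).
# The noble run additionally appends "-frecord-gcc-switches -O2" to each variable.
export CFLAGS="-fno-omit-frame-pointer"
export CXXFLAGS="-fno-omit-frame-pointer"
export FFLAGS="-fno-omit-frame-pointer"
export CFLAGS_OVERRIDE="-fno-omit-frame-pointer"
export QMAKE_CFLAGS="-fno-omit-frame-pointer"
export QMAKE_CXXFLAGS="-fno-omit-frame-pointer"
```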

[Results overview table omitted: combined results for Scikit-Learn 1.2.2, PyTorch 2.1, PyHPC Benchmarks 3.0, PyPerformance 1.0.0, PyBench, and the Numpy Benchmark across the mantic, mantic-no-omit-framepointer, and noble runs. The individual results are presented per test below.]
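A common way to summarize a multi-benchmark comparison like this one (the result viewer offers geometric means as a display option) is the geometric mean of per-test ratios. A minimal sketch using three scikit-learn timings taken from the per-test results in this file; this is an illustrative subset, not the page's official overall mean:

```python
from math import prod

# Seconds (lower is better) for three scikit-learn tests from this result file:
# Isotonic / Perturbed Logarithm, SAGA, Isotonic / Logistic.
mantic = [1788.26, 868.02, 1470.81]
noble = [1963.77, 869.37, 1684.55]

# Per-test ratio of noble to mantic (>1 means noble is slower), combined
# via the geometric mean so no single test dominates the summary.
ratios = [n / m for m, n in zip(mantic, noble)]
geomean = prod(ratios) ** (1 / len(ratios))
print(f"noble vs mantic geometric-mean ratio: {geomean:.3f}")
```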

Scikit-Learn

Benchmark: Isotonic / Perturbed Logarithm

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Isotonic / Perturbed Logarithm
  mantic:                       1788.26  (SE +/- 24.41, N = 3)
  mantic-no-omit-framepointer:  1828.30  (SE +/- 16.46, N = 3)
  noble:                        1963.77  (SE +/- 1.48, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0
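Every result in this file is annotated with a standard error over N runs (e.g. "SE +/- 24.41, N = 3" above). A minimal sketch of how the standard error of the mean is computed; the per-run timings here are hypothetical, not from this result file:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical per-run timings in seconds for N = 3 runs of one benchmark.
runs = [1760.0, 1790.0, 1815.0]

# Standard error of the mean: sample standard deviation divided by sqrt(N).
se = stdev(runs) / sqrt(len(runs))
print(f"{mean(runs):.2f} (SE +/- {se:.2f}, N = {len(runs)})")
```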

Scikit-Learn

Benchmark: SAGA

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: SAGA
  mantic:                       868.02  (SE +/- 8.69, N = 6)
  noble:                        869.37  (SE +/- 10.35, N = 3)
  mantic-no-omit-framepointer:  873.82  (SE +/- 5.60, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Isotonic / Logistic

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Isotonic / Logistic
  mantic:                       1470.81  (SE +/- 12.29, N = 3)
  mantic-no-omit-framepointer:  1471.83  (SE +/- 14.46, N = 3)
  noble:                        1684.55  (SE +/- 9.43, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Isolation Forest

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Isolation Forest
  mantic:                       289.37  (SE +/- 1.30, N = 3)
  noble:                        314.03  (SE +/- 2.83, N = 3)
  mantic-no-omit-framepointer:  336.37  (SE +/- 51.04, N = 9)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Sparse Random Projections / 100 Iterations

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Sparse Random Projections / 100 Iterations
  mantic:                       613.55  (SE +/- 3.80, N = 3)
  mantic-no-omit-framepointer:  631.07  (SE +/- 7.06, N = 4)
  noble:                        663.95  (SE +/- 4.34, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: SGDOneClassSVM

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: SGDOneClassSVM
  mantic:                       379.74  (SE +/- 4.18, N = 3)
  mantic-no-omit-framepointer:  382.61  (SE +/- 3.48, N = 7)
  noble:                        385.38  (SE +/- 3.55, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Lasso

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Lasso
  noble:                        345.40  (SE +/- 1.37, N = 3)
  mantic-no-omit-framepointer:  509.54  (SE +/- 3.50, N = 3)
  mantic:                       511.85  (SE +/- 3.22, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Covertype Dataset Benchmark

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Covertype Dataset Benchmark
  mantic-no-omit-framepointer:  370.69  (SE +/- 3.40, N = 3)
  mantic:                       376.15  (SE +/- 4.88, N = 3)
  noble:                        381.45  (SE +/- 2.58, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: GLM

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: GLM
  noble:                        269.81  (SE +/- 0.93, N = 3)
  mantic:                       293.60  (SE +/- 1.06, N = 3)
  mantic-no-omit-framepointer:  295.10  (SE +/- 1.07, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l
  mantic-no-omit-framepointer:  5.64  (SE +/- 0.01, N = 3; MIN: 5.45 / MAX: 5.68)
  mantic:                       5.63  (SE +/- 0.02, N = 3; MIN: 5.39 / MAX: 5.71)
  noble:                        5.59  (SE +/- 0.02, N = 3; MIN: 5.31 / MAX: 5.65)

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l
  mantic-no-omit-framepointer:  5.64  (SE +/- 0.01, N = 3; MIN: 5.52 / MAX: 5.69)
  mantic:                       5.63  (SE +/- 0.01, N = 3; MIN: 5.31 / MAX: 5.68)
  noble:                        5.59  (SE +/- 0.00, N = 3; MIN: 5.46 / MAX: 5.64)

PyTorch

Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l
  mantic-no-omit-framepointer:  5.64  (SE +/- 0.01, N = 3; MIN: 5.29 / MAX: 5.68)
  mantic:                       5.61  (SE +/- 0.02, N = 3; MIN: 5.44 / MAX: 5.65)
  noble:                        5.61  (SE +/- 0.02, N = 3; MIN: 5.46 / MAX: 5.67)

PyTorch

Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l
  mantic-no-omit-framepointer:  5.65  (SE +/- 0.02, N = 3; MIN: 5.36 / MAX: 5.93)
  mantic:                       5.61  (SE +/- 0.01, N = 3; MIN: 5.45 / MAX: 5.66)
  noble:                        5.60  (SE +/- 0.01, N = 3; MIN: 5.37 / MAX: 5.66)

PyTorch

Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l
  mantic-no-omit-framepointer:  5.65  (SE +/- 0.01, N = 3; MIN: 5.45 / MAX: 5.7)
  mantic:                       5.62  (SE +/- 0.01, N = 3; MIN: 5.35 / MAX: 5.66)
  noble:                        5.60  (SE +/- 0.01, N = 3; MIN: 5.32 / MAX: 5.64)

Scikit-Learn

Benchmark: TSNE MNIST Dataset

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: TSNE MNIST Dataset
  mantic-no-omit-framepointer:  236.79  (SE +/- 0.54, N = 3)
  mantic:                       236.87  (SE +/- 0.44, N = 3)
  noble:                        285.82  (SE +/- 0.91, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Hierarchical

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Plot Hierarchical
  noble:                        207.10  (SE +/- 2.35, N = 3)
  mantic-no-omit-framepointer:  208.39  (SE +/- 0.42, N = 3)
  mantic:                       211.29  (SE +/- 0.75, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Neighbors

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Plot Neighbors
  noble:                        142.16  (SE +/- 1.09, N = 3)
  mantic-no-omit-framepointer:  142.45  (SE +/- 0.59, N = 3)
  mantic:                       147.75  (SE +/- 1.34, N = 7)
  Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: Efficientnet_v2_l

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: Efficientnet_v2_l
  mantic:                       37.88  (SE +/- 0.30, N = 9; MIN: 35.67 / MAX: 39.63)
  mantic-no-omit-framepointer:  37.24  (SE +/- 0.31, N = 15; MIN: 33.97 / MAX: 39.43)

Scikit-Learn

Benchmark: Sparsify

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Sparsify
  noble:                        125.07  (SE +/- 0.65, N = 3)
  mantic-no-omit-framepointer:  125.44  (SE +/- 1.28, N = 5)
  mantic:                       127.28  (SE +/- 1.36, N = 5)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Sample Without Replacement

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Sample Without Replacement
  mantic:                       158.26  (SE +/- 0.60, N = 3)
  mantic-no-omit-framepointer:  161.46  (SE +/- 0.62, N = 3)
  noble:                        179.64  (SE +/- 2.21, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-152

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-152
  mantic:                       9.87  (SE +/- 0.02, N = 3; MIN: 9.09 / MAX: 9.96)
  noble:                        9.87  (SE +/- 0.03, N = 3; MIN: 9.21 / MAX: 10)
  mantic-no-omit-framepointer:  9.80  (SE +/- 0.07, N = 3; MIN: 9.12 / MAX: 9.98)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-152
  mantic-no-omit-framepointer:  9.91  (SE +/- 0.04, N = 3; MIN: 9.19 / MAX: 10.05)
  noble:                        9.86  (SE +/- 0.03, N = 3; MIN: 8.69 / MAX: 9.99)
  mantic:                       9.77  (SE +/- 0.07, N = 3; MIN: 9.17 / MAX: 10)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152
  mantic-no-omit-framepointer:  10.00  (SE +/- 0.09, N = 3; MIN: 8.09 / MAX: 10.27)
  mantic:                       9.84   (SE +/- 0.05, N = 3; MIN: 9.6 / MAX: 9.98)
  noble:                        9.81   (SE +/- 0.04, N = 3; MIN: 9.42 / MAX: 9.93)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-152

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-152
  mantic-no-omit-framepointer:  9.91  (SE +/- 0.02, N = 3; MIN: 8.69 / MAX: 10.08)
  mantic:                       9.88  (SE +/- 0.03, N = 3; MIN: 8.8 / MAX: 9.98)
  noble:                        9.87  (SE +/- 0.01, N = 3; MIN: 8.61 / MAX: 9.96)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152
  mantic-no-omit-framepointer:  9.93  (SE +/- 0.01, N = 3; MIN: 9.39 / MAX: 10.01)
  mantic:                       9.88  (SE +/- 0.04, N = 3; MIN: 9.31 / MAX: 10.01)
  noble:                        9.88  (SE +/- 0.02, N = 3; MIN: 9.15 / MAX: 9.98)

Scikit-Learn

Benchmark: Plot Polynomial Kernel Approximation

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Plot Polynomial Kernel Approximation
  noble:                        145.36  (SE +/- 1.46, N = 3)
  mantic-no-omit-framepointer:  150.38  (SE +/- 1.20, N = 3)
  mantic:                       150.73  (SE +/- 1.22, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: Efficientnet_v2_l

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: Efficientnet_v2_l
  mantic:                       37.36  (SE +/- 0.15, N = 3; MIN: 35.47 / MAX: 37.85)
  mantic-no-omit-framepointer:  36.60  (SE +/- 0.30, N = 15; MIN: 33.07 / MAX: 39.53)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: Efficientnet_v2_l

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: Efficientnet_v2_l
  mantic:                       37.71  (SE +/- 0.24, N = 3; MIN: 35.52 / MAX: 38.25)
  mantic-no-omit-framepointer:  37.16  (SE +/- 0.30, N = 15; MIN: 34.12 / MAX: 39.48)

Numpy Benchmark

OpenBenchmarking.org - Score, More Is Better
Numpy Benchmark
  noble:                        430.83  (SE +/- 1.01, N = 3)
  mantic-no-omit-framepointer:  428.61  (SE +/- 0.90, N = 3)
  mantic:                       426.28  (SE +/- 1.20, N = 3)

Scikit-Learn

Benchmark: Feature Expansions

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Feature Expansions
  mantic:                       131.28  (SE +/- 0.86, N = 3)
  mantic-no-omit-framepointer:  133.09  (SE +/- 1.22, N = 3)
  noble:                        133.14  (SE +/- 1.21, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: SGD Regression

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: SGD Regression
  noble:                        78.88   (SE +/- 0.05, N = 3)
  mantic:                       106.32  (SE +/- 1.06, N = 6)
  mantic-no-omit-framepointer:  107.53  (SE +/- 0.49, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Plot Incremental PCA

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Plot Incremental PCA
  noble:                        30.62  (SE +/- 0.06, N = 3)
  mantic:                       31.01  (SE +/- 0.03, N = 3)
  mantic-no-omit-framepointer:  31.06  (SE +/- 0.07, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
  mantic-no-omit-framepointer:  7.32  (SE +/- 0.02, N = 3; MIN: 7.23 / MAX: 7.38)
  mantic:                       7.31  (SE +/- 0.00, N = 3; MIN: 7.16 / MAX: 7.34)
  noble:                        7.31  (SE +/- 0.00, N = 3; MIN: 7.07 / MAX: 7.36)

Scikit-Learn

Benchmark: LocalOutlierFactor

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: LocalOutlierFactor
  mantic:                       53.46  (SE +/- 0.18, N = 3)
  noble:                        54.29  (SE +/- 0.02, N = 3)
  mantic-no-omit-framepointer:  56.75  (SE +/- 0.74, N = 15)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Hist Gradient Boosting
  mantic:                       109.98  (SE +/- 0.22, N = 3)
  mantic-no-omit-framepointer:  111.26  (SE +/- 0.25, N = 3)
  noble:                        117.41  (SE +/- 0.17, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting Threading

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Hist Gradient Boosting Threading
  mantic:                       110.22  (SE +/- 0.13, N = 3)
  mantic-no-omit-framepointer:  110.37  (SE +/- 0.15, N = 3)
  noble:                        111.55  (SE +/- 0.13, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Hist Gradient Boosting Adult

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Hist Gradient Boosting Adult
  mantic:                       103.50  (SE +/- 0.70, N = 3)
  mantic-no-omit-framepointer:  105.65  (SE +/- 0.59, N = 3)
  noble:                        112.71  (SE +/- 0.52, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Tree

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Tree
  noble:                        47.03  (SE +/- 0.52, N = 3)
  mantic:                       48.34  (SE +/- 0.59, N = 4)
  mantic-no-omit-framepointer:  52.97  (SE +/- 0.48, N = 15)
  Notes: -O2; 1. (F9X) gfortran options: -O0

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing

OpenBenchmarking.org - Seconds, Fewer Is Better
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing
  mantic-no-omit-framepointer:  2.626  (SE +/- 0.002, N = 3)
  mantic:                       2.670  (SE +/- 0.010, N = 3)
  noble:                        2.720  (SE +/- 0.010, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing

OpenBenchmarking.org - Seconds, Fewer Is Better
PyHPC Benchmarks 3.0 - Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing
  mantic-no-omit-framepointer:  2.620  (SE +/- 0.006, N = 3)
  mantic:                       2.662  (SE +/- 0.006, N = 3)
  noble:                        2.668  (SE +/- 0.006, N = 3)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: Efficientnet_v2_l

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: Efficientnet_v2_l
  mantic:                       37.43  (SE +/- 0.03, N = 3; MIN: 35.81 / MAX: 38.02)
  mantic-no-omit-framepointer:  37.22  (SE +/- 0.33, N = 8; MIN: 34.99 / MAX: 39.08)

Scikit-Learn

Benchmark: Plot OMP vs. LARS

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Plot OMP vs. LARS
  noble:                        68.17  (SE +/- 0.03, N = 3)
  mantic:                       91.50  (SE +/- 0.08, N = 3)
  mantic-no-omit-framepointer:  92.58  (SE +/- 0.44, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: MNIST Dataset

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: MNIST Dataset
  noble:                        65.42  (SE +/- 0.67, N = 3)
  mantic:                       65.76  (SE +/- 0.82, N = 4)
  mantic-no-omit-framepointer:  65.88  (SE +/- 0.47, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Kernel PCA Solvers / Time vs. N Samples

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Kernel PCA Solvers / Time vs. N Samples
  noble:                        70.02  (SE +/- 0.44, N = 3)
  mantic:                       72.54  (SE +/- 0.05, N = 3)
  mantic-no-omit-framepointer:  72.91  (SE +/- 0.16, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152
  noble:                        12.89  (SE +/- 0.05, N = 3; MIN: 12.36 / MAX: 13.05)
  mantic-no-omit-framepointer:  12.78  (SE +/- 0.04, N = 3; MIN: 11.9 / MAX: 12.9)
  mantic:                       12.72  (SE +/- 0.03, N = 3; MIN: 11.99 / MAX: 12.8)

Scikit-Learn

Benchmark: Text Vectorizers

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Text Vectorizers
  mantic:                       60.81  (SE +/- 0.19, N = 3)
  mantic-no-omit-framepointer:  63.88  (SE +/- 0.08, N = 3)
  noble:                        66.39  (SE +/- 0.32, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50
  mantic-no-omit-framepointer:  24.35  (SE +/- 0.16, N = 3; MIN: 23.67 / MAX: 24.87)
  mantic:                       24.29  (SE +/- 0.10, N = 3; MIN: 22.24 / MAX: 24.66)
  noble:                        24.12  (SE +/- 0.06, N = 3; MIN: 22.33 / MAX: 24.46)

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-50

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-50
  noble:                        24.30  (SE +/- 0.14, N = 3; MIN: 22.45 / MAX: 24.75)
  mantic-no-omit-framepointer:  24.28  (SE +/- 0.08, N = 3; MIN: 22.31 / MAX: 24.53)
  mantic:                       24.13  (SE +/- 0.02, N = 3; MIN: 23.58 / MAX: 24.41)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-50

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50
  mantic-no-omit-framepointer:  24.40  (SE +/- 0.15, N = 3; MIN: 21.6 / MAX: 24.8)
  mantic:                       24.24  (SE +/- 0.04, N = 3; MIN: 23.59 / MAX: 24.49)
  noble:                        24.19  (SE +/- 0.11, N = 3; MIN: 22.75 / MAX: 24.73)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50
  mantic:                       24.42  (SE +/- 0.03, N = 3; MIN: 20.15 / MAX: 24.74)
  mantic-no-omit-framepointer:  24.37  (SE +/- 0.11, N = 3; MIN: 23.76 / MAX: 24.81)
  noble:                        24.33  (SE +/- 0.06, N = 3; MIN: 22.79 / MAX: 24.66)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50
  noble:                        24.43  (SE +/- 0.01, N = 3; MIN: 22.57 / MAX: 24.72)
  mantic-no-omit-framepointer:  24.38  (SE +/- 0.16, N = 3; MIN: 22.2 / MAX: 24.87)
  mantic:                       24.28  (SE +/- 0.05, N = 3; MIN: 20.22 / MAX: 24.56)

Scikit-Learn

Benchmark: Plot Ward

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Plot Ward
  noble:                        56.13  (SE +/- 0.20, N = 3)
  mantic-no-omit-framepointer:  57.55  (SE +/- 0.22, N = 3)
  mantic:                       57.82  (SE +/- 0.21, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

PyPerformance

Benchmark: python_startup

OpenBenchmarking.org - Milliseconds, Fewer Is Better
PyPerformance 1.0.0 - Benchmark: python_startup
  mantic:                       7.61  (SE +/- 0.01, N = 3)
  mantic-no-omit-framepointer:  7.64  (SE +/- 0.01, N = 3)
  noble:                        8.76  (SE +/- 0.01, N = 3)
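PyPerformance's python_startup benchmark measures how long a fresh interpreter takes to start and exit. A rough single-shot version of the same measurement; this is a sketch, not pyperformance's actual harness, which averages many runs:

```python
import subprocess
import sys
import time

# Time one cold start of the current interpreter running an empty program.
start = time.perf_counter()
subprocess.run([sys.executable, "-c", "pass"], check=True)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"python_startup (single run): {elapsed_ms:.2f} ms")
```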

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: Efficientnet_v2_l

OpenBenchmarking.org - batches/sec, More Is Better
PyTorch 2.1 - Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: Efficientnet_v2_l
  mantic:                       38.95  (SE +/- 0.08, N = 3; MIN: 37.12 / MAX: 39.27)
  mantic-no-omit-framepointer:  36.10  (SE +/- 0.02, N = 3; MIN: 34.25 / MAX: 38.01)

Scikit-Learn

Benchmark: 20 Newsgroups / Logistic Regression

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: 20 Newsgroups / Logistic Regression
  mantic:                       41.52  (SE +/- 0.19, N = 3)
  mantic-no-omit-framepointer:  41.73  (SE +/- 0.24, N = 3)
  noble:                        41.91  (SE +/- 0.12, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

Scikit-Learn

Benchmark: Kernel PCA Solvers / Time vs. N Components

OpenBenchmarking.org - Seconds, Fewer Is Better
Scikit-Learn 1.2.2 - Benchmark: Kernel PCA Solvers / Time vs. N Components
  noble:                        37.11  (SE +/- 0.43, N = 3)
  mantic:                       37.24  (SE +/- 0.21, N = 3)
  mantic-no-omit-framepointer:  37.89  (SE +/- 0.36, N = 3)
  Notes: -O2; 1. (F9X) gfortran options: -O0

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic-no-omit-framepointer: 1.411 (SE +/- 0.001, N = 3)
mantic: 1.422 (SE +/- 0.004, N = 3)
noble: 1.446 (SE +/- 0.006, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic: 1.402 (SE +/- 0.003, N = 3)
mantic-no-omit-framepointer: 1.405 (SE +/- 0.004, N = 3)
noble: 1.436 (SE +/- 0.003, N = 3)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

batches/sec, More Is Better (PyTorch 2.1):
mantic-no-omit-framepointer: 32.54 (SE +/- 0.16, N = 3; MIN: 31.64 / MAX: 32.94)
mantic: 32.36 (SE +/- 0.11, N = 3; MIN: 31.89 / MAX: 32.7)
noble: 32.34 (SE +/- 0.17, N = 3; MIN: 28.9 / MAX: 32.83)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-152

batches/sec, More Is Better (PyTorch 2.1):
mantic-no-omit-framepointer: 72.91 (SE +/- 0.83, N = 3; MIN: 68 / MAX: 75.45)
mantic: 71.74 (SE +/- 0.24, N = 3; MIN: 67.87 / MAX: 72.6)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-152

batches/sec, More Is Better (PyTorch 2.1):
mantic: 73.01 (SE +/- 0.96, N = 3; MIN: 68.06 / MAX: 75.3)
mantic-no-omit-framepointer: 72.24 (SE +/- 0.20, N = 3; MIN: 68.36 / MAX: 73.14)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-152

batches/sec, More Is Better (PyTorch 2.1):
mantic-no-omit-framepointer: 73.65 (SE +/- 0.66, N = 3; MIN: 68.88 / MAX: 75.03)
mantic: 71.81 (SE +/- 0.44, N = 3; MIN: 67.31 / MAX: 72.89)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-152

batches/sec, More Is Better (PyTorch 2.1):
mantic-no-omit-framepointer: 73.75 (SE +/- 0.50, N = 3; MIN: 68.91 / MAX: 75.15)
mantic: 72.31 (SE +/- 0.94, N = 3; MIN: 67.38 / MAX: 74.62)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-152

batches/sec, More Is Better (PyTorch 2.1):
mantic: 74.15 (SE +/- 0.96, N = 3; MIN: 68.27 / MAX: 75.61)
mantic-no-omit-framepointer: 73.36 (SE +/- 0.74, N = 3; MIN: 68.19 / MAX: 74.63)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: Efficientnet_v2_l

batches/sec, More Is Better (PyTorch 2.1):
mantic: 39.35 (SE +/- 0.47, N = 3; MIN: 36.65 / MAX: 40.42)
mantic-no-omit-framepointer: 37.29 (SE +/- 0.26, N = 3; MIN: 35.83 / MAX: 39.17)

PyPerformance

Benchmark: raytrace

Milliseconds, Fewer Is Better (PyPerformance 1.0.0):
mantic: 262 (SE +/- 0.33, N = 3)
mantic-no-omit-framepointer: 274 (SE +/- 0.33, N = 3)

PyPerformance

Benchmark: go

Milliseconds, Fewer Is Better (PyPerformance 1.0.0):
noble: 121 (SE +/- 0.00, N = 3)
mantic: 129 (SE +/- 0.00, N = 3)
mantic-no-omit-framepointer: 131 (SE +/- 0.33, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic-no-omit-framepointer: 0.622 (SE +/- 0.007, N = 3)
noble: 0.630 (SE +/- 0.004, N = 3)
mantic: 0.631 (SE +/- 0.002, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic-no-omit-framepointer: 0.618 (SE +/- 0.000, N = 3)
mantic: 0.619 (SE +/- 0.001, N = 3)
noble: 0.631 (SE +/- 0.006, N = 3)

PyPerformance

Benchmark: 2to3

Milliseconds, Fewer Is Better (PyPerformance 1.0.0):
mantic: 221 (SE +/- 0.00, N = 3)
mantic-no-omit-framepointer: 224 (SE +/- 0.33, N = 3)

Scikit-Learn

Benchmark: Hist Gradient Boosting Categorical Only

Seconds, Fewer Is Better (Scikit-Learn 1.2.2):
mantic: 18.58 (SE +/- 0.06, N = 3)
mantic-no-omit-framepointer: 18.87 (SE +/- 0.12, N = 3)
noble: 19.93 (SE +/- 0.10, N = 3)
Notes: -O2; 1. (F9X) gfortran options: -O0

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-50

batches/sec, More Is Better (PyTorch 2.1):
mantic-no-omit-framepointer: 211.46 (SE +/- 1.46, N = 15; MIN: 192.13 / MAX: 223.01)
mantic: 210.88 (SE +/- 2.67, N = 3; MIN: 195.21 / MAX: 218.16)

PyPerformance

Benchmark: json_loads

Milliseconds, Fewer Is Better (PyPerformance 1.0.0):
mantic: 19.5 (SE +/- 0.06, N = 3)
mantic-no-omit-framepointer: 20.8 (SE +/- 0.03, N = 3)
noble: 22.8 (SE +/- 0.03, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixing

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic: 0.131 (SE +/- 0.001, N = 3)
mantic-no-omit-framepointer: 0.132 (SE +/- 0.000, N = 3)
noble: 0.133 (SE +/- 0.002, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixing

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic-no-omit-framepointer: 0.128 (SE +/- 0.001, N = 3)
mantic: 0.131 (SE +/- 0.000, N = 3)
noble: 0.136 (SE +/- 0.001, N = 3)

PyPerformance

Benchmark: pathlib

Milliseconds, Fewer Is Better (PyPerformance 1.0.0):
mantic: 19.7 (SE +/- 0.00, N = 3)
mantic-no-omit-framepointer: 20.2 (SE +/- 0.00, N = 3)

PyBench

Total For Average Test Times

Milliseconds, Fewer Is Better (PyBench 2018-02-16):
mantic: 774 (SE +/- 1.00, N = 3)
mantic-no-omit-framepointer: 790 (SE +/- 1.20, N = 3)
noble: 839 (SE +/- 8.70, N = 4)
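For these "Fewer Is Better" timings, a quick way to read the gap between two identifiers is the percentage slowdown relative to the faster one. A small sketch using the PyBench totals above:

```python
def pct_slower(baseline_ms, other_ms):
    """For "Fewer Is Better" timings: how much slower `other_ms` is
    than `baseline_ms`, as a percentage rounded to one decimal."""
    return round((other_ms - baseline_ms) / baseline_ms * 100, 1)

# PyBench totals above: mantic 774 ms vs. noble 839 ms.
print(pct_slower(774, 839))  # 8.4 (noble is about 8.4% slower than mantic)
```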

PyPerformance

Benchmark: pickle_pure_python

Milliseconds, Fewer Is Better (PyPerformance 1.0.0):
mantic: 259 (SE +/- 0.33, N = 3)
mantic-no-omit-framepointer: 263 (SE +/- 0.58, N = 3)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 1 - Model: ResNet-152

batches/sec, More Is Better (PyTorch 2.1):
mantic: 73.91 (SE +/- 0.56, N = 3; MIN: 68.9 / MAX: 75.9)
mantic-no-omit-framepointer: 72.27 (SE +/- 0.96, N = 3; MIN: 68.86 / MAX: 76.62)

PyPerformance

Benchmark: nbody

Milliseconds, Fewer Is Better (PyPerformance 1.0.0):
mantic: 76.2 (SE +/- 0.06, N = 3)
mantic-no-omit-framepointer: 77.1 (SE +/- 0.07, N = 3)

PyPerformance

Benchmark: django_template

Milliseconds, Fewer Is Better (PyPerformance 1.0.0):
mantic: 28.5 (SE +/- 0.03, N = 3)
mantic-no-omit-framepointer: 29.5 (SE +/- 0.06, N = 3)

PyPerformance

Benchmark: float

Milliseconds, Fewer Is Better (PyPerformance 1.0.0):
mantic-no-omit-framepointer: 66.9 (SE +/- 0.10, N = 3)
mantic: 67.4 (SE +/- 0.03, N = 3)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 32 - Model: ResNet-50

batches/sec, More Is Better (PyTorch 2.1):
mantic-no-omit-framepointer: 202.68 (SE +/- 2.52, N = 4; MIN: 182.69 / MAX: 211.53)
mantic: 199.46 (SE +/- 1.06, N = 3; MIN: 182.77 / MAX: 206.03)

PyPerformance

Benchmark: regex_compile

Milliseconds, Fewer Is Better (PyPerformance 1.0.0):
mantic: 116 (SE +/- 0.00, N = 3)
mantic-no-omit-framepointer: 120 (SE +/- 0.33, N = 3)

PyPerformance

Benchmark: crypto_pyaes

Milliseconds, Fewer Is Better (PyPerformance 1.0.0):
mantic: 65.1 (SE +/- 0.06, N = 3)
mantic-no-omit-framepointer: 66.6 (SE +/- 0.00, N = 3)

PyPerformance

Benchmark: chaos

Milliseconds, Fewer Is Better (PyPerformance 1.0.0):
mantic: 62.8 (SE +/- 0.03, N = 3)
mantic-no-omit-framepointer: 63.6 (SE +/- 0.20, N = 3)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 16 - Model: ResNet-50

batches/sec, More Is Better (PyTorch 2.1):
mantic: 200.30 (SE +/- 0.25, N = 3; MIN: 182.88 / MAX: 202.36)
mantic-no-omit-framepointer: 200.17 (SE +/- 0.96, N = 3; MIN: 183.43 / MAX: 203.55)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 512 - Model: ResNet-50

batches/sec, More Is Better (PyTorch 2.1):
mantic: 203.18 (SE +/- 1.69, N = 3; MIN: 183.76 / MAX: 207.98)
mantic-no-omit-framepointer: 201.14 (SE +/- 0.33, N = 3; MIN: 183.61 / MAX: 202.73)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 256 - Model: ResNet-50

batches/sec, More Is Better (PyTorch 2.1):
mantic-no-omit-framepointer: 203.22 (SE +/- 1.21, N = 3; MIN: 185.88 / MAX: 206.71)
mantic: 202.72 (SE +/- 1.76, N = 3; MIN: 183.1 / MAX: 207.93)

PyTorch

Device: NVIDIA CUDA GPU - Batch Size: 64 - Model: ResNet-50

batches/sec, More Is Better (PyTorch 2.1):
mantic-no-omit-framepointer: 205.95 (SE +/- 1.98, N = 3; MIN: 186.96 / MAX: 210.21)
mantic: 201.41 (SE +/- 0.58, N = 3; MIN: 184.02 / MAX: 203.68)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic-no-omit-framepointer: 0.008 (SE +/- 0.000, N = 3)
mantic: 0.009 (SE +/- 0.000, N = 3)
noble: 0.009 (SE +/- 0.000, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic-no-omit-framepointer: 0.008 (SE +/- 0.000, N = 3)
noble: 0.008 (SE +/- 0.000, N = 3)
mantic: 0.009 (SE +/- 0.000, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic-no-omit-framepointer: 0.002 (SE +/- 0.000, N = 15)
mantic: 0.003 (SE +/- 0.000, N = 3)
noble: 0.003 (SE +/- 0.000, N = 12)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
noble: 0.261 (SE +/- 0.002, N = 3)
mantic-no-omit-framepointer: 0.262 (SE +/- 0.000, N = 3)
mantic: 0.263 (SE +/- 0.002, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic-no-omit-framepointer: 0.260 (SE +/- 0.001, N = 3)
noble: 0.262 (SE +/- 0.002, N = 3)
mantic: 0.263 (SE +/- 0.002, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of State

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic-no-omit-framepointer: 0.058 (SE +/- 0.000, N = 3)
noble: 0.061 (SE +/- 0.000, N = 3)
mantic: 0.062 (SE +/- 0.001, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of State

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic-no-omit-framepointer: 0.058 (SE +/- 0.000, N = 3)
noble: 0.060 (SE +/- 0.000, N = 3)
mantic: 0.061 (SE +/- 0.001, N = 3)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixing

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic: 0.033 (SE +/- 0.000, N = 3)
mantic-no-omit-framepointer: 0.033 (SE +/- 0.000, N = 3)
noble: 0.034 (SE +/- 0.000, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixing

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic: 0.032 (SE +/- 0.000, N = 3)
mantic-no-omit-framepointer: 0.032 (SE +/- 0.000, N = 3)
noble: 0.033 (SE +/- 0.000, N = 3)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of State

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic: 0.015 (SE +/- 0.000, N = 3)
mantic-no-omit-framepointer: 0.015 (SE +/- 0.000, N = 3)
noble: 0.016 (SE +/- 0.000, N = 15)

PyHPC Benchmarks

Device: GPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of State

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic: 0.015 (SE +/- 0.000, N = 3)
mantic-no-omit-framepointer: 0.015 (SE +/- 0.000, N = 3)
noble: 0.015 (SE +/- 0.000, N = 7)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State

Seconds, Fewer Is Better (PyHPC Benchmarks 3.0):
mantic: 0.003 (SE +/- 0.000, N = 3)
mantic-no-omit-framepointer: 0.003 (SE +/- 0.000, N = 3)
noble: 0.003 (SE +/- 0.000, N = 3)
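When combining many "Fewer Is Better" timings across tests, per-test time ratios are conventionally aggregated with a geometric mean rather than an arithmetic one, so that no single long-running test dominates. A minimal sketch, using three noble/mantic ratios taken from the PyBench, go, and python_startup results above:

```python
import math

def geomean(values):
    """Geometric mean: the conventional way to aggregate benchmark ratios."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# noble/mantic time ratios from results above (PyBench, go, python_startup):
ratios = [839 / 774, 121 / 129, 8.76 / 7.61]
print(round(geomean(ratios), 3))  # 1.054
```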


Phoronix Test Suite v10.8.4