dfhj

Apple M2 testing with an Apple MacBook Air (13" M2 2022) and llvmpipe graphics on Arch rolling via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2401107-NE-DFHJ8749984&grr&sor.
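For comparison purposes, the Phoronix Test Suite can re-run this same test selection on another machine and compare it against this uploaded result by referencing its OpenBenchmarking.org ID. A minimal sketch, assuming phoronix-test-suite is installed and this result remains publicly available:

  # Fetch the published result and run the same tests locally for a side-by-side comparison.
  phoronix-test-suite benchmark 2401107-NE-DFHJ8749984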

System Details (identical for runs a, b, and c):

  Processor:          Apple M2 @ 2.42GHz (4 Cores / 8 Threads)
  Motherboard:        Apple MacBook Air (13" M2 2022)
  Chipset:            Apple Silicon
  Memory:             8GB
  Disk:               251GB APPLE SSD AP0256Z + 2 x 0GB APPLE SSD AP0256Z
  Graphics:           llvmpipe
  Network:            Broadcom Device 4433 + Broadcom BRCM4387 Bluetooth
  OS:                 Arch rolling
  Kernel:             6.3.0-asahi-13-1-ARCH (aarch64)
  Desktop:            KDE Plasma 5.27.6
  Display Server:     X Server 1.21.1.8
  OpenGL:             4.5 Mesa 23.1.3 (LLVM 15.0.7 128 bits)
  Compiler:           GCC 12.1.0 + Clang 15.0.7
  File-System:        ext4
  Screen Resolution:  2560x1600

Compiler Details: --build=aarch64-unknown-linux-gnu --disable-libssp --disable-libstdcxx-pch --disable-multilib --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-fix-cortex-a53-835769 --enable-fix-cortex-a53-843419 --enable-gnu-indirect-function --enable-gnu-unique-object --enable-languages=c,c++,fortran,go,lto,objc,obj-c++ --enable-lto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-unknown-linux-gnu --mandir=/usr/share/man --with-arch=armv8-a --with-linker-hash-style=gnu

Processor Details: Scaling Governor: apple-cpufreq schedutil (Boost: Enabled)

Python Details: Python 3.11.3

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Not affected + srbds: Not affected + tsx_async_abort: Not affected
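The fields above are gathered automatically by the Phoronix Test Suite at run time. A minimal sketch for viewing the detected hardware and software stack on a machine, assuming phoronix-test-suite is installed:

  # Print the detected hardware/software, roughly the fields shown in the table above.
  phoronix-test-suite system-info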

Result Overview (values listed as a / b / c):

  pytorch: CPU - 16 - Efficientnet_v2_l (batches/sec): 1.51 / 1.50 / 1.51
  pytorch: CPU - 1 - Efficientnet_v2_l (batches/sec): 0.56 / 0.57 / 0.56
  xmrig: GhostRider - 1M (H/s): 505.5 / 505.4 / 504.3
  quicksilver: CTS2 (Figure Of Merit): 4330000 / 4310000 / 4500000
  pytorch: CPU - 16 - ResNet-152 (batches/sec): 2.32 / 2.39 / 2.28
  quicksilver: CORAL2 P2 (Figure Of Merit): 7720000 / 7686000 / 8440000
  xmrig: Monero - 1M (H/s): 2168.2 / 2128.4 / 2484.2
  xmrig: CryptoNight-Femto UPX2 - 1M (H/s): 2134.6 / 2179.5 / 2553.1
  xmrig: KawPow - 1M (H/s): 2232.7 / 2169.4 / 2515.6
  xmrig: CryptoNight-Heavy - 1M (H/s): 2319.2 / 2302.6 / 2494.0
  pytorch: CPU - 16 - ResNet-50 (batches/sec): 4.79 / 4.82 / 4.85
  xmrig: Wownero - 1M (H/s): 2437.7 / 2391.3 / 2746.9
  lczero: Eigen (Nodes Per Second): 43 / 44 / 43
  pytorch: CPU - 1 - ResNet-152 (batches/sec): 3.10 / 3.15 / 3.11
  deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream (ms/batch): 10.4731 / 9.3840 / 9.6063
  deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream (items/sec): 190.6451 / 212.7611 / 207.8481
  quicksilver: CORAL2 P1 (Figure Of Merit): 4832000 / 4791000 / 5094000
  pytorch: CPU - 1 - ResNet-50 (batches/sec): 6.30 / 6.26 / 6.29
  rav1e: 1 (Frames Per Second): 0.553 / 0.552 / 0.612
  rav1e: 5 (Frames Per Second): 3.157 / 3.140 / 3.372
  rav1e: 10 (Frames Per Second): 11.795 / 11.897 / 12.016
  rav1e: 6 (Frames Per Second): 4.175 / 4.197 / 4.456
  deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream (ms/batch): 36.4309 / 32.7384 / 33.4981
  deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream (items/sec): 54.8669 / 61.0231 / 59.6605
  deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (ms/batch): 1452.6257 / 1306.1071 / 1320.4855
  deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (items/sec): 1.3721 / 1.5268 / 1.5115
  deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream (ms/batch): 215.4125 / 194.8216 / 198.874
  deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream (items/sec): 9.2739 / 10.2386 / 10.039
  deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream (ms/batch): 95.2178 / 86.2084 / 87.8802
  deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream (items/sec): 20.9877 / 23.1798 / 22.7129
  lczero: BLAS (Nodes Per Second): no values reported in this export

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, More Is Better)
  c: 1.51    MIN: 1.44 / MAX: 1.58
  a: 1.51    MIN: 1.34 / MAX: 1.59
  b: 1.50    MIN: 1.35 / MAX: 1.58

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, More Is Better)
  b: 0.57    MIN: 0.39 / MAX: 1.01
  c: 0.56    MIN: 0.33 / MAX: 0.99
  a: 0.56    MIN: 0.28 / MAX: 1

Xmrig

Variant: GhostRider - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better)
  a: 505.5
  b: 505.4
  c: 504.3
1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Quicksilver

Input: CTS2

Quicksilver 20230818 (Figure Of Merit, More Is Better)
  c: 4500000
  a: 4330000
  b: 4310000
1. (CXX) g++ options: -fopenmp -O3 -march=native

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 (batches/sec, More Is Better)
  b: 2.39    MIN: 1.95 / MAX: 2.56
  a: 2.32    MIN: 1.85 / MAX: 2.46
  c: 2.28    MIN: 2.07 / MAX: 2.43

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 (Figure Of Merit, More Is Better)
  c: 8440000
  a: 7720000
  b: 7686000
1. (CXX) g++ options: -fopenmp -O3 -march=native

Xmrig

Variant: Monero - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better)
  c: 2484.2
  a: 2168.2
  b: 2128.4
1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Femto UPX2 - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better)
  c: 2553.1
  b: 2179.5
  a: 2134.6
1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: KawPow - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better)
  c: 2515.6
  a: 2232.7
  b: 2169.4
1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Heavy - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better)
  c: 2494.0
  a: 2319.2
  b: 2302.6
1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 (batches/sec, More Is Better)
  c: 4.85    MIN: 4.45 / MAX: 5.22
  b: 4.82    MIN: 4.38 / MAX: 5.12
  a: 4.79    MIN: 3.74 / MAX: 5.16

Xmrig

Variant: Wownero - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better)
  c: 2746.9
  a: 2437.7
  b: 2391.3
1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

LeelaChessZero

Backend: Eigen

LeelaChessZero 0.30 (Nodes Per Second, More Is Better)
  b: 44
  c: 43
  a: 43
1. (CXX) g++ options: -flto -pthread

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 (batches/sec, More Is Better)
  b: 3.15    MIN: 2.54 / MAX: 3.61
  c: 3.11    MIN: 2.77 / MAX: 3.51
  a: 3.10    MIN: 2.5 / MAX: 3.53

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
  b: 9.3840
  c: 9.6063
  a: 10.4731

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
  b: 212.76
  c: 207.85
  a: 190.65

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 (Figure Of Merit, More Is Better)
  c: 5094000
  a: 4832000
  b: 4791000
1. (CXX) g++ options: -fopenmp -O3 -march=native

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 (batches/sec, More Is Better)
  a: 6.30    MIN: 4.73 / MAX: 6.86
  c: 6.29    MIN: 5.79 / MAX: 6.78
  b: 6.26    MIN: 5.68 / MAX: 6.79

rav1e

Speed: 1

rav1e 0.7 (Frames Per Second, More Is Better)
  c: 0.612
  a: 0.553
  b: 0.552

rav1e

Speed: 5

rav1e 0.7 (Frames Per Second, More Is Better)
  c: 3.372
  a: 3.157
  b: 3.140

rav1e

Speed: 10

rav1e 0.7 (Frames Per Second, More Is Better)
  c: 12.02
  b: 11.90
  a: 11.80

rav1e

Speed: 6

rav1e 0.7 (Frames Per Second, More Is Better)
  c: 4.456
  b: 4.197
  a: 4.175

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
  b: 32.74
  c: 33.50
  a: 36.43

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
  b: 61.02
  c: 59.66
  a: 54.87

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
  b: 1306.11
  c: 1320.49
  a: 1452.63

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
  b: 1.5268
  c: 1.5115
  a: 1.3721

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
  b: 194.82
  c: 198.87
  a: 215.41

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
  b: 10.2386
  c: 10.0390
  a: 9.2739

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
  b: 86.21
  c: 87.88
  a: 95.22

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
  b: 23.18
  c: 22.71
  a: 20.99


Phoronix Test Suite v10.8.4