dfhj

Apple M2 testing with an Apple MacBook Air (13-inch M2 2022) and llvmpipe graphics on Arch rolling via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2401107-NE-DFHJ8749984&grr&sro.

System Details (identical for runs a, b, and c):

  Processor:         Apple M2 @ 2.42GHz (4 Cores / 8 Threads)
  Motherboard:       Apple MacBook Air (13-inch M2 2022)
  Chipset:           Apple Silicon
  Memory:            8GB
  Disk:              251GB APPLE SSD AP0256Z + 2 x 0GB APPLE SSD AP0256Z
  Graphics:          llvmpipe
  Network:           Broadcom Device 4433 + Broadcom BRCM4387 Bluetooth
  OS:                Arch rolling
  Kernel:            6.3.0-asahi-13-1-ARCH (aarch64)
  Desktop:           KDE Plasma 5.27.6
  Display Server:    X Server 1.21.1.8
  OpenGL:            4.5 Mesa 23.1.3 (LLVM 15.0.7 128 bits)
  Compiler:          GCC 12.1.0 + Clang 15.0.7
  File-System:       ext4
  Screen Resolution: 2560x1600

Compiler Details: --build=aarch64-unknown-linux-gnu --disable-libssp --disable-libstdcxx-pch --disable-multilib --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-fix-cortex-a53-835769 --enable-fix-cortex-a53-843419 --enable-gnu-indirect-function --enable-gnu-unique-object --enable-languages=c,c++,fortran,go,lto,objc,obj-c++ --enable-lto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-unknown-linux-gnu --mandir=/usr/share/man --with-arch=armv8-a --with-linker-hash-style=gnu

Processor Details: Scaling Governor: apple-cpufreq schedutil (Boost: Enabled)

Python Details: Python 3.11.3

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (higher is better for all metrics except the DeepSparse ms/batch rows, where lower is better):

Test | a | b | c
PyTorch 2.1: CPU - Batch Size 16 - Efficientnet_v2_l (batches/sec) | 1.51 | 1.50 | 1.51
PyTorch 2.1: CPU - Batch Size 1 - Efficientnet_v2_l (batches/sec) | 0.56 | 0.57 | 0.56
Xmrig 6.21: GhostRider - 1M (H/s) | 505.5 | 505.4 | 504.3
Quicksilver 20230818: CTS2 (Figure Of Merit) | 4330000 | 4310000 | 4500000
PyTorch 2.1: CPU - Batch Size 16 - ResNet-152 (batches/sec) | 2.32 | 2.39 | 2.28
Quicksilver 20230818: CORAL2 P2 (Figure Of Merit) | 7720000 | 7686000 | 8440000
Xmrig 6.21: Monero - 1M (H/s) | 2168.2 | 2128.4 | 2484.2
Xmrig 6.21: CryptoNight-Femto UPX2 - 1M (H/s) | 2134.6 | 2179.5 | 2553.1
Xmrig 6.21: KawPow - 1M (H/s) | 2232.7 | 2169.4 | 2515.6
Xmrig 6.21: CryptoNight-Heavy - 1M (H/s) | 2319.2 | 2302.6 | 2494.0
PyTorch 2.1: CPU - Batch Size 16 - ResNet-50 (batches/sec) | 4.79 | 4.82 | 4.85
Xmrig 6.21: Wownero - 1M (H/s) | 2437.7 | 2391.3 | 2746.9
LeelaChessZero 0.30: Eigen (Nodes/sec) | 43 | 44 | 43
PyTorch 2.1: CPU - Batch Size 1 - ResNet-152 (batches/sec) | 3.10 | 3.15 | 3.11
DeepSparse 1.6: ResNet-50, Sparse INT8 - Async Multi-Stream (ms/batch) | 10.47 | 9.38 | 9.61
DeepSparse 1.6: ResNet-50, Sparse INT8 - Async Multi-Stream (items/sec) | 190.65 | 212.76 | 207.85
Quicksilver 20230818: CORAL2 P1 (Figure Of Merit) | 4832000 | 4791000 | 5094000
PyTorch 2.1: CPU - Batch Size 1 - ResNet-50 (batches/sec) | 6.30 | 6.26 | 6.29
rav1e 0.7: Speed 1 (FPS) | 0.553 | 0.552 | 0.612
rav1e 0.7: Speed 5 (FPS) | 3.157 | 3.140 | 3.372
rav1e 0.7: Speed 10 (FPS) | 11.80 | 11.90 | 12.02
rav1e 0.7: Speed 6 (FPS) | 4.175 | 4.197 | 4.456
DeepSparse 1.6: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Async Multi-Stream (ms/batch) | 36.43 | 32.74 | 33.50
DeepSparse 1.6: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Async Multi-Stream (items/sec) | 54.87 | 61.02 | 59.66
DeepSparse 1.6: NLP Document Classification, oBERT base uncased on IMDB - Async Multi-Stream (ms/batch) | 1452.63 | 1306.11 | 1320.49
DeepSparse 1.6: NLP Document Classification, oBERT base uncased on IMDB - Async Multi-Stream (items/sec) | 1.37 | 1.53 | 1.51
DeepSparse 1.6: CV Detection, YOLOv5s COCO - Async Multi-Stream (ms/batch) | 215.41 | 194.82 | 198.87
DeepSparse 1.6: CV Detection, YOLOv5s COCO - Async Multi-Stream (items/sec) | 9.27 | 10.24 | 10.04
DeepSparse 1.6: ResNet-50, Baseline - Async Multi-Stream (ms/batch) | 95.22 | 86.21 | 87.88
DeepSparse 1.6: ResNet-50, Baseline - Async Multi-Stream (items/sec) | 20.99 | 23.18 | 22.71
LeelaChessZero 0.30: BLAS (Nodes/sec) | - | - | -
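For readers skimming the table, the short Python sketch below (not part of the Phoronix Test Suite output) shows one way to compare run c against run a for a few of the rows; the values are copied from the table, and the higher_is_better flags simply encode each metric's direction.

```python
# Illustrative only: compare run c against run a for a handful of the
# overview rows above. Values are copied from the table; higher_is_better
# encodes each metric's direction (ms/batch is lower-is-better).
results = {
    # test: (a, c, higher_is_better)
    "Xmrig: Monero - 1M (H/s)":                 (2168.2, 2484.2, True),
    "Quicksilver: CORAL2 P2 (Figure Of Merit)": (7720000, 8440000, True),
    "rav1e: Speed 6 (FPS)":                     (4.175, 4.456, True),
    "DeepSparse: oBERT on IMDB (ms/batch)":     (1452.63, 1320.49, False),
}

for test, (a, c, higher_is_better) in results.items():
    delta_pct = (c - a) / a * 100.0
    improved = (delta_pct > 0) == higher_is_better
    print(f"{test}: c vs a {delta_pct:+.1f}% ({'better' if improved else 'worse'})")
```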

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, more is better (OpenBenchmarking.org)
a: 1.51 (MIN: 1.34 / MAX: 1.59)
b: 1.50 (MIN: 1.35 / MAX: 1.58)
c: 1.51 (MIN: 1.44 / MAX: 1.58)
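As a rough illustration of what this metric represents, here is a minimal, hypothetical Python sketch that times CPU inference of torchvision's efficientnet_v2_l at batch size 16 and reports batches/sec; it is not the harness the Phoronix Test Suite uses for its PyTorch benchmark, and the input size and iteration counts are assumptions.

```python
# Hypothetical timing sketch (not the Phoronix Test Suite harness): measure CPU
# inference throughput of torchvision's efficientnet_v2_l at batch size 16 and
# report it as batches/sec, the unit used in the chart above.
import time

import torch
import torchvision.models as models

model = models.efficientnet_v2_l(weights=None).eval()  # random weights are fine for timing
batch = torch.randn(16, 3, 480, 480)                   # batch of 16 RGB images (assumed input size)

with torch.no_grad():
    model(batch)                                       # warm-up pass
    iterations = 5
    start = time.perf_counter()
    for _ in range(iterations):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"{iterations / elapsed:.2f} batches/sec")
```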

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, more is better (OpenBenchmarking.org)
a: 0.56 (MIN: 0.28 / MAX: 1)
b: 0.57 (MIN: 0.39 / MAX: 1.01)
c: 0.56 (MIN: 0.33 / MAX: 0.99)

Xmrig

Variant: GhostRider - Hash Count: 1M

Xmrig 6.21 - H/s, more is better (OpenBenchmarking.org)
a: 505.5
b: 505.4
c: 504.3
1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Quicksilver

Input: CTS2

Quicksilver 20230818 - Figure Of Merit, more is better (OpenBenchmarking.org)
a: 4330000
b: 4310000
c: 4500000
1. (CXX) g++ options: -fopenmp -O3 -march=native

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 - batches/sec, more is better (OpenBenchmarking.org)
a: 2.32 (MIN: 1.85 / MAX: 2.46)
b: 2.39 (MIN: 1.95 / MAX: 2.56)
c: 2.28 (MIN: 2.07 / MAX: 2.43)

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 - Figure Of Merit, more is better (OpenBenchmarking.org)
a: 7720000
b: 7686000
c: 8440000
1. (CXX) g++ options: -fopenmp -O3 -march=native

Xmrig

Variant: Monero - Hash Count: 1M

Xmrig 6.21 - H/s, more is better (OpenBenchmarking.org)
a: 2168.2
b: 2128.4
c: 2484.2
1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Femto UPX2 - Hash Count: 1M

Xmrig 6.21 - H/s, more is better (OpenBenchmarking.org)
a: 2134.6
b: 2179.5
c: 2553.1
1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: KawPow - Hash Count: 1M

Xmrig 6.21 - H/s, more is better (OpenBenchmarking.org)
a: 2232.7
b: 2169.4
c: 2515.6
1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Heavy - Hash Count: 1M

Xmrig 6.21 - H/s, more is better (OpenBenchmarking.org)
a: 2319.2
b: 2302.6
c: 2494.0
1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 - batches/sec, more is better (OpenBenchmarking.org)
a: 4.79 (MIN: 3.74 / MAX: 5.16)
b: 4.82 (MIN: 4.38 / MAX: 5.12)
c: 4.85 (MIN: 4.45 / MAX: 5.22)

Xmrig

Variant: Wownero - Hash Count: 1M

Xmrig 6.21 - H/s, more is better (OpenBenchmarking.org)
a: 2437.7
b: 2391.3
c: 2746.9
1. (CXX) g++ options: -fexceptions -fno-rtti -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

LeelaChessZero

Backend: Eigen

LeelaChessZero 0.30 - Nodes Per Second, more is better (OpenBenchmarking.org)
a: 43
b: 44
c: 43
1. (CXX) g++ options: -flto -pthread

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 - batches/sec, more is better (OpenBenchmarking.org)
a: 3.10 (MIN: 2.5 / MAX: 3.53)
b: 3.15 (MIN: 2.54 / MAX: 3.61)
c: 3.11 (MIN: 2.77 / MAX: 3.51)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, fewer is better (OpenBenchmarking.org)
a: 10.4731
b: 9.3840
c: 9.6063

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, more is better (OpenBenchmarking.org)
a: 190.65
b: 212.76
c: 207.85
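As a sketch of how a DeepSparse throughput figure can be obtained from Python, the example below compiles an ONNX model with the DeepSparse engine and times repeated synchronous runs; it does not reproduce the asynchronous multi-stream scenario reported above, and the model path and batch size are placeholders.

```python
# Sketch under assumptions: time a sparse INT8 ResNet-50 ONNX model with the
# DeepSparse engine using a plain synchronous loop. The chart above comes from
# DeepSparse's asynchronous multi-stream benchmark scenario, which this does
# not reproduce; the model path and batch size are placeholders.
import time

from deepsparse import compile_model
from deepsparse.utils import generate_random_inputs

onnx_path = "resnet50_sparse_int8.onnx"   # hypothetical local model file
batch_size = 16

engine = compile_model(onnx_path, batch_size=batch_size)
inputs = generate_random_inputs(onnx_path, batch_size)

engine.run(inputs)                         # warm-up
iterations = 20
start = time.perf_counter()
for _ in range(iterations):
    engine.run(inputs)
elapsed = time.perf_counter() - start

print(f"{batch_size * iterations / elapsed:.2f} items/sec")
```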

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 - Figure Of Merit, more is better (OpenBenchmarking.org)
a: 4832000
b: 4791000
c: 5094000
1. (CXX) g++ options: -fopenmp -O3 -march=native

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 - batches/sec, more is better (OpenBenchmarking.org)
a: 6.30 (MIN: 4.73 / MAX: 6.86)
b: 6.26 (MIN: 5.68 / MAX: 6.79)
c: 6.29 (MIN: 5.79 / MAX: 6.78)

rav1e

Speed: 1

rav1e 0.7 - Frames Per Second, more is better (OpenBenchmarking.org)
a: 0.553
b: 0.552
c: 0.612

rav1e

Speed: 5

rav1e 0.7 - Frames Per Second, more is better (OpenBenchmarking.org)
a: 3.157
b: 3.140
c: 3.372

rav1e

Speed: 10

rav1e 0.7 - Frames Per Second, more is better (OpenBenchmarking.org)
a: 11.80
b: 11.90
c: 12.02

rav1e

Speed: 6

rav1e 0.7 - Frames Per Second, more is better (OpenBenchmarking.org)
a: 4.175
b: 4.197
c: 4.456

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, fewer is better (OpenBenchmarking.org)
a: 36.43
b: 32.74
c: 33.50

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, more is better (OpenBenchmarking.org)
a: 54.87
b: 61.02
c: 59.66

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, fewer is better (OpenBenchmarking.org)
a: 1452.63
b: 1306.11
c: 1320.49

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, more is better (OpenBenchmarking.org)
a: 1.3721
b: 1.5268
c: 1.5115

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, fewer is better (OpenBenchmarking.org)
a: 215.41
b: 194.82
c: 198.87

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, more is better (OpenBenchmarking.org)
a: 9.2739
b: 10.2386
c: 10.0390

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - ms/batch, fewer is better (OpenBenchmarking.org)
a: 95.22
b: 86.21
c: 87.88

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - items/sec, more is better (OpenBenchmarking.org)
a: 20.99
b: 23.18
c: 22.71


Phoronix Test Suite v10.8.4