ddd

Apple M2 testing with an Apple MacBook Air (13-inch M2 2022) and llvmpipe on Arch rolling via the Phoronix Test Suite.

a:
    Processor: Apple M2 @ 2.42GHz (4 Cores / 8 Threads), Motherboard: Apple MacBook Air (13-inch M2 2022), Chipset: Apple Silicon, Memory: 8GB, Disk: 251GB APPLE SSD AP0256Z + 2 x 0GB APPLE SSD AP0256Z, Graphics: llvmpipe, Network: Broadcom Device 4433 + Broadcom BRCM4387 Bluetooth
    OS: Arch rolling, Kernel: 6.3.0-asahi-13-1-ARCH (aarch64), Desktop: KDE Plasma 5.27.6, Display Server: X Server 1.21.1.8, OpenGL: 4.5 Mesa 23.1.3 (LLVM 15.0.7 128 bits), Compiler: GCC 12.1.0 + Clang 15.0.7, File-System: ext4, Screen Resolution: 2560x1600

b:
    Same hardware and software configuration as a.

PyTorch 2.1 - Device: CPU (batches/sec > Higher Is Better)

    Batch Size   Model               a      b
    1            ResNet-50           6.31   6.32
    1            ResNet-152          3.14   3.11
    16           ResNet-50           4.76   4.76
    32           ResNet-50           4.83   4.79
    64           ResNet-50           4.83   4.75
    16           ResNet-152          2.27   2.28
    256          ResNet-50           4.79   4.80
    32           ResNet-152          2.33   2.34
    512          ResNet-50           4.76   4.76
    64           ResNet-152          2.37   2.31
    256          ResNet-152          2.35   2.40
    512          ResNet-152          2.35   2.36
    1            Efficientnet_v2_l   0.57   0.57
    16           Efficientnet_v2_l   1.51   1.51
    32           Efficientnet_v2_l   1.51   1.51
    64           Efficientnet_v2_l   1.51   1.44
    256          Efficientnet_v2_l   1.51   1.49
    512          Efficientnet_v2_l   1.51   1.50

FFmpeg 6.1 (FPS > Higher Is Better)

    Encoder   Scenario          a        b
    libx264   Live              226.50   226.83
    libx265   Live              37.09    36.79
    libx264   Upload            13.92    13.94
    libx265   Upload            11.46    11.46
    libx264   Platform          54.54    54.08
    libx265   Platform          20.62    20.73
    libx264   Video On Demand   54.36    54.51
    libx265   Video On Demand   20.57    20.52

Java SciMark 2.2 (Mflops > Higher Is Better) - no values were reported for these computational tests:

    Composite
    Monte Carlo
    Fast Fourier Transform
    Sparse Matrix Multiply
    Dense LU Matrix Factorization
    Jacobi Successive Over-Relaxation

Timed FFmpeg Compilation 6.1 - Time To Compile (Seconds < Lower Is Better) - no value was reported.
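The results above were collected via the Phoronix Test Suite. As a minimal sketch, a comparable run could be launched with the command below; the test-profile names (pts/pytorch, pts/ffmpeg, pts/java-scimark2, pts/build-ffmpeg) are assumptions mapped to the suites shown above, not identifiers taken from this result file.

    # Sketch only: install and run test profiles corresponding to the suites
    # shown above. Profile names are assumptions; results can then be saved
    # under the identifiers "a" and "b" for a side-by-side comparison.
    phoronix-test-suite benchmark pts/pytorch pts/ffmpeg pts/java-scimark2 pts/build-ffmpeg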