test lxc testing on Debian GNU/Linux 12 via the Phoronix Test Suite.

r1:
  Processor: 2 x Intel Xeon E5-2680 v4 @ 3.30GHz (28 Cores / 52 Threads), Motherboard: Dell PowerEdge R630 02C2CP (2.19.0 BIOS), Memory: 98GB, Disk: 1000GB TOSHIBA MQ01ABD1, Graphics: mgag200drmfb
  OS: Debian GNU/Linux 12, Kernel: 6.8.12-3-pve (x86_64), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1600x1200, System Layer: lxc

2 x Intel Xeon E5-2680 v4:
  Processor: 2 x Intel Xeon E5-2680 v4 @ 3.30GHz (28 Cores / 52 Threads), Motherboard: Dell PowerEdge R630 02C2CP (2.19.0 BIOS), Memory: 98GB, Disk: 1000GB TOSHIBA MQ01ABD1, Graphics: mgag200drmfb
  OS: Debian GNU/Linux 12, Kernel: 6.8.12-3-pve (x86_64), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1600x1200, System Layer: lxc

2 x Intel Xeon E5-2680 v4 - mgag200drmfb - Dell:
  Processor: 2 x Intel Xeon E5-2680 v4 @ 3.30GHz (28 Cores / 56 Threads), Motherboard: Dell PowerEdge R630 02C2CP (2.19.0 BIOS), Memory: 98GB, Disk: 1000GB TOSHIBA MQ01ABD1, Graphics: mgag200drmfb
  OS: Debian GNU/Linux 12, Kernel: 6.8.12-3-pve (x86_64), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1600x1200, System Layer: lxc

All results below were reported under the "2 x Intel Xeon E5-2680 v4" identifier, except CacheBench, which was reported under "r1". Tests marked "(no result)" appear in the log without a reported value.

CacheBench (MB/s > Higher Is Better)
  Test: Read . 8361.63
  Test: Write . 36891.33
  Test: Read / Modify / Write . 65744.18

NAS Parallel Benchmarks 3.4 (Total Mop/s > Higher Is Better)
  Test / Class: EP.C . 2736.22
  Test / Class: LU.C . 46814.61

Rodinia 3.1 (Seconds < Lower Is Better)
  Test: OpenMP LavaMD . 133.48

CP2K Molecular Dynamics 2024.3 (Seconds < Lower Is Better)
  Fayalite-FIST Data . (no result)

NAMD 3.0 (ns/day > Higher Is Better)
  ATPase Simulation - 327,506 Atoms . (no result)

DaCapo Benchmark 23.11 (msec < Lower Is Better)
  Java Test: Jython . 8144
  Java Test: Tradebeans . 19157

Renaissance 0.14 (ms < Lower Is Better)
  Test: Scala Dotty . (no result)
  Test: Savina Reactors.IO . 11385.6
  Test: Apache Spark PageRank . (no result)

John The Ripper 2023.03.14 (Real C/S > Higher Is Better)
  Test: Blowfish . 30661

SVT-AV1 2.3 (Frames Per Second > Higher Is Better)
  1080p 8-bit YUV To AV1 Video Encode . (no result)

SVT-HEVC 1.5.0 (Frames Per Second > Higher Is Better)
  1080p 8-bit YUV To HEVC Video Encode . (no result)

SVT-VP9 0.3 (Frames Per Second > Higher Is Better)
  1080p 8-bit YUV To VP9 Video Encode . (no result)

x264 2022-02-22 (Frames Per Second > Higher Is Better)
  H.264 Video Encoding . (no result)

Himeno Benchmark 3.0 (MFLOPS > Higher Is Better)
  Poisson Pressure Solver . 3324.24

7-Zip Compression 24.05 (MIPS > Higher Is Better)
  Test: Compression Rating . 137422
  Test: Decompression Rating . 122760

Stockfish 17 (Nodes Per Second > Higher Is Better)
  Total Time . 43261094

asmFish 2018-07-23 (Nodes/second > Higher Is Better)
  1024 Hash Memory, 26 Depth . 62366573

Timed GCC Compilation 13.2 (Seconds < Lower Is Better)
  Time To Compile . 1491.16

Timed Linux Kernel Compilation 6.8 (Seconds < Lower Is Better)
  Time To Compile . (no result)

Timed LLVM Compilation 16.0 (Seconds < Lower Is Better)
  Time To Compile . (no result)

Timed PHP Compilation 8.3.4 (Seconds < Lower Is Better)
  Time To Compile . 78.38

C-Ray 2.0 (Seconds < Lower Is Better)
  Total Time - 4K, 16 Rays Per Pixel . 0.618

POV-Ray 3.7.0.7 (Seconds < Lower Is Better)
  Trace Time . 23.76

Rust Mandelbrot (Seconds < Lower Is Better)
  Time To Complete Serial/Parallel Mandelbrot . (no result)

oneDNN 3.6 (ms < Lower Is Better)
  Harness: Deconvolution Batch deconv_1d - Data Type: f32 . (no result)
  Harness: Convolution Batch conv_alexnet - Data Type: f32 . (no result)
  Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32 . (no result)

Cython Benchmark 0.29.21 (Seconds < Lower Is Better)
  Test: N-Queens . 32.49

Hackbench (Seconds < Lower Is Better)
  Count: 32 - Type: Process . 57.43

m-queens 1.2 (Seconds < Lower Is Better)
  Time To Solve . 36.86

Radiance Benchmark 5.0 (Seconds < Lower Is Better)
  Test: Serial . (no result)
  Test: SMP Parallel . (no result)

ctx_clock (Clocks < Lower Is Better)
  Context Switch Time . 1063

PyBench 2018-02-16 (Milliseconds < Lower Is Better)
  Total For Average Test Times . 1161

LeelaChessZero 0.31.1 (Nodes Per Second > Higher Is Better)
  Backend: BLAS . 65
  Backend: Eigen . 54

miniBUDE 20210901
  Implementation: OpenMP - Input Deck: BM1 (GFInst/s > Higher Is Better) . 487.66
  Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s > Higher Is Better) . 19.51
  Implementation: OpenMP - Input Deck: BM2 (GFInst/s > Higher Is Better) . 515.58
  Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s > Higher Is Better) . 20.62

Rodinia 3.1 (Seconds < Lower Is Better)
  Test: OpenMP HotSpot3D . 141.43
  Test: OpenMP Leukocyte . 66.19
  Test: OpenMP CFD Solver . 11.16
  Test: OpenMP Streamcluster . 13.75

CLOMP 1.2 (Speedup > Higher Is Better)
  Static OMP Speedup . 25.6

NAMD 3.0 (ns/day > Higher Is Better)
  Input: ATPase with 327,506 Atoms . 1.09268
  Input: STMV with 1,066,628 Atoms . 0.32870

QuantLib 1.35-dev (tasks/s > Higher Is Better)
  Size: S . 10.44
  Size: XXS . 10.95

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit > Higher Is Better)
  702647233

Pennant 1.0.1 (Hydro Cycle Time - Seconds < Lower Is Better)
  Test: sedovbig . 38.22
  Test: leblancbig . 21.82

Xcompact3d Incompact3d 2021-03-11 (Seconds < Lower Is Better)
  Input: X3D-benchmarking input.i3d . 1163.96
  Input: input.i3d 129 Cells Per Direction . 10.65
  Input: input.i3d 193 Cells Per Direction . 46.69

OpenFOAM 10 (Seconds < Lower Is Better)
  Input: drivaerFastback, Small Mesh Size . (no result)

OpenRadioss 2023.09.15 (Seconds < Lower Is Better)
  Model: Bumper Beam . 143.24
  Model: Ford Taurus 10M . (no result)
  Model: Chrysler Neon 1M . (no result)
  Model: Cell Phone Drop Test . 75.12
  Model: Bird Strike on Windshield . 230.48

OpenFOAM 10 (Seconds < Lower Is Better)
  Input: motorBike . (no result)

Aircrack-ng 1.7 (k/s > Higher Is Better)
  72471.82

Hashcat 6.2.4 (H/s > Higher Is Better)
  Benchmark: MD5 . (no result)

AOM AV1 3.9 (Frames Per Second > Higher Is Better)
  Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K . 0.21
  Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K . 5.36
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K . 31.26
  Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K . 12.00
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K . 29.29
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K . 31.83
  Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K . 32.85
  Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K . 32.91
  Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p . 0.65
  Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p . 11.71
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p . 65.12
  Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p . 31.70
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p . 66.04
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p . 69.78
  Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p . 71.48
  Encoder Mode: Speed 11 Realtime - Input: Bosphorus 1080p . 71.63

Blender 4.2 (Seconds < Lower Is Better; Compute: CPU-Only)
  Blend File: BMW27 . 72.25
  Blend File: Junkshop . 99.22
  Blend File: Classroom . 215.31
  Blend File: Fishy Cat . 105.76
  Blend File: Barbershop . 736.44
  Blend File: Pabellon Barcelona . 233.47

Timed Wasmer Compilation 2.3 (Seconds < Lower Is Better)
  Time To Compile . 76.41

Timed Node.js Compilation 21.7.2 (Seconds < Lower Is Better)
  Time To Compile . 2052.80

Zstd Compression 1.5.4 (MB/s > Higher Is Better)
  Compression Level: 3 - Compression Speed . 1436.4
  Compression Level: 3 - Decompression Speed . 682.6
  Compression Level: 8 - Compression Speed . 521.7
  Compression Level: 8 - Decompression Speed . 668.3
  Compression Level: 12 - Compression Speed . 133.6
  Compression Level: 12 - Decompression Speed . 626.5
  Compression Level: 19 - Compression Speed . 11.3
  Compression Level: 19 - Decompression Speed . 578.7
  Compression Level: 3, Long Mode - Compression Speed . 585.4
  Compression Level: 3, Long Mode - Decompression Speed . 730.5
  Compression Level: 8, Long Mode - Compression Speed . 495.7
  Compression Level: 8, Long Mode - Decompression Speed . 686.6
  Compression Level: 19, Long Mode - Compression Speed . 6.19
  Compression Level: 19, Long Mode - Decompression Speed . 592.4

libavif avifenc 1.0 (Seconds < Lower Is Better)
  Encoder Speed: 0 . 150.33
  Encoder Speed: 2 . 79.42
  Encoder Speed: 6 . 6.731
  Encoder Speed: 6, Lossless . 11.93
  Encoder Speed: 10, Lossless . 8.388

nginx 1.23.2 (Requests Per Second > Higher Is Better)
  Connections: 1 . (no result)
  Connections: 20 . (no result)

LevelDB 1.23 (Microseconds Per Op < Lower Is Better)
  Benchmark: Hot Read . (no result)
  Benchmark: Fill Sync . (no result)
  Benchmark: Overwrite . (no result)
  Benchmark: Random Fill . (no result)
  Benchmark: Random Read . (no result)
  Benchmark: Seek Random . (no result)
  Benchmark: Random Delete . (no result)
  Benchmark: Sequential Fill . (no result)

Memcached 1.6.19 (Ops/sec > Higher Is Better)
  Set To Get Ratio: 1:1 . 1268136.64
  Set To Get Ratio: 1:5 . 2992050.82
  Set To Get Ratio: 5:1 . 809259.92
  Set To Get Ratio: 1:10 . 3191700.60
  Set To Get Ratio: 1:100 . 3195806.54

Redis 7.0.4 (Requests Per Second > Higher Is Better)
  Test: GET - Parallel Connections: 50 . 2246300.92
  Test: SET - Parallel Connections: 50 . 1574262.85
  Test: GET - Parallel Connections: 500 . 1769970.88
  Test: LPOP - Parallel Connections: 50 . 2203333.82
  Test: SADD - Parallel Connections: 50 . 1719230.85
  Test: SET - Parallel Connections: 500 . 1403511.36
  Test: GET - Parallel Connections: 1000 . 1783987.96
  Test: LPOP - Parallel Connections: 500 . 1955391.54
  Test: LPUSH - Parallel Connections: 50 . 1390421.08
  Test: SADD - Parallel Connections: 500 . 1542038.12
  Test: SET - Parallel Connections: 1000 . 1407706.52
  Test: LPOP - Parallel Connections: 1000 . 1785408.10
  Test: LPUSH - Parallel Connections: 500 . 1251818.18
  Test: SADD - Parallel Connections: 1000 . 1571264.33
  Test: LPUSH - Parallel Connections: 1000 . 1236129.42

Redis 7.0.12 + memtier_benchmark 2.0 (Ops/sec > Higher Is Better; Protocol: Redis)
  Clients: 50 - Set To Get Ratio: 1:1 . 1524635.99
  Clients: 50 - Set To Get Ratio: 1:5 . 1609507.71
  Clients: 50 - Set To Get Ratio: 5:1 . 1361024.38
  Clients: 100 - Set To Get Ratio: 1:1 . 1365973.23
  Clients: 100 - Set To Get Ratio: 1:5 . 1480526.55
  Clients: 100 - Set To Get Ratio: 5:1 . 1303976.38
  Clients: 50 - Set To Get Ratio: 10:1 . 1331731.41
  Clients: 50 - Set To Get Ratio: 1:10 . 1517459.59
  Clients: 500 - Set To Get Ratio: 1:1 . (no result)
  Clients: 500 - Set To Get Ratio: 1:5 . (no result)
  Clients: 500 - Set To Get Ratio: 5:1 . (no result)
  Clients: 100 - Set To Get Ratio: 10:1 . 1277786.02
  Clients: 100 - Set To Get Ratio: 1:10 . 1481597.00
  Clients: 500 - Set To Get Ratio: 10:1 . (no result)
  Clients: 500 - Set To Get Ratio: 1:10 . (no result)

Apache Siege 2.4.62 (Transactions Per Second > Higher Is Better)
  Concurrent Users: 10 . 23642.07
  Concurrent Users: 50 . 22369.70
  Concurrent Users: 100 . 21617.09
  Concurrent Users: 200 . 20413.98
  Concurrent Users: 500 . 20425.82
  Concurrent Users: 1000 . 20260.73

InfluxDB 1.8.2 (val/sec > Higher Is Better; Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000)
  Concurrent Streams: 4 . 281287.4
  Concurrent Streams: 64 . 938427.0
  Concurrent Streams: 1024 . 966559.2

Xmrig 6.21 (H/s > Higher Is Better; Hash Count: 1M)
  Variant: KawPow . 8243.6
  Variant: Monero . 8225.4
  Variant: Wownero . 12688.7
  Variant: GhostRider . 1582.9
  Variant: CryptoNight-Heavy . 8230.4
  Variant: CryptoNight-Femto UPX2 . 8252.7

Timed LLVM Compilation 16.0 (Seconds < Lower Is Better)
  Build System: Ninja . 497.46
  Build System: Unix Makefiles . 604.37

Cpuminer-Opt 24.3 (kH/s > Higher Is Better)
  Algorithm: Magi . 474.91
  Algorithm: x20r . 5936.61
  Algorithm: scrypt . 208.91
  Algorithm: Deepcoin . 6558.28
  Algorithm: Ringcoin . 2590.17
  Algorithm: Blake-2 S . 124240
  Algorithm: Garlicoin . 2995.14
  Algorithm: Skeincoin . 29780
  Algorithm: Myriad-Groestl . 8846.55
  Algorithm: LBC, LBRY Credits . 10883
  Algorithm: Quad SHA-256, Pyrite . 45963
  Algorithm: Triple SHA-256, Onecoin . 64980

oneDNN 3.6 (ms < Lower Is Better; Engine: CPU)
  Harness: IP Shapes 1D . 3.36355
  Harness: IP Shapes 3D . 12.04
  Harness: Convolution Batch Shapes Auto . 14.00
  Harness: Deconvolution Batch shapes_1d . 13.17
  Harness: Deconvolution Batch shapes_3d . 5.21706
  Harness: Recurrent Neural Network Training . 1845.75
  Harness: Recurrent Neural Network Inference . 1018.67

Numpy Benchmark (Score > Higher Is Better)
  270.37

DeepSpeech 0.6 (Seconds < Lower Is Better)
  Acceleration: CPU . 193.78

R Benchmark (Seconds < Lower Is Better)
  0.2279

RNNoise 0.2 (Seconds < Lower Is Better)
  Input: 26 Minute Long Talking Sample . 18.19

TensorFlow Lite 2022-05-18 (Microseconds < Lower Is Better)
  Model: SqueezeNet . 3967.06
  Model: Inception V4 . 34692.4
  Model: NASNet Mobile . 41794.0
  Model: Mobilenet Float . 2491.03
  Model: Mobilenet Quant . 3904.20
  Model: Inception ResNet V2 . 42979.1

PyTorch 2.2.1 (batches/sec > Higher Is Better; Device: CPU)
  Batch Size: 1 - Model: ResNet-50 . 27.49
  Batch Size: 1 - Model: ResNet-152 . 10.32
  Batch Size: 16 - Model: ResNet-50 . 22.13
  Batch Size: 32 - Model: ResNet-50 . 22.26
  Batch Size: 64 - Model: ResNet-50 . 22.00
  Batch Size: 16 - Model: ResNet-152 . 8.30
  Batch Size: 256 - Model: ResNet-50 . 21.98
  Batch Size: 32 - Model: ResNet-152 . 8.34
  Batch Size: 512 - Model: ResNet-50 . 22.13
  Batch Size: 64 - Model: ResNet-152 . 8.40
  Batch Size: 256 - Model: ResNet-152 . 8.51
  Batch Size: 512 - Model: ResNet-152 . 8.46
  Batch Size: 1 - Model: Efficientnet_v2_l . 5.64
  Batch Size: 16 - Model: Efficientnet_v2_l . 4.35
  Batch Size: 32 - Model: Efficientnet_v2_l . 4.29
  Batch Size: 64 - Model: Efficientnet_v2_l . 4.32
  Batch Size: 256 - Model: Efficientnet_v2_l . 4.45
  Batch Size: 512 - Model: Efficientnet_v2_l . 4.21

TensorFlow 2.16.1 (images/sec > Higher Is Better)
  Device: CPU - Batch Size: 1 - Model: VGG-16 . 2.05
  Device: GPU - Batch Size: 1 - Model: VGG-16 . 0.79
  Device: CPU - Batch Size: 1 - Model: AlexNet . 7.49
  Device: CPU - Batch Size: 16 - Model: VGG-16 . 5.99
  Device: CPU - Batch Size: 32 - Model: VGG-16 . 6.34
  Device: CPU - Batch Size: 64 - Model: VGG-16 . 6.49
  Device: GPU - Batch Size: 1 - Model: AlexNet . 3.76
  Device: GPU - Batch Size: 16 - Model: VGG-16 . 0.85
  Device: GPU - Batch Size: 32 - Model: VGG-16 . 0.90
  Device: GPU - Batch Size: 64 - Model: VGG-16 . 0.92
  Device: CPU - Batch Size: 16 - Model: AlexNet . 71.30
  Device: CPU - Batch Size: 256 - Model: VGG-16 .
6.75 |============================================= TensorFlow 2.16.1 Device: CPU - Batch Size: 32 - Model: AlexNet images/sec > Higher Is Better 2 x Intel Xeon E5-2680 v4 . 88.37 |============================================ TensorFlow 2.16.1 Device: CPU - Batch Size: 512 - Model: VGG-16 images/sec > Higher Is Better 2 x Intel Xeon E5-2680 v4 . 6.76 |============================================= TensorFlow 2.16.1 Device: CPU - Batch Size: 64 - Model: AlexNet images/sec > Higher Is Better 2 x Intel Xeon E5-2680 v4 . 102.69 |=========================================== TensorFlow 2.16.1 Device: GPU - Batch Size: 16 - Model: AlexNet images/sec > Higher Is Better 2 x Intel Xeon E5-2680 v4 . 10.83 |============================================ TensorFlow 2.16.1 Device: GPU - Batch Size: 256 - Model: VGG-16 images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 32 - Model: AlexNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 512 - Model: VGG-16 images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 64 - Model: AlexNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 1 - Model: GoogLeNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 1 - Model: ResNet-50 images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 256 - Model: AlexNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 512 - Model: AlexNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 1 - Model: GoogLeNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 1 - Model: ResNet-50 images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 256 - Model: AlexNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 512 - Model: AlexNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 16 - Model: GoogLeNet images/sec > Higher Is Better TensorFlow 2.16.1 
Device: CPU - Batch Size: 16 - Model: ResNet-50 images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 32 - Model: GoogLeNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 32 - Model: ResNet-50 images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 64 - Model: GoogLeNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 64 - Model: ResNet-50 images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 16 - Model: GoogLeNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 16 - Model: ResNet-50 images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 32 - Model: GoogLeNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 32 - Model: ResNet-50 images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 64 - Model: GoogLeNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 64 - Model: ResNet-50 images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 256 - Model: GoogLeNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 256 - Model: ResNet-50 images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 512 - Model: GoogLeNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: CPU - Batch Size: 512 - Model: ResNet-50 images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 256 - Model: GoogLeNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 256 - Model: ResNet-50 images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 512 - Model: GoogLeNet images/sec > Higher Is Better TensorFlow 2.16.1 Device: GPU - Batch Size: 512 - Model: ResNet-50 images/sec > Higher Is Better Neural Magic DeepSparse 1.7 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream items/sec > Higher Is Better 2 x Intel Xeon E5-2680 v4 . 
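As a quick sanity check on the DeepSparse numbers reported in this section: in the Synchronous Single-Stream scenario only one batch is in flight at a time, so throughput (items/sec) should be approximately the inverse of per-batch latency (ms/batch). A minimal sketch, using value pairs copied from the DeepSparse results; the ~1% tolerance is an assumption meant to cover rounding and run-to-run jitter:

```python
# Synchronous Single-Stream pairs (model, reported items/sec, reported ms/batch)
# taken from the DeepSparse 1.7 results in this section.
single_stream = [
    ("NLP Document Classification, oBERT base uncased on IMDB", 8.2109, 121.77),
    ("NLP Text Classification, BERT base uncased SST2, Sparse INT8", 100.25, 9.9602),
    ("ResNet-50, Baseline", 78.43, 12.73),
    ("NLP Text Classification, DistilBERT mnli", 57.94, 17.25),
]

for model, throughput, latency_ms in single_stream:
    # One batch at a time: implied throughput is 1000 ms / latency in ms.
    implied = 1000.0 / latency_ms
    rel_err = abs(implied - throughput) / throughput
    assert rel_err < 0.01, (model, implied, throughput)
    print(f"{model}: reported {throughput} items/s, implied {implied:.2f} items/s")
```

The same inverse relationship does not hold for the Asynchronous Multi-Stream rows, where several batches overlap and ms/batch measures per-batch latency rather than total wall time per item.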
Neural Magic DeepSparse 1.7 (items/sec: higher is better; ms/batch: lower is better)
  NLP Document Classification, oBERT base uncased on IMDB
    Asynchronous Multi-Stream ....... 17.22 items/sec, 806.62 ms/batch
    Synchronous Single-Stream ....... 8.2109 items/sec, 121.77 ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8
    Asynchronous Multi-Stream ....... 345.53 items/sec, 40.48 ms/batch
    Synchronous Single-Stream ....... 100.25 items/sec, 9.9602 ms/batch
  ResNet-50, Baseline
    Asynchronous Multi-Stream ....... 194.67 items/sec, 71.85 ms/batch
    Synchronous Single-Stream ....... 78.43 items/sec, 12.73 ms/batch
  ResNet-50, Sparse INT8
    Asynchronous Multi-Stream ....... 1022.06 items/sec, 13.68 ms/batch
    Synchronous Single-Stream ....... 420.97 items/sec, 2.3685 ms/batch
  Llama2 Chat 7b Quantized
    Asynchronous Multi-Stream ....... 1.2827 items/sec, 9614.47 ms/batch
    Synchronous Single-Stream ....... 3.1765 items/sec, 314.78 ms/batch
  CV Classification, ResNet-50 ImageNet
    Asynchronous Multi-Stream ....... 195.06 items/sec, 71.71 ms/batch
    Synchronous Single-Stream ....... 78.16 items/sec, 12.78 ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8
    Asynchronous Multi-Stream ....... 95.59 items/sec, 146.34 ms/batch
    Synchronous Single-Stream ....... 41.56 items/sec, 24.04 ms/batch
  NLP Text Classification, DistilBERT mnli
    Asynchronous Multi-Stream ....... 150.08 items/sec, 93.16 ms/batch
    Synchronous Single-Stream ....... 57.94 items/sec, 17.25 ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned
    Asynchronous Multi-Stream ....... 15.85 items/sec, 880.92 ms/batch
    Synchronous Single-Stream ....... 7.8773 items/sec, 126.92 ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8
    Asynchronous Multi-Stream ....... 170.65 items/sec, 81.97 ms/batch
    Synchronous Single-Stream ....... 44.91 items/sec, 22.25 ms/batch
  NLP Token Classification, BERT base uncased conll2003
    Asynchronous Multi-Stream ....... 17.29 items/sec, 805.37 ms/batch
    Synchronous Single-Stream ....... 8.1818 items/sec, 122.21 ms/batch

spaCy 3.4.1 (tokens/sec; higher is better): no result reported.

Caffe 2020-02-13 (milli-seconds; lower is better): no results reported for AlexNet or GoogleNet (Acceleration: CPU) at 100, 200, or 1000 iterations.

Mobile Neural Network 2.9.b11b7037d (ms; lower is better)
  nasnet ............................ 22.03
  mobilenetV3 ....................... 3.504
  squeezenetv1.1 .................... 5.825
  resnet-v2-50 ...................... 26.96
  SqueezeNetV1.0 .................... 7.976
  MobileNetV2_224 ................... 4.947
  mobilenet-v1-1.0 .................. 3.772
  inception-v3 ...................... 38.38

NCNN 20230517 (ms; lower is better)
  Target: CPU
    mobilenet ....................... 19.56
    mobilenet-v2 (CPU-v2-v2) ........ 7.93
    mobilenet-v3 (CPU-v3-v3) ........ 7.49
    shufflenet-v2 ................... 8.79
    mnasnet ......................... 7.06
    efficientnet-b0 ................. 11.22
    blazeface ....................... 3.76
    googlenet ....................... 18.36
    vgg16 ........................... 41.46
    resnet18 ........................ 11.58
    alexnet ......................... 9.48
    resnet50 ........................ 23.76
    mobilenetv2-yolov3 (CPUv2-yolov3v2-yolov3) ... 19.56
    yolov4-tiny ..................... 32.22
    squeezenet_ssd .................. 19.85
    regnety_400m .................... 33.52
    vision_transformer .............. 82.40
    FastestDet ...................... 10.09
  Target: Vulkan GPU
    mobilenet ....................... 19.60
    mobilenet-v2 (Vulkan GPU-v2-v2) . 7.67
    mobilenet-v3 (Vulkan GPU-v3-v3) . 7.68
    shufflenet-v2 ................... 8.66
    mnasnet ......................... 7.07
    efficientnet-b0 ................. 11.31
    blazeface ....................... 3.90
    googlenet ....................... 18.75
    vgg16 ........................... 42.45
    resnet18 ........................ 11.47
    alexnet ......................... 9.81
    resnet50 ........................ 24.06
    mobilenetv2-yolov3 (Vulkan GPUv2-yolov3v2-yolov3) ... 19.60
    yolov4-tiny ..................... 32.07
    squeezenet_ssd .................. 20.46
    regnety_400m .................... 33.70
    vision_transformer .............. 82.07
    FastestDet ...................... 9.56

TNN 0.3 - Target: CPU (ms; lower is better)
  DenseNet .......................... 3758.32
  MobileNet v2 ...................... 386.78
  SqueezeNet v2 ..................... 93.73
  SqueezeNet v1.1 ................... 351.57

XNNPACK b7b048 (us; lower is better)
  FP32MobileNetV1 ................... 2299
  FP32MobileNetV2 ................... 3725
  FP32MobileNetV3Large .............. 4933
  FP32MobileNetV3Small .............. 3674
  FP16MobileNetV1 ................... 2663
  FP16MobileNetV2 ................... 3685
  FP16MobileNetV3Large .............. 5119
  FP16MobileNetV3Small .............. 3477
  QS8MobileNetV2 .................... 3293

PlaidML (examples/sec; higher is better): no results reported for VGG16 or ResNet 50 inference on CPU (FP16: No).

OpenVINO 2024.0 - Device: CPU (FPS: higher is better; ms: lower is better)
  Face Detection FP16 ......................... 5.29 FPS, 1675.23 ms
  Person Detection FP16 ....................... 49.33 FPS, 182.19 ms
  Person Detection FP32 ....................... 50.09 FPS, 179.42 ms
  Vehicle Detection FP16 ...................... 343.83 FPS, 26.15 ms
  Face Detection FP16-INT8 .................... 6.58 FPS, 1352.53 ms
  Face Detection Retail FP16 .................. 1379.83 FPS, 6.50 ms
  Road Segmentation ADAS FP16 ................. 130.77 FPS, 68.76 ms
  Vehicle Detection FP16-INT8 ................. 435.20 FPS, 20.66 ms
  Weld Porosity Detection FP16 ................ 484.39 FPS, 18.55 ms
  Face Detection Retail FP16-INT8 ............. 1299.65 FPS, 6.91 ms
  Road Segmentation ADAS FP16-INT8 ............ 202.26 FPS, 44.46 ms
  Machine Translation EN To DE FP16 ........... 55.26 FPS, 162.59 ms
  Weld Porosity Detection FP16-INT8 ........... 691.42 FPS, 40.46 ms
  Person Vehicle Bike Detection FP16 .......... 422.96 FPS, 21.23 ms
  Noise Suppression Poconet-Like FP16 ......... 567.32 FPS, 15.74 ms
  Handwritten English Recognition FP16 ........ 168.09 FPS, 53.49 ms
  Person Re-Identification Retail FP16 ........ 699.99 FPS, 12.84 ms
  Age Gender Recognition Retail 0013 FP16 ..... 15819.82 FPS, 1.76 ms
  Handwritten English Recognition FP16-INT8 ... 214.88 FPS, 130.16 ms
  Age Gender Recognition Retail 0013 FP16-INT8  21265.92 FPS, 1.30 ms

Numenta Anomaly Benchmark 1.1 (seconds; lower is better)
  KNN CAD ........................... 169.28
  Relative Entropy .................. 21.62
  Windowed Gaussian ................. 8.543
  Earthgecko Skyline ................ 110.01
  Bayesian Changepoint .............. 45.70
  Contextual Anomaly Detector OSE ... 43.62

ONNX Runtime 1.19 - Device: CPU (inferences/sec; higher is better): no results reported for GPT-2, yolov4, ZFNet-512, T5 Encoder, bertsquad-12, CaffeNet 12-int8, fcn-resnet101-11, or ArcFace ResNet-100 with the Parallel or Standard executor, nor for ResNet50 v1-12-int8 with the Parallel executor. ONNX
Runtime 1.19 Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: super-resolution-10 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: super-resolution-10 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel Inferences Per Second > Higher Is Better ONNX Runtime 1.19 Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard Inferences Per Second > Higher Is Better AI Benchmark Alpha 0.1.2 Score > Higher Is Better Mlpack Benchmark Benchmark: scikit_ica Seconds < Lower Is Better Mlpack Benchmark Benchmark: scikit_qda Seconds < Lower Is Better Mlpack Benchmark Benchmark: scikit_svm Seconds < Lower Is Better Mlpack Benchmark Benchmark: scikit_linearridgeregression Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: GLM Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: SAGA Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Tree Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Lasso Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Glmnet Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Sparsify Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Ward Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: MNIST Dataset Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Neighbors Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: SGD Regression Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: SGDOneClassSVM Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Lasso Path Seconds < Lower Is Better Scikit-Learn 1.2.2 
Benchmark: Isolation Forest Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Fast KMeans Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Text Vectorizers Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Hierarchical Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot OMP vs. LARS Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Feature Expansions Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: LocalOutlierFactor Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: TSNE MNIST Dataset Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isotonic / Logistic Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Incremental PCA Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Parallel Pairwise Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isotonic / Pathological Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: RCV1 Logreg Convergencet Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Sample Without Replacement Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Covertype Dataset Benchmark Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Adult Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isotonic / Perturbed Logarithm Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Threading Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Singular Value Decomposition Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Higgs Boson Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: 20 Newsgroups / Logistic Regression Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Polynomial Kernel Approximation Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Non-Negative Matrix Factorization Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Categorical 
Only Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Kernel PCA Solvers / Time vs. N Samples Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Kernel PCA Solvers / Time vs. N Components Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Sparse Random Projections / 100 Iterations Seconds < Lower Is Better Whisper.cpp 1.6.2 Model: ggml-base.en - Input: 2016 State of the Union Seconds < Lower Is Better 2 x Intel Xeon E5-2680 v4 . 202.23 |=========================================== Whisper.cpp 1.6.2 Model: ggml-small.en - Input: 2016 State of the Union Seconds < Lower Is Better 2 x Intel Xeon E5-2680 v4 . 543.89 |=========================================== Whisper.cpp 1.6.2 Model: ggml-medium.en - Input: 2016 State of the Union Seconds < Lower Is Better 2 x Intel Xeon E5-2680 v4 . 1539.31 |========================================== Llama.cpp b3067 Model: llama-2-7b.Q4_0.gguf Tokens Per Second > Higher Is Better Llama.cpp b3067 Model: llama-2-13b.Q4_0.gguf Tokens Per Second > Higher Is Better Llama.cpp b3067 Model: llama-2-70b-chat.Q5_0.gguf Tokens Per Second > Higher Is Better Llamafile 0.8.6 Test: llava-v1.5-7b-q4 - Acceleration: CPU Tokens Per Second > Higher Is Better Llamafile 0.8.6 Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU Tokens Per Second > Higher Is Better Llamafile 0.8.6 Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU Tokens Per Second > Higher Is Better OpenCV 4.7 Test: DNN - Deep Neural Network ms < Lower Is Better 2 x Intel Xeon E5-2680 v4 . 50567 |============================================ Llama.cpp b3067 Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf Tokens Per Second > Higher Is Better 2 x Intel Xeon E5-2680 v4 . 5.17 |============================================= Llamafile 0.8.6 Test: llava-v1.6-mistral-7b.Q8_0 - Acceleration: CPU Tokens Per Second > Higher Is Better Llamafile 0.8.6 Test: mistral-7b-instruct-v0.2.Q5_K_M - Acceleration: CPU Tokens Per Second > Higher Is Better 2 x Intel Xeon E5-2680 v4 . 
5.80 |=============================================
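Several OpenVINO models were run in both FP16 and FP16-INT8 precision, so the quantization speedup on this Broadwell-EP system (which lacks VNNI) can be read directly off the FPS figures. A small illustrative Python sketch; the dictionary layout and helper function are mine, the numbers are copied from the report:

```python
# FPS results for models run at both precisions, taken from the
# OpenVINO 2024.0 section of this report (2 x Xeon E5-2680 v4).
FPS = {
    ("Weld Porosity Detection", "FP16"): 484.39,
    ("Weld Porosity Detection", "FP16-INT8"): 691.42,
    ("Handwritten English Recognition", "FP16"): 168.09,
    ("Handwritten English Recognition", "FP16-INT8"): 214.88,
    ("Age Gender Recognition Retail 0013", "FP16"): 15819.82,
    ("Age Gender Recognition Retail 0013", "FP16-INT8"): 21265.92,
}

def int8_speedup(model: str) -> float:
    """Ratio of FP16-INT8 throughput to FP16 throughput for one model."""
    return FPS[(model, "FP16-INT8")] / FPS[(model, "FP16")]

for model in sorted({m for m, _ in FPS}):
    print(f"{model}: {int8_speedup(model):.2f}x")
```

The speedups cluster around 1.3x to 1.4x, consistent with INT8 helping mainly through smaller weights and cache footprint on a CPU generation without dedicated INT8 dot-product instructions.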