lxc testing on Debian GNU/Linux 12 via the Phoronix Test Suite.
Redis 7.0.12 + memtier_benchmark: memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
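For readers who want to run memtier_benchmark by hand against a local Redis server, a minimal invocation might look like the sketch below; the host, port, thread split, and test duration are assumptions, not the exact parameters the test profile uses.

    # Hypothetical example (assumed host/port/duration): 10 threads x 50 clients = 500
    # connections, a 1:5 SET-to-GET ratio, and a 60-second run against local Redis.
    memtier_benchmark --server=127.0.0.1 --port=6379 --protocol=redis \
        --threads=10 --clients=50 --ratio=1:5 --test-time=60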
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
Protocol: Redis - Clients: 500 - Set To Get Ratio: 10:1
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
PyTorch 2.2.1 (batches/sec, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Device: CPU - Batch Size: 1 - Model: ResNet-152: 10.32 (SE +/- 0.07, N = 12, MIN: 6.9 / MAX: 11.18)
Device: CPU - Batch Size: 16 - Model: ResNet-50: 22.13 (SE +/- 0.07, N = 3, MIN: 19.61 / MAX: 22.61)
Device: CPU - Batch Size: 32 - Model: ResNet-50: 22.26 (SE +/- 0.28, N = 4, MIN: 18.7 / MAX: 23.31)
Device: CPU - Batch Size: 64 - Model: ResNet-50: 22.00 (SE +/- 0.23, N = 5, MIN: 15.47 / MAX: 22.89)
Device: CPU - Batch Size: 16 - Model: ResNet-152: 8.30 (SE +/- 0.06, N = 3, MIN: 6.51 / MAX: 8.57)
Device: CPU - Batch Size: 256 - Model: ResNet-50: 21.98 (SE +/- 0.23, N = 12, MIN: 15.73 / MAX: 23.11)
Device: CPU - Batch Size: 32 - Model: ResNet-152: 8.34 (SE +/- 0.01, N = 3, MIN: 6.58 / MAX: 8.48)
Device: CPU - Batch Size: 512 - Model: ResNet-50: 22.13 (SE +/- 0.21, N = 3, MIN: 19.45 / MAX: 22.71)
Device: CPU - Batch Size: 64 - Model: ResNet-152: 8.40 (SE +/- 0.08, N = 9, MIN: 6.58 / MAX: 8.89)
Device: CPU - Batch Size: 256 - Model: ResNet-152: 8.51 (SE +/- 0.12, N = 3, MIN: 6.24 / MAX: 8.82)
Device: CPU - Batch Size: 512 - Model: ResNet-152: 8.46 (SE +/- 0.09, N = 9, MIN: 1.24 / MAX: 8.92)
Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l: 5.64 (SE +/- 0.04, N = 3, MIN: 4.26 / MAX: 6.2)
Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l: 4.35 (SE +/- 0.04, N = 9, MIN: 3.6 / MAX: 4.55)
Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l: 4.29 (SE +/- 0.04, N = 9, MIN: 3.33 / MAX: 4.53)
Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l: 4.32 (SE +/- 0.06, N = 9, MIN: 3.4 / MAX: 4.6)
Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l: 4.45 (SE +/- 0.04, N = 3, MIN: 3.4 / MAX: 4.59)
Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l: 4.21 (SE +/- 0.04, N = 3, MIN: 3.28 / MAX: 4.35)
miniBUDE MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
miniBUDE 20210901 (Billion Interactions/s, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Implementation: OpenMP - Input Deck: BM1: 19.51 (SE +/- 0.09, N = 3)
Implementation: OpenMP - Input Deck: BM2: 20.62 (SE +/- 0.02, N = 3)
Compiler notes (both results above): (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
PlaidML This test profile uses the PlaidML deep learning framework, developed by Intel, to run various benchmarks. Learn more via the OpenBenchmarking.org test page.
FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.11/collections/__init__.py)
FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.11/collections/__init__.py)
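The failure above is a Python compatibility issue rather than a hardware one: the alias collections.Iterable was removed in Python 3.10, so PlaidML's import fails on the Python 3.11 interpreter used here. A quick way to confirm the cause (illustrative only, not a fix applied by the test profile):

    # Fails on Python 3.10 and newer: the abstract base classes are no longer
    # re-exported from the top-level collections module.
    python3 -c "from collections import Iterable"
    # Works on all current Python 3 releases; code hitting this error needs to
    # import from collections.abc instead.
    python3 -c "from collections.abc import Iterable"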
Algebraic Multi-Grid Benchmark AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.
Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better) - 2 x Intel Xeon E5-2680 v4: 702647233 (SE +/- 3355643.09, N = 3). Compiler notes: (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi
OpenVINO This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
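Outside of the Phoronix Test Suite, OpenVINO ships a benchmark_app tool that reports comparable throughput figures; a minimal sketch, where the model path and run time are placeholders:

    # Hypothetical example: measure CPU throughput for an OpenVINO IR model
    # for 60 seconds using the throughput performance hint.
    benchmark_app -m face-detection-0200.xml -d CPU -hint throughput -t 60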
OpenVINO 2024.0 (FPS, More Is Better) - Device: CPU - 2 x Intel Xeon E5-2680 v4:
Model: Face Detection FP16: 5.29 (SE +/- 0.01, N = 3)
Model: Person Detection FP16: 49.33 (SE +/- 0.27, N = 3)
Model: Person Detection FP32: 50.09 (SE +/- 0.33, N = 3)
Model: Vehicle Detection FP16: 343.83 (SE +/- 4.27, N = 3)
Model: Face Detection FP16-INT8: 6.58 (SE +/- 0.01, N = 3)
Model: Face Detection Retail FP16: 1379.83 (SE +/- 4.45, N = 3)
Model: Road Segmentation ADAS FP16: 130.77 (SE +/- 0.72, N = 3)
Model: Vehicle Detection FP16-INT8: 435.20 (SE +/- 0.12, N = 3)
Model: Weld Porosity Detection FP16: 484.39 (SE +/- 0.11, N = 3)
Model: Face Detection Retail FP16-INT8: 1299.65 (SE +/- 0.44, N = 3)
Model: Road Segmentation ADAS FP16-INT8: 202.26 (SE +/- 0.29, N = 3)
Model: Machine Translation EN To DE FP16: 55.26 (SE +/- 0.22, N = 3)
Model: Weld Porosity Detection FP16-INT8: 691.42 (SE +/- 0.53, N = 3)
Model: Person Vehicle Bike Detection FP16: 422.96 (SE +/- 0.06, N = 3)
Model: Noise Suppression Poconet-Like FP16: 567.32 (SE +/- 0.35, N = 3)
Model: Handwritten English Recognition FP16: 168.09 (SE +/- 0.13, N = 3)
Model: Person Re-Identification Retail FP16: 699.99 (SE +/- 0.71, N = 3)
Model: Age Gender Recognition Retail 0013 FP16: 15819.82 (SE +/- 56.67, N = 3)
Model: Handwritten English Recognition FP16-INT8: 214.88 (SE +/- 0.61, N = 3)
Model: Age Gender Recognition Retail 0013 FP16-INT8: 21265.92 (SE +/- 19.67, N = 3)
Compiler notes (all results above): (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
SVT-AV1 1080p 8-bit YUV To AV1 Video Encode
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
1080p 8-bit YUV To HEVC Video Encode
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
1080p 8-bit YUV To VP9 Video Encode
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
x264 This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.
H.264 Video Encoding
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
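Although the PTS run failed here, x264 itself can be exercised directly on a 1080p clip; a minimal sketch, with the input file name and preset as assumptions:

    # Hypothetical example: encode a 1080p Y4M clip with all available CPU threads,
    # discard the output, and read the encoder's frames-per-second summary.
    x264 --preset medium --threads auto -o /dev/null Bosphorus_1920x1080.y4m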
AOM AV1 3.9 (Frames Per Second, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K: 0.21 (SE +/- 0.00, N = 3)
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K: 5.36 (SE +/- 0.03, N = 3)
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K: 31.26 (SE +/- 0.24, N = 15)
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K: 12.00 (SE +/- 0.04, N = 3)
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K: 29.29 (SE +/- 0.19, N = 3)
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K: 31.83 (SE +/- 0.31, N = 3)
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K: 32.85 (SE +/- 0.34, N = 3)
Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K: 32.91 (SE +/- 0.13, N = 3)
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p: 0.65 (SE +/- 0.01, N = 3)
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p: 11.71 (SE +/- 0.08, N = 3)
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p: 65.12 (SE +/- 0.49, N = 15)
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p: 31.70 (SE +/- 0.28, N = 15)
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p: 66.04 (SE +/- 0.85, N = 15)
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p: 69.78 (SE +/- 1.11, N = 15)
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p: 71.48 (SE +/- 0.76, N = 15)
Encoder Mode: Speed 11 Realtime - Input: Bosphorus 1080p: 71.63 (SE +/- 0.80, N = 15)
Compiler notes (all results above): (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
miniBUDE MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
miniBUDE 20210901 (GFInst/s, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Implementation: OpenMP - Input Deck: BM1: 487.66 (SE +/- 2.27, N = 3)
Implementation: OpenMP - Input Deck: BM2: 515.58 (SE +/- 0.62, N = 3)
Compiler notes (both results above): (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
Hashcat Hashcat is an open-source, advanced password recovery tool supporting GPU acceleration with OpenCL, NVIDIA CUDA, and Radeon ROCm. Learn more via the OpenBenchmarking.org test page.
Benchmark: MD5
2 x Intel Xeon E5-2680 v4 - mgag200drmfb - Dell: The test quit with a non-zero exit status.
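Hashcat's built-in benchmark mode exercises the same MD5 workload independently of PTS; a minimal sketch:

    # Benchmark hash mode 0 (MD5) on whatever compute devices hashcat detects.
    hashcat -b -m 0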
Xmrig Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
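As a sanity check outside PTS, recent XMRig builds include an offline benchmark mode; a minimal sketch, where the 1M hash count mirrors the profile above but the flag should be treated as an assumption for your installed version:

    # Offline RandomX benchmark over 1,000,000 hashes; no pool connection is made.
    xmrig --bench=1M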
Xmrig 6.21 (H/s, More Is Better) - Hash Count: 1M - 2 x Intel Xeon E5-2680 v4:
Variant: KawPow: 8243.6 (SE +/- 18.01, N = 3)
Variant: Monero: 8225.4 (SE +/- 33.81, N = 3)
Variant: Wownero: 12688.7 (SE +/- 82.76, N = 3)
Variant: GhostRider: 1582.9 (SE +/- 0.69, N = 3)
Variant: CryptoNight-Heavy: 8230.4 (SE +/- 29.52, N = 3)
Variant: CryptoNight-Femto UPX2: 8252.7 (SE +/- 30.75, N = 3)
Compiler notes (all results above): (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
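The underlying tf_cnn_benchmarks.py script can also be driven directly; a minimal sketch of a CPU ResNet-50 throughput run, with the batch size and batch count as assumptions:

    # Hypothetical example: ResNet-50 throughput on the CPU with NHWC data layout.
    python3 tf_cnn_benchmarks.py --device=cpu --data_format=NHWC \
        --model=resnet50 --batch_size=32 --num_batches=100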
TensorFlow 2.16.1 (images/sec, More Is Better) - Device: CPU - Batch Size: 1 - Model: VGG-16 - 2 x Intel Xeon E5-2680 v4: 2.05 (SE +/- 0.02, N = 3)
Device: GPU - Batch Size: 256 - Model: VGG-16
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. The test quit with a non-zero exit status.
Device: GPU - Batch Size: 32 - Model: AlexNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 512 - Model: VGG-16
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 64 - Model: AlexNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 1 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 1 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 256 - Model: AlexNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 512 - Model: AlexNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 1 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 1 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 256 - Model: AlexNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 512 - Model: AlexNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 16 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 16 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 32 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 32 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 64 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 64 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 16 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 16 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 32 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 32 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 64 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 64 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 256 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 256 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 512 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 512 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 256 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 256 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 512 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 512 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
ONNX Runtime Model: GPT-2 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: GPT-2 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: yolov4 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: yolov4 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ZFNet-512 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ZFNet-512 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: T5 Encoder - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: T5 Encoder - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: bertsquad-12 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: bertsquad-12 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: super-resolution-10 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: super-resolution-10 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
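All of the ONNX Runtime failures above stem from a missing onnxruntime_perf_test binary rather than from the models themselves. With that tool built, a run equivalent in spirit to the Standard executor cases looks roughly like the sketch below; the model path and repetition count are assumptions.

    # Hypothetical example: time 100 inferences of an ONNX model on the CPU
    # execution provider with the sequential (standard) executor.
    ./onnxruntime/build/Linux/Release/onnxruntime_perf_test -e cpu -m times -r 100 yolov4.onnx
    # Adding -P switches to the parallel executor, matching the Parallel cases.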
Neural Magic DeepSparse 1.7 (items/sec, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 345.53 (SE +/- 0.63, N = 3)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream: 100.25 (SE +/- 0.28, N = 3)
GraphicsMagick This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample high resolution (currently 15400 x 6940) JPEG image. Learn more via the OpenBenchmarking.org test page.
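GraphicsMagick exposes this kind of measurement directly through its benchmark subcommand as well; a minimal sketch, with the input file and duration as assumptions:

    # Hypothetical example: repeat a swirl operation on a large JPEG for 60 seconds
    # and report the iteration rate.
    gm benchmark -duration 60 convert sample-photo.jpg -swirl 90 swirled.jpg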
GraphicsMagick 1.3.43 (Iterations Per Minute, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Operation: Swirl: 154 (SE +/- 3.41, N = 12)
Operation: Rotate: 82 (SE +/- 0.00, N = 3)
Operation: Sharpen: 55 (SE +/- 0.33, N = 3)
Operation: Enhanced: 81 (SE +/- 0.00, N = 3)
Operation: Resizing: 139 (SE +/- 2.05, N = 15)
Operation: Noise-Gaussian: 75 (SE +/- 0.33, N = 3)
Operation: HWB Color Space: 131 (SE +/- 0.33, N = 3)
Compiler notes (all results above): (CC) gcc options: -fopenmp -O2 -ltiff -ljbig -lwebpmux -lwebp -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lxml2 -lzstd -llzma -lz -lm -lpthread -lgomp
Cpuminer-Opt 24.3 (kH/s, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Algorithm: Magi: 474.91 (SE +/- 0.82, N = 3)
Algorithm: x20r: 5936.61 (SE +/- 2.22, N = 3)
Algorithm: scrypt: 208.91 (SE +/- 0.14, N = 3)
Algorithm: Deepcoin: 6558.28 (SE +/- 2.36, N = 3)
Algorithm: Ringcoin: 2590.17 (SE +/- 9.31, N = 3)
Algorithm: Blake-2 S: 124240 (SE +/- 210.08, N = 3)
Algorithm: Garlicoin: 2995.14 (SE +/- 17.79, N = 3)
Algorithm: Skeincoin: 29780 (SE +/- 215.02, N = 3)
Algorithm: Myriad-Groestl: 8846.55 (SE +/- 7.21, N = 3)
Algorithm: LBC, LBRY Credits: 10883 (SE +/- 3.33, N = 3)
Algorithm: Quad SHA-256, Pyrite: 45963 (SE +/- 459.94, N = 3)
Algorithm: Triple SHA-256, Onecoin: 64980 (SE +/- 5.77, N = 3)
Compiler notes (all results above): (CXX) g++ options: -O2 -lcurl -lz -lpthread -lgmp
CacheBench (MB/s, More Is Better) - r1:
Test: Write: 36891.33 (SE +/- 391.50, N = 5, MIN: 22723.54 / MAX: 48193.4)
Test: Read / Modify / Write: 65744.18 (SE +/- 147.90, N = 3, MIN: 46193.67 / MAX: 78582.68)
Compiler notes (both results above): (CC) gcc options: -O3 -lrt
Zstd Compression 1.5.4 (MB/s, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Compression Level: 3 - Decompression Speed: 682.6 (SE +/- 1.71, N = 3)
Compression Level: 8 - Decompression Speed: 668.3 (SE +/- 1.84, N = 3)
Compression Level: 12 - Decompression Speed: 626.5 (SE +/- 0.86, N = 3)
Compression Level: 19 - Decompression Speed: 578.7 (SE +/- 0.74, N = 3)
Compression Level: 3, Long Mode - Compression Speed: 585.4 (SE +/- 3.88, N = 3)
Compression Level: 3, Long Mode - Decompression Speed: 730.5 (SE +/- 2.39, N = 3)
Compression Level: 8, Long Mode - Compression Speed: 495.7 (SE +/- 1.63, N = 3)
Compression Level: 8, Long Mode - Decompression Speed: 686.6 (SE +/- 0.21, N = 3)
Compression Level: 19, Long Mode - Compression Speed: 6.19 (SE +/- 0.08, N = 3)
Compression Level: 19, Long Mode - Decompression Speed: 592.4 (SE +/- 0.40, N = 3)
Compiler notes (all results above): (CC) gcc options: -O3 -pthread -lz -llzma
7-Zip Compression 24.05 (MIPS, More Is Better) - Test: Compression Rating - 2 x Intel Xeon E5-2680 v4: 137422 (SE +/- 904.38, N = 3). Compiler notes: (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Stockfish 17 (Nodes Per Second, More Is Better) - Total Time - 2 x Intel Xeon E5-2680 v4: 43261094 (SE +/- 1759062.68, N = 6). Compiler notes: (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver
NAMD ATPase Simulation - 327,506 Atoms
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: FATAL ERROR: No simulation config file specified on command line.
NAMD 3.0 (ns/day, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Input: ATPase with 327,506 Atoms: 1.09268 (SE +/- 0.00795, N = 3)
Input: STMV with 1,066,628 Atoms: 0.32870 (SE +/- 0.00061, N = 3)
Memcached Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
Memcached 1.6.19 (Ops/sec, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Set To Get Ratio: 1:1: 1268136.64 (SE +/- 16311.34, N = 3)
Set To Get Ratio: 1:5: 2992050.82 (SE +/- 13235.65, N = 3)
Set To Get Ratio: 5:1: 809259.92 (SE +/- 3234.70, N = 3)
Set To Get Ratio: 1:10: 3191700.60 (SE +/- 2765.51, N = 3)
Set To Get Ratio: 1:100: 3195806.54 (SE +/- 26780.92, N = 3)
Compiler notes (all results above): (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Redis 7.0.12 + memtier_benchmark 2.0 (Ops/sec, More Is Better) - Protocol: Redis - 2 x Intel Xeon E5-2680 v4:
Clients: 50 - Set To Get Ratio: 1:5: 1609507.71 (SE +/- 3714.54, N = 3)
Clients: 50 - Set To Get Ratio: 5:1: 1361024.38 (SE +/- 2137.82, N = 3)
Clients: 100 - Set To Get Ratio: 1:1: 1365973.23 (SE +/- 4068.22, N = 3)
Clients: 100 - Set To Get Ratio: 1:5: 1480526.55 (SE +/- 7788.01, N = 3)
Clients: 100 - Set To Get Ratio: 5:1: 1303976.38 (SE +/- 3475.92, N = 3)
Clients: 50 - Set To Get Ratio: 10:1: 1331731.41 (SE +/- 2849.14, N = 3)
Clients: 50 - Set To Get Ratio: 1:10: 1517459.59 (SE +/- 4611.65, N = 3)
Clients: 100 - Set To Get Ratio: 10:1: 1277786.02 (SE +/- 4259.56, N = 3)
Clients: 100 - Set To Get Ratio: 1:10: 1481597.00 (SE +/- 8411.62, N = 3)
Compiler notes (all results above): (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
nginx This is a benchmark of the lightweight Nginx HTTP(S) web server. This test profile makes use of the wrk program for issuing HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
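Although both Nginx runs failed here, the wrk side of the profile is easy to reproduce by hand; a minimal sketch, with the URL, connection count, and duration as assumptions:

    # Hypothetical example: 30-second HTTPS load test with 16 threads and
    # 200 concurrent connections against a locally running Nginx instance.
    wrk -t 16 -c 200 -d 30s https://localhost:8443/index.html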
Connections: 1
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Connections: 20
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Redis 7.0.4 (Requests Per Second, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Test: SET - Parallel Connections: 50: 1574262.85 (SE +/- 15716.01, N = 5)
Test: GET - Parallel Connections: 500: 1769970.88 (SE +/- 19833.99, N = 3)
Test: LPOP - Parallel Connections: 50: 2203333.82 (SE +/- 17716.01, N = 15)
Test: SADD - Parallel Connections: 50: 1719230.85 (SE +/- 13772.36, N = 9)
Test: SET - Parallel Connections: 500: 1403511.36 (SE +/- 22841.41, N = 12)
Test: GET - Parallel Connections: 1000: 1783987.96 (SE +/- 7775.96, N = 3)
Test: LPOP - Parallel Connections: 500: 1955391.54 (SE +/- 30966.88, N = 15)
Test: LPUSH - Parallel Connections: 50: 1390421.08 (SE +/- 17610.01, N = 3)
Test: SADD - Parallel Connections: 500: 1542038.12 (SE +/- 8955.86, N = 3)
Test: SET - Parallel Connections: 1000: 1407706.52 (SE +/- 13555.82, N = 6)
Test: LPOP - Parallel Connections: 1000: 1785408.10 (SE +/- 116725.42, N = 12)
Test: LPUSH - Parallel Connections: 500: 1251818.18 (SE +/- 10915.30, N = 7)
Test: SADD - Parallel Connections: 1000: 1571264.33 (SE +/- 10619.90, N = 3)
Test: LPUSH - Parallel Connections: 1000: 1236129.42 (SE +/- 7237.07, N = 3)
Compiler notes (all results above): (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
AI Benchmark Alpha AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'
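The error simply indicates that TensorFlow was not importable in this Python environment. Installed by hand, the library's usage is a few lines; the package names are as published on PyPI, but treat exact versions as assumptions.

    # Install the dependency the test complained about plus the benchmark itself,
    # then run the standard inference/training sweep.
    pip install tensorflow ai-benchmark
    python3 -c "from ai_benchmark import AIBenchmark; AIBenchmark().run()"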
CLOMP CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
CLOMP 1.2 (Speedup, More Is Better) - Static OMP Speedup - 2 x Intel Xeon E5-2680 v4: 25.6 (SE +/- 0.29, N = 15). Compiler notes: (CC) gcc options: -fopenmp -O3 -lm
Llama.cpp Model: llama-2-7b.Q4_0.gguf
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: main: error: unable to load model
Model: llama-2-13b.Q4_0.gguf
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: main: error: unable to load model
Model: llama-2-70b-chat.Q5_0.gguf
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: main: error: unable to load model
Llamafile Test: llava-v1.5-7b-q4 - Acceleration: CPU
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./run-llava: line 2: ./llava-v1.6-mistral-7b.Q8_0.llamafile.86: No such file or directory
Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./run-mistral: line 2: ./mistral-7b-instruct-v0.2.Q5_K_M.llamafile.86: No such file or directory
Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./run-wizardcoder: line 2: ./wizardcoder-python-34b-v1.0.Q6_K.llamafile.86: No such file or directory
Llama.cpp b3067 (Tokens Per Second, More Is Better) - Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf - 2 x Intel Xeon E5-2680 v4: 5.17 (SE +/- 0.13, N = 12). Compiler notes: (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas
Llamafile Test: llava-v1.6-mistral-7b.Q8_0 - Acceleration: CPU
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./run-llava: line 2: ./llava-v1.6-mistral-7b.Q8_0.llamafile.86: No such file or directory
Llamafile 0.8.6 (Tokens Per Second, More Is Better) - Test: mistral-7b-instruct-v0.2.Q5_K_M - Acceleration: CPU - 2 x Intel Xeon E5-2680 v4: 5.80 (SE +/- 0.17, N = 12)
Llama.cpp Llama.cpp is a port of Facebook's LLaMA model in C/C++ developed by Georgi Gerganov. Llama.cpp allows the inference of LLaMA and other supported models in C/C++. For CPU inference Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.
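The Llama.cpp results below come from its bundled main example program; a minimal CPU-only invocation looks like the sketch here, with the prompt, token count, and thread count as assumptions:

    # Hypothetical example: generate 128 tokens from a 4-bit LLaMA-2 7B GGUF model
    # on 56 CPU threads and read the tokens-per-second summary printed at the end.
    ./main -m llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 steps:" -n 128 -t 56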
Llama.cpp b1808 (Tokens Per Second, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Model: llama-2-7b.Q4_0.gguf: 7.48 (SE +/- 0.04, N = 3)
Model: llama-2-13b.Q4_0.gguf: 6.41 (SE +/- 0.24, N = 12)
Model: llama-2-70b-chat.Q5_0.gguf: 0.90 (SE +/- 0.03, N = 9)
Compiler notes (all results above): (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas
spaCy The spaCy library is an open-source solution for advanced natural language processing (NLP); it leverages Python and is a leading NLP toolkit. This test profile times spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ValueError: 'in' is not a valid parameter name
NAS Parallel Benchmarks NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
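Because this profile uses the MPI build of NPB, each kernel is a standalone binary launched through mpirun; a minimal sketch, with the rank count and binary path as assumptions for this 2 x 14-core (56-thread) system:

    # Hypothetical example: run the class C Embarrassingly Parallel kernel across
    # 56 MPI ranks and read the Mop/s total from its report.
    mpirun -np 56 ./bin/ep.C.x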
NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Test / Class: EP.C: 2736.22 (SE +/- 10.98, N = 3)
Test / Class: LU.C: 46814.61 (SE +/- 40.59, N = 3)
Notes (both results above): 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4
Apache Siege 2.4.62 (Transactions Per Second, More Is Better) - 2 x Intel Xeon E5-2680 v4:
Concurrent Users: 10: 23642.07 (SE +/- 129.08, N = 3)
Concurrent Users: 50: 22369.70 (SE +/- 8.83, N = 3)
Concurrent Users: 100: 21617.09 (SE +/- 38.51, N = 3)
Concurrent Users: 200: 20413.98 (SE +/- 52.17, N = 3)
Concurrent Users: 500: 20425.82 (SE +/- 71.12, N = 3)
Concurrent Users: 1000: 20260.73 (SE +/- 81.59, N = 3)
Compiler notes (all results above): (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto -lz
InfluxDB This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
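The InfluxDB numbers are generated with the inch load-testing tool; a minimal sketch mirroring one of the configurations above, with the host as an assumption:

    # Hypothetical example: 64 concurrent write streams, 10000-point batches,
    # tag cardinality 2,5000,1, and 10000 points per series, as in the middle run.
    inch -host http://localhost:8086 -c 64 -b 10000 -t 2,5000,1 -p 10000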
InfluxDB 1.8.2 (val/sec, More Is Better) - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - 2 x Intel Xeon E5-2680 v4:
Concurrent Streams: 4: 281287.4 (SE +/- 3039.46, N = 3)
Concurrent Streams: 64: 938427.0 (SE +/- 6591.71, N = 12)
Concurrent Streams: 1024: 966559.2 (SE +/- 2360.15, N = 3)
Pennant 1.0.1 (Hydro Cycle Time - Seconds, Fewer Is Better) - Test: leblancbig - 2 x Intel Xeon E5-2680 v4: 21.82 (SE +/- 0.04, N = 3). Compiler notes: (CXX) g++ options: -fopenmp -lmpi_cxx -lmpi
LevelDB LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.
Benchmark: Hot Read
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Fill Sync
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Overwrite
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Random Fill
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Random Read
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Seek Random
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Random Delete
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Sequential Fill
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18 - Model: SqueezeNet (Microseconds, fewer is better) - 2 x Intel Xeon E5-2680 v4: 3967.06 (SE +/- 67.64, N = 12)
Caffe This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Model: AlexNet - Acceleration: CPU - Iterations: 100
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: AlexNet - Acceleration: CPU - Iterations: 200
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: AlexNet - Acceleration: CPU - Iterations: 1000
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: GoogleNet - Acceleration: CPU - Iterations: 100
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: GoogleNet - Acceleration: CPU - Iterations: 200
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: GoogleNet - Acceleration: CPU - Iterations: 1000
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
PyBench This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, fewer is better) - 2 x Intel Xeon E5-2680 v4: 1161 (SE +/- 1.20, N = 3)
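For context on what this total aggregates, below is a minimal, hypothetical sketch in the spirit of PyBench's micro-tests: it times a loop of builtin function calls and a pair of nested for loops for several rounds and reports per-round averages. This is not PyBench's actual code; the test bodies, loop counts, and 20-round default are illustrative choices only.

    import time

    def builtin_function_calls(n=200_000):
        # Repeatedly call a cheap builtin, as PyBench-style micro-tests do.
        for _ in range(n):
            len("pybench")

    def nested_for_loops(n=300):
        # A pair of tight nested loops with an empty body.
        for i in range(n):
            for j in range(n):
                pass

    def average_ms(func, rounds=20):
        # Run the micro-test for several rounds and average the wall time in ms.
        times = []
        for _ in range(rounds):
            start = time.perf_counter()
            func()
            times.append((time.perf_counter() - start) * 1000.0)
        return sum(times) / len(times)

    if __name__ == "__main__":
        total = 0.0
        for test in (builtin_function_calls, nested_for_loops):
            ms = average_ms(test)
            total += ms
            print(f"{test.__name__}: {ms:.2f} ms")
        print(f"total of averages: {total:.2f} ms")

The reported PyBench number is analogous to that final total: a sum of per-test average times, so lower is better.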
Renaissance Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
Test: Scala Dotty
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Test: Apache Spark PageRank
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
oneDNN
Harness: Deconvolution Batch deconv_1d - Data Type: f32
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: driver: ERROR: unknown option: deconv '--cfg=f32'
Harness: Convolution Batch conv_alexnet - Data Type: f32
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: driver: ERROR: unknown option: conv '--cfg=f32'
Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: driver: ERROR: unknown option: conv '--cfg=f32'
oneDNN 3.6 (ms, fewer is better) - 2 x Intel Xeon E5-2680 v4:
Harness: IP Shapes 1D - Engine: CPU: 3.36355 (SE +/- 0.00512, N = 3; MIN: 3.18)
Harness: IP Shapes 3D - Engine: CPU: 12.04 (SE +/- 0.03, N = 3; MIN: 11.85)
Harness: Convolution Batch Shapes Auto - Engine: CPU: 14.00 (SE +/- 0.02, N = 3; MIN: 13.81)
Harness: Deconvolution Batch shapes_1d - Engine: CPU: 13.17 (SE +/- 0.13, N = 3; MIN: 9.29)
Harness: Deconvolution Batch shapes_3d - Engine: CPU: 5.21706 (SE +/- 0.04255, N = 15; MIN: 4.91)
Harness: Recurrent Neural Network Training - Engine: CPU: 1845.75 (SE +/- 10.64, N = 3; MIN: 1764.7)
Harness: Recurrent Neural Network Inference - Engine: CPU: 1018.67 (SE +/- 7.40, N = 3; MIN: 952.1)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
Mobile Neural Network 2.9.b11b7037d (ms, fewer is better) - 2 x Intel Xeon E5-2680 v4:
Model: nasnet: 22.03 (SE +/- 0.35, N = 9; MIN: 16.48 / MAX: 47.88)
Model: mobilenetV3: 3.504 (SE +/- 0.036, N = 9; MIN: 2.93 / MAX: 5.55)
Model: squeezenetv1.1: 5.825 (SE +/- 0.151, N = 9; MIN: 4.27 / MAX: 16.91)
Model: resnet-v2-50: 26.96 (SE +/- 0.18, N = 9; MIN: 24.27 / MAX: 169.42)
Model: SqueezeNetV1.0: 7.976 (SE +/- 0.154, N = 9; MIN: 6.03 / MAX: 23.01)
Model: MobileNetV2_224: 4.947 (SE +/- 0.071, N = 9; MIN: 3.92 / MAX: 8.48)
Model: mobilenet-v1-1.0: 3.772 (SE +/- 0.024, N = 9; MIN: 3.26 / MAX: 10.66)
Model: inception-v3: 38.38 (SE +/- 0.24, N = 9; MIN: 29.94 / MAX: 109.16)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
NCNN 20230517 (ms, fewer is better) - 2 x Intel Xeon E5-2680 v4:
Target: CPU-v2-v2 - Model: mobilenet-v2: 7.93 (SE +/- 0.24, N = 12; MIN: 7.1 / MAX: 581.19)
Target: CPU-v3-v3 - Model: mobilenet-v3: 7.49 (SE +/- 0.08, N = 12; MIN: 6.65 / MAX: 69.83)
Target: CPU - Model: shufflenet-v2: 8.79 (SE +/- 0.11, N = 12; MIN: 7.42 / MAX: 103.67)
Target: CPU - Model: mnasnet: 7.06 (SE +/- 0.07, N = 12; MIN: 5.98 / MAX: 52.18)
Target: CPU - Model: efficientnet-b0: 11.22 (SE +/- 0.11, N = 12; MIN: 10.09 / MAX: 196.48)
Target: CPU - Model: blazeface: 3.76 (SE +/- 0.06, N = 12; MIN: 3.22 / MAX: 97.74)
Target: CPU - Model: googlenet: 18.36 (SE +/- 0.14, N = 12; MIN: 16.63 / MAX: 156.48)
Target: CPU - Model: vgg16: 41.46 (SE +/- 0.41, N = 12; MIN: 36.72 / MAX: 331.43)
Target: CPU - Model: resnet18: 11.58 (SE +/- 0.08, N = 12; MIN: 10.7 / MAX: 110.31)
Target: CPU - Model: alexnet: 9.48 (SE +/- 0.10, N = 12; MIN: 8.34 / MAX: 111.39)
Target: CPU - Model: resnet50: 23.76 (SE +/- 0.35, N = 12; MIN: 20.94 / MAX: 184.92)
Target: CPUv2-yolov3v2-yolov3 - Model: mobilenetv2-yolov3: 19.56 (SE +/- 0.27, N = 12; MIN: 17.72 / MAX: 164.14)
Target: CPU - Model: yolov4-tiny: 32.22 (SE +/- 0.25, N = 12; MIN: 29.25 / MAX: 192.05)
Target: CPU - Model: squeezenet_ssd: 19.85 (SE +/- 0.19, N = 12; MIN: 16.83 / MAX: 168.17)
Target: CPU - Model: regnety_400m: 33.52 (SE +/- 0.27, N = 12; MIN: 30.18 / MAX: 387.55)
Target: CPU - Model: vision_transformer: 82.40 (SE +/- 0.25, N = 12; MIN: 76.17 / MAX: 439.37)
Target: CPU - Model: FastestDet: 10.09 (SE +/- 0.12, N = 12; MIN: 8.51 / MAX: 179.65)
Target: Vulkan GPU - Model: mobilenet: 19.60 (SE +/- 0.27, N = 3; MIN: 18.34 / MAX: 116.67)
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2: 7.67 (SE +/- 0.20, N = 3; MIN: 7.11 / MAX: 73.78)
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3: 7.68 (SE +/- 0.37, N = 3; MIN: 6.95 / MAX: 135.3)
Target: Vulkan GPU - Model: shufflenet-v2: 8.66 (SE +/- 0.12, N = 3; MIN: 7.95 / MAX: 54.97)
Target: Vulkan GPU - Model: mnasnet: 7.07 (SE +/- 0.19, N = 3; MIN: 6.55 / MAX: 55.91)
Target: Vulkan GPU - Model: efficientnet-b0: 11.31 (SE +/- 0.13, N = 3; MIN: 10.54 / MAX: 129.33)
Target: Vulkan GPU - Model: blazeface: 3.90 (SE +/- 0.19, N = 3; MIN: 3.45 / MAX: 32.38)
Target: Vulkan GPU - Model: googlenet: 18.75 (SE +/- 0.09, N = 3; MIN: 17.58 / MAX: 61.73)
Target: Vulkan GPU - Model: vgg16: 42.45 (SE +/- 0.15, N = 3; MIN: 38.84 / MAX: 131.47)
Target: Vulkan GPU - Model: resnet18: 11.47 (SE +/- 0.21, N = 3; MIN: 10.74 / MAX: 65.89)
Target: Vulkan GPU - Model: alexnet: 9.81 (SE +/- 0.26, N = 3; MIN: 9.08 / MAX: 107.57)
Target: Vulkan GPU - Model: resnet50: 24.06 (SE +/- 0.32, N = 3; MIN: 22.21 / MAX: 124.5)
Target: Vulkan GPUv2-yolov3v2-yolov3 - Model: mobilenetv2-yolov3: 19.60 (SE +/- 0.27, N = 3; MIN: 18.34 / MAX: 116.67)
Target: Vulkan GPU - Model: yolov4-tiny: 32.07 (SE +/- 0.18, N = 3; MIN: 29.66 / MAX: 113.51)
Target: Vulkan GPU - Model: squeezenet_ssd: 20.46 (SE +/- 0.44, N = 3; MIN: 18.81 / MAX: 75.68)
Target: Vulkan GPU - Model: regnety_400m: 33.70 (SE +/- 0.16, N = 3; MIN: 31.52 / MAX: 136.78)
Target: Vulkan GPU - Model: vision_transformer: 82.07 (SE +/- 0.29, N = 3; MIN: 77.72 / MAX: 211.05)
Target: Vulkan GPU - Model: FastestDet: 9.56 (SE +/- 0.10, N = 3; MIN: 8.52 / MAX: 49.42)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.3 (ms, fewer is better) - 2 x Intel Xeon E5-2680 v4:
Target: CPU - Model: DenseNet: 3758.32 (SE +/- 13.71, N = 3; MIN: 3547 / MAX: 3972.77)
Target: CPU - Model: MobileNet v2: 386.78 (SE +/- 1.50, N = 3; MIN: 377.08 / MAX: 410.1)
Target: CPU - Model: SqueezeNet v2: 93.73 (SE +/- 0.05, N = 3; MIN: 92.89 / MAX: 102.53)
Target: CPU - Model: SqueezeNet v1.1: 351.57 (SE +/- 0.19, N = 3; MIN: 348.06 / MAX: 361.44)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenVINO This is a test of the Intel OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
OpenVINO 2024.0 (ms, fewer is better) - 2 x Intel Xeon E5-2680 v4:
Model: Face Detection FP16 - Device: CPU: 1675.23 (SE +/- 0.53, N = 3; MIN: 1524.28 / MAX: 1770.4)
Model: Person Detection FP16 - Device: CPU: 182.19 (SE +/- 0.99, N = 3; MIN: 143.44 / MAX: 226.91)
Model: Person Detection FP32 - Device: CPU: 179.42 (SE +/- 1.19, N = 3; MIN: 153.31 / MAX: 227.77)
Model: Vehicle Detection FP16 - Device: CPU: 26.15 (SE +/- 0.32, N = 3; MIN: 19.52 / MAX: 45.16)
Model: Face Detection FP16-INT8 - Device: CPU: 1352.53 (SE +/- 0.25, N = 3; MIN: 1293.5 / MAX: 1386.35)
Model: Face Detection Retail FP16 - Device: CPU: 6.50 (SE +/- 0.02, N = 3; MIN: 5.8 / MAX: 20.01)
Model: Road Segmentation ADAS FP16 - Device: CPU: 68.76 (SE +/- 0.38, N = 3; MIN: 43.42 / MAX: 111.07)
Model: Vehicle Detection FP16-INT8 - Device: CPU: 20.66 (SE +/- 0.01, N = 3; MIN: 20.17 / MAX: 30.51)
Model: Weld Porosity Detection FP16 - Device: CPU: 18.55 (SE +/- 0.00, N = 3; MIN: 17.41 / MAX: 36.64)
Model: Face Detection Retail FP16-INT8 - Device: CPU: 6.91 (SE +/- 0.00, N = 3; MIN: 6.78 / MAX: 13.02)
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU: 44.46 (SE +/- 0.06, N = 3; MIN: 42.03 / MAX: 58.71)
Model: Machine Translation EN To DE FP16 - Device: CPU: 162.59 (SE +/- 0.63, N = 3; MIN: 131.74 / MAX: 222.03)
Model: Weld Porosity Detection FP16-INT8 - Device: CPU: 40.46 (SE +/- 0.03, N = 3; MIN: 39.84 / MAX: 47.66)
Model: Person Vehicle Bike Detection FP16 - Device: CPU: 21.23 (SE +/- 0.00, N = 3; MIN: 17.51 / MAX: 33.53)
Model: Noise Suppression Poconet-Like FP16 - Device: CPU: 15.74 (SE +/- 0.01, N = 3; MIN: 13.82 / MAX: 29.98)
Model: Handwritten English Recognition FP16 - Device: CPU: 53.49 (SE +/- 0.04, N = 3; MIN: 49.19 / MAX: 73.08)
Model: Person Re-Identification Retail FP16 - Device: CPU: 12.84 (SE +/- 0.01, N = 3; MIN: 12.15 / MAX: 23.49)
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU: 1.76 (SE +/- 0.01, N = 3; MIN: 1.68 / MAX: 9.16)
Model: Handwritten English Recognition FP16-INT8 - Device: CPU: 130.16 (SE +/- 0.36, N = 3; MIN: 116.02 / MAX: 172.78)
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU: 1.30 (SE +/- 0.00, N = 3; MIN: 1.27 / MAX: 10.77)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenCV This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
OpenCV 4.7 - Test: DNN - Deep Neural Network (ms, fewer is better) - 2 x Intel Xeon E5-2680 v4: 50567 (SE +/- 2440.79, N = 15). 1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared
Neural Magic DeepSparse This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
Neural Magic DeepSparse 1.7 (ms/batch, fewer is better) - 2 x Intel Xeon E5-2680 v4:
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: 806.62 (SE +/- 2.74, N = 3)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream: 9.9602 (SE +/- 0.0282, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: 805.37 (SE +/- 1.38, N = 3)
Glibc Benchmarks 2.39 (ns, fewer is better) - 2 x Intel Xeon E5-2680 v4:
Benchmark: cos: 65.51 (SE +/- 0.33, N = 3)
Benchmark: ffs: 4.38770 (SE +/- 0.00101, N = 3)
Benchmark: sqrt: 5.85246 (SE +/- 0.00191, N = 3)
Benchmark: ffsll: 4.38612 (SE +/- 0.00074, N = 3)
Benchmark: pthread_once: 5.12272 (SE +/- 0.01136, N = 3)
1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s
Rodinia Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4: 133.48 (SE +/- 0.28, N = 3). 1. (CXX) g++ options: -O2 -lOpenCL
CP2K Molecular Dynamics
Fayalite-FIST Data
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ERROR: At least one command line argument must be specified
Timed Linux Kernel Compilation This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
Time To Compile
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: kernel/rcu/tree.c:5174: fatal error: error writing to /tmp/cc31CL9j.s: Success
Timed LLVM Compilation This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.
Time To Compile
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: llvm-16.0.0.src/tools/llvm-readobj/ELFDumper.cpp:7556:1: fatal error: error writing to /tmp/ccjmapwn.s: No space left on device
C-Ray 2.0 - Total Time - 4K, 16 Rays Per Pixel (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4: 0.618 (SE +/- 0.005, N = 3). 1. (CC) gcc options: -lpthread -lm
POV-Ray This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.
POV-Ray 3.7.0.7 - Trace Time (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4: 23.76 (SE +/- 0.08, N = 3). 1. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -R/usr/lib -lSM -lICE -lX11 -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system
Rust Mandelbrot This test profile is of the combined time for the serial and parallel Mandelbrot sets written in Rustlang via willi-kappler/mandel-rust. Learn more via the OpenBenchmarking.org test page.
Rust Mandelbrot - Time To Complete Serial/Parallel Mandelbrot (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4: 45.52 (SE +/- 0.13, N = 3). 1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs
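The benchmark itself is the Rust mandel-rust program, which times the same render both serially and in parallel. Purely as an illustration of the workload, the escape-time iteration it performs is sketched below in Python; the grid size, iteration cap, and ASCII output are arbitrary choices here, not the benchmark's settings.

    def mandelbrot_escape(c, max_iter=256):
        # Classic escape-time iteration: z <- z^2 + c until |z| > 2 or the cap is hit.
        z = 0j
        for i in range(max_iter):
            z = z * z + c
            if abs(z) > 2.0:
                return i
        return max_iter

    def render(width=80, height=40, max_iter=256):
        # Map each character cell to a point in the complex plane and count iterations.
        rows = []
        for y in range(height):
            im = -1.2 + 2.4 * y / (height - 1)
            row = ""
            for x in range(width):
                re = -2.0 + 3.0 * x / (width - 1)
                row += " .:-=+*#%@"[min(9, mandelbrot_escape(complex(re, im), max_iter) // 26)]
            rows.append(row)
        return "\n".join(rows)

    if __name__ == "__main__":
        print(render())

Every pixel is independent, which is why the parallel half of the benchmark scales well with core count and why this dual-socket system finishes the combined run in under a minute.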
Radiance Benchmark This is a benchmark of NREL Radiance, a synthetic imaging system that is open-source and developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.
Test: Serial
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: make: time: No such file or directory
Test: SMP Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: make: time: No such file or directory
Rodinia
Rodinia 3.1 (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4:
Test: OpenMP HotSpot3D: 141.43 (SE +/- 0.06, N = 3)
Test: OpenMP Leukocyte: 66.19 (SE +/- 0.72, N = 3)
Test: OpenMP CFD Solver: 11.16 (SE +/- 0.11, N = 3)
Test: OpenMP Streamcluster: 13.75 (SE +/- 0.13, N = 15)
1. (CXX) g++ options: -O2 -lOpenCL
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations and as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11 (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4:
Input: X3D-benchmarking input.i3d: 1163.96 (SE +/- 16.45, N = 9)
Input: input.i3d 129 Cells Per Direction: 10.65 (SE +/- 0.10, N = 6)
Input: input.i3d 193 Cells Per Direction: 46.69 (SE +/- 0.11, N = 3)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
OpenFOAM OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
Input: drivaerFastback, Small Mesh Size
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: [0] --> FOAM FATAL ERROR:
OpenRadioss OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
OpenRadioss 2023.09.15 - Model: Bumper Beam (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4: 143.24 (SE +/- 1.00, N = 3)
Model: Ford Taurus 10M
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
Model: Chrysler Neon 1M
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ** ERROR: INPUT FILE /NEON1M11_0001.rad NOT FOUND
OpenFOAM
Input: motorBike
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
Blender 4.2 (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4:
Blend File: BMW27 - Compute: CPU-Only: 72.25 (SE +/- 0.32, N = 3)
Blend File: Classroom - Compute: CPU-Only: 215.31 (SE +/- 2.43, N = 4)
Blend File: Fishy Cat - Compute: CPU-Only: 105.76 (SE +/- 0.32, N = 3)
Blend File: Barbershop - Compute: CPU-Only: 736.44 (SE +/- 0.76, N = 3)
Blend File: Pabellon Barcelona - Compute: CPU-Only: 233.47 (SE +/- 0.28, N = 3)
Timed Wasmer Compilation This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
Timed Wasmer Compilation 2.3 - Time To Compile (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4: 76.41 (SE +/- 0.63, N = 3). 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
DeepSpeech Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
DeepSpeech 0.6 - Acceleration: CPU (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4: 193.78 (SE +/- 0.69, N = 3)
RNNoise RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute-long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
RNNoise 0.2 - Input: 26 Minute Long Talking Sample (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4: 18.19 (SE +/- 0.26, N = 3). 1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden
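Because the input length is fixed at 26 minutes, the wall time above translates directly into a real-time factor. A back-of-the-envelope calculation, assuming the reported 18.19 s covers denoising the full sample:

    # Real-time factor for the RNNoise result above.
    audio_seconds = 26 * 60          # 26-minute talking sample
    wall_seconds = 18.19             # mean single-threaded wall time reported above
    rtf = audio_seconds / wall_seconds
    print(f"~{rtf:.0f}x faster than real time ({rtf:.1f})")  # roughly 86x

So even single-threaded, this system denoises the sample at roughly 86 times real time.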
Numenta Anomaly Benchmark Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
Numenta Anomaly Benchmark 1.1 - Detector: KNN CAD (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4: 169.28 (SE +/- 1.37, N = 3)
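The KNN CAD detector scores each incoming point by its distance to the k nearest neighbours among recent history, wrapped in a conformal test in the real detector. The toy sketch below only illustrates that k-NN-distance idea on a sliding window; it is not NAB's KNN CAD implementation or its scoring mechanism, and the window size, k, and test signal are arbitrary.

    from collections import deque

    def knn_distance_scores(stream, window=100, k=5):
        # For each value, score = mean distance to its k nearest neighbours
        # within the last `window` observations (toy streaming anomaly score).
        history = deque(maxlen=window)
        scores = []
        for x in stream:
            if len(history) >= k:
                dists = sorted(abs(x - h) for h in history)
                scores.append(sum(dists[:k]) / k)
            else:
                scores.append(0.0)   # not enough history yet
            history.append(x)
        return scores

    if __name__ == "__main__":
        data = [0.0] * 50 + [5.0] + [0.0] * 50   # flat signal with one spike
        scores = knn_distance_scores(data)
        print("max score at index", scores.index(max(scores)))  # the spike at index 50

NAB runs this kind of detector over dozens of labeled time series, which is why the timing above is in the minutes rather than seconds.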
Mlpack Benchmark Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page. All four runs below failed with the same PyYAML TypeError; a short sketch of the API change behind it follows the failures.
Benchmark: scikit_ica
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: TypeError: load_all() missing 1 required positional argument: 'Loader'
Benchmark: scikit_qda
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: TypeError: load_all() missing 1 required positional argument: 'Loader'
Benchmark: scikit_svm
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: TypeError: load_all() missing 1 required positional argument: 'Loader'
Benchmark: scikit_linearridgeregression
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: TypeError: load_all() missing 1 required positional argument: 'Loader'
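The repeated TypeError is the PyYAML 6.0 API change: yaml.load() and yaml.load_all() now require an explicit Loader argument. A minimal sketch of the difference, assuming the mlpack benchmark scripts parse their YAML configs with a bare yaml.load_all() call (that call site and the YAML content below are assumptions; only the error text above is from the run):

    import yaml

    CONFIG = """\
    ---
    library: mlpack
    benchmark: scikit_ica
    ---
    library: mlpack
    benchmark: scikit_svm
    """

    # Pre-PyYAML-6.0 style call; with PyYAML >= 6.0 this raises
    # "TypeError: load_all() missing 1 required positional argument: 'Loader'".
    try:
        docs = list(yaml.load_all(CONFIG))
    except TypeError as exc:
        print("old-style call failed:", exc)

    # Working equivalents on PyYAML 6.0: pass a Loader explicitly, or use safe_load_all().
    docs = list(yaml.load_all(CONFIG, Loader=yaml.SafeLoader))
    docs = list(yaml.safe_load_all(CONFIG))
    print([d["benchmark"] for d in docs])

Either updating the benchmark scripts to pass a Loader or pinning an older PyYAML would let these runs proceed; the timings themselves are unaffected since nothing was measured.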
Scikit-Learn Scikit-learn is a BSD-licensed Python module for machine learning built on NumPy and SciPy. Learn more via the OpenBenchmarking.org test page. Every benchmark below failed with the same liblapack ImportError; a diagnostic sketch follows the failures.
Benchmark: GLM
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: SAGA
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Tree
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Lasso
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Glmnet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Sparsify
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Ward
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: MNIST Dataset
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Neighbors
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: SGD Regression
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: SGDOneClassSVM
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Lasso Path
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Isolation Forest
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Fast KMeans
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Text Vectorizers
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Hierarchical
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot OMP vs. LARS
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Feature Expansions
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: LocalOutlierFactor
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: TSNE MNIST Dataset
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Isotonic / Logistic
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Incremental PCA
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Hist Gradient Boosting
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Parallel Pairwise
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Isotonic / Pathological
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: RCV1 Logreg Convergence
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Sample Without Replacement
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Covertype Dataset Benchmark
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Hist Gradient Boosting Adult
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Isotonic / Perturbed Logarithm
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Hist Gradient Boosting Threading
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Singular Value Decomposition
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Hist Gradient Boosting Higgs Boson
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: 20 Newsgroups / Logistic Regression
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Polynomial Kernel Approximation
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Non-Negative Matrix Factorization
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Hist Gradient Boosting Categorical Only
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Kernel PCA Solvers / Time vs. N Samples
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Kernel PCA Solvers / Time vs. N Components
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Sparse Random Projections / 100 Iterations
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
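Every scikit-learn benchmark fails at import time because the system liblapack.so.3 references the OpenBLAS-internal symbol gotoblas that is not being resolved, which usually points at mismatched BLAS/LAPACK update-alternatives providers inside the container. A small diagnostic sketch; the library path comes from the error above, and treating an RTLD_NOW dlopen as a faithful reproduction of the failing import is an assumption:

    import ctypes
    import os

    LAPACK = "/lib/x86_64-linux-gnu/liblapack.so.3"  # path taken from the ImportError above

    try:
        # Force immediate symbol resolution so unresolved symbols surface at load time;
        # a broken alternatives setup shows up here as "undefined symbol: gotoblas".
        ctypes.CDLL(LAPACK, mode=os.RTLD_NOW | os.RTLD_GLOBAL)
        print("liblapack.so.3 resolved all symbols")
    except OSError as exc:
        print("liblapack.so.3 failed to load:", exc)
        print("check that the libblas.so.3 and liblapack.so.3 alternatives use the same provider")

Pointing both the BLAS and LAPACK alternatives at the same provider (for example, both at OpenBLAS) in the lxc image would likely clear this entire block of failures.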
Whisper.cpp 1.6.2 (Seconds, fewer is better) - 2 x Intel Xeon E5-2680 v4:
Model: ggml-base.en - Input: 2016 State of the Union: 202.23 (SE +/- 1.87, N = 3)
Model: ggml-small.en - Input: 2016 State of the Union: 543.89 (SE +/- 3.14, N = 3)
Model: ggml-medium.en - Input: 2016 State of the Union: 1539.31 (SE +/- 3.76, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2
XNNPACK b7b048 (us, fewer is better) - 2 x Intel Xeon E5-2680 v4:
Model: FP32MobileNetV1: 2299 (SE +/- 19.22, N = 3)
Model: FP32MobileNetV2: 3725 (SE +/- 275.46, N = 3)
Model: FP32MobileNetV3Large: 4933 (SE +/- 109.39, N = 3)
Model: FP32MobileNetV3Small: 3674 (SE +/- 120.36, N = 3)
Model: FP16MobileNetV1: 2663 (SE +/- 54.55, N = 3)
Model: FP16MobileNetV2: 3685 (SE +/- 209.77, N = 3)
Model: FP16MobileNetV3Large: 5119 (SE +/- 3.84, N = 3)
Model: FP16MobileNetV3Small: 3477 (SE +/- 44.00, N = 3)
Model: QS8MobileNetV2: 3293 (SE +/- 33.39, N = 3)
1. (CXX) g++ options: -O3 -lrt -lm
r1
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq performance - CPU Microcode: 0xb000040
Security Notes: gather_data_sampling: Not affected + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; IBRS_FW; STIBP: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
Testing initiated at 4 November 2024 03:29 by user root.
2 x Intel Xeon E5-2680 v4
Processor: 2 x Intel Xeon E5-2680 v4 @ 3.30GHz (28 Cores / 52 Threads), Motherboard: Dell PowerEdge R630 02C2CP (2.19.0 BIOS), Memory: 98GB, Disk: 1000GB TOSHIBA MQ01ABD1, Graphics: mgag200drmfb
OS: Debian GNU/Linux 12, Kernel: 6.8.12-3-pve (x86_64), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1600x1200, System Layer: lxc
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq performance - CPU Microcode: 0xb000040
Java Notes: OpenJDK Runtime Environment (build 17.0.13+11-Debian-2deb12u1)
Python Notes: Python 3.11.2
Security Notes: gather_data_sampling: Not affected + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; IBRS_FW; STIBP: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
Testing initiated at 5 November 2024 09:47 by user root.
2 x Intel Xeon E5-2680 v4 - mgag200drmfb - Dell
Processor: 2 x Intel Xeon E5-2680 v4 @ 3.30GHz (28 Cores / 56 Threads), Motherboard: Dell PowerEdge R630 02C2CP (2.19.0 BIOS), Memory: 98GB, Disk: 1000GB TOSHIBA MQ01ABD1, Graphics: mgag200drmfb
OS: Debian GNU/Linux 12, Kernel: 6.8.12-3-pve (x86_64), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1600x1200, System Layer: lxc
Kernel Notes: Transparent Huge Pages: madvise
Processor Notes: Scaling Governor: intel_cpufreq performance - CPU Microcode: 0xb000040
Security Notes: gather_data_sampling: Not affected + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; IBRS_FW; STIBP: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
Testing initiated at 6 November 2024 15:41 by user root.