lxc testing on Debian GNU/Linux 12 via the Phoronix Test Suite.
Processor: 2 x Intel Xeon E5-2680 v4 @ 3.30GHz (28 Cores / 52 Threads), Motherboard: Dell PowerEdge R630 02C2CP (2.19.0 BIOS), Memory: 98GB, Disk: 1000GB TOSHIBA MQ01ABD1, Graphics: mgag200drmfb
OS: Debian GNU/Linux 12, Kernel: 6.8.12-3-pve (x86_64), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1600x1200, System Layer: lxc
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq performance - CPU Microcode: 0xb000040
Java Notes: OpenJDK Runtime Environment (build 17.0.13+11-Debian-2deb12u1)
Python Notes: Python 3.11.2
Security Notes: gather_data_sampling: Not affected + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; IBRS_FW; STIBP: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
Changed Processor to 2 x Intel Xeon E5-2680 v4 @ 3.30GHz (28 Cores / 56 Threads).
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
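As a rough illustration of the kind of throughput measurement tf_cnn_benchmarks.py performs, the hedged Python sketch below times forward passes of a Keras ResNet-50 on synthetic data and reports images per second; the batch size and model choice are illustrative assumptions, not the PTS test configuration.

    # Minimal sketch (assumes TensorFlow is installed): time CNN forward passes on
    # synthetic data and report throughput, similar in spirit to tf_cnn_benchmarks.py.
    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.ResNet50(weights=None)       # random weights, CPU
    batch = np.random.rand(32, 224, 224, 3).astype("float32")  # assumed batch size

    model.predict(batch, verbose=0)                             # warm-up
    runs = 10
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(batch, verbose=0)
    elapsed = time.perf_counter() - start
    print(f"{runs * batch.shape[0] / elapsed:.1f} images/sec")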
Device: GPU - Batch Size: 256 - Model: VGG-16
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. The test quit with a non-zero exit status.
OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
Model: Ford Taurus 10M
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
Input: motorBike
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
This test times how long it takes to build the GNU Compiler Collection (GCC) open-source compiler. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.
Llama.cpp is a port of Facebook's LLaMA model in C/C++ developed by Georgi Gerganov. Llama.cpp allows the inference of LLaMA and other supported models in C/C++. For CPU inference Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.
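Purely for illustration, here is a hedged sketch using the llama-cpp-python bindings rather than the native llama.cpp binaries the PTS profile drives; the GGUF model path and thread count are assumptions.

    # Hedged sketch: CPU inference via the llama-cpp-python bindings (an assumption;
    # the PTS test invokes the native llama.cpp build). Model path is hypothetical.
    from llama_cpp import Llama

    llm = Llama(model_path="./llama-2-7b.Q4_0.gguf", n_threads=28)
    out = llm("Explain CPU inference in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])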
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
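A minimal sketch of the kind of CPU inference timing the pytorch-benchmark harness performs is shown below; it uses plain PyTorch and torchvision rather than the package itself, and the batch size and model are illustrative assumptions.

    # Minimal CPU inference timing sketch (plain PyTorch/torchvision, not the
    # pytorch-benchmark package itself); batch size and model are assumptions.
    import time
    import torch
    import torchvision.models as models

    model = models.resnet50(weights=None).eval()
    x = torch.randn(16, 3, 224, 224)

    with torch.inference_mode():
        model(x)                                    # warm-up
        start = time.perf_counter()
        for _ in range(10):
            model(x)
        elapsed = time.perf_counter() - start
    print(f"{10 * x.shape[0] / elapsed:.1f} images/sec")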
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
Llama.cpp is a port of Facebook's LLaMA model in C/C++ developed by Georgi Gerganov. Llama.cpp allows the inference of LLaMA and other supported models in C/C++. For CPU inference Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
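As a hedged illustration of the average-inference-time metric, the sketch below times a single invocation of the TensorFlow Lite Python interpreter; the .tflite model path, thread count, and float32 input dtype are assumptions.

    # Hedged sketch: one timed inference with the TF Lite Python interpreter.
    # Model path, num_threads, and float32 input dtype are assumptions.
    import time
    import numpy as np
    import tensorflow as tf

    interp = tf.lite.Interpreter(model_path="mobilenet_v1.tflite", num_threads=28)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]

    interp.set_tensor(inp["index"], np.random.rand(*inp["shape"]).astype(np.float32))
    start = time.perf_counter()
    interp.invoke()
    print(f"inference time: {(time.perf_counter() - start) * 1e3:.2f} ms")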
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.
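A tiny, hedged sketch of the style of kernel such a NumPy survey exercises, timing a dense matrix multiply with perf_counter (the 2048x2048 size is an arbitrary assumption):

    # Time a dense matmul as a stand-in for the kinds of kernels a general
    # NumPy benchmark runs; matrix size is an arbitrary assumption.
    import time
    import numpy as np

    a = np.random.rand(2048, 2048)
    b = np.random.rand(2048, 2048)

    start = time.perf_counter()
    c = a @ b
    print(f"matmul: {time.perf_counter() - start:.3f} s, checksum {c.sum():.3e}")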
This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
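The PTS profile runs OpenCV's own performance binaries; purely as a hedged illustration of that style of micro-benchmark, the sketch below times a Gaussian blur via the opencv-python bindings (image size and kernel are assumptions).

    # Hedged sketch: time one OpenCV kernel via the Python bindings.
    # Image size and blur kernel are illustrative assumptions.
    import time
    import cv2
    import numpy as np

    img = (np.random.rand(1080, 1920, 3) * 255).astype(np.uint8)
    start = time.perf_counter()
    for _ in range(100):
        cv2.GaussianBlur(img, (5, 5), 0)
    print(f"GaussianBlur: {(time.perf_counter() - start) / 100 * 1e3:.2f} ms/frame")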
MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
This is a performance test of CacheBench, which is part of LLCbench. CacheBench is designed to test memory and cache bandwidth performance. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.
Time To Compile
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: llvm-16.0.0.src/tools/llvm-readobj/ELFDumper.cpp:7556:1: fatal error: error writing to /tmp/ccjmapwn.s: No space left on device
Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
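The PTS test drives redis-benchmark with many concurrent clients; as a hedged illustration of the SET/GET pattern being measured, here is a single-client sketch with the redis-py library. The host, port, and request count are assumptions, and a local Redis server is assumed to be running.

    # Hedged sketch: single-client SET/GET loop via redis-py (the actual test
    # uses redis-benchmark with many clients). Assumes a local server on 6379.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)
    n = 100_000
    start = time.perf_counter()
    for i in range(n):
        r.set(f"key:{i}", "value")
        r.get(f"key:{i}")
    print(f"{2 * n / (time.perf_counter() - start):.0f} requests/sec (single client)")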
Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
This test is a quick-running survey of general R performance. Learn more via the OpenBenchmarking.org test page.
OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
This is a performance test of CacheBench, which is part of LLCbench. CacheBench is designed to test memory and cache bandwidth performance. Learn more via the OpenBenchmarking.org test page.
Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
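Purely for illustration, the hedged sketch below compresses a file with the python-zstandard bindings and reports ratio and throughput; the PTS profile times the zstd CLI instead, and the silesia.tar path is assumed to be present locally.

    # Hedged sketch: compress a file with python-zstandard and report
    # ratio/throughput; the real test shells out to the zstd CLI.
    import time
    import zstandard as zstd

    data = open("silesia.tar", "rb").read()
    cctx = zstd.ZstdCompressor(level=19)

    start = time.perf_counter()
    compressed = cctx.compress(data)
    elapsed = time.perf_counter() - start
    print(f"level 19: {len(data) / len(compressed):.2f}x ratio, "
          f"{len(data) / elapsed / 1e6:.1f} MB/s")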
This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
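The PTS profile drives OpenVINO's bundled benchmark_app; the hedged sketch below shows an equivalent single inference through the OpenVINO Python runtime, where model.xml and the 1x3x224x224 float32 input shape are assumptions.

    # Hedged sketch: one CPU inference via the OpenVINO Python runtime (the
    # benchmark itself uses benchmark_app). Model path and input shape are assumed.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    compiled = core.compile_model(core.read_model("model.xml"), "CPU")
    request = compiled.create_infer_request()
    data = np.random.rand(1, 3, 224, 224).astype(np.float32)   # assumed input shape
    results = request.infer({0: data})                          # key 0 = first input
    print(list(results.values())[0].shape)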
Llama.cpp is a port of Facebook's LLaMA model in C/C++ developed by Georgi Gerganov. Llama.cpp allows the inference of LLaMA and other supported models in C/C++. For CPU inference Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.
Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
Protocol: Redis - Clients: 500 - Set To Get Ratio: 10:1
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5
2 x Intel Xeon E5-2680 v4: The test run did not produce a result.
Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
The Himeno benchmark is a linear solver of the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
This test profile is of the combined time for the serial and parallel Mandelbrot sets written in Rustlang via willi-kappler/mandel-rust. Learn more via the OpenBenchmarking.org test page.
NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
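The benchmark itself is written in C with OpenMP; as a hedged sketch of the same idea, the Python version below counts N-queens solutions with the classic bitmask recursion and farms the first-row placements out across processes with multiprocessing (board size 12 is an arbitrary assumption).

    # Hedged sketch: N-queens solution count via bitmask recursion, parallelized
    # over first-row columns with multiprocessing (the real test uses OpenMP in C).
    # Board size n=12 is an arbitrary assumption.
    from multiprocessing import Pool

    def solve(n, row, cols, diag1, diag2):
        if row == n:
            return 1
        total = 0
        free = ~(cols | diag1 | diag2) & ((1 << n) - 1)
        while free:
            bit = free & -free
            free -= bit
            total += solve(n, row + 1, cols | bit,
                           (diag1 | bit) << 1, (diag2 | bit) >> 1)
        return total

    def first_row(args):
        n, col = args
        bit = 1 << col
        return solve(n, 1, bit, bit << 1, bit >> 1)

    if __name__ == "__main__":
        n = 12
        with Pool() as pool:
            print(sum(pool.map(first_row, [(n, c) for c in range(n)])))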
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Cython provides a superset of Python that is geared to deliver C-like levels of performance. This test profile makes use of Cython's bundled benchmark tests and runs an N-Queens sample test as a simple benchmark to the system's Cython performance. Learn more via the OpenBenchmarking.org test page.
This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.
Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.
This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
Time To Compile
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: kernel/rcu/tree.c:5174: fatal error: error writing to /tmp/cc31CL9j.s: Success
Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.
TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Device: CPU - Batch Size: 64 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 512 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Device: GPU - Batch Size: 64 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Device: GPU - Batch Size: 1 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Device: CPU - Batch Size: 32 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Device: CPU - Batch Size: 512 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Device: GPU - Batch Size: 512 - Model: VGG-16
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 32 - Model: AlexNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 512 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 32 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 32 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Device: CPU - Batch Size: 1 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Device: CPU - Batch Size: 256 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 64 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 16 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 512 - Model: AlexNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 512 - Model: AlexNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
Input: drivaerFastback, Small Mesh Size
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: [0] --> FOAM FATAL ERROR:
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Device: GPU - Batch Size: 16 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 256 - Model: AlexNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 64 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 256 - Model: AlexNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 512 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 256 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 16 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 64 - Model: AlexNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 32 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.
FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.11/collections/__init__.py)
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Device: GPU - Batch Size: 1 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 1 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: GPU - Batch Size: 256 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 16 - Model: GoogLeNet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Device: CPU - Batch Size: 256 - Model: ResNet-50
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
Model: Chrysler Neon 1M
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ** ERROR: INPUT FILE /NEON1M11_0001.rad NOT FOUND
Fayalite-FIST Data
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ERROR: At least one command line argument must be specified
Scikit-learn is a Python module for machine learning built on NumPy, SciPy, and is BSD-licensed. Learn more via the OpenBenchmarking.org test page.
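The suite's own benchmark scripts cover many estimators; as a hedged, self-contained illustration of the common pattern (time a fit on synthetic data), one of the tree-based cases might be approximated as follows, with the dataset size and estimator chosen arbitrarily.

    # Hedged sketch: time fitting one estimator on synthetic data, the general
    # pattern of the scikit-learn benchmark scripts. Dataset size is arbitrary.
    import time
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)

    start = time.perf_counter()
    clf.fit(X, y)
    print(f"fit time: {time.perf_counter() - start:.2f} s")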
Benchmark: Tree
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Text Vectorizers
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Non-Negative Matrix Factorization
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.
H.264 Video Encoding
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
The spaCy library is an open-source, Python-based solution for advanced natural language processing (NLP). This test profile times the spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
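A hedged sketch of the kind of throughput measurement involved: load a pipeline and time nlp.pipe over a batch of texts. The en_core_web_md model name, the sample text, and the document count are assumptions, and the model must be downloaded beforehand.

    # Hedged sketch: time spaCy pipeline throughput over a batch of texts.
    # Model name and document count are assumptions; requires
    # `python -m spacy download en_core_web_md` first.
    import time
    import spacy

    nlp = spacy.load("en_core_web_md")
    texts = ["The Phoronix Test Suite automates benchmarking."] * 1_000

    start = time.perf_counter()
    docs = list(nlp.pipe(texts))
    print(f"{len(docs) / (time.perf_counter() - start):.0f} docs/sec")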
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ValueError: 'in' is not a valid parameter name
ATPase Simulation - 327,506 Atoms
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: FATAL ERROR: No simulation config file specified on command line.
Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
Test: Scala Dotty
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.
FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.11/collections/__init__.py)
Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
Test: Apache Spark PageRank
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Scikit-learn is a Python module for machine learning built on NumPy, SciPy, and is BSD-licensed. Learn more via the OpenBenchmarking.org test page.
Benchmark: RCV1 Logreg Convergencet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Hist Gradient Boosting Higgs Boson
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Kernel PCA Solvers / Time vs. N Samples
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Kernel PCA Solvers / Time vs. N Components
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Hist Gradient Boosting
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Incremental PCA
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: MNIST Dataset
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: LocalOutlierFactor
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Neighbors
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: SGD Regression
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Parallel Pairwise
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Sample Without Replacement
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Feature Expansions
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Isolation Forest
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: GLM
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Covertype Dataset Benchmark
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: TSNE MNIST Dataset
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Hist Gradient Boosting Adult
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
This test runs the mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.
Benchmark: scikit_ica
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: TypeError: load_all() missing 1 required positional argument: 'Loader'
Benchmark: scikit_svm
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: TypeError: load_all() missing 1 required positional argument: 'Loader'
Scikit-learn is a Python module for machine learning built on NumPy, SciPy, and is BSD-licensed. Learn more via the OpenBenchmarking.org test page.
Benchmark: Plot Ward
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
This test runs the mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.
Benchmark: scikit_linearridgeregression
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: TypeError: load_all() missing 1 required positional argument: 'Loader'
Benchmark: scikit_qda
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: TypeError: load_all() missing 1 required positional argument: 'Loader'
Scikit-learn is a Python module for machine learning built on NumPy, SciPy, and is BSD-licensed. Learn more via the OpenBenchmarking.org test page.
Benchmark: Lasso
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Hist Gradient Boosting Threading
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Hierarchical
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Singular Value Decomposition
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Fast KMeans
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: SGDOneClassSVM
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Hist Gradient Boosting Categorical Only
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Polynomial Kernel Approximation
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: SAGA
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Sparse Random Projections / 100 Iterations
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Isotonic / Logistic
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot OMP vs. LARS
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Plot Lasso Path
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Glmnet
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: 20 Newsgroups / Logistic Regression
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Isotonic / Perturbed Logarithm
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Isotonic / Pathological
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
Benchmark: Sparsify
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas
This is a benchmark of NREL Radiance, an open-source synthetic imaging system developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.
Test: SMP Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: make: time: No such file or directory
Test: Serial
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: make: time: No such file or directory
Test: llava-v1.6-mistral-7b.Q8_0 - Acceleration: CPU
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./run-llava: line 2: ./llava-v1.6-mistral-7b.Q8_0.llamafile.86: No such file or directory
This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
Connections: 1
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
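The run above fails without any error text, so a first sanity check is whether the locally started Nginx instance answers HTTPS at all given its self-signed certificate. A minimal sketch in Python; the URL is a placeholder and is not taken from the test profile:

```python
# Hedged sanity-check sketch: one request against the local HTTPS endpoint,
# skipping certificate verification because the test profile uses a
# self-signed OpenSSL certificate for local benchmarking.
import ssl
import urllib.request

url = "https://localhost:8089/"  # hypothetical port and path, not from the test profile

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # accept the self-signed certificate

try:
    with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
        print(resp.status, resp.headers.get("Server"))
except OSError as exc:
    print("request failed:", exc)
```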
Model: llama-2-70b-chat.Q5_0.gguf
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: main: error: unable to load model
AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'
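This is a plain missing dependency rather than a crash: the TensorFlow wheel is not present in the Python environment the benchmark runs under. A standard-library-only check, with nothing specific to the test profile:

```python
# Hedged sketch: report the interpreter in use and whether TensorFlow is
# importable there, mirroring the ModuleNotFoundError above.
import importlib.util
import sys

print("interpreter:", sys.executable)

if importlib.util.find_spec("tensorflow") is None:
    print("tensorflow is not installed in this environment")
else:
    import tensorflow as tf
    print("tensorflow", tf.__version__, "is available")
```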
Model: llama-2-7b.Q4_0.gguf
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: main: error: unable to load model
Model: llama-2-13b.Q4_0.gguf
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: main: error: unable to load model
Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: driver: ERROR: unknown option: conv '--cfg=f32'
Harness: Deconvolution Batch deconv_1d - Data Type: f32
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: driver: ERROR: unknown option: deconv '--cfg=f32'
Harness: Convolution Batch conv_alexnet - Data Type: f32
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: driver: ERROR: unknown option: conv '--cfg=f32'
1080p 8-bit YUV To AV1 Video Encode
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Model: AlexNet - Acceleration: CPU - Iterations: 100
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: GPT-2 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./run-wizardcoder: line 2: ./wizardcoder-python-34b-v1.0.Q6_K.llamafile.86: No such file or directory
Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./run-mistral: line 2: ./mistral-7b-instruct-v0.2.Q5_K_M.llamafile.86: No such file or directory
Test: llava-v1.5-7b-q4 - Acceleration: CPU
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./run-llava: line 2: ./llava-v1.6-mistral-7b.Q8_0.llamafile.86: No such file or directory
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: super-resolution-10 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: bertsquad-12 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: T5 Encoder - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ZFNet-512 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
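All of the ONNX Runtime results in this run fail the same way: the onnxruntime_perf_test binary was never built, so the wrapper script has nothing to execute, which points at a build failure in the test profile rather than a problem with any particular model. As a rough stand-in, latency can be sampled through the onnxruntime Python wheel if it happens to be installed separately; a hedged sketch with a hypothetical model path and a float32 input assumption:

```python
# Hedged sketch: time a few CPU inferences through the onnxruntime Python API.
# The model path is a placeholder and a float32 input is assumed; the
# onnxruntime wheel itself is not part of the failed test profile build.
import time

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # pin dynamic dims to 1
x = np.random.rand(*shape).astype(np.float32)

runs = 10
start = time.perf_counter()
for _ in range(runs):
    sess.run(None, {inp.name: x})
print("average latency: %.2f ms" % ((time.perf_counter() - start) / runs * 1000.0))
```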
This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Model: GoogleNet - Acceleration: CPU - Iterations: 100
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Hashcat is an open-source, advanced password recovery tool supporting GPU acceleration with OpenCL, NVIDIA CUDA, and Radeon ROCm. Learn more via the OpenBenchmarking.org test page.
Benchmark: MD5
2 x Intel Xeon E5-2680 v4 - mgag200drmfb - Dell: The test quit with a non-zero exit status.
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: super-resolution-10 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: bertsquad-12 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: T5 Encoder - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ZFNet-512 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: yolov4 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: yolov4 - Device: CPU - Executor: Parallel
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: GPT-2 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Model: GoogleNet - Acceleration: CPU - Iterations: 1000
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: AlexNet - Acceleration: CPU - Iterations: 1000
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
Connections: 20
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
1080p 8-bit YUV To VP9 Video Encode
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Model: GoogleNet - Acceleration: CPU - Iterations: 200
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: AlexNet - Acceleration: CPU - Iterations: 200
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
LevelDB is a key-value storage library developed by Google that can make use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.
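As a toy illustration of the key-value model this profile exercises, the sketch below uses the third-party plyvel Python bindings, which are not part of the test profile and are only an assumption here:

```python
# Hedged sketch using the plyvel bindings (an assumption, not part of the test
# profile): create a small LevelDB database, write and read a key, then clean up.
import shutil
import tempfile

import plyvel

path = tempfile.mkdtemp(prefix="leveldb-sketch-")
db = plyvel.DB(path, create_if_missing=True)

db.put(b"benchmark", b"fillseq")   # store one key-value pair
print(db.get(b"benchmark"))        # -> b'fillseq'
db.delete(b"benchmark")

db.close()
shutil.rmtree(path)
```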
Benchmark: Sequential Fill
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Random Delete
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Seek Random
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Random Read
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Random Fill
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Overwrite
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Fill Sync
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Hot Read
2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
1080p 8-bit YUV To HEVC Video Encode
2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq performance - CPU Microcode: 0xb000040
Security Notes: gather_data_sampling: Not affected + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; IBRS_FW; STIBP: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
Testing initiated at 4 November 2024 03:29 by user root.
Processor: 2 x Intel Xeon E5-2680 v4 @ 3.30GHz (28 Cores / 52 Threads), Motherboard: Dell PowerEdge R630 02C2CP (2.19.0 BIOS), Memory: 98GB, Disk: 1000GB TOSHIBA MQ01ABD1, Graphics: mgag200drmfb
OS: Debian GNU/Linux 12, Kernel: 6.8.12-3-pve (x86_64), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1600x1200, System Layer: lxc
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq performance - CPU Microcode: 0xb000040
Java Notes: OpenJDK Runtime Environment (build 17.0.13+11-Debian-2deb12u1)
Python Notes: Python 3.11.2
Security Notes: gather_data_sampling: Not affected + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; IBRS_FW; STIBP: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
Testing initiated at 5 November 2024 09:47 by user root.
Processor: 2 x Intel Xeon E5-2680 v4 @ 3.30GHz (28 Cores / 56 Threads), Motherboard: Dell PowerEdge R630 02C2CP (2.19.0 BIOS), Memory: 98GB, Disk: 1000GB TOSHIBA MQ01ABD1, Graphics: mgag200drmfb
OS: Debian GNU/Linux 12, Kernel: 6.8.12-3-pve (x86_64), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1600x1200, System Layer: lxc
Kernel Notes: Transparent Huge Pages: madvise
Processor Notes: Scaling Governor: intel_cpufreq performance - CPU Microcode: 0xb000040
Security Notes: gather_data_sampling: Not affected + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; IBRS_FW; STIBP: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
Testing initiated at 6 November 2024 15:41 by user root.