AMD Ryzen Threadripper 7960X 24-Cores testing with a Gigabyte TRX50 AERO D (FA BIOS) and Sapphire AMD Radeon RX 7900 XTX 24GB on Ubuntu 24.04 via the Phoronix Test Suite.
Processor: AMD Ryzen Threadripper 7960X 24-Cores @ 7.79GHz (24 Cores / 48 Threads), Motherboard: Gigabyte TRX50 AERO D (FA BIOS), Chipset: AMD Device 14a4, Memory: 4 x 32GB DDR5-5200MT/s Micron MTC20F1045S1RC56BG1, Disk: 1000GB GIGABYTE AG512K1TB, Graphics: Sapphire AMD Radeon RX 7900 XTX 24GB, Audio: AMD Device 14cc, Monitor: HP E273, Network: Aquantia AQC113C NBase-T/IEEE + Realtek RTL8125 2.5GbE + Qualcomm WCN785x Wi-Fi 7
OS: Ubuntu 24.04, Kernel: 6.8.0-48-generic (x86_64), Desktop: GNOME Shell 46.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 24.2.0-devel (LLVM 18.1.7 DRM 3.58), OpenCL: OpenCL 2.1 AMD-APP (3625.0), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - CPU Microcode: 0xa108105
Graphics Notes: BAR1 / Visible vRAM Size: 24560 MB
Python Notes: Python 3.12.3
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
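As context for the TensorFlow entries, the profile reports images per second from tf_cnn_benchmarks.py; the sketch below is only a rough, hedged approximation of that measurement using plain Keras (it is not the tf_cnn_benchmarks.py harness, and the model and batch size are illustrative).

    # Minimal sketch (not the tf_cnn_benchmarks.py harness): time ResNet-50
    # inference on random data and report an approximate images/second figure.
    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.ResNet50(weights=None)  # architecture only, untrained
    batch = np.random.rand(32, 224, 224, 3).astype("float32")

    model.predict(batch, verbose=0)  # warm-up
    start = time.perf_counter()
    for _ in range(10):
        model.predict(batch, verbose=0)
    elapsed = time.perf_counter() - start
    print(f"~{10 * 32 / elapsed:.1f} images/sec")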
Scikit-learn is a Python module for machine learning built on NumPy and SciPy, and is BSD-licensed. Learn more via the OpenBenchmarking.org test page.
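The scikit-learn test profile times individual benchmark scripts from the scikit-learn tree; the sketch below illustrates the same idea (timing an estimator fit on synthetic data) with arbitrary sizes, not the profile's actual workloads.

    # Minimal sketch: time fitting a scikit-learn estimator on synthetic data.
    # The data sizes are illustrative, not those used by the Phoronix test profile.
    import time
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=50_000, n_features=100, random_state=0)
    clf = LogisticRegression(max_iter=200)

    start = time.perf_counter()
    clf.fit(X, y)
    print(f"fit time: {time.perf_counter() - start:.2f} s")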
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
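The pts/pytorch profile reports batches per second for standard vision models on the CPU; the hedged sketch below approximates that measurement with plain PyTorch and torchvision rather than the pytorch-benchmark harness itself, with illustrative model and batch size.

    # Minimal sketch: CPU inference throughput for ResNet-50 with plain PyTorch.
    # This only approximates what the pytorch-benchmark harness reports (batches/sec).
    import time
    import torch
    import torchvision.models as models

    model = models.resnet50(weights=None).eval()  # torchvision >= 0.13 API
    batch = torch.randn(16, 3, 224, 224)

    with torch.no_grad():
        model(batch)  # warm-up
        start = time.perf_counter()
        for _ in range(10):
            model(batch)
        elapsed = time.perf_counter() - start
    print(f"~{10 / elapsed:.2f} batches/sec")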
NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
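NCNN results come from its bundled benchncnn tool; a hedged sketch of how that tool is typically driven is shown below. The binary path, loop count, thread count, and the use of -1 to select CPU-only are assumptions based on upstream NCNN usage, not values recorded in this result file.

    # Hedged sketch: drive ncnn's benchncnn tool from Python.
    # Upstream usage: benchncnn [loop count] [num threads] [powersave] [gpu device] [cooling down]
    # All argument values below are assumptions, not taken from this run.
    import subprocess

    subprocess.run(
        ["./benchncnn", "8", "24", "0", "-1", "0"],  # gpu device -1 = CPU only (assumed)
        check=True,
    )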
This is a test of the Intel OpenVINO toolkit for neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
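The OpenVINO numbers come from its benchmark_app utility; a hedged sketch of a typical CPU invocation is below, with a placeholder model path and duration rather than the models used in this run.

    # Hedged sketch: invoke OpenVINO's benchmark_app for a CPU throughput run.
    # "model.xml" and the 20-second duration are placeholders.
    import subprocess

    subprocess.run(
        ["benchmark_app", "-m", "model.xml", "-d", "CPU", "-t", "20"],
        check=True,
    )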
This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
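For the TensorFlow Lite entries, average inference time on the CPU is the metric; a minimal interpreter-level sketch follows, where the .tflite model path is a placeholder and not one of the models used in this run.

    # Minimal sketch: time a TensorFlow Lite model on the CPU via the Python API.
    # "model.tflite" is a placeholder path.
    import time
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    data = np.zeros(inp["shape"], dtype=inp["dtype"])

    interpreter.set_tensor(inp["index"], data)
    start = time.perf_counter()
    interpreter.invoke()
    print(f"inference time: {(time.perf_counter() - start) * 1e3:.2f} ms")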
The CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing benchmark suite. SHOC provides a number of different benchmark programs for evaluating the performance and stability of compute devices. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
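The OpenCV test runs the library's own C++ opencv_perf_* binaries; the Python sketch below only illustrates the kind of per-operation timing those suites perform and is not the actual test harness.

    # Minimal sketch: time a single OpenCV operation from Python.
    # The real test profile runs OpenCV's C++ opencv_perf_* suites instead.
    import numpy as np
    import cv2

    img = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)

    t0 = cv2.getTickCount()
    for _ in range(100):
        cv2.GaussianBlur(img, (5, 5), 0)
    t1 = cv2.getTickCount()
    print(f"{(t1 - t0) / cv2.getTickFrequency() * 10:.2f} ms per blur")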
This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.
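As a small illustration of the kind of kernels a general NumPy benchmark exercises, a hedged timing sketch follows; the matrix sizes are arbitrary, not those of the pts/numpy workload.

    # Minimal sketch: time a few representative NumPy kernels.
    # Sizes are arbitrary and not taken from the pts/numpy test profile.
    import time
    import numpy as np

    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)

    for name, fn in [("matmul", lambda: a @ b),
                     ("svd", lambda: np.linalg.svd(a, compute_uv=False)),
                     ("fft", lambda: np.fft.fft2(a))]:
        start = time.perf_counter()
        fn()
        print(f"{name}: {time.perf_counter() - start:.3f} s")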
Scikit-learn is a Python module for machine learning built on NumPy and SciPy, and is BSD-licensed. Learn more via the OpenBenchmarking.org test page.
Benchmark: Plot Non-Negative Matrix Factorization
phoronix-ml.txt: The test quit with a non-zero exit status. E: KeyError:
Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
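For context on the DeepSpeech entry, the engine is normally driven through its Python bindings roughly as sketched below. The model file name and the WAV path are placeholders (the profile's roughly three minute recording is not reproduced here), and the snippet assumes the deepspeech 0.9.x Python package.

    # Hedged sketch: speech-to-text with the deepspeech 0.9.x Python package.
    # File names are placeholders, not the assets bundled with the test profile.
    import wave
    import numpy as np
    from deepspeech import Model

    ds = Model("deepspeech-0.9.3-models.pbmm")           # placeholder model file
    with wave.open("recording.wav", "rb") as w:          # 16 kHz, 16-bit mono assumed
        audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

    print(ds.stt(audio))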
This test is a quick-running survey of general R performance. Learn more via the OpenBenchmarking.org test page.
Scikit-learn is a Python module for machine learning built on NumPy and SciPy, and is BSD-licensed. Learn more via the OpenBenchmarking.org test page.
Benchmark: RCV1 Logreg Convergencet
phoronix-ml.txt: The test quit with a non-zero exit status. E: IndexError: list index out of range
RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
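The RNNoise test times denoising of a raw PCM file; a hedged sketch of the upstream rnnoise_demo invocation it wraps is below. The binary location and file names are assumptions, not paths recorded in this run.

    # Hedged sketch: denoise a 48 kHz, 16-bit mono RAW file with rnnoise_demo.
    # The binary location and file names are assumptions.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(["./examples/rnnoise_demo", "noisy.raw", "denoised.raw"], check=True)
    print(f"denoise time: {time.perf_counter() - start:.2f} s")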
Scikit-learn is a Python module for machine learning built on NumPy and SciPy, and is BSD-licensed. Learn more via the OpenBenchmarking.org test page.
Benchmark: Plot Fast KMeans
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'matplotlib.tri.triangulation'
Benchmark: Plot Lasso Path
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'matplotlib.tri.triangulation'
Benchmark: Plot Singular Value Decomposition
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'matplotlib.tri.triangulation'
Benchmark: Glmnet
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'glmnet'
This test profile uses the PlaidML deep learning framework developed by Intel to run various benchmarks. Learn more via the OpenBenchmarking.org test page.
FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'
FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'
This test uses the mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.
Benchmark: scikit_svm
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'imp'
Benchmark: scikit_linearridgeregression
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'imp'
Benchmark: scikit_qda
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'imp'
Benchmark: scikit_ica
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'imp'
Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
Detector: Contextual Anomaly Detector OSE
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Bayesian Changepoint
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Earthgecko Skyline
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Windowed Gaussian
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Relative Entropy
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: KNN CAD
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
The spaCy library is an open-source Python solution for advanced natural language processing (NLP). This test profile times the spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tqdm'
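The spaCy run above failed before producing results (missing tqdm); for reference, the kind of timing it performs looks roughly like the sketch below, which assumes the en_core_web_sm model has been downloaded and uses placeholder text.

    # Minimal sketch: time spaCy CPU processing over a batch of documents.
    # Assumes `python -m spacy download en_core_web_sm` has been run; text is a placeholder.
    import time
    import spacy

    nlp = spacy.load("en_core_web_sm")
    texts = ["The quick brown fox jumps over the lazy dog."] * 1000

    start = time.perf_counter()
    for doc in nlp.pipe(texts):
        pass
    elapsed = time.perf_counter() - start
    print(f"{len(texts) / elapsed:.1f} docs/sec")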
AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'
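The AI Benchmark Alpha run also failed because TensorFlow was missing; its documented Python entry point is roughly as below, shown as a hedged sketch rather than output from this system.

    # Hedged sketch: the documented entry point of the ai-benchmark package.
    # Requires a working TensorFlow installation, which this run lacked.
    from ai_benchmark import AIBenchmark

    benchmark = AIBenchmark()
    results = benchmark.run()  # runs and prints the inference/training device scores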
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "FasterRCNN-12-int8/FasterRCNN-12-int8.onnx" failed: No such file or directory
Model: GPT-2 - Device: CPU - Executor: Standard
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "GPT2/model.onnx" failed: No such file or directory
Model: bertsquad-12 - Device: CPU - Executor: Parallel
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "bertsquad-12/bertsquad-12.onnx" failed: No such file or directory
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "FasterRCNN-12-int8/FasterRCNN-12-int8.onnx" failed: No such file or directory
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "resnet100/resnet100.onnx" failed: No such file or directory
Model: GPT-2 - Device: CPU - Executor: Parallel
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "GPT2/model.onnx" failed: No such file or directory
Model: llama-2-70b-chat.Q5_0.gguf
phoronix-ml.txt: The test quit with a non-zero exit status. E: main: error: unable to load model
Model: llama-2-13b.Q4_0.gguf
phoronix-ml.txt: The test quit with a non-zero exit status. E: main: error: unable to load model
Model: llama-2-7b.Q4_0.gguf
phoronix-ml.txt: The test quit with a non-zero exit status. E: main: error: unable to load model
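The three llama.cpp entries above failed because the GGUF models could not be loaded; for reference, the profile drives llama.cpp's main binary with flags along the lines of the hedged sketch below, where the prompt, token count, and thread count are placeholders rather than the profile's actual arguments.

    # Hedged sketch: invoke llama.cpp's main binary for a short CPU generation.
    # The prompt, -n token count, and -t thread count are placeholders.
    import subprocess

    subprocess.run(
        ["./main", "-m", "llama-2-7b.Q4_0.gguf",
         "-p", "Hello", "-n", "128", "-t", "24"],
        check=True,
    )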
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "resnet100/resnet100.onnx" failed: No such file or directory
Model: bertsquad-12 - Device: CPU - Executor: Standard
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "bertsquad-12/bertsquad-12.onnx" failed: No such file or directory
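The ONNX Runtime entries above failed because the reference models (FasterRCNN-12-int8, GPT2, bertsquad-12, resnet100) were not present. Setting the C++ test harness aside, equivalent CPU timing can be sketched with the ONNX Runtime Python API as below, with a placeholder model path and an assumed float32 input.

    # Minimal sketch: CPU inference timing with the ONNX Runtime Python API.
    # "model.onnx" is a placeholder; this is not the C++ harness used above.
    import time
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # fill dynamic dims
    data = np.zeros(shape, dtype=np.float32)  # dtype assumed for this sketch

    start = time.perf_counter()
    sess.run(None, {inp.name: data})
    print(f"inference time: {(time.perf_counter() - start) * 1e3:.2f} ms")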
Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./run-wizardcoder: line 2: ./wizardcoder-python-34b-v1.0.Q6_K.llamafile.86: No such file or directory
Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./run-mistral: line 2: ./mistral-7b-instruct-v0.2.Q5_K_M.llamafile.86: No such file or directory
Test: llava-v1.5-7b-q4 - Acceleration: CPU
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./run-llava: line 2: ./llava-v1.6-mistral-7b.Q8_0.llamafile.86: No such file or directory
TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
Target: CPU - Model: SqueezeNet v1.1
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./tnn: 3: ./test/TNNTest: not found
Target: CPU - Model: SqueezeNet v2
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./tnn: 3: ./test/TNNTest: not found
Target: CPU - Model: MobileNet v2
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./tnn: 3: ./test/TNNTest: not found
Target: CPU - Model: DenseNet
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./tnn: 3: ./test/TNNTest: not found
This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Model: GoogleNet - Acceleration: CPU - Iterations: 1000
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: GoogleNet - Acceleration: CPU - Iterations: 200
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: GoogleNet - Acceleration: CPU - Iterations: 100
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: AlexNet - Acceleration: CPU - Iterations: 1000
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: AlexNet - Acceleration: CPU - Iterations: 200
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: AlexNet - Acceleration: CPU - Iterations: 100
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
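All Caffe entries failed because the caffe binary was not built; for reference, the underlying measurement is Caffe's own timing mode, sketched below with a placeholder prototxt path and iteration count.

    # Hedged sketch: Caffe's built-in timing mode on the CPU.
    # The prototxt path and iteration count are placeholders.
    import subprocess

    subprocess.run(
        ["./tools/caffe", "time",
         "--model=models/bvlc_alexnet/deploy.prototxt",
         "--iterations=100"],
        check=True,
    )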
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
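Every DeepSparse entry above failed because the deepsparse.benchmark CLI was not found under ~/.local/bin; a hedged sketch of how that CLI is normally invoked is below, where the SparseZoo stub is a placeholder example and not necessarily one of the models listed above.

    # Hedged sketch: invoke Neural Magic's deepsparse.benchmark CLI.
    # Requires `pip install deepsparse`; the SparseZoo stub is a placeholder.
    import subprocess

    subprocess.run(
        ["deepsparse.benchmark",
         "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none"],
        check=True,
    )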
Testing initiated at 10 November 2024 13:29 by user amol.