AMD Ryzen Threadripper 7960X 24-Cores testing with a Gigabyte TRX50 AERO D (FA BIOS) and Sapphire AMD Radeon RX 7900 XTX 24GB on Ubuntu 24.04 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2411137-NE-PHORONIXM28
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - CPU Microcode: 0xa108105
Graphics Notes: BAR1 / Visible vRAM Size: 24560 MB
Python Notes: Python 3.12.3
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
SHOC Scalable HeterOgeneous Computing
This is the CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing (SHOC) benchmark suite. SHOC provides a number of different benchmark programs for evaluating the performance and stability of compute devices. Learn more via the OpenBenchmarking.org test page.
OpenVINO
This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
TensorFlow
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries, if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite
This is a benchmark of the TensorFlow Lite implementation, focused on TensorFlow machine learning for mobile, IoT, edge, and similar use cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
RNNoise
RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute 16-bit RAW audio file using this noise-suppression library. Learn more via the OpenBenchmarking.org test page.
Mozilla DeepSpeech
Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.
Llamafile
Mozilla's Llamafile packages large language models as single cross-platform executable files, building on llama.cpp. Learn more via the OpenBenchmarking.org test page.
Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./run-wizardcoder: line 2: ./wizardcoder-python-34b-v1.0.Q6_K.llamafile.86: No such file or directory
Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./run-mistral: line 2: ./mistral-7b-instruct-v0.2.Q5_K_M.llamafile.86: No such file or directory
Test: llava-v1.5-7b-q4 - Acceleration: CPU
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./run-llava: line 2: ./llava-v1.6-mistral-7b.Q8_0.llamafile.86: No such file or directory
Llama.cpp
Model: llama-2-70b-chat.Q5_0.gguf
phoronix-ml.txt: The test quit with a non-zero exit status. E: main: error: unable to load model
Model: llama-2-13b.Q4_0.gguf
phoronix-ml.txt: The test quit with a non-zero exit status. E: main: error: unable to load model
Model: llama-2-7b.Q4_0.gguf
phoronix-ml.txt: The test quit with a non-zero exit status. E: main: error: unable to load model
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'imp'
Benchmark: scikit_svm
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'imp'
Benchmark: scikit_qda
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'imp'
Benchmark: scikit_ica
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'imp'
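The `ModuleNotFoundError: No module named 'imp'` failures above follow from the Python 3.12.3 interpreter noted in the system details: Python 3.12 removed the long-deprecated `imp` module, so any test script still importing it aborts before producing results. A minimal sketch of the usual port, assuming the script relied on the common `imp.load_source` call (the helper name here is illustrative, not from the test profile):

```python
# Python 3.12 removed the deprecated 'imp' module; importlib is the
# modern replacement. This mimics the old imp.load_source(name, path).
import importlib.util
import sys

def load_source(name, path):
    """Load a module from a file path, like the removed imp.load_source."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module  # register before exec, as imp did
    spec.loader.exec_module(module)
    return module
```

Affected scripts can then swap `imp.load_source(...)` for this helper with no other changes.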
AI Benchmark Alpha
AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms; it relies on the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'
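Several failures in this result file reduce to missing Python packages (`tensorflow` here; `pandas` and `tqdm` in later sections). Before committing to a long benchmark run, the interpreter can be probed for the required modules without importing them, which avoids paying TensorFlow's import cost just to discover it is absent. A sketch, with the module list taken from the errors in this log:

```python
# Report which required packages are resolvable without importing them;
# importlib.util.find_spec returns None for a package that is not installed.
import importlib.util

required = ("tensorflow", "pandas", "tqdm")  # names taken from this log's errors
missing = [m for m in required if importlib.util.find_spec(m) is None]
if missing:
    print("missing packages:", ", ".join(missing))
```

Whatever it reports can then be pip-installed into the same interpreter the Phoronix Test Suite invokes.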
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "FasterRCNN-12-int8/FasterRCNN-12-int8.onnx" failed: No such file or directory
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "FasterRCNN-12-int8/FasterRCNN-12-int8.onnx" failed: No such file or directory
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "resnet100/resnet100.onnx" failed: No such file or directory
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "resnet100/resnet100.onnx" failed: No such file or directory
Model: bertsquad-12 - Device: CPU - Executor: Standard
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "bertsquad-12/bertsquad-12.onnx" failed: No such file or directory
Model: bertsquad-12 - Device: CPU - Executor: Parallel
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "bertsquad-12/bertsquad-12.onnx" failed: No such file or directory
Model: GPT-2 - Device: CPU - Executor: Standard
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "GPT2/model.onnx" failed: No such file or directory
Model: GPT-2 - Device: CPU - Executor: Parallel
phoronix-ml.txt: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "GPT2/model.onnx" failed: No such file or directory
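Every ONNX Runtime failure above shares one root cause: the model files were never downloaded into the per-model directories the harness expects, so each run dies in `InitOnnxModelInfo`. A pre-flight existence check would fail fast before any timing work; a sketch, with the relative paths copied from the errors in this log:

```python
# Verify the ONNX model files exist before launching the benchmark.
# Paths are relative to the test's working directory, as in the log above.
from pathlib import Path

MODEL_PATHS = [
    "FasterRCNN-12-int8/FasterRCNN-12-int8.onnx",
    "resnet100/resnet100.onnx",
    "bertsquad-12/bertsquad-12.onnx",
    "GPT2/model.onnx",
]

def missing_models(root="."):
    """Return the subset of MODEL_PATHS not present under root."""
    return [p for p in MODEL_PATHS if not (Path(root) / p).is_file()]
```

If this reports anything, re-running the test profile's install step (which fetches the models) is the likely fix.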
Numenta Anomaly Benchmark
Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
Detector: Contextual Anomaly Detector OSE
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Bayesian Changepoint
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Earthgecko Skyline
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Windowed Gaussian
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Relative Entropy
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: KNN CAD
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
OpenVINO
This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
Caffe
This is a benchmark of the Caffe deep learning framework, currently supporting the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Model: GoogleNet - Acceleration: CPU - Iterations: 1000
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: GoogleNet - Acceleration: CPU - Iterations: 200
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: GoogleNet - Acceleration: CPU - Iterations: 100
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: AlexNet - Acceleration: CPU - Iterations: 1000
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: AlexNet - Acceleration: CPU - Iterations: 200
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: AlexNet - Acceleration: CPU - Iterations: 100
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
spaCy
spaCy is an open-source Python library for advanced natural language processing (NLP). This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
phoronix-ml.txt: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tqdm'
Neural Magic DeepSparse
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
phoronix-ml.txt: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
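The repeated `/.local/bin/deepsparse.benchmark: not found` errors suggest DeepSparse was pip-installed with `--user` (which places console entry points under `~/.local/bin`) but that directory is not on the wrapper script's PATH, or the package is absent entirely. A hedged sketch of the PATH fix, assuming the `--user` install location:

```shell
# Assumption: deepsparse.benchmark was installed via 'pip install --user deepsparse'
# and so lives in ~/.local/bin; put that directory on PATH so the wrapper
# script can resolve it.
export PATH="$HOME/.local/bin:$PATH"
command -v deepsparse.benchmark >/dev/null \
  || echo "deepsparse.benchmark still not found; try: pip install --user deepsparse"
```

If the binary still does not resolve, the package was likely never installed in this environment.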
TensorFlow Lite
This is a benchmark of the TensorFlow Lite implementation, focused on TensorFlow machine learning for mobile, IoT, edge, and similar use cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.