AMD Eng Sample 100-000000897-03 testing with a Supermicro Super Server H13SSL-N v2.00 (3.0 BIOS) and llvmpipe on Ubuntu 24.04 via the Phoronix Test Suite.
Processor: AMD Eng Sample 100-000000897-03 @ 2.55GHz (32 Cores / 64 Threads), Motherboard: Supermicro Super Server H13SSL-N v2.00 (3.0 BIOS), Chipset: AMD Device 14a4, Memory: 32 GB + 32 GB + 32 GB + 16 GB + 16 GB + 16 GB + 32 GB + 32 GB + 32 GB + 16 GB + 16 GB + 16 GB DDR5-4800MT/s, Disk: 512GB INTEL SSDPEKKF512G8L, Graphics: llvmpipe (405/715MHz), Network: 2 x Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 24.04, Kernel: 6.8.0-50-generic (x86_64), Desktop: GNOME Shell 46.0, Display Server: X Server 1.21.1.11, Display Driver: NVIDIA 535.183.01, OpenGL: 4.5 Mesa 24.0.9-0ubuntu0.3 (LLVM 17.0.6 256 bits), OpenCL: OpenCL 3.0 CUDA 12.2.148, Compiler: GCC 13.3.0, File-System: ext4, Screen Resolution: 1024x768
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-fG75Ri/gcc-13-13.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-fG75Ri/gcc-13-13.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101020
Graphics Notes: BAR1 / Visible vRAM Size: 16384 MiB - vBIOS Version: 86.00.4d.00.01
OpenCL Notes: GPU Compute Cores: 3584
Python Notes: Python 3.12.3
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Vulnerable: Safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
This test profile uses the PlaidML deep learning framework, developed by Intel, to offer various benchmarks. Learn more via the OpenBenchmarking.org test page.
FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'
FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'
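The `ModuleNotFoundError: No module named 'tensorflow'` failures above (and the similar `pandas` and `tqdm` failures later in this log) indicate that the Python environment the tests ran in was missing these packages. A minimal, hypothetical preflight check, not part of the original run, that reports which of the modules named in these errors are importable:

```python
# Hypothetical preflight check for the Python modules these test
# profiles import; the module names are copied from the errors in
# this log, and this script is a sketch, not part of the PTS run.
import importlib.util


def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]


# Modules reported missing by the failed runs in this log:
print(missing_modules(["tensorflow", "pandas", "tqdm"]))
```

Running such a check before initiating the test suite would distinguish environment problems from genuine test failures.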
The CUDA and OpenCL versions of Vetter's Scalable HeterOgeneous Computing (SHOC) benchmark suite. SHOC provides a number of different benchmark programs for evaluating the performance and stability of compute devices. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
Model: GPT-2 - Device: CPU - Executor: Parallel
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: GPT-2 - Device: CPU - Executor: Standard
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: yolov4 - Device: CPU - Executor: Parallel
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: yolov4 - Device: CPU - Executor: Standard
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ZFNet-512 - Device: CPU - Executor: Parallel
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ZFNet-512 - Device: CPU - Executor: Standard
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: T5 Encoder - Device: CPU - Executor: Parallel
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: T5 Encoder - Device: CPU - Executor: Standard
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: bertsquad-12 - Device: CPU - Executor: Parallel
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: bertsquad-12 - Device: CPU - Executor: Standard
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: super-resolution-10 - Device: CPU - Executor: Parallel
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: super-resolution-10 - Device: CPU - Executor: Standard
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
hurricane-server: The test quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
hurricane-server: The test quit with a non-zero exit status. E: ./deepsparse: 2: /.local/bin/deepsparse.benchmark: not found
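All DeepSparse results fail because `deepsparse.benchmark` is not found at `/.local/bin/deepsparse.benchmark`. The leading slash with no home directory prefix suggests `$HOME` may have been empty when the profile resolved the path (for the root user it would normally be `/root/.local/bin`); that reading is an inference from the log, not something the log states. A hypothetical lookup that checks `PATH` first and then the conventional per-user script directory:

```python
# Hypothetical lookup for a pip console script such as
# deepsparse.benchmark; a sketch, not part of the PTS run.
import os
import shutil


def find_entry_point(name):
    """Look for a console script on PATH, then in ~/.local/bin."""
    found = shutil.which(name)
    if found:
        return found
    candidate = os.path.expanduser(os.path.join("~", ".local", "bin", name))
    return candidate if os.path.isfile(candidate) else None


print(find_entry_point("deepsparse.benchmark"))
```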
This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.
AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'
Model: llama-2-7b.Q4_0.gguf
hurricane-server: The test quit with a non-zero exit status. E: ./llama-cpp: 4: ./llama-bench: not found
Model: llama-2-13b.Q4_0.gguf
hurricane-server: The test quit with a non-zero exit status. E: ./llama-cpp: 4: ./llama-bench: not found
Model: llama-2-70b-chat.Q5_0.gguf
hurricane-server: The test quit with a non-zero exit status. E: ./llama-cpp: 4: ./llama-bench: not found
Test: llava-v1.5-7b-q4 - Acceleration: CPU
hurricane-server: The test quit with a non-zero exit status.
Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU
hurricane-server: The test quit with a non-zero exit status.
Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU
hurricane-server: The test quit with a non-zero exit status.
The spaCy library is an open-source Python solution for advanced natural language processing (NLP). This test profile times spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tqdm'
This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Model: AlexNet - Acceleration: CPU - Iterations: 100
hurricane-server: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: AlexNet - Acceleration: CPU - Iterations: 200
hurricane-server: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: AlexNet - Acceleration: CPU - Iterations: 1000
hurricane-server: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: GoogleNet - Acceleration: CPU - Iterations: 100
hurricane-server: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: GoogleNet - Acceleration: CPU - Iterations: 200
hurricane-server: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
Model: GoogleNet - Acceleration: CPU - Iterations: 1000
hurricane-server: The test quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found
NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
Target: CPU - Model: DenseNet
hurricane-server: The test quit with a non-zero exit status. E: ./tnn: 3: ./test/TNNTest: not found
Target: CPU - Model: MobileNet v2
hurricane-server: The test quit with a non-zero exit status. E: ./tnn: 3: ./test/TNNTest: not found
Target: CPU - Model: SqueezeNet v2
hurricane-server: The test quit with a non-zero exit status. E: ./tnn: 3: ./test/TNNTest: not found
Target: CPU - Model: SqueezeNet v1.1
hurricane-server: The test quit with a non-zero exit status. E: ./tnn: 3: ./test/TNNTest: not found
This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
This test is a quick-running survey of general R performance. Learn more via the OpenBenchmarking.org test page.
RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute-long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
Detector: KNN CAD
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Relative Entropy
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Windowed Gaussian
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Earthgecko Skyline
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Bayesian Changepoint
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Detector: Contextual Anomaly Detector OSE
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'pandas'
Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.
Benchmark: scikit_ica
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'imp'
Benchmark: scikit_qda
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'imp'
Benchmark: scikit_svm
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'imp'
Benchmark: scikit_linearridgeregression
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'imp'
Scikit-learn is a BSD-licensed Python module for machine learning built on NumPy and SciPy. Learn more via the OpenBenchmarking.org test page.
Benchmark: Glmnet
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'glmnet'
Benchmark: Plot Lasso Path
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'matplotlib.tri.triangulation'
Benchmark: Plot Fast KMeans
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'matplotlib.tri.triangulation'
Benchmark: RCV1 Logreg Convergence
hurricane-server: The test quit with a non-zero exit status. E: IndexError: list index out of range
Benchmark: Plot Singular Value Decomposition
hurricane-server: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'matplotlib.tri.triangulation'
Benchmark: Plot Non-Negative Matrix Factorization
hurricane-server: The test quit with a non-zero exit status. E: KeyError:
Testing initiated at 14 December 2024 23:25 by user root.