GH200
ARMv8 Neoverse-V2 testing with a Pegatron JIMBO P4352 (00022432 BIOS) and NVIDIA GH200 144G HBM3e 143GB on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410120-NE-G2008653578&grt&sro.
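The test selection listed below can be re-run locally for comparison against this result file. A minimal sketch follows, assuming the phoronix-test-suite command is installed and on the PATH; the result ID is taken from the URL above, and passing it to the benchmark sub-command offers to run the same tests and merge the new numbers for side-by-side comparison.

    # Minimal sketch: re-run this OpenBenchmarking.org result locally.
    # Assumes the Phoronix Test Suite CLI ("phoronix-test-suite") is installed.
    import subprocess

    RESULT_ID = "2410120-NE-G2008653578"  # ID from the URL above

    # "phoronix-test-suite benchmark <result-id>" fetches the result file and
    # offers to run the same tests so the new run can be compared against it.
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)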
7-Zip Compression
Test: Compression Rating
7-Zip Compression
Test: Decompression Rating
Blender
Blend File: BMW27 - Compute: CPU-Only
Blender
Blend File: Classroom - Compute: CPU-Only
Blender
Blend File: Fishy Cat - Compute: CPU-Only
Blender
Blend File: Barbershop - Compute: CPU-Only
Blender
Blend File: Pabellon Barcelona - Compute: CPU-Only
Build2
Time To Compile
BYTE Unix Benchmark
Computational Test: Pipe
BYTE Unix Benchmark
Computational Test: Dhrystone 2
BYTE Unix Benchmark
Computational Test: System Call
BYTE Unix Benchmark
Computational Test: Whetstone Double
C-Ray
Resolution: 4K - Rays Per Pixel: 16
C-Ray
Resolution: 5K - Rays Per Pixel: 16
C-Ray
Resolution: 1080p - Rays Per Pixel: 16
Epoch
Epoch3D Deck: Cone
Etcpak
Benchmark: Multi-Threaded - Configuration: ETC2
GraphicsMagick
Operation: Swirl
GraphicsMagick
Operation: Rotate
GraphicsMagick
Operation: Sharpen
GraphicsMagick
Operation: Enhanced
GraphicsMagick
Operation: Resizing
GraphicsMagick
Operation: Noise-Gaussian
GraphicsMagick
Operation: HWB Color Space
GROMACS
Input: water_GMX50_bare
GROMACS
Implementation: MPI CPU - Input: water_GMX50_bare
LeelaChessZero
Backend: Eigen
Mobile Neural Network
Model: nasnet
Mobile Neural Network
Model: mobilenetV3
Mobile Neural Network
Model: squeezenetv1.1
Mobile Neural Network
Model: resnet-v2-50
Mobile Neural Network
Model: SqueezeNetV1.0
Mobile Neural Network
Model: MobileNetV2_224
Mobile Neural Network
Model: mobilenet-v1-1.0
Mobile Neural Network
Model: inception-v3
ONNX Runtime
Model: yolov4 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: yolov4 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ZFNet-512 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ZFNet-512 - Device: CPU - Executor: Standard
ONNX Runtime
Model: T5 Encoder - Device: CPU - Executor: Parallel
ONNX Runtime
Model: T5 Encoder - Device: CPU - Executor: Standard
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
POV-Ray
Trace Time
PyPerformance
Benchmark: go
PyPerformance
Benchmark: chaos
PyPerformance
Benchmark: float
PyPerformance
Benchmark: nbody
PyPerformance
Benchmark: pathlib
PyPerformance
Benchmark: raytrace
PyPerformance
Benchmark: xml_etree
PyPerformance
Benchmark: gc_collect
PyPerformance
Benchmark: json_loads
PyPerformance
Benchmark: crypto_pyaes
PyPerformance
Benchmark: async_tree_io
PyPerformance
Benchmark: regex_compile
PyPerformance
Benchmark: python_startup
PyPerformance
Benchmark: asyncio_tcp_ssl
PyPerformance
Benchmark: django_template
PyPerformance
Benchmark: asyncio_websockets
PyPerformance
Benchmark: pickle_pure_python
simdjson
Throughput Test: Kostya
simdjson
Throughput Test: TopTweet
simdjson
Throughput Test: LargeRandom
simdjson
Throughput Test: PartialTweets
simdjson
Throughput Test: DistinctUserID
Stockfish
Chess Benchmark
Timed Linux Kernel Compilation
Build: defconfig
Timed Linux Kernel Compilation
Build: allmodconfig
Timed LLVM Compilation
Build System: Ninja
Timed LLVM Compilation
Build System: Unix Makefiles
WarpX
Input: Uniform Plasma
WarpX
Input: Plasma Acceleration
x265
Video Input: Bosphorus 4K
x265
Video Input: Bosphorus 1080p
XNNPACK
Model: FP32MobileNetV2
XNNPACK
Model: FP32MobileNetV3Large
XNNPACK
Model: FP32MobileNetV3Small
XNNPACK
Model: FP16MobileNetV2
XNNPACK
Model: FP16MobileNetV3Large
XNNPACK
Model: FP16MobileNetV3Small
XNNPACK
Model: QU8MobileNetV2
XNNPACK
Model: QU8MobileNetV3Large
XNNPACK
Model: QU8MobileNetV3Small
Phoronix Test Suite v10.8.5