gh200

ARMv8 Neoverse-V2 testing with a Pegatron JIMBO P4352 (00022432 BIOS) and NVIDIA GH200 144G HBM3e 143GB on Ubuntu 24.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2410120-NE-G2008653578
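If you want to script that comparison (for example from CI), the same command can be invoked from Python; a minimal sketch, assuming the phoronix-test-suite CLI is already installed and on the PATH:

    import subprocess

    # Kick off the comparison run against this public result file
    # (same command as given above; assumes phoronix-test-suite is installed).
    subprocess.run(
        ["phoronix-test-suite", "benchmark", "2410120-NE-G2008653578"],
        check=True,
    )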
Test suites represented in this result file:

Chess Test Suite 2 Tests
Timed Code Compilation 3 Tests
C/C++ Compiler Tests 7 Tests
CPU Massive 8 Tests
Creator Workloads 3 Tests
Database Test Suite 2 Tests
HPC - High Performance Computing 5 Tests
Machine Learning 4 Tests
Multi-Core 10 Tests
NVIDIA GPU Compute 2 Tests
OpenMPI Tests 2 Tests
Programmer / Developer System Benchmarks 5 Tests
Python Tests 3 Tests
Server 3 Tests
Server CPU Tests 5 Tests

Result Identifier    Date Run      Test Duration
a                    October 12    12 Hours, 26 Minutes
b                    October 12    3 Hours, 37 Minutes
Average                            8 Hours, 2 Minutes


gh200 Benchmarks - OpenBenchmarking.org - Phoronix Test Suite

Processor: ARMv8 Neoverse-V2 @ 3.47GHz (72 Cores)
Motherboard: Pegatron JIMBO P4352 (00022432 BIOS)
Memory: 1 x 480GB LPDDR5-6400MT/s NVIDIA 699-2G530-0236-RC1
Disk: 1000GB CT1000T700SSD3
Graphics: NVIDIA GH200 144G HBM3e 143GB
Network: 2 x Intel X550
OS: Ubuntu 24.04
Kernel: 6.8.0-45-generic-64k (aarch64)
Display Driver: NVIDIA
OpenCL: OpenCL 3.0 CUDA 12.6.65
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1200

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-dIwDw0/gcc-13-13.2.0/debian/tmp-nvptx/usr --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto --without-cuda-driver -v
- Scaling Governor: cppc_cpufreq ondemand (Boost: Disabled)
- OpenJDK Runtime Environment (build 21.0.4+7-Ubuntu-1ubuntu224.04)
- Python 3.12.3
- Security: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Not affected + srbds: Not affected + tsx_async_abort: Not affected

a vs. b Comparison (percent difference between the two runs; only the largest deltas from the chart are listed): Stockfish Chess Benchmark 18.8% and 11.8%, Timed Linux Kernel Compilation defconfig 10%, ONNX Runtime ZFNet-512 (CPU, Parallel) ~7%, GraphicsMagick HWB Color Space 5.4%, Mobile Neural Network mobilenet-v1-1.0 5%, XNNPACK FP32MobileNetV2 4.5%, GraphicsMagick Swirl 4.3%, ONNX Runtime T5 Encoder (CPU, Standard) 3.9%, ONNX Runtime yolov4 (CPU, Standard) 3.8%, PyPerformance asyncio_tcp_ssl 3.5%, GraphicsMagick Noise-Gaussian 3.4%, Mobile Neural Network mobilenetV3 3.4%, Build2 Time To Compile 3.2%; the remaining charted deltas were 2.8% or less.
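The percentages above appear to be the gap between runs a and b for each test, taken relative to the smaller of the two values; a minimal sketch of that calculation (the helper name is hypothetical, not part of the Phoronix Test Suite):

    # Hypothetical helper: run-to-run delta as plotted in the a vs. b chart,
    # i.e. the gap between the two results relative to the smaller one.
    def run_delta_percent(a: float, b: float) -> float:
        low, high = sorted((a, b))
        return (high - low) / low * 100.0

    # Example using the Stockfish 17 Chess Benchmark results from this file:
    print(round(run_delta_percent(168428763, 188288587), 1))  # 11.8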

Summary table of the 118 gh200 results for runs a and b; each result is broken out individually below.

XNNPACK

XNNPACK 2cd86b - Model: QU8MobileNetV3Small (us, Fewer Is Better): a: 1083, b: 1108 (SE +/- 9.82, N = 3)

XNNPACK 2cd86b - Model: QU8MobileNetV3Large (us, Fewer Is Better): a: 1484, b: 1513 (SE +/- 8.97, N = 3)

XNNPACK 2cd86b - Model: QU8MobileNetV2 (us, Fewer Is Better): a: 945, b: 963 (SE +/- 6.69, N = 3)

XNNPACK 2cd86b - Model: FP16MobileNetV3Small (us, Fewer Is Better): a: 881, b: 866 (SE +/- 20.00, N = 3)

XNNPACK 2cd86b - Model: FP16MobileNetV3Large (us, Fewer Is Better): a: 1226, b: 1199 (SE +/- 21.31, N = 3)

XNNPACK 2cd86b - Model: FP16MobileNetV2 (us, Fewer Is Better): a: 840, b: 829 (SE +/- 15.62, N = 3)

XNNPACK 2cd86b - Model: FP32MobileNetV3Small (us, Fewer Is Better): a: 945, b: 930 (SE +/- 16.38, N = 3)

XNNPACK 2cd86b - Model: FP32MobileNetV3Large (us, Fewer Is Better): a: 1426, b: 1426 (SE +/- 6.51, N = 3)

XNNPACK 2cd86b - Model: FP32MobileNetV2 (us, Fewer Is Better): a: 967, b: 925 (SE +/- 8.41, N = 3)

1. (CXX) g++ options: -O3 -lrt -lm
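Throughout this file, "SE +/-" is the standard error the Phoronix Test Suite reports across the N trial runs of a test. As a rough illustration only (assuming the conventional standard-error-of-the-mean formula; the trial values below are made up, not taken from this result file):

    import statistics

    trials = [1075.0, 1083.0, 1091.0]          # made-up trial times in microseconds
    n = len(trials)
    se = statistics.stdev(trials) / n ** 0.5   # standard error of the mean
    print(f"SE +/- {se:.2f}, N = {n}")         # SE +/- 4.62, N = 3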

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender 4.0.2 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better): a: 381.45, b: 381.55 (SE +/- 0.54, N = 3)

Stockfish

Stockfish - Chess Benchmark (Nodes Per Second, More Is Better): a: 58496753, b: 69473429 (SE +/- 959000.15, N = 15). 1. Stockfish 16 by the Stockfish developers (see AUTHORS file)

LeelaChessZero

LeelaChessZero 0.31.1 - Backend: Eigen (Nodes Per Second, More Is Better): a: 360, b: 362 (SE +/- 4.26, N = 3). 1. (CXX) g++ options: -flto -pthread

Stockfish

Stockfish 17 - Chess Benchmark (Nodes Per Second, More Is Better): a: 168428763, b: 188288587 (SE +/- 6156005.01, N = 15). 1. (CXX) g++ options: -lgcov -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -flto -flto-partition=one -flto=jobserver

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git - Computational Test: Whetstone Double (MWIPS, More Is Better): a: 721978.0, b: 721932.3 (SE +/- 19.25, N = 3). 1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.8 - Build: allmodconfig (Seconds, Fewer Is Better): a: 285.13, b: 289.83 (SE +/- 2.59, N = 3)

Epoch

Epoch 4.19.4 - Epoch3D Deck: Cone (Seconds, Fewer Is Better): a: 188.20, b: 187.79 (SE +/- 2.18, N = 4). 1. (F9X) gfortran options: -O3 -std=f2003 -Jobj -lsdf -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 - Build System: Unix Makefiles (Seconds, Fewer Is Better): a: 276.93, b: 277.75 (SE +/- 0.32, N = 3)

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git - Computational Test: System Call (LPS, More Is Better): a: 145868649.3, b: 145872070.3 (SE +/- 15202.21, N = 3)

BYTE Unix Benchmark 5.1.3-git - Computational Test: Pipe (LPS, More Is Better): a: 202565282.2, b: 202436523.6 (SE +/- 32087.94, N = 3)

BYTE Unix Benchmark 5.1.3-git - Computational Test: Dhrystone 2 (LPS, More Is Better): a: 4998587529.8, b: 4994389993.2 (SE +/- 2591819.88, N = 3)

1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

PyPerformance

PyPerformance 1.11 - Benchmark: asyncio_tcp_ssl (Milliseconds, Fewer Is Better): a: 1.49, b: 1.44 (SE +/- 0.00, N = 3)

ONNX Runtime

ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): a: 22.15, b: 23.64 (SE +/- 0.25, N = 15)

ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): a: 45.23, b: 42.29 (SE +/- 0.50, N = 15)

1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample high resolution (currently 15400 x 6940) JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.43 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better): a: 301, b: 291 (SE +/- 2.18, N = 15). 1. (CC) gcc options: -fopenmp -O2 -ljpeg -lSM -lICE -lX11 -lz -lm -lpthread -lgomp
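For a rough sense of scale for these iterations-per-minute figures, illustrative arithmetic only, using the 15400 x 6940 sample image size noted above:

    # Illustrative only: convert the Noise-Gaussian result for run a into
    # per-pass time and pixel throughput for the 15400 x 6940 sample image.
    width, height = 15400, 6940
    megapixels = width * height / 1e6               # ~106.9 MP per pass
    iterations_per_minute = 301                     # run a, Noise-Gaussian above
    print(60 / iterations_per_minute)               # ~0.20 seconds per pass
    print(megapixels * iterations_per_minute / 60)  # ~536 MP processed per second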

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.8 - Build: defconfig (Seconds, Fewer Is Better): a: 66.71, b: 73.41 (SE +/- 0.55, N = 13)

PyPerformance

PyPerformance 1.11 - Benchmark: gc_collect (Milliseconds, Fewer Is Better): a: 1.08, b: 1.07 (SE +/- 0.01, N = 15)

GROMACS

GROMACS - Input: water_GMX50_bare (Ns Per Day, More Is Better): a: 7.156, b: 7.159 (SE +/- 0.004, N = 3). 1. GROMACS version: 2023.3-Ubuntu_2023.3_1ubuntu3

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 - Build System: Ninja (Seconds, Fewer Is Better): a: 175.03, b: 175.04 (SE +/- 1.19, N = 3)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender 4.0.2 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better): a: 154.46, b: 153.73 (SE +/- 0.37, N = 3)

PyPerformance

PyPerformance 1.11 - Benchmark: async_tree_io (Milliseconds, Fewer Is Better): a: 748, b: 750 (SE +/- 2.96, N = 3)

PyPerformance 1.11 - Benchmark: xml_etree (Milliseconds, Fewer Is Better): a: 45.8, b: 45.8 (SE +/- 0.03, N = 3)

PyPerformance 1.11 - Benchmark: python_startup (Milliseconds, Fewer Is Better): a: 18.7, b: 18.8 (SE +/- 0.06, N = 3)

PyPerformance 1.11 - Benchmark: asyncio_websockets (Milliseconds, Fewer Is Better): a: 510, b: 508 (SE +/- 0.33, N = 3)

Build2

Build2 0.17 - Time To Compile (Seconds, Fewer Is Better): a: 84.79, b: 87.52 (SE +/- 0.22, N = 3)

Mobile Neural Network

Mobile Neural Network 2.9.b11b7037d - Model: inception-v3 (ms, Fewer Is Better): a: 13.69 (MIN: 11.51 / MAX: 42.34), b: 13.85 (MIN: 11.64 / MAX: 41) (SE +/- 0.02, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: mobilenet-v1-1.0 (ms, Fewer Is Better): a: 1.793 (MIN: 1.34 / MAX: 22.05), b: 1.883 (MIN: 1.35 / MAX: 21.77) (SE +/- 0.005, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: MobileNetV2_224 (ms, Fewer Is Better): a: 1.502 (MIN: 1.12 / MAX: 13.52), b: 1.502 (MIN: 1.15 / MAX: 9.89) (SE +/- 0.019, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: SqueezeNetV1.0 (ms, Fewer Is Better): a: 3.396 (MIN: 2.14 / MAX: 29.88), b: 3.461 (MIN: 2.15 / MAX: 23.43) (SE +/- 0.027, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: resnet-v2-50 (ms, Fewer Is Better): a: 11.34 (MIN: 8.54 / MAX: 42.16), b: 11.27 (MIN: 8.57 / MAX: 39.91) (SE +/- 0.10, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: squeezenetv1.1 (ms, Fewer Is Better): a: 1.824 (MIN: 1.17 / MAX: 20.37), b: 1.853 (MIN: 1.18 / MAX: 15.45) (SE +/- 0.044, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: mobilenetV3 (ms, Fewer Is Better): a: 1.134 (MIN: 0.69 / MAX: 11.14), b: 1.097 (MIN: 0.68 / MAX: 11.22) (SE +/- 0.009, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: nasnet (ms, Fewer Is Better): a: 5.008 (MIN: 4.49 / MAX: 27.91), b: 4.996 (MIN: 4.52 / MAX: 20.42) (SE +/- 0.036, N = 3)

1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender 4.0.2 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better): a: 78.37, b: 78.36 (SE +/- 0.08, N = 3)

Blender 4.0.2 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better): a: 73.02, b: 72.18 (SE +/- 0.44, N = 3)

simdjson

simdjson 3.10 - Throughput Test: PartialTweets (GB/s, More Is Better): a: 4.06, b: 4.06 (SE +/- 0.00, N = 3)

simdjson 3.10 - Throughput Test: DistinctUserID (GB/s, More Is Better): a: 4.16, b: 4.18 (SE +/- 0.00, N = 3)

simdjson 3.10 - Throughput Test: TopTweet (GB/s, More Is Better): a: 4.14, b: 4.11 (SE +/- 0.01, N = 3)

1. (CXX) g++ options: -O3 -lrt

x265

x265 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 8.81, b: 8.86 (SE +/- 0.03, N = 3). 1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6

ONNX Runtime

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): a: 3298.19, b: 3259.10 (SE +/- 21.77, N = 3)

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 0.303222, b: 0.306833 (SE +/- 0.001997, N = 3)

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): a: 2776.33, b: 2789.32 (SE +/- 9.74, N = 3)

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): a: 0.360196, b: 0.358509 (SE +/- 0.001260, N = 3)

ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): a: 2160.11, b: 2186.92 (SE +/- 4.77, N = 3)

ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 0.462943, b: 0.457263 (SE +/- 0.001019, N = 3)

ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): a: 1715.64, b: 1741.97 (SE +/- 8.64, N = 3)

ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): a: 0.582902, b: 0.574063 (SE +/- 0.002945, N = 3)

1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

simdjson

simdjson 3.10 - Throughput Test: Kostya (GB/s, More Is Better): a: 3.11, b: 3.13 (SE +/- 0.01, N = 3). 1. (CXX) g++ options: -O3 -lrt

ONNX Runtime

ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): a: 194.36, b: 201.75 (SE +/- 2.71, N = 3)

ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 5.14685, b: 4.95662 (SE +/- 0.07121, N = 3)

ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): a: 172.23, b: 170.27 (SE +/- 1.46, N = 3)

ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): a: 5.80711, b: 5.87309 (SE +/- 0.04903, N = 3)

ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): a: 2.56003, b: 2.66013 (SE +/- 0.01750, N = 3)

ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 390.07, b: 375.38 (SE +/- 2.61, N = 3)

ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): a: 4.75865, b: 4.72548 (SE +/- 0.05942, N = 3)

ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 210.12, b: 211.54 (SE +/- 2.64, N = 3)

ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): a: 9.15193, b: 9.20593 (SE +/- 0.06344, N = 3)

ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): a: 109.24, b: 108.60 (SE +/- 0.76, N = 3)

1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

GraphicsMagick

GraphicsMagick - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better): a: 217, b: 223 (SE +/- 1.73, N = 3)

GraphicsMagick - Operation: Rotate (Iterations Per Minute, More Is Better): a: 209, b: 209 (SE +/- 0.88, N = 3)

1. GraphicsMagick 1.3.42 2023-09-23 Q16 http://www.GraphicsMagick.org/

ONNX Runtime

ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): a: 3.11158, b: 3.15037 (SE +/- 0.00260, N = 3)

ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): a: 321.24, b: 317.28 (SE +/- 0.27, N = 3)

ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): a: 0.791624, b: 0.785643 (SE +/- 0.001678, N = 3)

ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 1262.14, b: 1271.75 (SE +/- 2.68, N = 3)

1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

GraphicsMagick

GraphicsMagick - Operation: Sharpen (Iterations Per Minute, More Is Better): a: 171, b: 170 (SE +/- 0.33, N = 3). 1. GraphicsMagick 1.3.42 2023-09-23 Q16 http://www.GraphicsMagick.org/

ONNX Runtime

ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): a: 6.19726, b: 6.15294 (SE +/- 0.02687, N = 3)

ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): a: 161.33, b: 162.49 (SE +/- 0.70, N = 3)

1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample high resolution (currently 15400 x 6940) JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.43 - Operation: Resizing (Iterations Per Minute, More Is Better): a: 442, b: 438 (SE +/- 6.06, N = 3). 1. (CC) gcc options: -fopenmp -O2 -ljpeg -lSM -lICE -lX11 -lz -lm -lpthread -lgomp

ONNX Runtime

ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): a: 53.63, b: 54.31 (SE +/- 0.29, N = 3)

ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): a: 18.65, b: 18.41 (SE +/- 0.10, N = 3)

ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): a: 3.14523, b: 3.14788 (SE +/- 0.00610, N = 3)

ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 317.87, b: 317.60 (SE +/- 0.61, N = 3)

ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): a: 6.28553, b: 6.24111 (SE +/- 0.04448, N = 3)

ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 159.07, b: 160.17 (SE +/- 1.12, N = 3)

1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

GraphicsMagick

GraphicsMagick - Operation: Enhanced (Iterations Per Minute, More Is Better): a: 351, b: 350 (SE +/- 0.33, N = 3). 1. GraphicsMagick 1.3.42 2023-09-23 Q16 http://www.GraphicsMagick.org/

GraphicsMagick 1.3.43 - Operation: Sharpen (Iterations Per Minute, More Is Better): a: 411, b: 408 (SE +/- 0.58, N = 3). 1. (CC) gcc options: -fopenmp -O2 -ljpeg -lSM -lICE -lX11 -lz -lm -lpthread -lgomp

GraphicsMagick - Operation: Resizing (Iterations Per Minute, More Is Better): a: 282, b: 284 (SE +/- 1.15, N = 3). 1. GraphicsMagick 1.3.42 2023-09-23 Q16 http://www.GraphicsMagick.org/

GraphicsMagick - Operation: HWB Color Space (Iterations Per Minute, More Is Better): a: 430, b: 408 (SE +/- 0.67, N = 3). 1. GraphicsMagick 1.3.42 2023-09-23 Q16 http://www.GraphicsMagick.org/

GraphicsMagick 1.3.43 - Operation: Rotate (Iterations Per Minute, More Is Better): a: 331, b: 326 (SE +/- 4.26, N = 3). 1. (CC) gcc options: -fopenmp -O2 -ljpeg -lSM -lICE -lX11 -lz -lm -lpthread -lgomp

GraphicsMagick 1.3.43 - Operation: Swirl (Iterations Per Minute, More Is Better): a: 657, b: 685 (SE +/- 4.26, N = 3). 1. (CC) gcc options: -fopenmp -O2 -ljpeg -lSM -lICE -lX11 -lz -lm -lpthread -lgomp

GraphicsMagick 1.3.43 - Operation: Enhanced (Iterations Per Minute, More Is Better): a: 359, b: 361 (SE +/- 0.67, N = 3). 1. (CC) gcc options: -fopenmp -O2 -ljpeg -lSM -lICE -lX11 -lz -lm -lpthread -lgomp

GraphicsMagick - Operation: Swirl (Iterations Per Minute, More Is Better): a: 605, b: 619 (SE +/- 5.51, N = 3). 1. GraphicsMagick 1.3.42 2023-09-23 Q16 http://www.GraphicsMagick.org/

GraphicsMagick 1.3.43 - Operation: HWB Color Space (Iterations Per Minute, More Is Better): a: 656, b: 671 (SE +/- 8.82, N = 3). 1. (CC) gcc options: -fopenmp -O2 -ljpeg -lSM -lICE -lX11 -lz -lm -lpthread -lgomp

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2024 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better): a: 6.001, b: 5.990 (SE +/- 0.003, N = 3). 1. (CXX) g++ options: -O3 -lm

simdjson

simdjson 3.10 - Throughput Test: LargeRandom (GB/s, More Is Better): a: 1.15, b: 1.14 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3 -lrt

x265

x265 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 12.61, b: 12.35 (SE +/- 0.18, N = 3). 1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6

PyPerformance

PyPerformance 1.11 - Benchmark: raytrace (Milliseconds, Fewer Is Better): a: 217, b: 218 (SE +/- 0.33, N = 3)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender 4.0.2 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better): a: 38.06, b: 38.43 (SE +/- 0.04, N = 3)

PyPerformance

PyPerformance 1.11 - Benchmark: go (Milliseconds, Fewer Is Better): a: 98.2, b: 97.8 (SE +/- 0.07, N = 3)

Etcpak

Etcpak 2.0 - Benchmark: Multi-Threaded - Configuration: ETC2 (Mpx/s, More Is Better): a: 471.19, b: 469.59 (SE +/- 2.47, N = 3). 1. (CXX) g++ options: -flto -pthread

C-Ray

C-Ray 2.0 - Resolution: 5K - Rays Per Pixel: 16 (Seconds, Fewer Is Better): a: 36.21, b: 36.13 (SE +/- 0.02, N = 3). 1. (CC) gcc options: -lpthread -lm

PyPerformance

PyPerformance 1.11 - Benchmark: chaos (Milliseconds, Fewer Is Better): a: 47.4, b: 47.4 (SE +/- 0.06, N = 3)

PyPerformance 1.11 - Benchmark: json_loads (Milliseconds, Fewer Is Better): a: 17.5, b: 17.4 (SE +/- 0.06, N = 3)

PyPerformance 1.11 - Benchmark: regex_compile (Milliseconds, Fewer Is Better): a: 82.3, b: 82.1 (SE +/- 0.12, N = 3)

PyPerformance 1.11 - Benchmark: django_template (Milliseconds, Fewer Is Better): a: 26.3, b: 26.2 (SE +/- 0.12, N = 3)

PyPerformance 1.11 - Benchmark: pathlib (Milliseconds, Fewer Is Better): a: 15.5, b: 15.4 (SE +/- 0.03, N = 3)

WarpX

WarpX 24.10 - Input: Plasma Acceleration (Seconds, Fewer Is Better): a: 20.38, b: 20.41 (SE +/- 0.03, N = 3)

WarpX 24.10 - Input: Uniform Plasma (Seconds, Fewer Is Better): a: 16.90, b: 16.89 (SE +/- 0.18, N = 3)

1. (CXX) g++ options: -O3

PyPerformance

PyPerformance 1.11 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better): a: 205, b: 204 (SE +/- 0.33, N = 3)

PyPerformance 1.11 - Benchmark: nbody (Milliseconds, Fewer Is Better): a: 64.5, b: 64.9 (SE +/- 0.09, N = 3)

PyPerformance 1.11 - Benchmark: float (Milliseconds, Fewer Is Better): a: 56.8, b: 56.8 (SE +/- 0.03, N = 3)

PyPerformance 1.11 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better): a: 54.8, b: 55.0 (SE +/- 0.03, N = 3)

C-Ray

C-Ray 2.0 - Resolution: 4K - Rays Per Pixel: 16 (Seconds, Fewer Is Better): a: 20.36, b: 20.34 (SE +/- 0.00, N = 3). 1. (CC) gcc options: -lpthread -lm

7-Zip Compression

7-Zip Compression 24.05 - Test: Decompression Rating (MIPS, More Is Better): a: 420524, b: 418162 (SE +/- 944.71, N = 3)

7-Zip Compression 24.05 - Test: Compression Rating (MIPS, More Is Better): a: 384775, b: 384421 (SE +/- 4213.31, N = 3)

1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression - Test: Decompression Rating (MIPS, More Is Better): a: 418819, b: 418365 (SE +/- 507.84, N = 3)

7-Zip Compression - Test: Compression Rating (MIPS, More Is Better): a: 393523, b: 398821 (SE +/- 3097.20, N = 3)

1. 7-Zip 23.01 (arm64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20

PostgreSQL

On both runs a and b, every attempted pgbench configuration (scaling factors 1, 100, and 1000; client counts 500, 800, and 1000; both Read Only and Read Write modes) failed to produce a result with the same error: E: ./pgbench: 21: pg_/bin/pgbench: not found

POV-Ray

POV-Ray - Trace Time (Seconds, Fewer Is Better): a: 7.786, b: 7.869 (SE +/- 0.061, N = 3). 1. POV-Ray 3.7.0.10.unofficial

LeelaChessZero

Backend: BLAS

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

C-Ray

C-Ray 2.0 - Resolution: 1080p - Rays Per Pixel: 16 (Seconds, Fewer Is Better): a: 5.195, b: 5.197 (SE +/- 0.003, N = 3). 1. (CC) gcc options: -lpthread -lm

Apache Cassandra

Test: Writes

a: The test run did not produce a result.

b: The test run did not produce a result.

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

a, b: The test quit with a non-zero exit status. E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file "GPT2/model.onnx" failed: No such file or directory

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

Implementation: NVIDIA CUDA GPU - Input: water_GMX50_bare

a: The test quit with a non-zero exit status. E: ./gromacs: 5: /cuda-build/run-gromacs: not found

b: The test quit with a non-zero exit status. E: ./gromacs: 5: /cuda-build/run-gromacs: not found

ONNX Runtime

On both runs a and b, each of the following model/executor combinations quit with a non-zero exit status because the model file could not be opened (E: onnxruntime/onnxruntime/test/onnx/onnx_model_info.cc:45 void OnnxModelInfo::InitOnnxModelInfo(const std::filesystem::__cxx11::path&) open file failed: No such file or directory):

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard and Parallel (missing "FasterRCNN-12-int8/FasterRCNN-12-int8.onnx")
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard and Parallel (missing "resnet100/resnet100.onnx")
Model: GPT-2 - Device: CPU - Executor: Parallel (missing "GPT2/model.onnx")
Model: bertsquad-12 - Device: CPU - Executor: Standard and Parallel (missing "bertsquad-12/bertsquad-12.onnx")

118 Results Shown

XNNPACK:
  QU8MobileNetV3Small
  QU8MobileNetV3Large
  QU8MobileNetV2
  FP16MobileNetV3Small
  FP16MobileNetV3Large
  FP16MobileNetV2
  FP32MobileNetV3Small
  FP32MobileNetV3Large
  FP32MobileNetV2
Blender
Stockfish
LeelaChessZero
Stockfish
BYTE Unix Benchmark
Timed Linux Kernel Compilation
Epoch
Timed LLVM Compilation
BYTE Unix Benchmark:
  System Call
  Pipe
  Dhrystone 2
PyPerformance
ONNX Runtime:
  ZFNet-512 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
GraphicsMagick
Timed Linux Kernel Compilation
PyPerformance
GROMACS
Timed LLVM Compilation
Blender
PyPerformance:
  async_tree_io
  xml_etree
  python_startup
  asyncio_websockets
Build2
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  SqueezeNetV1.0
  resnet-v2-50
  squeezenetv1.1
  mobilenetV3
  nasnet
Blender:
  Classroom - CPU-Only
  Fishy Cat - CPU-Only
simdjson:
  PartialTweets
  DistinctUserID
  TopTweet
x265
ONNX Runtime:
  ResNet101_DUC_HDC-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet101_DUC_HDC-12 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  fcn-resnet101-11 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  fcn-resnet101-11 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
simdjson
ONNX Runtime:
  yolov4 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  yolov4 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  T5 Encoder - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ZFNet-512 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  T5 Encoder - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
GraphicsMagick:
  Noise-Gaussian
  Rotate
ONNX Runtime:
  CaffeNet 12-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  CaffeNet 12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
GraphicsMagick
ONNX Runtime:
  ResNet50 v1-12-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
GraphicsMagick
ONNX Runtime:
  super-resolution-10 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet50 v1-12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  super-resolution-10 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
GraphicsMagick
GraphicsMagick
GraphicsMagick:
  Resizing
  HWB Color Space
GraphicsMagick:
  Rotate
  Swirl
  Enhanced
GraphicsMagick
GraphicsMagick
GROMACS
simdjson
x265
PyPerformance
Blender
PyPerformance
Etcpak
C-Ray
PyPerformance:
  chaos
  json_loads
  regex_compile
  django_template
  pathlib
WarpX:
  Plasma Acceleration
  Uniform Plasma
PyPerformance:
  pickle_pure_python
  nbody
  float
  crypto_pyaes
C-Ray
7-Zip Compression:
  Decompression Rating
  Compression Rating
7-Zip Compression:
  Decompression Rating
  Compression Rating
POV-Ray
C-Ray