llama cpp grace

ARMv8 Neoverse-V2 testing with a Pegatron JIMBO P4352 (00022432 BIOS) and ASPEED on Ubuntu 24.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2411235-NE-LLAMACPPG43
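
As a hedged sketch of how that comparison could be run on Ubuntu 24.04 (the install step and package name are assumptions and are not part of this result file; the result ID is the one given above):

    # Install the Phoronix Test Suite (Ubuntu package name assumed)
    sudo apt install phoronix-test-suite

    # Download this result file, run the same llama.cpp tests locally, and
    # append your own system as a new identifier for side-by-side comparison
    phoronix-test-suite benchmark 2411235-NE-LLAMACPPG43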

Result Runs

  Identifier   Date Run      Test Duration
  a            November 23   52 Minutes
  b            November 23   44 Minutes
  c            November 24   46 Minutes
  d            November 24   11 Minutes



llama cpp grace - OpenBenchmarking.org - Phoronix Test Suite

  Processor:          ARMv8 Neoverse-V2 @ 3.47GHz (72 Cores)
  Motherboard:        Pegatron JIMBO P4352 (00022432 BIOS)
  Memory:             1 x 480GB LPDDR5-6400MT/s NVIDIA 699-2G530-0236-RC1
  Disk:               1000GB CT1000T700SSD3
  Graphics:           ASPEED
  Network:            2 x Intel X550
  OS:                 Ubuntu 24.04
  Kernel:             6.8.0-49-generic-64k (aarch64)
  Compiler:           GCC 13.2.0 + Clang 18.1.3 + CUDA 11.8
  File-System:        ext4
  Screen Resolution:  1920x1200

Llama Cpp Grace Benchmarks - System Logs:
- Transparent Huge Pages: madvise
- Compiler configure options: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-dIwDw0/gcc-13-13.2.0/debian/tmp-nvptx/usr --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto --without-cuda-driver -v
- Scaling Governor: cppc_cpufreq ondemand (Boost: Disabled)
- Security mitigations: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; reg_file_data_sampling: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of __user pointer sanitization; spectre_v2: Not affected; srbds: Not affected; tsx_async_abort: Not affected
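The governor and transparent huge page settings noted above are exposed through standard Linux sysfs paths, so they can be checked the same way on a comparison system. A minimal sketch (standard sysfs locations, not commands recorded in this result file):

    # CPU frequency scaling driver and governor (reported above as cppc_cpufreq / ondemand)
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

    # Transparent huge page policy (reported above as madvise)
    cat /sys/kernel/mm/transparent_hugepage/enabled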

Result Overview (Phoronix Test Suite): relative performance of runs a, b, c, and d across the twelve Llama.cpp test cases (CPU BLAS backend; Llama-3.1-Tulu-3-8B-Q8_0, Mistral-7B-Instruct-v0.3-Q8_0, and granite-3.0-3b-a800m-instruct-Q8_0 models; Text Generation and Prompt Processing tests), with results spanning roughly 100% to 114% of the slowest run.

llama cpp grace - llama-cpp results (Tokens Per Second, higher is better)

  Test (Backend: CPU BLAS)                                           a         b         c         d
  Llama-3.1-Tulu-3-8B-Q8_0 - Text Generation 128                   20.07     20.56     20.70     18.24
  Mistral-7B-Instruct-v0.3-Q8_0 - Text Generation 128              21.48     21.78     20.06     19.48
  granite-3.0-3b-a800m-instruct-Q8_0 - Prompt Processing 512      123.29    123.69    123.66    131.19
  granite-3.0-3b-a800m-instruct-Q8_0 - Prompt Processing 1024     132.61    132.75    128.63    131.74
  granite-3.0-3b-a800m-instruct-Q8_0 - Prompt Processing 2048     131.81    134.42    130.52    133.64
  granite-3.0-3b-a800m-instruct-Q8_0 - Text Generation 128         50.75     50.87     51.00     49.91
  Llama-3.1-Tulu-3-8B-Q8_0 - Prompt Processing 512                121.74    121.76    121.85    120.71
  Mistral-7B-Instruct-v0.3-Q8_0 - Prompt Processing 512           122.12    122.28    122.54    121.45
  Llama-3.1-Tulu-3-8B-Q8_0 - Prompt Processing 2048               105.71    105.56    105.75    106.10
  Mistral-7B-Instruct-v0.3-Q8_0 - Prompt Processing 2048          107.00    106.65    106.90    106.49
  Mistral-7B-Instruct-v0.3-Q8_0 - Prompt Processing 1024          119.72    119.92    119.62    119.36
  Llama-3.1-Tulu-3-8B-Q8_0 - Prompt Processing 1024               118.77    118.88    119.02    118.98

(OpenBenchmarking.org)

Llama.cpp
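The per-test results below all report the same compiler line (g++ with -O3 -mcpu=native -fopenmp -lopenblas). A minimal sketch of an equivalent manual build, assuming the upstream llama.cpp repository at tag b4154 and the CMake BLAS options used by llama.cpp releases of that era (the package names and CMake flags are assumptions, not taken from this result file):

    # Build llama.cpp b4154 with an OpenBLAS-backed CPU backend (sketch; flags assumed)
    sudo apt install git cmake g++ libopenblas-dev
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    git checkout b4154
    cmake -B build -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS -DGGML_NATIVE=ON
    cmake --build build --config Release -j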

Llama.cpp b4154 results (OpenBenchmarking.org; Tokens Per Second, more is better). All twelve results were built with (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -mcpu=native -fopenmp -lopenblas.

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Text Generation 128: a: 20.07, b: 20.56, c: 20.70, d: 18.24. Reported SE/N: +/- 0.25 (N = 15), +/- 0.25 (N = 15), +/- 0.21 (N = 15).

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Text Generation 128: a: 21.48, b: 21.78, c: 20.06, d: 19.48. Reported SE/N: +/- 0.24 (N = 3), +/- 0.32 (N = 15), +/- 0.31 (N = 15).

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 512: a: 123.29, b: 123.69, c: 123.66, d: 131.19. Reported SE/N: +/- 1.07 (N = 15), +/- 0.55 (N = 3), +/- 1.11 (N = 15).

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 1024: a: 132.61, b: 132.75, c: 128.63, d: 131.74. Reported SE/N: +/- 0.91 (N = 3), +/- 1.65 (N = 3), +/- 1.59 (N = 4).

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 2048: a: 131.81, b: 134.42, c: 130.52, d: 133.64. Reported SE/N: +/- 1.22 (N = 3), +/- 0.94 (N = 3), +/- 1.44 (N = 4).

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Text Generation 128: a: 50.75, b: 50.87, c: 51.00, d: 49.91. Reported SE/N: +/- 0.45 (N = 15), +/- 0.71 (N = 3), +/- 0.59 (N = 3).

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 512: a: 121.74, b: 121.76, c: 121.85, d: 120.71. Reported SE/N: +/- 0.05 (N = 3), +/- 0.24 (N = 3), +/- 0.11 (N = 3).

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 512: a: 122.12, b: 122.28, c: 122.54, d: 121.45. Reported SE/N: +/- 0.22 (N = 3), +/- 0.05 (N = 3), +/- 0.12 (N = 3).

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 2048: a: 105.71, b: 105.56, c: 105.75, d: 106.10. Reported SE/N: +/- 0.37 (N = 3), +/- 0.03 (N = 3), +/- 0.13 (N = 3).

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 2048: a: 107.00, b: 106.65, c: 106.90, d: 106.49. Reported SE/N: +/- 0.19 (N = 3), +/- 0.15 (N = 3), +/- 0.16 (N = 3).

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 1024: a: 119.72, b: 119.92, c: 119.62, d: 119.36. Reported SE/N: +/- 0.28 (N = 3), +/- 0.12 (N = 3), +/- 0.01 (N = 3).

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 1024: a: 118.77, b: 118.88, c: 119.02, d: 118.98. Reported SE/N: +/- 0.10 (N = 3), +/- 0.19 (N = 3), +/- 0.20 (N = 3).
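
The "Prompt Processing 512/1024/2048" and "Text Generation 128" cases correspond to fixed prompt and generation lengths reported in tokens per second. A hedged example using llama.cpp's bundled llama-bench tool, which exposes those lengths directly (the model filename is a placeholder; this is not the exact invocation recorded in the result file):

    # Prompt processing at 512/1024/2048 tokens and 128-token text generation,
    # reported as tokens per second (model path is a placeholder)
    ./build/bin/llama-bench \
        -m models/Llama-3.1-Tulu-3-8B-Q8_0.gguf \
        -p 512,1024,2048 \
        -n 128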