Llama.cpp

Llama.cpp is a C/C++ port of Facebook's (Meta's) LLaMA model developed by Georgi Gerganov. It allows inference of LLaMA and other supported models in C/C++. For CPU inference, Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern instruction set architectures, along with features such as OpenBLAS support.

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark llama-cpp.
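The basic command above can also be scripted. The sketch below simply builds the argv for that invocation; the `batch-benchmark` subcommand (the Phoronix Test Suite's non-interactive variant of `benchmark`) is shown as an alternative, and the wrapper function itself is a hypothetical convenience, not part of the test suite.

```python
import subprocess

def pts_command(test="llama-cpp", batch=False):
    """Build the argv for invoking the Phoronix Test Suite on a test profile.

    'batch-benchmark' skips the interactive prompts; plain 'benchmark'
    matches the basic command given above.
    """
    subcommand = "batch-benchmark" if batch else "benchmark"
    return ["phoronix-test-suite", subcommand, test]

print(pts_command())
# To actually launch the benchmark (requires Phoronix Test Suite installed):
# subprocess.run(pts_command(batch=True), check=True)
```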

Project Site

github.com

Source Repository

github.com

Test Created

10 January 2024

Last Updated

2 June 2024

Test Maintainer

Michael Larabel 

Test Type

System

Average Install Time

43 Seconds

Average Run Time

16 Minutes, 59 Seconds

Test Dependencies

C/C++ Compiler Toolchain + BLAS (Basic Linear Algebra Subprograms)

Accolades

5k+ Downloads

Supported Platforms


[Chart: Llama.cpp Popularity Statistics (pts/llama-cpp) - public result uploads, reported installs, reported test completions, and test profile page views on OpenBenchmarking.org, January 2024 through July 2024]
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
Data updated weekly as of 15 July 2024.

Revision History

pts/llama-cpp-1.1.0   [View Source]   Sun, 02 Jun 2024 10:36:25 GMT
Update against Llama.cpp upstream, switch to Llama 3 model.

pts/llama-cpp-1.0.0   [View Source]   Wed, 10 Jan 2024 18:02:20 GMT
Initial commit of llama.cpp CPU benchmark.

Suites Using This Test

Machine Learning

HPC - High Performance Computing

Large Language Models


Performance Metrics

Analyze Test Configuration:

Llama.cpp b3067

Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf

OpenBenchmarking.org metrics for this test profile configuration based on 83 public results since 2 June 2024 with the latest data as of 20 July 2024.

Below is an overview of generalized performance for components where there is sufficient, statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely; this overview is intended only as general guidance on performance expectations.

Percentile Rank      # Compatible Public Results    Tokens Per Second (Average)
92nd                 4                              17.71 +/- 0.04
Mid-Tier (75th)      -                              < 10.34
69th                 4                              9.50 +/- 0.02
59th                 3                              8.75 +/- 0.73
Median (50th)        -                              8.29
48th                 7                              8.12 +/- 0.11
26th                 3                              7.11 +/- 0.01
Low-Tier (25th)      -                              < 7.11
20th                 3                              6.22 +/- 0.02
14th                 3                              5.76 +/- 0.01
11th                 3                              5.48 +/- 0.57
[Chart: Distribution Of Public Results - Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf; 81 results ranging from 1 to 23 Tokens Per Second]
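The "average +/- deviation" figures in the table above can be reproduced from raw per-result samples. A minimal sketch, assuming the +/- value is the sample standard deviation (the exact statistic OpenBenchmarking.org reports may differ) and using hypothetical sample values:

```python
import statistics

def summarize(samples):
    """Format a list of results as 'mean +/- sample standard deviation'."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return f"{mean:.2f} +/- {sd:.2f}"

# Hypothetical tokens-per-second samples from four compatible results
print(summarize([9.48, 9.50, 9.52, 9.50]))  # -> 9.50 +/- 0.02
```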

Based on OpenBenchmarking.org data, the selected test / test configuration (Llama.cpp b3067 - Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf) has an average run-time of 3 minutes. By default this test profile is set to run at least 3 times but may increase if the standard deviation exceeds pre-defined defaults or other calculations deem additional runs necessary for greater statistical accuracy of the result.
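The re-run behaviour described above (at least 3 runs, more when variance is high) can be sketched as follows. The 3.5% coefficient-of-variation threshold and the run cap are illustrative assumptions, not necessarily the Phoronix Test Suite's actual defaults:

```python
import statistics

def run_benchmark(measure, min_runs=3, max_runs=15, cv_threshold=0.035):
    """Run `measure()` at least min_runs times; keep re-running while the
    coefficient of variation (stddev / mean) exceeds cv_threshold."""
    results = [measure() for _ in range(min_runs)]
    while len(results) < max_runs:
        cv = statistics.stdev(results) / statistics.mean(results)
        if cv <= cv_threshold:
            break  # results are stable enough; stop re-running
        results.append(measure())
    return results

# A steady workload stops at the minimum number of runs
print(run_benchmark(lambda: 5.0))
```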

[Chart: Time Required To Complete Benchmark - Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf; run-time in minutes, Min: 1 / Avg: 2.86 / Max: 4]

Notable Instruction Set Usage

Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.

Instruction Set: Advanced Vector Extensions (AVX)
Support: Used by default on supported hardware. Found on Intel processors since Sandy Bridge (2011). Found on AMD processors since Bulldozer (2011).
Instructions Detected: VZEROUPPER VEXTRACTF128 VPERMILPS VBROADCASTSS VPERM2F128 VINSERTF128 VBROADCASTSD VMASKMOVPS

Instruction Set: Advanced Vector Extensions 2 (AVX2)
Support: Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Excavator (2016).
Instructions Detected: VPBROADCASTQ VEXTRACTI128 VPBROADCASTD VPERMQ VGATHERQPS VPBROADCASTW VINSERTI128 VPBROADCASTB VPSLLVD VPERMD VPSRLVD VPERM2I128

Instruction Set: Advanced Vector Extensions 512 (AVX-512)
Support: Used by default on supported hardware.
Instructions Detected: (ZMM register use)

Instruction Set: AVX Vector Neural Network Instructions (AVX-VNNI)
Support: Used by default on supported hardware.
Instructions Detected: VPDPBUSD

Instruction Set: Fused Multiply-Add (FMA)
Support: Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Bulldozer (2011).
Instructions Detected: VFMADD231PS VFMADD132PS VFNMADD231PS VFMADD213PS VFNMADD231SS VFMADD132SS VFNMADD213SS VFMADD132SD VFMADD231SS VFMADD213SS
The test / benchmark does honor compiler flag changes.
Last automated analysis: 6 June 2024
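On Linux, a quick way to check which of these extensions a CPU exposes is to parse the flags line of /proc/cpuinfo. The sketch below runs against a sample string (a hypothetical CPU); the flag names avx, avx2, avx512f, avx_vnni, and fma are the standard Linux kernel flag names:

```python
import re

def cpu_isa_flags(cpuinfo_text):
    """Return the set of ISA flag names from a /proc/cpuinfo dump."""
    m = re.search(r"^flags\s*:\s*(.+)$", cpuinfo_text, re.MULTILINE)
    return set(m.group(1).split()) if m else set()

# Sample /proc/cpuinfo excerpt; on a real system use
# open("/proc/cpuinfo").read() instead.
sample = "processor\t: 0\nflags\t\t: fpu sse2 avx avx2 fma avx512f avx_vnni\n"
flags = cpu_isa_flags(sample)
print(sorted(f for f in ("avx", "avx2", "avx512f", "avx_vnni", "fma") if f in flags))
```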

This test profile binary relies on the shared libraries libopenblas.so.0, libm.so.6, libc.so.6, libgfortran.so.5, libquadmath.so.0.

Tested CPU Architectures

This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.

CPU Architecture          Kernel Identifier    Verified On
Intel / AMD x86 64-bit    x86_64               (Many Processors)
ARMv8 64-bit              aarch64              ARMv8 Neoverse-V1