Llamafile

Mozilla's Llamafile allows distributing and running large language models (LLMs) as a single file. Llamafile aims to make open-source LLMs more accessible to developers and users. This test profile supports a variety of models, CPU and GPU backends, and other configuration options.

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark llamafile.
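Assuming the Phoronix Test Suite is already installed, a typical session might look like the following. The `install`, `benchmark`, and `batch-benchmark` subcommands are standard Phoronix Test Suite commands; the exact option prompts depend on your configuration:

```shell
# Install the test profile and its dependencies (downloads the model files)
phoronix-test-suite install llamafile

# Run interactively; PTS will prompt for the model and test options
phoronix-test-suite benchmark llamafile

# Or run non-interactively using previously configured batch settings
# (set up once with: phoronix-test-suite batch-setup)
phoronix-test-suite batch-benchmark llamafile
```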

Project Site

llamafile.ai

Source Repository

github.com

Test Created

19 January 2024

Last Updated

4 December 2024

Test Maintainer

Michael Larabel 

Test Type

System

Average Install Time

3 Seconds

Average Run Time

7 Minutes, 55 Seconds

Accolades

10k+ Downloads + Recently Updated Test Profile

Supported Platforms


[Chart: Llamafile Popularity Statistics (pts/llamafile), OpenBenchmarking.org — public result uploads*, reported installs**, reported test completions**, and test profile page views, monthly from 2024.01 through 2024.12]
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
Data updated weekly as of 25 December 2024.
Model Option Popularity (OpenBenchmarking.org):
Llama-3.2-3B-Instruct.Q6_K — 26.1%
TinyLlama-1.1B-Chat-v1.0.BF16 — 26.1%
mistral-7b-instruct-v0.2.Q5_K_M — 24.9%
wizardcoder-python-34b-v1.0.Q6_K — 23.0%

Test Option Popularity (OpenBenchmarking.org):
Text Generation 16 — 19.6%
Text Generation 128 — 19.3%
Prompt Processing 256 — 15.3%
Prompt Processing 512 — 15.3%
Prompt Processing 1024 — 15.3%
Prompt Processing 2048 — 15.3%

Revision History

pts/llamafile-1.3.0   [View Source]   Wed, 04 Dec 2024 12:43:19 GMT
Update against llamafile 0.8.16.

pts/llamafile-1.2.0   [View Source]   Sun, 02 Jun 2024 10:40:47 GMT
Update against Llamafile 0.8.6 upstream.

pts/llamafile-1.1.0   [View Source]   Wed, 03 Apr 2024 14:32:58 GMT
Update against Llamafile 0.7.

pts/llamafile-1.0.0   [View Source]   Fri, 19 Jan 2024 19:08:42 GMT
Initial commit.

Suites Using This Test

Machine Learning

HPC - High Performance Computing

Large Language Models


Performance Metrics

Analyze Test Configuration:

Llamafile 0.8.16

Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16

OpenBenchmarking.org metrics for this test profile configuration based on 61 public results since 5 December 2024 with the latest data as of 20 December 2024.

Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely; this overview is intended only as general guidance on performance expectations.

Component | Percentile Rank | # Compatible Public Results | Tokens Per Second (Average)
          | 98th            | 3                           | 64.6 +/- 0.4
Mid-Tier  | 75th            |                             | < 13.5
          | 72nd            | 3                           | 12.7 +/- 1.1
          | 59th            | 3                           | 10.9 +/- 0.4
          | 52nd            | 5                           | 10.5 +/- 0.1
          | 52nd            | 7                           | 10.5 +/- 0.1
Median    | 50th            |                             | 10.5
          | 34th            | 7                           | 10.4 +/- 0.1
          | 28th            | 7                           | 10.3 +/- 0.2
Low-Tier  | 25th            |                             | < 10.3
          | 8th             | 3                           | 10.1 +/- 0.1
          | 8th             | 3                           | 10.0 +/- 0.2
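As a rough illustration of how a percentile rank like those in the table above can be derived, the sketch below ranks a result against a set of public results. This is only an illustration with made-up numbers, not OpenBenchmarking.org's exact methodology:

```python
def percentile_rank(result, public_results):
    """Percentile rank: the share of public results at or below this result.

    A simple illustrative formula; OpenBenchmarking.org's actual ranking
    may differ in how it handles ties and binning.
    """
    if not public_results:
        raise ValueError("no reference results to rank against")
    at_or_below = sum(1 for r in public_results if r <= result)
    return round(100 * at_or_below / len(public_results))


# Hypothetical tokens-per-second results from public uploads
public = [10.0, 10.1, 10.3, 10.4, 10.5, 10.9, 12.7, 64.6]
print(percentile_rank(10.5, public))  # 5 of 8 results are at or below 10.5
```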
Detailed Performance Overview
[Chart: Distribution of Public Results, OpenBenchmarking.org — Model: mistral-7b-instruct-v0.2.Q5_K_M, Test: Text Generation 16 — 36 results ranging from 10 to 65 tokens per second]

Based on OpenBenchmarking.org data, the selected test / test configuration (Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16) has an average run-time of 2 minutes. By default this test profile is set to run at least 3 times but may increase if the standard deviation exceeds pre-defined defaults or other calculations deem additional runs necessary for greater statistical accuracy of the result.

[Chart: Time Required To Complete Benchmark (Minutes), OpenBenchmarking.org — Model: mistral-7b-instruct-v0.2.Q5_K_M, Test: Text Generation 16 — Min: 1 / Avg: 1.67 / Max: 4]

Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.2%.

[Chart: Average Deviation Between Runs (Percent, fewer is better), OpenBenchmarking.org — Model: mistral-7b-instruct-v0.2.Q5_K_M, Test: Text Generation 16 — Min: 0 / Avg: 0.21 / Max: 3]

Tested CPU Architectures

This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.

CPU Architecture
Kernel Identifier
Verified On
Intel / AMD x86 64-bit
x86_64
(Many Processors)

Recent Test Results

OpenBenchmarking.org Results Compare

3 Systems - 127 Benchmark Results

AMD Ryzen 9 9950X 16-Core - ASRock X870E Taichi - AMD Device 14d8

Ubuntu 24.04 - 6.8.0-50-generic - GNOME Shell 46.0

3 Systems - 19 Benchmark Results

AMD EPYC 9655P 96-Core - Supermicro Super Server H13SSL-N v1.01 - AMD 1Ah

Ubuntu 24.10 - 6.13.0-rc1-phx - GNOME Shell 47.0

3 Systems - 20 Benchmark Results

AMD Ryzen 7 7840HS - Framework Laptop 16 - AMD Device 14e8

Ubuntu 24.04 - 6.8.0-49-generic - GNOME Shell 46.0

3 Systems - 79 Benchmark Results

AMD Ryzen AI 9 365 - ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 - AMD Device 1507

Ubuntu 24.10 - 6.12.0-rc7-phx-eraps - GNOME Shell 47.0

4 Systems - 25 Benchmark Results

AMD Ryzen Threadripper 7980X 64-Cores - System76 Thelio Major - AMD Device 14a4

Ubuntu 24.04 - 6.8.0-49-generic - GNOME Shell 46.0

4 Systems - 24 Benchmark Results

AMD Ryzen 7 9800X3D 8-Core - ASRock X870E Taichi - AMD Device 14d8

Ubuntu 24.04 - 6.8.0-49-generic - GNOME Shell 46.0

Find More Test Results