newaa

AMD Ryzen Threadripper 7980X 64-Cores testing with a System76 Thelio Major (FA Z5 BIOS) and AMD Radeon RX 6700 XT 12GB on Ubuntu 24.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2412060-PTS-NEWAA29291

Run Management

Identifier   Date Run      Test Duration
a            December 06   1 Hour, 49 Minutes
b            December 06   20 Minutes
c            December 06   21 Minutes
d            December 06   21 Minutes
Average                    43 Minutes


newaa - OpenBenchmarking.org - Phoronix Test Suite

Processor: AMD Ryzen Threadripper 7980X 64-Cores @ 7.79GHz (64 Cores / 128 Threads)
Motherboard: System76 Thelio Major (FA Z5 BIOS)
Chipset: AMD Device 14a4
Memory: 4 x 32GB DDR5-4800MT/s Micron MTC20F1045S1RC48BA2
Disk: 1000GB CT1000T700SSD5
Graphics: AMD Radeon RX 6700 XT 12GB
Audio: AMD Device 14cc
Monitor: DELL P2415Q
Network: Aquantia AQC113C NBase-T/IEEE + Realtek RTL8125 2.5GbE + Intel Wi-Fi 6E
OS: Ubuntu 24.04
Kernel: 6.8.0-49-generic (x86_64)
Desktop: GNOME Shell 46.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 24.0.9-0ubuntu0.2 (LLVM 17.0.6 DRM 3.57)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1200

System Logs:
- Transparent Huge Pages: madvise
- Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Processor Notes: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance); CPU Microcode: 0xa108105
- Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (chart): relative performance of runs a, b, c, and d across all 25 results on a 100%-129% scale. The widest spreads appear in the Llamafile Text Generation 16 tests (wizardcoder-python-34b-v1.0.Q6_K, TinyLlama-1.1B-Chat-v1.0.BF16, Llama-3.2-3B-Instruct.Q6_K, mistral-7b-instruct-v0.2.Q5_K_M) and RELION Basic - CPU.
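
The overview percentages can be approximated from the summary table below. A minimal sketch, assuming OpenBenchmarking.org normalizes each test so the slowest run scores 100% and inverts fewer-is-better results first; the exact method is not documented in this file, and the test subset here is illustrative:

    from math import prod

    # Values copied from the summary table in this result file.
    # Llamafile results are tokens/s (more is better); RELION is seconds (fewer is better).
    results = {
        "Llama-3.2-3B TG16":    {"a": 41.72,   "b": 37.23,   "c": 41.92,  "d": 42.21},
        "TinyLlama TG16":       {"a": 57.19,   "b": 49.89,   "c": 56.40,  "d": 56.72},
        "wizardcoder-34B TG16": {"a": 4.66,    "b": 3.70,    "c": 4.77,   "d": 4.78},
        "RELION Basic - CPU":   {"a": 411.759, "b": 366.918, "c": 401.05, "d": 418.274},
    }
    fewer_is_better = {"RELION Basic - CPU"}

    def relative(test, values):
        # Invert fewer-is-better scores so that larger is always better,
        # then scale so the slowest run is 100%.
        v = {run: (1 / x if test in fewer_is_better else x) for run, x in values.items()}
        slowest = min(v.values())
        return {run: 100 * x / slowest for run, x in v.items()}

    for test, values in results.items():
        print(test, {run: round(pct) for run, pct in relative(test, values).items()})
    # wizardcoder-34B TG16 spans 100%..129%, matching the overview chart's axis.

    # The "Show Overall Geometric Mean" view would then combine the normalized scores:
    runs = "abcd"
    geo = {r: prod(relative(t, v)[r] for t, v in results.items()) ** (1 / len(results))
           for r in runs}
    print({r: round(g, 1) for r, g in geo.items()})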

Summary Table (Llamafile results in Tokens Per Second, more is better; RELION in Seconds, fewer is better)

Test                                                                  a         b         c         d
llamafile: Llama-3.2-3B-Instruct.Q6_K - Text Generation 16            41.72     37.23     41.92     42.21
llamafile: Llama-3.2-3B-Instruct.Q6_K - Text Generation 128           42.08     42.01     41.61     41.62
llamafile: Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 256         4096      4096      4096      4096
llamafile: Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 512         8192      8192      8192      8192
llamafile: TinyLlama-1.1B-Chat-v1.0.BF16 - Text Generation 16         57.19     49.89     56.40     56.72
llamafile: Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 1024        16384     16384     16384     16384
llamafile: Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 2048        32768     32768     32768     32768
llamafile: TinyLlama-1.1B-Chat-v1.0.BF16 - Text Generation 128        57.43     56.58     56.47     56.97
llamafile: mistral-7b-instruct-v0.2.Q5_K_M - Text Generation 16       23.17     21.34     23.47     23.38
llamafile: TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 256      4096      4096      4096      4096
llamafile: TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 512      8192      8192      8192      8192
llamafile: mistral-7b-instruct-v0.2.Q5_K_M - Text Generation 128      23.42     23.47     23.64     23.68
llamafile: wizardcoder-python-34b-v1.0.Q6_K - Text Generation 16      4.66      3.70      4.77      4.78
llamafile: TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 1024     16384     16384     16384     16384
llamafile: TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 2048     32768     32768     32768     32768
llamafile: wizardcoder-python-34b-v1.0.Q6_K - Text Generation 128     4.45      4.77      4.80      4.78
llamafile: mistral-7b-instruct-v0.2.Q5_K_M - Prompt Processing 256    4096      4096      4096      4096
llamafile: mistral-7b-instruct-v0.2.Q5_K_M - Prompt Processing 512    8192      8192      8192      8192
llamafile: mistral-7b-instruct-v0.2.Q5_K_M - Prompt Processing 1024   16384     16384     16384     16384
llamafile: mistral-7b-instruct-v0.2.Q5_K_M - Prompt Processing 2048   32768     32768     32768     32768
llamafile: wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 256   1536      1536      1536      1536
llamafile: wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 512   3072      3072      3072      3072
llamafile: wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 1024  6144      6144      6144      6144
llamafile: wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 2048  12288     12288     12288     12288
relion: Basic - CPU                                                   411.759   366.918   401.05    418.274

Llamafile

Llamafile 0.8.16 - OpenBenchmarking.org - Tokens Per Second, More Is Better

Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 16 (SE +/- 0.17, N = 3): a: 41.72, b: 37.23, c: 41.92, d: 42.21
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 128 (SE +/- 0.15, N = 3): a: 42.08, b: 42.01, c: 41.61, d: 41.62
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 256 (SE +/- 0.00, N = 3): 4096 across all four runs
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 512 (SE +/- 0.00, N = 3): 8192 across all four runs
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 16 (SE +/- 0.25, N = 3): a: 57.19, b: 49.89, c: 56.40, d: 56.72
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 1024 (SE +/- 0.00, N = 3): 16384 across all four runs
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 2048 (SE +/- 0.00, N = 3): 32768 across all four runs
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 128 (SE +/- 0.17, N = 3): a: 57.43, b: 56.58, c: 56.47, d: 56.97
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16 (SE +/- 0.19, N = 12): a: 23.17, b: 21.34, c: 23.47, d: 23.38
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 256 (SE +/- 0.00, N = 3): 4096 across all four runs
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 512 (SE +/- 0.00, N = 3): 8192 across all four runs
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 128 (SE +/- 0.04, N = 3): a: 23.42, b: 23.47, c: 23.64, d: 23.68
Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 16 (SE +/- 0.12, N = 12): a: 4.66, b: 3.70, c: 4.77, d: 4.78
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 1024 (SE +/- 0.00, N = 3): 16384 across all four runs
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 2048 (SE +/- 0.00, N = 3): 32768 across all four runs
Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 128 (SE +/- 0.04, N = 12): a: 4.45, b: 4.77, c: 4.80, d: 4.78
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 256 (SE +/- 0.00, N = 3): 4096 across all four runs
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 512 (SE +/- 0.00, N = 3): 8192 across all four runs
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 1024 (SE +/- 0.00, N = 3): 16384 across all four runs
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 2048 (SE +/- 0.00, N = 3): 32768 across all four runs
Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 256 (SE +/- 0.00, N = 3): 1536 across all four runs
Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 512 (SE +/- 0.00, N = 3): 3072 across all four runs
Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 1024 (SE +/- 0.00, N = 3): 6144 across all four runs
Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 2048 (SE +/- 0.00, N = 3): 12288 across all four runs
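
Each result above reports a standard error over N trials. The per-trial samples are not published in this result file, so the sample values below are hypothetical; a minimal sketch, assuming the reported figure is the standard error of the mean, s/sqrt(N):

    import statistics
    from math import sqrt

    # Hypothetical per-trial tokens/s samples for one test on one run; the
    # result file only publishes the mean, SE, and N, not the raw trials.
    # These values are chosen to reproduce run a of the first graph above.
    samples = [41.43, 41.72, 42.01]

    n = len(samples)
    mean = statistics.fmean(samples)
    se = statistics.stdev(samples) / sqrt(n)  # standard error of the mean
    print(f"{mean:.2f} tokens/s, SE +/- {se:.2f}, N = {n}")  # 41.72, SE +/- 0.17, N = 3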

RELION

RELION 5.0 - OpenBenchmarking.org - Seconds, Fewer Is Better

Test: Basic - Device: CPU (SE +/- 29.20, N = 6): a: 411.76, b: 366.92, c: 401.05, d: 418.27
1. (CXX) g++ options: -fPIC -std=c++14 -fopenmp -O3 -rdynamic -ldl -ltiff -lfftw3f -lfftw3 -lpng -ljpeg -lmpi_cxx -lmpi
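
The RELION result's standard error (+/- 29.20 s against means near 400 s) is roughly 7% of the measurement, the kind of run-to-run spread that noisy-result filtering is meant to catch. A minimal sketch of such a check, assuming noise is judged by the ratio of SE to mean; the 5% threshold is an assumption, not a documented OpenBenchmarking.org value:

    # Flag a result as noisy when its standard error is a large share of the mean.
    def is_noisy(mean, se, threshold=0.05):
        return se / mean > threshold

    # RELION Basic - CPU, run a: 411.76 s with SE +/- 29.20 over N = 6 trials.
    print(is_noisy(411.76, 29.20))  # True: SE is about 7% of the mean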