Linux 6.13 AMD EPYC Performance
Benchmarks for a future article. AMD EPYC 9575F 64-Core testing with a Supermicro Super Server H13SSL-N v1.01 (3.0 BIOS) and ASPEED on Ubuntu 24.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2412073-NE-LINUX613A93&rdt&grr .
Linux 6.13 AMD EPYC Performance - System Details

Processor: AMD EPYC 9575F 64-Core @ 3.30GHz (64 Cores / 128 Threads)
Motherboard: Supermicro Super Server H13SSL-N v1.01 (3.0 BIOS)
Chipset: AMD 1Ah
Memory: 12 x 64GB DDR5-6000MT/s Micron MTC40F2046S1RC64BDY QSFF
Disk: 3201GB Micron_7450_MTFDKCB3T2TFS
Graphics: ASPEED
Network: 2 x Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 24.10
Kernel: 6.12.0-phx (x86_64) [v6.12], 6.11.0-phx (x86_64) [v6.11], 6.13.0-rc1-phx (x86_64) [v6.13 7 Dec]
Desktop: GNOME Shell 47.0
Display Server: X Server
Compiler: GCC 14.2.0
File-System: ext4
Screen Resolution: 1024x768 [v6.12, v6.13 7 Dec], 1920x1200 [v6.11]

Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2,rust --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xb002116
Java Details - OpenJDK Runtime Environment (build 21.0.5+11-Ubuntu-1ubuntu124.10)
Python Details - Python 3.12.7
Security Details - gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
[Overview table: side-by-side results for v6.12, v6.11, and v6.13 (7 Dec) across all tests in this comparison; the per-test results are broken out in the detailed listings below.]
Apache CouchDB 3.4.1 - Bulk Size: 300 - Inserts: 3000 - Rounds: 30 (Seconds, Fewer Is Better): v6.12: 371.29 (SE +/- 12.48, N = 9); v6.11: 363.53 (SE +/- 6.26, N = 9); v6.13 7 Dec: 360.83 (SE +/- 3.45, N = 9). 1. (CXX) g++ options: -flto -lstdc++ -shared -lei
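Each of these per-test listings reports a mean together with "SE +/- x, N = y", i.e. the standard error of the mean across the N recorded runs. A minimal sketch of how those two figures are derived, using hypothetical per-run times (the export publishes only the mean and SE, not the raw samples):

```python
import statistics

# Hypothetical per-run CouchDB times in seconds; the export only publishes
# the mean and SE, not the raw samples, so these numbers are illustrative.
runs = [372.1, 369.8, 371.5, 370.9, 373.0, 370.2, 371.8, 372.4, 369.9]

n = len(runs)
mean = statistics.mean(runs)
# Standard error of the mean: sample standard deviation / sqrt(N)
se = statistics.stdev(runs) / n ** 0.5

print(f"{mean:.2f} (SE +/- {se:.2f}, N = {n})")
```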
Apache CouchDB 3.4.1 - Bulk Size: 500 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better): v6.12: 333.18 (SE +/- 36.20, N = 9); v6.11: 354.53 (SE +/- 62.71, N = 8); v6.13 7 Dec: 311.04 (SE +/- 29.09, N = 7). 1. (CXX) g++ options: -flto -lstdc++ -shared -lei
RELION 5.0 - Test: Basic - Device: CPU (Seconds, Fewer Is Better): v6.12: 179.87 (SE +/- 0.79, N = 3); v6.11: 176.67 (SE +/- 1.95, N = 3); v6.13 7 Dec: 172.48 (SE +/- 3.31, N = 12). 1. (CXX) g++ options: -fPIC -std=c++14 -fopenmp -O3 -rdynamic -ldl -ltiff -lfftw3f -lfftw3 -lpng -ljpeg -lmpi_cxx -lmpi
Palabos 2.3 - Grid Size: 1000 (Mega Site Updates Per Second, More Is Better): v6.12: 655.18 (SE +/- 36.69, N = 12); v6.11: 481.49 (SE +/- 3.07, N = 15); v6.13 7 Dec: 509.13 (SE +/- 21.98, N = 15). 1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm
MariaDB 11.5 - Test: oltp_update_index - Threads: 128 (Queries Per Second, More Is Better): v6.12: 183537 (SE +/- 76.99, N = 3); v6.11: 192098 (SE +/- 123.94, N = 3); v6.13 7 Dec: 158304 (SE +/- 112.30, N = 3). 1. (CXX) g++ options: -pie -fPIC -fstack-protector -O3 -lnuma -lpcre2-8 -lcrypt -laio -lz -lm -lssl -lcrypto -lpthread -ldl
MariaDB 11.5 - Test: oltp_read_only - Threads: 128 (Queries Per Second, More Is Better): v6.12: 43581 (SE +/- 15.19, N = 3); v6.11: 43688 (SE +/- 14.78, N = 3); v6.13 7 Dec: 42936 (SE +/- 18.45, N = 3). 1. (CXX) g++ options: -pie -fPIC -fstack-protector -O3 -lnuma -lpcre2-8 -lcrypt -laio -lz -lm -lssl -lcrypto -lpthread -ldl
MariaDB 11.5 - Test: oltp_read_write - Threads: 128 (Queries Per Second, More Is Better): v6.12: 181192 (SE +/- 60.34, N = 3); v6.11: 183693 (SE +/- 155.52, N = 3); v6.13 7 Dec: 179909 (SE +/- 77.99, N = 3). 1. (CXX) g++ options: -pie -fPIC -fstack-protector -O3 -lnuma -lpcre2-8 -lcrypt -laio -lz -lm -lssl -lcrypto -lpthread -ldl
MariaDB 11.5 - Test: oltp_write_only - Threads: 128 (Queries Per Second, More Is Better): v6.12: 399417 (SE +/- 474.96, N = 3); v6.11: 412967 (SE +/- 286.92, N = 3); v6.13 7 Dec: 386983 (SE +/- 154.44, N = 3). 1. (CXX) g++ options: -pie -fPIC -fstack-protector -O3 -lnuma -lpcre2-8 -lcrypt -laio -lz -lm -lssl -lcrypto -lpthread -ldl
Timed Linux Kernel Compilation 6.8 - Build: allmodconfig (Seconds, Fewer Is Better): v6.12: 193.20 (SE +/- 0.34, N = 3); v6.11: 194.34 (SE +/- 0.21, N = 3); v6.13 7 Dec: 192.92 (SE +/- 0.25, N = 3).
OpenSSL - Algorithm: ChaCha20 (byte/s, More Is Better): v6.12: 721616676910 (SE +/- 1474303291.59, N = 3); v6.11: 718106876590 (SE +/- 652717270.91, N = 3); v6.13 7 Dec: 719980763180 (SE +/- 626958911.68, N = 3). 1. OpenSSL 3.3.1 4 Jun 2024 (Library: OpenSSL 3.3.1 4 Jun 2024) - Additional Parameters: -engine qatengine -async_jobs 8
OpenSSL - Algorithm: ChaCha20-Poly1305 (byte/s, More Is Better): v6.12: 490511085203 (SE +/- 118571563.57, N = 3); v6.11: 487834985450 (SE +/- 271433364.08, N = 3); v6.13 7 Dec: 489486102253 (SE +/- 158903312.00, N = 3). 1. OpenSSL 3.3.1 4 Jun 2024 (Library: OpenSSL 3.3.1 4 Jun 2024) - Additional Parameters: -engine qatengine -async_jobs 8
OpenSSL - Algorithm: AES-256-GCM (byte/s, More Is Better): v6.12: 334486696117 (SE +/- 47580595.33, N = 3); v6.11: 333162995897 (SE +/- 105448439.59, N = 3); v6.13 7 Dec: 334428157450 (SE +/- 134252719.45, N = 3). 1. OpenSSL 3.3.1 4 Jun 2024 (Library: OpenSSL 3.3.1 4 Jun 2024) - Additional Parameters: -engine qatengine -async_jobs 8
OpenSSL - Algorithm: AES-128-GCM (byte/s, More Is Better): v6.12: 361564053870 (SE +/- 158067306.31, N = 3); v6.11: 359931303710 (SE +/- 130266226.11, N = 3); v6.13 7 Dec: 361324888880 (SE +/- 88875000.56, N = 3). 1. OpenSSL 3.3.1 4 Jun 2024 (Library: OpenSSL 3.3.1 4 Jun 2024) - Additional Parameters: -engine qatengine -async_jobs 8
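The OpenSSL figures above are raw byte/s values, which are easier to compare when converted to gigabytes per second. A quick conversion sketch over the v6.12 values from the listings above (using decimal GB, 1 GB = 10^9 bytes):

```python
# OpenSSL throughput values for v6.12 from the listings above, in byte/s.
results_bytes_per_sec = {
    "ChaCha20": 721_616_676_910,
    "ChaCha20-Poly1305": 490_511_085_203,
    "AES-256-GCM": 334_486_696_117,
    "AES-128-GCM": 361_564_053_870,
}

for algo, bps in results_bytes_per_sec.items():
    # Decimal gigabytes per second (1 GB = 1e9 bytes).
    print(f"{algo}: {bps / 1e9:.1f} GB/s")
```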
Timed LLVM Compilation 16.0 - Build System: Unix Makefiles (Seconds, Fewer Is Better): v6.12: 160.30 (SE +/- 0.33, N = 3); v6.11: 159.82 (SE +/- 0.34, N = 3); v6.13 7 Dec: 160.56 (SE +/- 0.86, N = 3).
Blender 4.3 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better): v6.12: 148.82 (SE +/- 0.18, N = 3); v6.11: 149.30 (SE +/- 0.08, N = 3); v6.13 7 Dec: 148.16 (SE +/- 0.23, N = 3).
Apache CouchDB 3.4.1 - Bulk Size: 300 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better): v6.12: 104.16 (SE +/- 0.71, N = 3); v6.11: 103.23 (SE +/- 1.11, N = 3); v6.13 7 Dec: 100.14 (SE +/- 0.87, N = 7). 1. (CXX) g++ options: -flto -lstdc++ -shared -lei
SVT-AV1 2.3 - Encoder Mode: Preset 3 - Input: Bosphorus 4K (Frames Per Second, More Is Better): v6.12: 16.99 (SE +/- 0.04, N = 3); v6.11: 16.88 (SE +/- 0.08, N = 3); v6.13 7 Dec: 17.01 (SE +/- 0.02, N = 3). 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
PostgreSQL 17 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency (ms, Fewer Is Better): v6.12: 8.927 (SE +/- 0.048, N = 3); v6.11: 8.498 (SE +/- 0.020, N = 3); v6.13 7 Dec: 9.082 (SE +/- 0.042, N = 3). 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm
PostgreSQL 17 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write (TPS, More Is Better): v6.12: 112025 (SE +/- 604.26, N = 3); v6.11: 117677 (SE +/- 274.82, N = 3); v6.13 7 Dec: 110112 (SE +/- 515.38, N = 3). 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm
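The pgbench average-latency and TPS listings describe the same runs from two angles: with a fixed number of clients in flight, average latency is approximately clients / TPS. A small check against the 1000-client read-write numbers above:

```python
# pgbench 1000-client read-write results from the listings above.
clients = 1000
tps = {"v6.12": 112025, "v6.11": 117677, "v6.13 7 Dec": 110112}

for kernel, transactions_per_sec in tps.items():
    # With `clients` transactions in flight, each one takes on average
    # clients / TPS seconds; convert to milliseconds.
    latency_ms = clients / transactions_per_sec * 1000
    print(f"{kernel}: ~{latency_ms:.3f} ms average latency")
```

This reproduces the reported 8.927 / 8.498 / 9.082 ms averages.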
PostgreSQL 17 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency (ms, Fewer Is Better): v6.12: 6.550 (SE +/- 0.082, N = 3); v6.11: 6.305 (SE +/- 0.030, N = 3); v6.13 7 Dec: 6.628 (SE +/- 0.030, N = 3). 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm
PostgreSQL 17 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write (TPS, More Is Better): v6.12: 122181 (SE +/- 1549.82, N = 3); v6.11: 126877 (SE +/- 594.42, N = 3); v6.13 7 Dec: 120703 (SE +/- 536.64, N = 3). 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm
PostgreSQL 17 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency (ms, Fewer Is Better): v6.12: 0.166 (SE +/- 0.000, N = 3); v6.11: 0.167 (SE +/- 0.001, N = 3); v6.13 7 Dec: 0.165 (SE +/- 0.000, N = 3). 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm
PostgreSQL 17 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only (TPS, More Is Better): v6.12: 4815625 (SE +/- 9902.78, N = 3); v6.11: 4783648 (SE +/- 11484.93, N = 3); v6.13 7 Dec: 4829255 (SE +/- 8981.60, N = 3). 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm
PostgreSQL 17 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency (ms, Fewer Is Better): v6.12: 0.211 (SE +/- 0.001, N = 3); v6.11: 0.217 (SE +/- 0.001, N = 3); v6.13 7 Dec: 0.211 (SE +/- 0.001, N = 3). 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm
PostgreSQL 17 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only (TPS, More Is Better): v6.12: 4732753 (SE +/- 18553.35, N = 3); v6.11: 4621379 (SE +/- 23812.59, N = 3); v6.13 7 Dec: 4735578 (SE +/- 9098.55, N = 3). 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm
Palabos 2.3 - Grid Size: 500 (Mega Site Updates Per Second, More Is Better): v6.12: 775.11 (SE +/- 1.19, N = 3); v6.11: 529.23 (SE +/- 40.67, N = 12); v6.13 7 Dec: 764.02 (SE +/- 5.28, N = 3). 1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm
Apache Cassandra 5.0 - Test: Writes (Op/s, More Is Better): v6.12: 491186 (SE +/- 1282.82, N = 3); v6.11: 483952 (SE +/- 5140.53, N = 3); v6.13 7 Dec: 487743 (SE +/- 4648.88, N = 3).
ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better): v6.12: 799.99 (SE +/- 7.60, N = 3; MIN: 64.94 / MAX: 10000); v6.11: 801.19 (SE +/- 4.79, N = 3; MIN: 65.5 / MAX: 8571.43); v6.13 7 Dec: 807.27 (SE +/- 2.96, N = 3; MIN: 66.01 / MAX: 8571.43).
ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better): v6.12: 801.90 (SE +/- 3.55, N = 3; MIN: 65.79 / MAX: 8571.43); v6.11: 804.06 (SE +/- 4.73, N = 3; MIN: 65.15 / MAX: 10000); v6.13 7 Dec: 803.49 (SE +/- 8.62, N = 3; MIN: 66.08 / MAX: 10000).
ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better): v6.12: 765.59 (SE +/- 5.98, N = 3; MIN: 63.29 / MAX: 7500); v6.11: 770.47 (SE +/- 2.38, N = 3; MIN: 65.15 / MAX: 8571.43); v6.13 7 Dec: 774.01 (SE +/- 2.84, N = 3; MIN: 65.86 / MAX: 7500).
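The ClickHouse score is a geometric mean of individual query rates (hence "Queries Per Minute, Geo Mean"), and the MIN/MAX values appear to mark the slowest and fastest individual queries. A minimal sketch of that aggregation over hypothetical per-query rates (the export does not include the per-query breakdown):

```python
import math

# Hypothetical per-query rates in queries per minute; illustrative only.
per_query_qpm = [64.9, 210.0, 750.0, 1800.0, 10000.0]

# Geometric mean: n-th root of the product, computed via logs for stability.
geo_mean = math.exp(sum(math.log(q) for q in per_query_qpm) / len(per_query_qpm))

print(f"Geo mean: {geo_mean:.2f} QPM (MIN: {min(per_query_qpm)}, MAX: {max(per_query_qpm)})")
```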
Timed LLVM Compilation 16.0 - Build System: Ninja (Seconds, Fewer Is Better): v6.12: 101.60 (SE +/- 0.06, N = 3); v6.11: 101.82 (SE +/- 0.11, N = 3); v6.13 7 Dec: 101.73 (SE +/- 0.19, N = 3).
DaCapo Benchmark 23.11 - Java Test: Tradebeans (msec, Fewer Is Better): v6.12: 4968 (SE +/- 76.59, N = 15); v6.11: 4999 (SE +/- 48.79, N = 15); v6.13 7 Dec: 4957 (SE +/- 100.21, N = 12).
Llamafile 0.8.16 - Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): v6.12: 12288 (SE +/- 0.00, N = 3); v6.11: 12288 (SE +/- 0.00, N = 3); v6.13 7 Dec: 12288 (SE +/- 0.00, N = 3).
Stress-NG 0.17.08 - Test: Semaphores (Bogo Ops/s, More Is Better): v6.12: 251549880.01 (SE +/- 3074547.79, N = 3); v6.11: 233159294.14 (SE +/- 2150155.18, N = 15); v6.13 7 Dec: 246944948.46 (SE +/- 1084579.30, N = 3). 1. (CXX) g++ options: -O2 -std=gnu99 -lc -lm
Memcached 1.6.19 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better): v6.12: 13294800.59 (SE +/- 15384.68, N = 3); v6.11: 12696519.85 (SE +/- 13756.93, N = 3); v6.13 7 Dec: 13514659.67 (SE +/- 24482.38, N = 3). 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Memcached 1.6.19 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better): v6.12: 7297642.21 (SE +/- 30399.87, N = 3); v6.11: 7421710.24 (SE +/- 35717.40, N = 3); v6.13 7 Dec: 7400996.98 (SE +/- 37597.18, N = 3). 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Memcached 1.6.19 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): v6.12: 4064048.14 (SE +/- 16085.40, N = 3); v6.11: 4115352.49 (SE +/- 26467.49, N = 3); v6.13 7 Dec: 4099640.31 (SE +/- 28528.57, N = 3). 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Stress-NG 0.17.08 - Test: Context Switching (Bogo Ops/s, More Is Better): v6.12: 55755486.32 (SE +/- 64836.17, N = 3); v6.11: 36748330.54 (SE +/- 247154.69, N = 13); v6.13 7 Dec: 53038526.00 (SE +/- 159927.59, N = 3). 1. (CXX) g++ options: -O2 -std=gnu99 -lc -lm
Laghos 3.1 - Test: Sedov Blast Wave, ube_922_hex.mesh (Major Kernels Total Rate, More Is Better): v6.12: 565.10 (SE +/- 2.47, N = 3); v6.11: 561.91 (SE +/- 3.27, N = 3); v6.13 7 Dec: 565.14 (SE +/- 2.95, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi
Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): v6.12: 32768 (SE +/- 0.00, N = 3); v6.11: 32768 (SE +/- 0.00, N = 3); v6.13 7 Dec: 32768 (SE +/- 0.00, N = 3).
Blender 4.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better): v6.12: 47.70 (SE +/- 0.08, N = 3); v6.11: 47.90 (SE +/- 0.05, N = 3); v6.13 7 Dec: 47.39 (SE +/- 0.02, N = 3).
Blender 4.3 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better): v6.12: 41.74 (SE +/- 0.05, N = 3); v6.11: 42.03 (SE +/- 0.01, N = 3); v6.13 7 Dec: 41.49 (SE +/- 0.04, N = 3).
SVT-AV1 2.3 - Encoder Mode: Preset 5 - Input: Bosphorus 4K (Frames Per Second, More Is Better): v6.12: 60.38 (SE +/- 0.12, N = 3); v6.11: 60.17 (SE +/- 0.07, N = 3); v6.13 7 Dec: 60.82 (SE +/- 0.08, N = 3). 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Llamafile 0.8.16 - Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 128 (Tokens Per Second, More Is Better): v6.12: 15.82 (SE +/- 0.04, N = 3); v6.11: 15.90 (SE +/- 0.02, N = 3); v6.13 7 Dec: 15.77 (SE +/- 0.02, N = 3).
Llamafile 0.8.16 - Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better): v6.12: 6144 (SE +/- 0.00, N = 3); v6.11: 6144 (SE +/- 0.00, N = 3); v6.13 7 Dec: 6144 (SE +/- 0.00, N = 3).
DaCapo Benchmark 23.11 - Java Test: Tradesoap (msec, Fewer Is Better): v6.12: 3035 (SE +/- 21.97, N = 15); v6.11: 3077 (SE +/- 24.92, N = 3); v6.13 7 Dec: 3049 (SE +/- 31.71, N = 5).
Laghos 3.1 - Test: Triple Point Problem (Major Kernels Total Rate, More Is Better): v6.12: 303.11 (SE +/- 1.53, N = 3); v6.11: 297.99 (SE +/- 3.08, N = 3); v6.13 7 Dec: 300.72 (SE +/- 3.23, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi
DaCapo Benchmark 23.11 - Java Test: Apache Lucene Search Engine (msec, Fewer Is Better): v6.12: 4326 (SE +/- 47.07, N = 3); v6.11: 4120 (SE +/- 43.31, N = 15); v6.13 7 Dec: 3631 (SE +/- 42.57, N = 3).
OpenVINO GenAI 2024.5 - Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time Per Output Token (ms, Fewer Is Better): v6.12: 17.80 (SE +/- 0.17, N = 6); v6.11: 17.63 (SE +/- 0.19, N = 3); v6.13 7 Dec: 17.77 (SE +/- 0.17, N = 3).
OpenVINO GenAI 2024.5 - Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time To First Token (ms, Fewer Is Better): v6.12: 30.00 (SE +/- 0.26, N = 6); v6.11: 30.39 (SE +/- 0.27, N = 3); v6.13 7 Dec: 30.18 (SE +/- 0.44, N = 3).
OpenVINO GenAI 2024.5 - Model: Falcon-7b-instruct-int4-ov - Device: CPU (tokens/s, More Is Better): v6.12: 56.21 (SE +/- 0.55, N = 6); v6.11: 56.73 (SE +/- 0.63, N = 3); v6.13 7 Dec: 56.28 (SE +/- 0.54, N = 3).
Stress-NG 0.17.08 - Test: NUMA (Bogo Ops/s, More Is Better): v6.12: 2298.65 (SE +/- 1.02, N = 3); v6.11: 2071.98 (SE +/- 6.78, N = 3); v6.13 7 Dec: 2385.59 (SE +/- 10.25, N = 3). 1. (CXX) g++ options: -O2 -std=gnu99 -lc -lm
Stress-NG 0.17.08 - Test: Mixed Scheduler (Bogo Ops/s, More Is Better): v6.12: 82674.98 (SE +/- 215.02, N = 3); v6.11: 79773.37 (SE +/- 161.54, N = 3); v6.13 7 Dec: 81582.58 (SE +/- 282.83, N = 3). 1. (CXX) g++ options: -O2 -std=gnu99 -lc -lm
Stress-NG 0.17.08 - Test: Futex (Bogo Ops/s, More Is Better): v6.12: 4313954.94 (SE +/- 15124.18, N = 3); v6.11: 4154651.85 (SE +/- 17548.84, N = 3); v6.13 7 Dec: 4398887.77 (SE +/- 31680.82, N = 3). 1. (CXX) g++ options: -O2 -std=gnu99 -lc -lm
Stress-NG 0.17.08 - Test: MEMFD (Bogo Ops/s, More Is Better): v6.12: 2010.99 (SE +/- 2.72, N = 3); v6.11: 2054.70 (SE +/- 4.79, N = 3); v6.13 7 Dec: 2125.13 (SE +/- 3.27, N = 3). 1. (CXX) g++ options: -O2 -std=gnu99 -lc -lm
Stress-NG 0.17.08 - Test: Socket Activity (Bogo Ops/s, More Is Better): v6.12: 47501.04 (SE +/- 68.22, N = 3); v6.11: 47424.87 (SE +/- 29.82, N = 3); v6.13 7 Dec: 47238.70 (SE +/- 42.06, N = 3). 1. (CXX) g++ options: -O2 -std=gnu99 -lc -lm
Stress-NG 0.17.08 - Test: Mutex (Bogo Ops/s, More Is Better): v6.12: 45398166.14 (SE +/- 442253.04, N = 3); v6.11: 45376718.94 (SE +/- 181454.22, N = 3); v6.13 7 Dec: 43186401.84 (SE +/- 188378.96, N = 3). 1. (CXX) g++ options: -O2 -std=gnu99 -lc -lm
Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 128 (Tokens Per Second, More Is Better): v6.12: 67.71 (SE +/- 0.47, N = 3); v6.11: 68.13 (SE +/- 0.37, N = 3); v6.13 7 Dec: 65.45 (SE +/- 0.36, N = 3).
Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 128 (Tokens Per Second, More Is Better): v6.12: 159.60 (SE +/- 1.26, N = 3); v6.11: 159.90 (SE +/- 1.11, N = 3); v6.13 7 Dec: 156.82 (SE +/- 1.45, N = 15).
Timed Linux Kernel Compilation 6.8 - Build: defconfig (Seconds, Fewer Is Better): v6.12: 21.85 (SE +/- 0.31, N = 3); v6.11: 21.79 (SE +/- 0.25, N = 4); v6.13 7 Dec: 21.86 (SE +/- 0.26, N = 4).
OpenVINO GenAI 2024.5 - Model: Gemma-7b-int4-ov - Device: CPU - Time Per Output Token (ms, Fewer Is Better): v6.12: 20.37 (SE +/- 0.04, N = 3); v6.11: 20.36 (SE +/- 0.06, N = 3); v6.13 7 Dec: 20.54 (SE +/- 0.11, N = 3).
OpenVINO GenAI 2024.5 - Model: Gemma-7b-int4-ov - Device: CPU - Time To First Token (ms, Fewer Is Better): v6.12: 30.27 (SE +/- 0.04, N = 3); v6.11: 30.39 (SE +/- 0.08, N = 3); v6.13 7 Dec: 30.48 (SE +/- 0.18, N = 3).
OpenVINO GenAI 2024.5 - Model: Gemma-7b-int4-ov - Device: CPU (tokens/s, More Is Better): v6.12: 49.09 (SE +/- 0.10, N = 3); v6.11: 49.13 (SE +/- 0.15, N = 3); v6.13 7 Dec: 48.70 (SE +/- 0.27, N = 3).
DaCapo Benchmark 23.11 - Java Test: Avrora AVR Simulation Framework (msec, Fewer Is Better): v6.12: 2303 (SE +/- 15.56, N = 13); v6.11: 2354 (SE +/- 16.90, N = 3); v6.13 7 Dec: 2283 (SE +/- 20.72, N = 15).
Primesieve 12.6 - Length: 1e13 (Seconds, Fewer Is Better): v6.12: 24.97 (SE +/- 0.03, N = 3); v6.11: 25.04 (SE +/- 0.04, N = 3); v6.13 7 Dec: 24.94 (SE +/- 0.02, N = 3). 1. (CXX) g++ options: -O3
Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better): v6.12: 16384 (SE +/- 0.00, N = 3); v6.11: 16384 (SE +/- 0.00, N = 3); v6.13 7 Dec: 16384 (SE +/- 0.00, N = 3).
Llamafile 0.8.16 - Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 16 (Tokens Per Second, More Is Better): v6.12: 15.01 (SE +/- 0.74, N = 12); v6.11: 14.98 (SE +/- 0.74, N = 12); v6.13 7 Dec: 14.90 (SE +/- 0.73, N = 12).
DaCapo Benchmark 23.11 - Java Test: Eclipse (msec, Fewer Is Better): v6.12: 6280 (SE +/- 33.39, N = 3); v6.11: 6349 (SE +/- 14.75, N = 3); v6.13 7 Dec: 6331 (SE +/- 23.78, N = 3).
Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): v6.12: 32768 (SE +/- 0.00, N = 3); v6.11: 32768 (SE +/- 0.00, N = 3); v6.13 7 Dec: 32768 (SE +/- 0.00, N = 3).
DaCapo Benchmark 23.11 - Java Test: H2 Database Engine (msec, Fewer Is Better): v6.12: 2049 (SE +/- 13.65, N = 3); v6.11: 2104 (SE +/- 28.09, N = 3); v6.13 7 Dec: 2010 (SE +/- 18.21, N = 3).
Blender 4.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better): v6.12: 21.49 (SE +/- 0.06, N = 3); v6.11: 21.66 (SE +/- 0.12, N = 3); v6.13 7 Dec: 21.35 (SE +/- 0.06, N = 3).
DaCapo Benchmark 23.11 - Java Test: jMonkeyEngine (msec, Fewer Is Better): v6.12: 6797 (SE +/- 4.84, N = 3); v6.11: 6791 (SE +/- 1.86, N = 3); v6.13 7 Dec: 6793 (SE +/- 1.86, N = 3).
Blender 4.3 - Blend File: Junkshop - Compute: CPU-Only (Seconds, Fewer Is Better): v6.12: 20.21 (SE +/- 0.06, N = 3); v6.11: 20.24 (SE +/- 0.04, N = 3); v6.13 7 Dec: 20.20 (SE +/- 0.06, N = 3).
Llamafile 0.8.16 - Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 512 (Tokens Per Second, More Is Better): v6.12: 3072 (SE +/- 0.00, N = 3); v6.11: 3072 (SE +/- 0.00, N = 3); v6.13 7 Dec: 3072 (SE +/- 0.00, N = 3).
DaCapo Benchmark 23.11 - Java Test: PMD Source Code Analyzer (msec, Fewer Is Better): v6.12: 1167 (SE +/- 23.97, N = 15); v6.11: 1085 (SE +/- 5.90, N = 3); v6.13 7 Dec: 1089 (SE +/- 18.53, N = 15).
Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 128 (Tokens Per Second, More Is Better): v6.12: 111.98 (SE +/- 1.04, N = 3); v6.11: 112.56 (SE +/- 0.87, N = 3); v6.13 7 Dec: 109.60 (SE +/- 1.53, N = 3).
DaCapo Benchmark 23.11 - Java Test: Apache Kafka (msec, Fewer Is Better): v6.12: 5045 (SE +/- 26.64, N = 3); v6.11: 5070 (SE +/- 7.33, N = 3); v6.13 7 Dec: 5057 (SE +/- 3.53, N = 3).
DaCapo Benchmark 23.11 - Java Test: Apache Lucene Search Index (msec, Fewer Is Better): v6.12: 2258 (SE +/- 8.39, N = 3); v6.11: 2294 (SE +/- 16.64, N = 3); v6.13 7 Dec: 2278 (SE +/- 25.32, N = 3).
Blender 4.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better): v6.12: 15.08 (SE +/- 0.05, N = 3); v6.11: 15.19 (SE +/- 0.00, N = 3); v6.13 7 Dec: 15.02 (SE +/- 0.04, N = 3).
Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16 (Tokens Per Second, More Is Better): v6.12: 64.80 (SE +/- 1.75, N = 12); v6.11: 64.87 (SE +/- 1.73, N = 12); v6.13 7 Dec: 64.15 (SE +/- 1.70, N = 12).
OpenVINO GenAI 2024.5 - Model: TinyLlama-1.1B-Chat-v1.0 - Device: CPU - Time Per Output Token (ms, Fewer Is Better): v6.12: 12.79 (SE +/- 0.04, N = 3); v6.11: 12.76 (SE +/- 0.09, N = 3); v6.13 7 Dec: 12.88 (SE +/- 0.04, N = 3).
OpenVINO GenAI 2024.5 - Model: TinyLlama-1.1B-Chat-v1.0 - Device: CPU - Time To First Token (ms, Fewer Is Better): v6.12: 15.89 (SE +/- 0.05, N = 3); v6.11: 15.86 (SE +/- 0.19, N = 3); v6.13 7 Dec: 16.05 (SE +/- 0.08, N = 3).
OpenVINO GenAI 2024.5 - Model: TinyLlama-1.1B-Chat-v1.0 - Device: CPU (tokens/s, More Is Better): v6.12: 78.18 (SE +/- 0.23, N = 3); v6.11: 78.36 (SE +/- 0.57, N = 3); v6.13 7 Dec: 77.67 (SE +/- 0.28, N = 3).
DaCapo Benchmark 23.11 - Java Test: Jython (msec, Fewer Is Better): v6.12: 3755 (SE +/- 5.33, N = 3); v6.11: 3782 (SE +/- 6.67, N = 3); v6.13 7 Dec: 3763 (SE +/- 11.00, N = 3).
SVT-AV1 2.3 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better): v6.12: 200.32 (SE +/- 1.77, N = 3); v6.11: 199.21 (SE +/- 1.84, N = 3); v6.13 7 Dec: 200.82 (SE +/- 0.64, N = 3). 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): v6.12: 32768 (SE +/- 0.00, N = 3); v6.11: 32768 (SE +/- 0.00, N = 3); v6.13 7 Dec: 32768 (SE +/- 0.00, N = 3).
DaCapo Benchmark 23.11 - Java Test: BioJava Biological Data Framework (msec, Fewer Is Better): v6.12: 4071 (SE +/- 18.98, N = 3); v6.11: 4058 (SE +/- 11.89, N = 3); v6.13 7 Dec: 4019 (SE +/- 22.00, N = 3).
Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 512 (Tokens Per Second, More Is Better): v6.12: 8192 (SE +/- 0.00, N = 3); v6.11: 8192 (SE +/- 0.00, N = 3); v6.13 7 Dec: 8192 (SE +/- 0.00, N = 3).
DaCapo Benchmark 23.11 - Java Test: Spring Boot (msec, Fewer Is Better): v6.12: 2259 (SE +/- 20.60, N = 3); v6.11: 2724 (SE +/- 17.89, N = 3); v6.13 7 Dec: 2651 (SE +/- 26.77, N = 3).
OpenVINO GenAI 2024.5 - Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time Per Output Token (ms, Fewer Is Better): v6.12: 15.19 (SE +/- 0.02, N = 3); v6.11: 15.18 (SE +/- 0.10, N = 3); v6.13 7 Dec: 15.17 (SE +/- 0.18, N = 3).
OpenVINO GenAI 2024.5 - Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time To First Token (ms, Fewer Is Better): v6.12: 22.73 (SE +/- 0.37, N = 3); v6.11: 22.66 (SE +/- 0.24, N = 3); v6.13 7 Dec: 22.38 (SE +/- 0.43, N = 3).
OpenVINO GenAI 2024.5 - Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU (tokens/s, More Is Better): v6.12: 65.82 (SE +/- 0.10, N = 3); v6.11: 65.89 (SE +/- 0.41, N = 3); v6.13 7 Dec: 65.91 (SE +/- 0.79, N = 3).
Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better): v6.12: 16384 (SE +/- 0.00, N = 3); v6.11: 16384 (SE +/- 0.00, N = 3); v6.13 7 Dec: 16384 (SE +/- 0.00, N = 3).
Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 16 (Tokens Per Second, More Is Better): v6.12: 107.95 (SE +/- 2.55, N = 12); v6.11: 108.22 (SE +/- 2.48, N = 12); v6.13 7 Dec: 102.21 (SE +/- 2.52, N = 14).
DaCapo Benchmark 23.11 - Java Test: GraphChi (msec, Fewer Is Better): v6.12: 2083 (SE +/- 5.51, N = 3); v6.11: 2093 (SE +/- 13.86, N = 3); v6.13 7 Dec: 2102 (SE +/- 6.00, N = 3).
Llamafile 0.8.16 - Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 256 (Tokens Per Second, More Is Better): v6.12: 1536 (SE +/- 0.00, N = 3); v6.11: 1536 (SE +/- 0.00, N = 3); v6.13 7 Dec: 1536 (SE +/- 0.00, N = 3).
Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 16 (Tokens Per Second, More Is Better): v6.12: 152.89 (SE +/- 4.63, N = 12); v6.11: 152.07 (SE +/- 4.56, N = 12); v6.13 7 Dec: 147.98 (SE +/- 4.35, N = 12).
Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 256 (Tokens Per Second, More Is Better): v6.12: 4096 (SE +/- 0.00, N = 3); v6.11: 4096 (SE +/- 0.00, N = 3); v6.13 7 Dec: 4096 (SE +/- 0.00, N = 3).
SVT-AV1 2.3 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better): v6.12: 468.67 (SE +/- 1.95, N = 3); v6.11: 466.61 (SE +/- 3.40, N = 3); v6.13 7 Dec: 480.91 (SE +/- 0.44, N = 3). 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better): v6.12: 16384 (SE +/- 0.00, N = 3); v6.11: 16384 (SE +/- 0.00, N = 3); v6.13 7 Dec: 16384 (SE +/- 0.00, N = 3).
DaCapo Benchmark 23.11 - Java Test: Apache Tomcat (msec, Fewer Is Better): v6.12: 1038 (SE +/- 3.51, N = 3); v6.11: 1053 (SE +/- 7.81, N = 3); v6.13 7 Dec: 1055 (SE +/- 2.00, N = 3).
Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 512 (Tokens Per Second, More Is Better): v6.12: 8192 (SE +/- 0.00, N = 3); v6.11: 8192 (SE +/- 0.00, N = 3); v6.13 7 Dec: 8192 (SE +/- 0.00, N = 3).
DaCapo Benchmark 23.11 - Java Test: Zxing 1D/2D Barcode Image Processing (msec, Fewer Is Better): v6.12: 551 (SE +/- 4.38, N = 9); v6.11: 522 (SE +/- 4.26, N = 3); v6.13 7 Dec: 493 (SE +/- 5.55, N = 3).
DaCapo Benchmark 23.11 - Java Test: Batik SVG Toolkit (msec, Fewer Is Better): v6.12: 937 (SE +/- 4.00, N = 3); v6.11: 923 (SE +/- 3.61, N = 3); v6.13 7 Dec: 938 (SE +/- 7.55, N = 3).
DaCapo Benchmark 23.11 - Java Test: Apache Xalan XSLT (msec, Fewer Is Better): v6.12: 741 (SE +/- 1.76, N = 3); v6.11: 743 (SE +/- 6.06, N = 3); v6.13 7 Dec: 742 (SE +/- 8.54, N = 3).
Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 256 (Tokens Per Second, More Is Better): v6.12: 4096 (SE +/- 0.00, N = 3); v6.11: 4096 (SE +/- 0.00, N = 3); v6.13 7 Dec: 4096 (SE +/- 0.00, N = 3).
Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 512 (Tokens Per Second, More Is Better): v6.12: 8192 (SE +/- 0.00, N = 3); v6.11: 8192 (SE +/- 0.00, N = 3); v6.13 7 Dec: 8192 (SE +/- 0.00, N = 3).
DaCapo Benchmark 23.11 - Java Test: FOP Print Formatter (msec, Fewer Is Better): v6.12: 335 (SE +/- 2.08, N = 3); v6.11: 339 (SE +/- 0.67, N = 3); v6.13 7 Dec: 338 (SE +/- 1.20, N = 3).
Primesieve 12.6 - Length: 1e12 (Seconds, Fewer Is Better): v6.12: 1.990 (SE +/- 0.001, N = 3); v6.11: 1.999 (SE +/- 0.003, N = 3); v6.13 7 Dec: 1.988 (SE +/- 0.002, N = 3). 1. (CXX) g++ options: -O3
Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 256 (Tokens Per Second, More Is Better): v6.12: 4096 (SE +/- 0.00, N = 3); v6.11: 4096 (SE +/- 0.00, N = 3); v6.13 7 Dec: 4096 (SE +/- 0.00, N = 3).
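For comparing kernels across listings like these, it can help to normalize each result to a percent change, flipping the sign for "Fewer Is Better" metrics so that a positive value always means v6.13 improved. A small sketch over a handful of the v6.12 vs. v6.13 (7 Dec) values above:

```python
# A few v6.12 vs. v6.13 (7 Dec) pairs taken from the listings above.
# higher_is_better=False marks time-like metrics (fewer is better).
results = [
    ("SVT-AV1 Preset 13 - Bosphorus 4K (FPS)", 468.67, 480.91, True),
    ("Stress-NG NUMA (Bogo Ops/s)", 2298.65, 2385.59, True),
    ("MariaDB oltp_update_index - 128 (QPS)", 183537, 158304, True),
    ("Timed Linux Kernel Compilation allmodconfig (s)", 193.20, 192.92, False),
]

for name, v612, v613, higher_is_better in results:
    change = (v613 - v612) / v612 * 100
    if not higher_is_better:
        change = -change  # a drop in elapsed time is an improvement
    print(f"{name}: {change:+.1f}% for v6.13 vs. v6.12")
```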
Phoronix Test Suite v10.8.5