eoy2024

Benchmarks for a future article. AMD EPYC 4484PX 12-Core testing with a Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS) and ASPEED on Ubuntu 24.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2412086-NE-EOY20243255
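For reference, a minimal sketch of reproducing this comparison from a stock Ubuntu 24.04 install follows; it assumes the phoronix-test-suite package available in the Ubuntu archive (installing the upstream .deb or a Git checkout from phoronix-test-suite.com works the same way):

sudo apt-get install phoronix-test-suite
phoronix-test-suite benchmark 2412086-NE-EOY20243255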

Run Management

Identifier: a - Date Run: December 05 - Test Duration: 6 Hours, 48 Minutes
Identifier: 4484PX - Date Run: December 07 - Test Duration: 7 Hours, 3 Minutes
Identifier: px - Date Run: December 07 - Test Duration: 6 Hours, 58 Minutes


eoy2024 System Details

Processor: a: AMD EPYC 4564P 16-Core @ 5.88GHz (16 Cores / 32 Threads); 4484PX / px: AMD EPYC 4484PX 12-Core @ 5.66GHz (12 Cores / 24 Threads)
Motherboard: Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS)
Chipset: AMD Device 14d8
Memory: 2 x 32GB DRAM-4800MT/s Micron MTC20C2085S1EC48BA1 BC
Disk: 3201GB Micron_7450_MTFDKCC3T2TFS + 960GB SAMSUNG MZ1L2960HCJR-00A07
Graphics: ASPEED
Audio: AMD Rembrandt Radeon HD Audio
Monitor: VA2431
Network: 2 x Intel I210
OS: Ubuntu 24.04
Kernel: a: 6.8.0-11-generic (x86_64); 4484PX / px: 6.12.2-061202-generic (x86_64)
Desktop: GNOME Shell 45.3
Display Server: X Server 1.21.1.11
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-fxIygj/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-fxIygj/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details:
- a: Scaling Governor: amd-pstate-epp performance (EPP: performance) - CPU Microcode: 0xa601209
- 4484PX: Scaling Governor: amd-pstate-epp performance (Boost: Enabled EPP: performance) - CPU Microcode: 0xa601209
- px: Scaling Governor: amd-pstate-epp performance (Boost: Enabled EPP: performance) - CPU Microcode: 0xa601209
Java Details: OpenJDK Runtime Environment (build 21.0.2+13-Ubuntu-2)
Python Details: Python 3.12.3
Security Details:
- a: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- 4484PX: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- px: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite, relative performance of a / 4484PX / px across): Apache Cassandra, BYTE Unix Benchmark, ASTC Encoder, Primesieve, Etcpak, POV-Ray, OpenSSL, Blender, ACES DGEMM, OSPRay, Stockfish, RELION, 7-Zip Compression, Build2, Rustls, LiteRT, NAMD, x265, SVT-AV1, Timed Eigen Compilation, Whisperfile, oneDNN, Numpy Benchmark, simdjson, Apache CouchDB, Whisper.cpp, QuantLib, Llama.cpp, GROMACS, XNNPACK, Gcrypt Library, CP2K Molecular Dynamics, Y-Cruncher, Llamafile, PyPerformance, ONNX Runtime, OpenVINO GenAI, Renaissance, FinanceBench

eoy2024 condensed result table: the full per-test values for a, 4484PX, and px; the individual test results are broken out below. OpenBenchmarking.org

QuantLib

QuantLib 1.35-dev - Size: S (tasks/s, More Is Better): a: 12.75, 4484PX: 11.86, px: 11.84. 1. (CXX) g++ options: -O3 -march=native -fPIE -pie

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 3 - Input: Beauty 4K 10-bit (Frames Per Second, More Is Better): a: 1.422, 4484PX: 1.188, px: 1.184. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

RELION

RELION 5.0 - Test: Basic - Device: CPU (Seconds, Fewer Is Better): a: 944.27, 4484PX: 729.40, px: 733.02. 1. (CXX) g++ options: -fPIC -std=c++14 -fopenmp -O3 -rdynamic -lfftw3f -lfftw3 -ldl -ltiff -lpng -ljpeg -lmpi_cxx -lmpi

Whisper.cpp

Whisper.cpp 1.6.2 - Model: ggml-medium.en - Input: 2016 State of the Union (Seconds, Fewer Is Better): a: 700.91, 4484PX: 809.79, px: 809.49. 1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

Blender

Blender 4.3 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better): a: 506.20, 4484PX: 679.34, px: 678.40

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3 - Input: H20-256 (Seconds, Fewer Is Better): a: 592.86, 4484PX: 628.10, px: 631.31. 1. (F9X) gfortran options: -fopenmp -march=native -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kgrid -lcp2kgriddgemm -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kdbx -lcp2kdbm -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -l:libhdf5_fortran.a -l:libhdf5.a -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -llibgrpp -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -l:libopenblas.a -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm

Apache CouchDB

Apache CouchDB 3.4.1 - Bulk Size: 500 - Inserts: 3000 - Rounds: 30 (Seconds, Fewer Is Better): a: 511.78, 4484PX: 559.35, px: 560.70. 1. (CXX) g++ options: -flto -lstdc++ -shared -lei

Whisperfile

Whisperfile 20Aug24 - Model Size: Medium (Seconds, Fewer Is Better): a: 534.92, 4484PX: 473.55, px: 475.51

Llamafile

Llamafile 0.8.16 - Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): a: 12288, 4484PX: 12288, px: 12288

QuantLib

QuantLib 1.35-dev - Size: XXS (tasks/s, More Is Better): a: 13.43, 4484PX: 12.12, px: 12.11. 1. (CXX) g++ options: -O3 -march=native -fPIE -pie

Apache CouchDB

Apache CouchDB 3.4.1 - Bulk Size: 300 - Inserts: 3000 - Rounds: 30 (Seconds, Fewer Is Better): a: 367.83, 4484PX: 406.12, px: 408.48. 1. (CXX) g++ options: -flto -lstdc++ -shared -lei

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git - Computational Test: Whetstone Double (MWIPS, More Is Better): a: 343491.9, 4484PX: 244075.3, px: 244131.0. 1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

Llamafile

Llamafile 0.8.16 - Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 128 (Tokens Per Second, More Is Better): a: 1.99, 4484PX: 2.05, px: 2.05

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 3 - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 9.590, 4484PX: 7.684, px: 7.646. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Llamafile

Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): a: 32768, 4484PX: 32768, px: 32768

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git - Computational Test: Pipe (LPS, More Is Better): a: 48806257.1, 4484PX: 33443359.2, px: 33381363.1
BYTE Unix Benchmark 5.1.3-git - Computational Test: Dhrystone 2 (LPS, More Is Better): a: 1866536062.7, 4484PX: 1346521770.3, px: 1340340196.6
BYTE Unix Benchmark 5.1.3-git - Computational Test: System Call (LPS, More Is Better): a: 49140426.6, 4484PX: 30761218.9, px: 30701622.8
1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

Whisper.cpp

Whisper.cpp 1.6.2 - Model: ggml-small.en - Input: 2016 State of the Union (Seconds, Fewer Is Better): a: 245.08, 4484PX: 268.24, px: 266.81. 1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

Apache CouchDB

Apache CouchDB 3.4.1 - Bulk Size: 100 - Inserts: 3000 - Rounds: 30 (Seconds, Fewer Is Better): a: 232.19, 4484PX: 253.99, px: 254.73. 1. (CXX) g++ options: -flto -lstdc++ -shared -lei

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 5 - Input: Beauty 4K 10-bit (Frames Per Second, More Is Better): a: 6.504, 4484PX: 5.602, px: 5.551. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Blender

Blender 4.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better): a: 166.12, 4484PX: 226.34, px: 224.64

XNNPACK

XNNPACK b7b048 - Model: QS8MobileNetV2 (us, Fewer Is Better): a: 844, 4484PX: 717, px: 723
XNNPACK b7b048 - Model: FP16MobileNetV3Small (us, Fewer Is Better): a: 920, 4484PX: 779, px: 798
XNNPACK b7b048 - Model: FP16MobileNetV3Large (us, Fewer Is Better): a: 1498, 4484PX: 1467, px: 1527
XNNPACK b7b048 - Model: FP16MobileNetV2 (us, Fewer Is Better): a: 1190, 4484PX: 1217, px: 1248
XNNPACK b7b048 - Model: FP16MobileNetV1 (us, Fewer Is Better): a: 1143, 4484PX: 1383, px: 1386
XNNPACK b7b048 - Model: FP32MobileNetV3Small (us, Fewer Is Better): a: 979, 4484PX: 809, px: 837
XNNPACK b7b048 - Model: FP32MobileNetV3Large (us, Fewer Is Better): a: 1810, 4484PX: 1515, px: 1574
XNNPACK b7b048 - Model: FP32MobileNetV2 (us, Fewer Is Better): a: 1495, 4484PX: 1365, px: 1368
XNNPACK b7b048 - Model: FP32MobileNetV1 (us, Fewer Is Better): a: 1252, 4484PX: 1257, px: 1272
1. (CXX) g++ options: -O3 -lrt -lm

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): a: 62.97, 4484PX: 63.61, px: 63.41
Llama.cpp b4154 - Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): a: 63.09, 4484PX: 63.80, px: 63.79
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

OpenSSL

OpenSSL - Algorithm: ChaCha20 (byte/s, More Is Better): a: 130588495050, 4484PX: 97105235690, px: 97019897450
OpenSSL - Algorithm: ChaCha20-Poly1305 (byte/s, More Is Better): a: 92393529340, 4484PX: 68816544020, px: 68678955550
OpenSSL - Algorithm: AES-256-GCM (byte/s, More Is Better): a: 97172751700, 4484PX: 71160291870, px: 70902656480
OpenSSL - Algorithm: AES-128-GCM (byte/s, More Is Better): a: 104784522170, 4484PX: 76496336760, px: 76184405610
1. OpenSSL 3.0.13 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024) - Additional Parameters: -engine qatengine -async_jobs 8

Blender

Blender 4.3 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better): a: 143.36, 4484PX: 197.20, px: 197.53

Whisperfile

Whisperfile 20Aug24 - Model Size: Small (Seconds, Fewer Is Better): a: 195.42, 4484PX: 173.38, px: 167.89

Llamafile

Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 128 (Tokens Per Second, More Is Better): a: 10.47, 4484PX: 10.91, px: 10.93
Llamafile 0.8.16 - Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better): a: 6144, 4484PX: 6144, px: 6144

Rustls

Rustls 0.23.17 - Benchmark: handshake-ticket - Suite: TLS13_CHACHA20_POLY1305_SHA256 (handshakes/s, More Is Better): a: 404263.45, 4484PX: 344296.24, px: 342775.29
Rustls 0.23.17 - Benchmark: handshake-resume - Suite: TLS13_CHACHA20_POLY1305_SHA256 (handshakes/s, More Is Better): a: 388077.69, 4484PX: 333882.92, px: 333574.30
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Gcrypt Library

Gcrypt Library 1.10.3 (Seconds, Fewer Is Better): a: 162.13, 4484PX: 171.02, px: 163.84. 1. (CC) gcc options: -O2 -fvisibility=hidden

OSPRay

OSPRay 3.2 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better): a: 8.98486, 4484PX: 6.44913, px: 6.52304

Apache CouchDB

Apache CouchDB 3.4.1 - Bulk Size: 500 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better): a: 148.05, 4484PX: 164.47, px: 164.81. 1. (CXX) g++ options: -flto -lstdc++ -shared -lei

Rustls

Rustls 0.23.17 - Benchmark: handshake-ticket - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (handshakes/s, More Is Better): a: 1553632.14, 4484PX: 1329363.10, px: 1340712.85
Rustls 0.23.17 - Benchmark: handshake-resume - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (handshakes/s, More Is Better): a: 1820810.21, 4484PX: 1586292.42, px: 1572010.68
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

OSPRay

OSPRay 3.2 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, More Is Better): a: 236.25, 4484PX: 199.02, px: 197.20

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 3 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 29.57, 4484PX: 25.45, px: 25.45. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Cassandra

Apache Cassandra 5.0 - Test: Writes (Op/s, More Is Better): a: 271333, 4484PX: 174960, px: 173946

PyPerformance

PyPerformance 1.11 - Benchmark: async_tree_io (Milliseconds, Fewer Is Better): a: 755, 4484PX: 666, px: 656

OpenVINO GenAI

OpenVINO GenAI 2024.5 - Model: Gemma-7b-int4-ov - Device: CPU - Time Per Output Token (ms, Fewer Is Better): a: 101.72, 4484PX: 97.79, px: 97.61
OpenVINO GenAI 2024.5 - Model: Gemma-7b-int4-ov - Device: CPU - Time To First Token (ms, Fewer Is Better): a: 106.62, 4484PX: 121.48, px: 122.30
OpenVINO GenAI 2024.5 - Model: Gemma-7b-int4-ov - Device: CPU (tokens/s, More Is Better): a: 9.83, 4484PX: 10.23, px: 10.24

ASTC Encoder

ASTC Encoder 5.0 - Preset: Very Thorough (MT/s, More Is Better): a: 2.7410, 4484PX: 1.9412, px: 1.9391. 1. (CXX) g++ options: -O3 -flto -pthread

Apache CouchDB

Apache CouchDB 3.4.1 - Bulk Size: 300 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better): a: 106.13, 4484PX: 117.57, px: 119.35. 1. (CXX) g++ options: -flto -lstdc++ -shared -lei

ASTC Encoder

ASTC Encoder 5.0 - Preset: Exhaustive (MT/s, More Is Better): a: 1.6844, 4484PX: 1.1887, px: 1.1862. 1. (CXX) g++ options: -O3 -flto -pthread

OSPRay

OSPRay 3.2 - Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better): a: 9.00917, 4484PX: 6.52776, px: 6.52206

GROMACS

GROMACS - Input: water_GMX50_bare (Ns Per Day, More Is Better): a: 1.692, 4484PX: 1.577, px: 1.575. 1. GROMACS version: 2023.3-Ubuntu_2023.3_1ubuntu3

PyPerformance

PyPerformance 1.11 - Benchmark: xml_etree (Milliseconds, Fewer Is Better): a: 35.8, 4484PX: 36.8, px: 36.5

Llamafile

Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better): a: 16384, 4484PX: 16384, px: 16384

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 8 - Input: Beauty 4K 10-bit (Frames Per Second, More Is Better): a: 12.47, 4484PX: 10.97, px: 10.86. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Build2

Build2 0.17 - Time To Compile (Seconds, Fewer Is Better): a: 92.05, 4484PX: 111.65, px: 113.78

PyPerformance

PyPerformance 1.11 - Benchmark: asyncio_tcp_ssl (Milliseconds, Fewer Is Better): a: 645, 4484PX: 590, px: 590

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, More Is Better): a: 775.75, 4484PX: 745.59, px: 831.42

Primesieve

Primesieve 12.6 - Length: 1e13 (Seconds, Fewer Is Better): a: 78.50, 4484PX: 110.61, px: 110.71. 1. (CXX) g++ options: -O3

simdjson

simdjson 3.10 - Throughput Test: Kostya (GB/s, More Is Better): a: 5.97, 4484PX: 6.11, px: 5.45. 1. (CXX) g++ options: -O3 -lrt

Llamafile

Llamafile 0.8.16 - Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 512 (Tokens Per Second, More Is Better): a: 3072, 4484PX: 3072, px: 3072
Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): a: 32768, 4484PX: 32768, px: 32768

PyPerformance

PyPerformance 1.11 - Benchmark: python_startup (Milliseconds, Fewer Is Better): a: 5.77, 4484PX: 6.08, px: 6.09

Llamafile

Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 128 (Tokens Per Second, More Is Better): a: 20.13, 4484PX: 20.39, px: 20.51

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Text Generation 128 (Tokens Per Second, More Is Better): a: 6.88, 4484PX: 7.11, px: 7.12. 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3 - Input: Fayalite-FIST (Seconds, Fewer Is Better): a: 94.03, 4484PX: 92.21, px: 94.90. 1. (F9X) gfortran options: as listed for Input: H20-256 above

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better): a: 70.85, 4484PX: 66.57, px: 66.35
Llama.cpp b4154 - Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better): a: 69.26, 4484PX: 66.85, px: 66.52
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Whisper.cpp

Whisper.cpp 1.6.2 - Model: ggml-base.en - Input: 2016 State of the Union (Seconds, Fewer Is Better): a: 87.49, 4484PX: 92.71, px: 93.45. 1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Text Generation 128 (Tokens Per Second, More Is Better): a: 7.24, 4484PX: 7.41, px: 7.44. 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Blender

Blender 4.3 - Blend File: Junkshop - Compute: CPU-Only (Seconds, Fewer Is Better): a: 73.56, 4484PX: 97.01, px: 97.10
Blender 4.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better): a: 71.35, 4484PX: 96.67, px: 97.09

OpenVINO GenAI

OpenVINO GenAI 2024.5 - Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time Per Output Token (ms, Fewer Is Better): a: 77.34, 4484PX: 74.65, px: 74.54
OpenVINO GenAI 2024.5 - Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time To First Token (ms, Fewer Is Better): a: 86.06, 4484PX: 93.01, px: 93.00
OpenVINO GenAI 2024.5 - Model: Falcon-7b-instruct-int4-ov - Device: CPU (tokens/s, More Is Better): a: 12.93, 4484PX: 13.40, px: 13.41

NAMD

NAMD 3.0 - Input: STMV with 1,066,628 Atoms (ns/day, More Is Better): a: 0.75656, 4484PX: 0.65119, px: 0.65448

Rustls

Rustls 0.23.17 - Benchmark: handshake - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (handshakes/s, More Is Better): a: 423535.68, 4484PX: 306153.20, px: 304060.28. 1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

simdjson

simdjson 3.10 - Throughput Test: LargeRandom (GB/s, More Is Better): a: 1.83, 4484PX: 1.84, px: 1.84. 1. (CXX) g++ options: -O3 -lrt

Renaissance

Renaissance 0.16 - Test: ALS Movie Lens (ms, Fewer Is Better): a: 9805.7 (MIN: 9253.4 / MAX: 10057.61), 4484PX: 9378.8 (MIN: 8718.36 / MAX: 9413.7), px: 9275.7 (MIN: 8821.09 / MAX: 9495.91)

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 5 - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 34.54, 4484PX: 29.09, px: 28.82. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Stockfish

Stockfish 17 - Chess Benchmark (Nodes Per Second, More Is Better): a: 54752796, 4484PX: 45267546, px: 42973396. 1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver

oneDNN

oneDNN 3.6 - Harness: Recurrent Neural Network Training - Engine: CPU (ms, Fewer Is Better): a: 1372.03 (MIN: 1342.06), 4484PX: 1898.36 (MIN: 1894.26), px: 1895.68 (MIN: 1892.59)
oneDNN 3.6 - Harness: Recurrent Neural Network Inference - Engine: CPU (ms, Fewer Is Better): a: 700.86 (MIN: 679.89), 4484PX: 965.02 (MIN: 963.27), px: 966.01 (MIN: 963.43)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Apache CouchDB

Apache CouchDB 3.4.1 - Bulk Size: 100 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better): a: 69.93, 4484PX: 75.90, px: 76.39. 1. (CXX) g++ options: -flto -lstdc++ -shared -lei

simdjson

simdjson 3.10 - Throughput Test: DistinctUserID (GB/s, More Is Better): a: 10.46, 4484PX: 10.76, px: 8.97. 1. (CXX) g++ options: -O3 -lrt

Llamafile

Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 128 (Tokens Per Second, More Is Better): a: 26.28, 4484PX: 27.59, px: 27.80

simdjson

simdjson 3.10 - Throughput Test: TopTweet (GB/s, More Is Better): a: 10.46, 4484PX: 10.82, px: 10.51. 1. (CXX) g++ options: -O3 -lrt

Renaissance

Renaissance 0.16 - Test: In-Memory Database Shootout (ms, Fewer Is Better): a: 3256.1 (MIN: 3019.89 / MAX: 3599.5), 4484PX: 3241.5 (MIN: 3037.03 / MAX: 3491.91), px: 3175.6 (MIN: 2896.06 / MAX: 3367.44)

simdjson

simdjson 3.10 - Throughput Test: PartialTweets (GB/s, More Is Better): a: 9.76, 4484PX: 10.10, px: 8.35. 1. (CXX) g++ options: -O3 -lrt

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 13 - Input: Beauty 4K 10-bit (Frames Per Second, More Is Better): a: 18.59, 4484PX: 17.41, px: 17.36. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Renaissance

Renaissance 0.16 - Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better): a: 4403.8 (MAX: 5719.11), 4484PX: 4038.4 (MIN: 4038.36 / MAX: 5089.28), px: 4002.3 (MIN: 4002.27 / MAX: 4983.72)
Renaissance 0.16 - Test: Apache Spark PageRank (ms, Fewer Is Better): a: 2412.2 (MIN: 1691.04), 4484PX: 2138.1 (MIN: 1499.64), px: 2229.7 (MIN: 1612.96 / MAX: 2229.74)

Blender

Blender 4.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better): a: 53.55, 4484PX: 74.08, px: 73.16

Renaissance

Renaissance 0.16 - Test: Gaussian Mixture Model (ms, Fewer Is Better): a: 3399.5 (MIN: 2471.52), 4484PX: 3860.6 (MIN: 2758.89 / MAX: 3860.61), px: 3815.2 (MIN: 2749.56 / MAX: 3815.24)

Stockfish

Stockfish - Chess Benchmark (Nodes Per Second, More Is Better): a: 46507038, 4484PX: 33702298, px: 33871595. 1. Stockfish 16 by the Stockfish developers (see AUTHORS file)

PyPerformance

PyPerformance 1.11 - Benchmark: gc_collect (Milliseconds, Fewer Is Better): a: 677, 4484PX: 699, px: 706

Renaissance

Renaissance 0.16 - Test: Savina Reactors.IO (ms, Fewer Is Better): a: 3506.4 (MIN: 3506.38 / MAX: 4329.37), 4484PX: 3655.8 (MIN: 3655.76 / MAX: 4484.97), px: 3676.0 (MAX: 4536.84)

Llamafile

Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 512 (Tokens Per Second, More Is Better): a: 8192, 4484PX: 8192, px: 8192

OSPRay

OSPRay 3.2 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better): a: 7.58789, 4484PX: 5.54888, px: 5.61470
OSPRay 3.2 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better): a: 7.63944, 4484PX: 5.63122, px: 5.71084

Renaissance

Renaissance 0.16 - Test: Apache Spark Bayes (ms, Fewer Is Better): a: 490.0 (MIN: 459.29 / MAX: 580.9), 4484PX: 513.2 (MIN: 453.66 / MAX: 554.7), px: 474.9 (MIN: 454.77 / MAX: 514.32)

Timed Eigen Compilation

Timed Eigen Compilation 3.4.0 - Time To Compile (Seconds, Fewer Is Better): a: 58.66, 4484PX: 67.36, px: 67.08

Renaissance

Renaissance 0.16 - Test: Finagle HTTP Requests (ms, Fewer Is Better): a: 2319.4 (MIN: 1832.84), 4484PX: 2492.2 (MIN: 1947.63), px: 2483.1 (MIN: 1933.43)
Renaissance 0.16 - Test: Random Forest (ms, Fewer Is Better): a: 414.4 (MIN: 322.79 / MAX: 466.1), 4484PX: 422.0 (MIN: 357.91 / MAX: 497.55), px: 453.2 (MIN: 352.31 / MAX: 513.31)

OSPRay

OSPRay 3.2 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better): a: 8.82093, 4484PX: 6.41198, px: 6.40740

Renaissance

Renaissance 0.16 - Test: Scala Dotty (ms, Fewer Is Better): a: 477.0 (MIN: 371.54 / MAX: 736.5), 4484PX: 428.6 (MIN: 378.22 / MAX: 628.77), px: 436.2 (MIN: 380.62 / MAX: 721.56)

ONNX Runtime

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard - Inference Time Cost (ms, Fewer Is Better): a: 648.52, 4484PX: 850.14, px: 854.33
ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 1.54196, 4484PX: 1.17627, px: 1.17050
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

Renaissance

Renaissance 0.16 - Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better): a: 732.8 (MIN: 713.67 / MAX: 813.49), 4484PX: 904.0 (MIN: 886.83 / MAX: 919.31), px: 920.7 (MIN: 888.75 / MAX: 934.44)

ONNX Runtime

ONNX Runtime 1.19 - Model: GPT-2 - Device: CPU - Executor: Standard - Inference Time Cost (ms, Fewer Is Better): a: 7.42776, 4484PX: 6.25815, px: 6.33034
ONNX Runtime 1.19 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 134.60, 4484PX: 159.71, px: 157.89
ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard - Inference Time Cost (ms, Fewer Is Better): a: 310.88, 4484PX: 355.75, px: 357.60
ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 3.21670, 4484PX: 2.81093, px: 2.79638
ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Standard - Inference Time Cost (ms, Fewer Is Better): a: 9.76985, 4484PX: 9.01322, px: 9.01687
ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 102.33, 4484PX: 110.94, px: 110.89
ONNX Runtime 1.19 - Model: bertsquad-12 - Device: CPU - Executor: Standard - Inference Time Cost (ms, Fewer Is Better): a: 64.14, 4484PX: 68.91, px: 68.61
ONNX Runtime 1.19 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 15.59, 4484PX: 14.51, px: 14.57
ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Standard - Inference Time Cost (ms, Fewer Is Better): a: 6.39112, 4484PX: 4.80287, px: 4.85142
ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 156.45, 4484PX: 208.17, px: 206.09
ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Standard - Inference Time Cost (ms, Fewer Is Better): a: 90.45, 4484PX: 93.16, px: 93.34
ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 11.06, 4484PX: 10.73, px: 10.71
ONNX Runtime 1.19 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard - Inference Time Cost (ms, Fewer Is Better): a: 23.55, 4484PX: 26.75, px: 26.95
ONNX Runtime 1.19 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 42.45, 4484PX: 37.38, px: 37.10
ONNX Runtime 1.19 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard - Inference Time Cost (ms, Fewer Is Better): a: 21.24, 4484PX: 24.94, px: 23.06
ONNX Runtime 1.19 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 47.07, 4484PX: 40.09, px: 43.36
ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard - Inference Time Cost (ms, Fewer Is Better): a: 1.57084, 4484PX: 1.06188, px: 1.06600
ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 636.32, 4484PX: 941.40, px: 937.78
ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard - Inference Time Cost (ms, Fewer Is Better): a: 2.55898, 4484PX: 2.80544, px: 2.80695
ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 390.60, 4484PX: 356.41, px: 356.19
ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Standard - Inference Time Cost (ms, Fewer Is Better): a: 7.08601, 4484PX: 7.98873, px: 7.99486
ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 141.12, 4484PX: 125.17, px: 125.08
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt (applies to each ONNX Runtime result above)

PyPerformance

PyPerformance 1.11 - Benchmark: asyncio_websockets (Milliseconds, Fewer Is Better): a: 315, 4484PX: 321, px: 322

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3 - Input: H20-64 (Seconds, Fewer Is Better): a: 58.19, 4484PX: 53.01, px: 52.72. 1. (F9X) gfortran options: as listed for Input: H20-256 above

ACES DGEMM

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, More Is Better): a: 1141.19, 4484PX: 842.73, px: 842.01. 1. (CC) gcc options: -ffast-math -mavx2 -O3 -fopenmp -lopenblas

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): a: 279.04, 4484PX: 222.75, px: 208.99. 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llamafile

Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better): a: 16384, 4484PX: 16384, px: 16384
Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): a: 32768, 4484PX: 32768, px: 32768

LiteRT

LiteRT 2024-10-15 - Model: Inception V4 (Microseconds, Fewer Is Better): a: 21477.8, 4484PX: 22083.3, px: 22752.4
LiteRT 2024-10-15 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better): a: 19530.2, 4484PX: 19477.8, px: 19490.7
LiteRT 2024-10-15 - Model: NASNet Mobile (Microseconds, Fewer Is Better): a: 16936.0, 4484PX: 8057.56, px: 7931.64
LiteRT 2024-10-15 - Model: DeepLab V3 (Microseconds, Fewer Is Better): a: 3579.67, 4484PX: 2343.38, px: 2359.99
LiteRT 2024-10-15 - Model: Mobilenet Float (Microseconds, Fewer Is Better): a: 1211.48, 4484PX: 1244.70, px: 1244.51
LiteRT 2024-10-15 - Model: SqueezeNet (Microseconds, Fewer Is Better): a: 1794.11, 4484PX: 1809.18, px: 1821.35
LiteRT 2024-10-15 - Model: Quantized COCO SSD MobileNet v1 (Microseconds, Fewer Is Better): a: 2129.52, 4484PX: 1420.15, px: 1417.35
LiteRT 2024-10-15 - Model: Mobilenet Quant (Microseconds, Fewer Is Better): a: 823.17, 4484PX: 848.94, px: 849.21

Rustls

Rustls 0.23.17 - Benchmark: handshake-ticket - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (handshakes/s, More Is Better): a: 2620332.00, 4484PX: 2282729.64, px: 2292879.44. 1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Llamafile

Llamafile 0.8.16 - Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 256 (Tokens Per Second, More Is Better): a: 1536, 4484PX: 1536, px: 1536

Rustls

Rustls 0.23.17 - Benchmark: handshake-resume - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (handshakes/s, More Is Better): a: 3563852.57, 4484PX: 3035330.21, px: 3038723.48. 1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 512 (Tokens Per Second, More Is Better): a: 70.76, 4484PX: 69.11, px: 67.95. 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llamafile

Llamafile 0.8.16 - Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 16 (Tokens Per Second, More Is Better): a: 1.78, 4484PX: 1.83, px: 1.84

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 512 (Tokens Per Second, More Is Better): a: 68.40, 4484PX: 68.20, px: 68.81. 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), Bonds (fixed-rate bond with flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25 - Benchmark: Bonds OpenMP (ms, Fewer Is Better): a: 33061.22, 4484PX: 34600.77, px: 34896.84. 1. (CXX) g++ options: -O3 -march=native -fopenmp

NAMD

NAMD 3.0 - Input: ATPase with 327,506 Atoms (ns/day, More Is Better): a: 2.79632, 4484PX: 2.38124, px: 2.35379

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 5 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 101.97, 4484PX: 88.42, px: 88.27. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Whisperfile

Whisperfile 20Aug24 - Model Size: Tiny (Seconds, Fewer Is Better): a: 41.71, 4484PX: 37.13, px: 38.72

PyPerformance

PyPerformance 1.11 - Benchmark: django_template (Milliseconds, Fewer Is Better): a: 20.7, 4484PX: 21.0, px: 21.2

ASTC Encoder

ASTC Encoder 5.0 - Preset: Thorough (MT/s, More Is Better): a: 20.30, 4484PX: 14.17, px: 14.15. 1. (CXX) g++ options: -O3 -flto -pthread

OpenVINO GenAI

OpenVINO GenAI 2024.5 - Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time Per Output Token (ms, Fewer Is Better)
a: 51.86 | 4484PX: 49.31 | px: 49.28

OpenVINO GenAI 2024.5 - Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time To First Token (ms, Fewer Is Better)
a: 55.93 | 4484PX: 58.91 | px: 58.86

OpenVINO GenAI 2024.5 - Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU (tokens/s, More Is Better)
a: 19.28 | 4484PX: 20.28 | px: 20.29
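
A minimal sketch of driving a model through the OpenVINO GenAI Python API is below, assuming the openvino-genai package and an already-converted int4 model directory; the directory name and prompt are placeholders, and the timing shown is a crude end-to-end measurement rather than the per-token and first-token figures the profile reports.

    # Sketch: CPU generation with openvino-genai. The model directory is assumed
    # to hold an OpenVINO-converted model; timings here are end-to-end only.
    import time
    import openvino_genai as ov_genai

    pipe = ov_genai.LLMPipeline("Phi-3-mini-128k-instruct-int4-ov", "CPU")

    config = ov_genai.GenerationConfig()
    config.max_new_tokens = 128

    start = time.perf_counter()
    text = str(pipe.generate("Explain what a geometric mean is.", config))
    elapsed = time.perf_counter() - start

    # Characters are not tokens, so treat this as indicative only.
    print(f"generated {len(text)} chars in {elapsed:.2f}s")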

Etcpak

Etcpak 2.0 - Benchmark: Multi-Threaded - Configuration: ETC2 (Mpx/s, More Is Better)
a: 577.82 | 4484PX: 410.73 | px: 409.88
1. (CXX) g++ options: -flto -pthread

PyPerformance

PyPerformance 1.11 - Benchmark: raytrace (Milliseconds, Fewer Is Better)
a: 175 | 4484PX: 182 | px: 182

PyPerformance 1.11 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)
a: 41.7 | 4484PX: 43.1 | px: 43.3

PyPerformance 1.11 - Benchmark: float (Milliseconds, Fewer Is Better)
a: 50.7 | 4484PX: 51.3 | px: 50.8
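
These PyPerformance numbers can be reproduced a benchmark at a time; the sketch below invokes the suite's CLI from Python, assuming pyperformance is installed via pip, with an example benchmark selection and output path rather than the exact invocation the test profile uses.

    # Sketch: run a small subset of pyperformance benchmarks and write JSON results.
    # Assumes `pip install pyperformance`; benchmark names here are examples only.
    import subprocess

    subprocess.run(
        ["pyperformance", "run",
         "--benchmarks", "float,nbody,json_loads",
         "-o", "pyperf-subset.json"],
        check=True,
    )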

Llamafile

Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 256 (Tokens Per Second, More Is Better)
a: 4096 | 4484PX: 4096 | px: 4096

PyPerformance

PyPerformance 1.11 - Benchmark: go (Milliseconds, Fewer Is Better)
a: 77.8 | 4484PX: 78.6 | px: 79.4

FinanceBench

FinanceBench 2016-07-25 - Benchmark: Repo OpenMP (ms, Fewer Is Better)
a: 21418.45 | 4484PX: 22320.33 | px: 22318.74
1. (CXX) g++ options: -O3 -march=native -fopenmp

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
a: 102.01 | 4484PX: 85.20 | px: 85.00
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

PyPerformance

PyPerformance 1.11 - Benchmark: chaos (Milliseconds, Fewer Is Better)
a: 38.2 | 4484PX: 39.7 | px: 39.4

PyPerformance 1.11 - Benchmark: regex_compile (Milliseconds, Fewer Is Better)
a: 69.8 | 4484PX: 71.7 | px: 72.5

Rustls

Rustls 0.23.17 - Benchmark: handshake - Suite: TLS13_CHACHA20_POLY1305_SHA256 (handshakes/s, More Is Better)
a: 76454.45 | 4484PX: 57716.64 | px: 57688.08
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Rustls 0.23.17 - Benchmark: handshake - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (handshakes/s, More Is Better)
a: 80462.60 | 4484PX: 59308.75 | px: 59206.34
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

PyPerformance

PyPerformance 1.11 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better)
a: 165 | 4484PX: 169 | px: 168

Llamafile

Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 512 (Tokens Per Second, More Is Better)
a: 8192 | 4484PX: 8192 | px: 8192

PyPerformance

PyPerformance 1.11 - Benchmark: pathlib (Milliseconds, Fewer Is Better)
a: 14.2 | 4484PX: 14.4 | px: 14.4

POV-Ray

POV-Ray - Trace Time (Seconds, Fewer Is Better)
a: 18.54 | 4484PX: 25.26 | px: 25.33
1. POV-Ray 3.7.0.10.unofficial

oneDNN

oneDNN 3.6 - Harness: Deconvolution Batch shapes_1d - Engine: CPU (ms, Fewer Is Better)
a: 2.97612 (MIN: 2.42) | 4484PX: 3.40293 (MIN: 3.03) | px: 3.40628 (MIN: 3.03)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better)
a: 355.09 | 4484PX: 232.26 | px: 244.77
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llamafile

Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16 (Tokens Per Second, More Is Better)
a: 10.22 | 4484PX: 10.45 | px: 10.45

PyPerformance

PyPerformance 1.11 - Benchmark: json_loads (Milliseconds, Fewer Is Better)
a: 12.1 | 4484PX: 12.4 | px: 12.5

Llamafile

Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better)
a: 16384 | 4484PX: 16384 | px: 16384

PyPerformance

PyPerformance 1.11 - Benchmark: nbody (Milliseconds, Fewer Is Better)
a: 59.0 | 4484PX: 59.5 | px: 59.2

Y-Cruncher

Y-Cruncher 0.8.5 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better)
a: 18.49 | 4484PX: 18.38 | px: 18.37
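
As a toy stand-in for the task y-cruncher performs, the sketch below computes a few thousand digits of pi with mpmath; it is single-threaded and orders of magnitude slower, so it only illustrates the computation, not a comparable measurement.

    # Toy illustration of the y-cruncher task: compute digits of pi with mpmath
    # (`pip install mpmath`). Vastly slower than y-cruncher; illustration only.
    import time
    from mpmath import mp

    mp.dps = 10_000          # decimal digits of working precision (runs above do 500M/1B)
    start = time.perf_counter()
    digits = mp.nstr(mp.pi, 10_000)
    print(f"computed {len(digits) - 2} decimal places in {time.perf_counter() - start:.2f}s")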

x265

x265 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
a: 32.57 | 4484PX: 27.16 | px: 26.94
1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6

7-Zip Compression

7-Zip Compression - Test: Decompression Rating (MIPS, More Is Better)
a: 165916 | 4484PX: 125698 | px: 125605
1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20

7-Zip Compression - Test: Compression Rating (MIPS, More Is Better)
a: 163859 | 4484PX: 141263 | px: 142213
1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20
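
The MIPS ratings above come from 7-Zip's built-in benchmark mode (`7z b`). As a very loose CPU-bound analogue from Python, the sketch below times LZMA compression of an in-memory buffer with the standard library; it is not comparable to the MIPS rating, and the buffer size and preset are arbitrary.

    # Loose analogue of a compression benchmark: time LZMA compression of a
    # repetitive buffer with the standard library. Not comparable to `7z b`.
    import lzma
    import time

    data = (b"benchmarking " * 4096) * 64   # roughly 3.4 MB of compressible data
    start = time.perf_counter()
    compressed = lzma.compress(data, preset=6)
    elapsed = time.perf_counter() - start
    print(f"{len(data) / 1e6:.1f} MB -> {len(compressed) / 1e3:.1f} kB in {elapsed:.2f}s")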

ASTC Encoder

ASTC Encoder 5.0 - Preset: Fast (MT/s, More Is Better)
a: 396.65 | 4484PX: 278.24 | px: 277.30
1. (CXX) g++ options: -O3 -flto -pthread

oneDNN

oneDNN 3.6 - Harness: IP Shapes 1D - Engine: CPU (ms, Fewer Is Better)
a: 1.12573 (MIN: 1.03) | 4484PX: 1.93806 (MIN: 1.92) | px: 1.93913 (MIN: 1.91)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Text Generation 128 (Tokens Per Second, More Is Better)
a: 47.72 | 4484PX: 52.30 | px: 52.37
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
a: 212.52 | 4484PX: 198.11 | px: 194.02
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Llamafile

Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 256 (Tokens Per Second, More Is Better)
a: 4096 | 4484PX: 4096 | px: 4096

Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 16 (Tokens Per Second, More Is Better)
a: 19.03 | 4484PX: 19.49 | px: 19.50

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
a: 339.02 | 4484PX: 287.05 | px: 286.96
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 512 (Tokens Per Second, More Is Better)
a: 327.30 | 4484PX: 243.14 | px: 232.86
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

ASTC Encoder

ASTC Encoder 5.0 - Preset: Medium (MT/s, More Is Better)
a: 156.22 | 4484PX: 109.03 | px: 108.86
1. (CXX) g++ options: -O3 -flto -pthread

Y-Cruncher

Y-Cruncher 0.8.5 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better)
a: 8.772 | 4484PX: 8.688 | px: 8.623

Llamafile

Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 16 (Tokens Per Second, More Is Better)
a: 24.59 | 4484PX: 25.86 | px: 25.94

oneDNN

oneDNN 3.6 - Harness: IP Shapes 3D - Engine: CPU (ms, Fewer Is Better)
a: 4.05800 (MIN: 3.75) | 4484PX: 2.73072 (MIN: 2.7) | px: 2.72942 (MIN: 2.7)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Llamafile

Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 512 (Tokens Per Second, More Is Better)
a: 8192 | 4484PX: 8192 | px: 8192

Primesieve

Primesieve 12.6 - Length: 1e12 (Seconds, Fewer Is Better)
a: 6.347 | 4484PX: 9.116 | px: 9.147
1. (CXX) g++ options: -O3
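
Primesieve counts primes up to 1e12 with a heavily optimized segmented sieve; the pure-Python sieve below is only a toy illustration of the same task at a much smaller limit, not something to compare against the times above.

    # Toy sieve of Eratosthenes for illustration; primesieve's segmented sieve is
    # orders of magnitude faster and runs to 1e12, not 1e7.
    def count_primes(limit: int) -> int:
        sieve = bytearray([1]) * (limit + 1)
        sieve[0:2] = b"\x00\x00"
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
        return sum(sieve)

    print(count_primes(10_000_000))  # 664579 primes below 1e7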

Renaissance

Test: Apache Spark ALS

a: The test quit with a non-zero exit status.

4484PX: The test quit with a non-zero exit status.

px: The test quit with a non-zero exit status.

oneDNN

oneDNN 3.6 - Harness: Convolution Batch Shapes Auto - Engine: CPU (ms, Fewer Is Better)
a: 6.67287 (MIN: 6.2) | 4484PX: 4.11551 (MIN: 4.05) | px: 4.13321 (MIN: 4.07)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

x265

x265 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
a: 114.45 | 4484PX: 101.37 | px: 101.25
1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
a: 842.56 | 4484PX: 776.12 | px: 769.82
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Llamafile

Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 256 (Tokens Per Second, More Is Better)
a: 4096 | 4484PX: 4096 | px: 4096

oneDNN

oneDNN 3.6 - Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms, Fewer Is Better)
a: 2.41294 (MIN: 2.34) | 4484PX: 3.50840 (MIN: 3.46) | px: 3.51243 (MIN: 3.47)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

OpenVINO GenAI

Model: TinyLlama-1.1B-Chat-v1.0 - Device: CPU

a: The test quit with a non-zero exit status. E: RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:

4484PX: The test quit with a non-zero exit status. E: RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:

px: The test quit with a non-zero exit status. E: RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:

OpenSSL

Algorithm: RSA4096

a: The test quit with a non-zero exit status.

4484PX: The test quit with a non-zero exit status.

px: The test quit with a non-zero exit status.

Algorithm: SHA512

a: The test quit with a non-zero exit status.

4484PX: The test quit with a non-zero exit status.

px: The test quit with a non-zero exit status.

Algorithm: SHA256

a: The test quit with a non-zero exit status.

4484PX: The test quit with a non-zero exit status.

px: The test quit with a non-zero exit status.
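
Since the OpenSSL RSA4096, SHA512, and SHA256 runs all exited abnormally on this setup, there are no figures to report for them. As a quick single-threaded sanity check of raw hashing throughput, one could fall back to a sketch like the one below using Python's hashlib, which is typically backed by OpenSSL; it is not equivalent to the multi-threaded `openssl speed` measurements this test profile attempts, and the buffer size and iteration count are arbitrary.

    # Single-threaded SHA-256 throughput sanity check via hashlib (usually backed
    # by OpenSSL). Not equivalent to the failed multi-threaded `openssl speed` runs.
    import hashlib
    import time

    block = b"\x00" * (1 << 20)   # 1 MiB buffer
    iters = 2000
    start = time.perf_counter()
    h = hashlib.sha256()
    for _ in range(iters):
        h.update(block)
    h.digest()
    elapsed = time.perf_counter() - start
    print(f"SHA-256: {iters / elapsed / 1024:.2f} GiB/s (single thread)")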

213 Results Shown

QuantLib
SVT-AV1
RELION
Whisper.cpp
Blender
CP2K Molecular Dynamics
Apache CouchDB
Whisperfile
Llamafile
QuantLib
Apache CouchDB
BYTE Unix Benchmark
Llamafile
SVT-AV1
Llamafile
BYTE Unix Benchmark:
  Pipe
  Dhrystone 2
  System Call
Whisper.cpp
Apache CouchDB
SVT-AV1
Blender
XNNPACK:
  QS8MobileNetV2
  FP16MobileNetV3Small
  FP16MobileNetV3Large
  FP16MobileNetV2
  FP16MobileNetV1
  FP32MobileNetV3Small
  FP32MobileNetV3Large
  FP32MobileNetV2
  FP32MobileNetV1
Llama.cpp:
  CPU BLAS - Mistral-7B-Instruct-v0.3-Q8_0 - Prompt Processing 2048
  CPU BLAS - Llama-3.1-Tulu-3-8B-Q8_0 - Prompt Processing 2048
OpenSSL:
  ChaCha20
  ChaCha20-Poly1305
  AES-256-GCM
  AES-128-GCM
Blender
Whisperfile
Llamafile:
  mistral-7b-instruct-v0.2.Q5_K_M - Text Generation 128
  wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 1024
Rustls:
  handshake-ticket - TLS13_CHACHA20_POLY1305_SHA256
  handshake-resume - TLS13_CHACHA20_POLY1305_SHA256
Gcrypt Library
OSPRay
Apache CouchDB
Rustls:
  handshake-ticket - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  handshake-resume - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
OSPRay
SVT-AV1
Apache Cassandra
PyPerformance
OpenVINO GenAI:
  Gemma-7b-int4-ov - CPU - Time Per Output Token
  Gemma-7b-int4-ov - CPU - Time To First Token
  Gemma-7b-int4-ov - CPU
ASTC Encoder
Apache CouchDB
ASTC Encoder
OSPRay
GROMACS
PyPerformance
Llamafile
SVT-AV1
Build2
PyPerformance
Numpy Benchmark
Primesieve
simdjson
Llamafile:
  wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 512
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 2048
PyPerformance
Llamafile
Llama.cpp
CP2K Molecular Dynamics
Llama.cpp:
  CPU BLAS - Llama-3.1-Tulu-3-8B-Q8_0 - Prompt Processing 1024
  CPU BLAS - Mistral-7B-Instruct-v0.3-Q8_0 - Prompt Processing 1024
Whisper.cpp
Llama.cpp
Blender:
  Junkshop - CPU-Only
  Fishy Cat - CPU-Only
OpenVINO GenAI:
  Falcon-7b-instruct-int4-ov - CPU - Time Per Output Token
  Falcon-7b-instruct-int4-ov - CPU - Time To First Token
  Falcon-7b-instruct-int4-ov - CPU
NAMD
Rustls
simdjson
Renaissance
SVT-AV1
Stockfish
oneDNN:
  Recurrent Neural Network Training - CPU
  Recurrent Neural Network Inference - CPU
Apache CouchDB
simdjson
Llamafile
simdjson
Renaissance
simdjson
SVT-AV1
Renaissance:
  Akka Unbalanced Cobwebbed Tree
  Apache Spark PageRank
Blender
Renaissance
Stockfish
PyPerformance
Renaissance
Llamafile
OSPRay:
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/ao/real_time
Renaissance
Timed Eigen Compilation
Renaissance:
  Finagle HTTP Requests
  Rand Forest
OSPRay
Renaissance
ONNX Runtime:
  ResNet101_DUC_HDC-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
Renaissance
ONNX Runtime:
  GPT-2 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  fcn-resnet101-11 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ZFNet-512 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  bertsquad-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  T5 Encoder - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  yolov4 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ArcFace ResNet-100 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  Faster R-CNN R-50-FPN-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  CaffeNet 12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet50 v1-12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  super-resolution-10 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
PyPerformance
CP2K Molecular Dynamics
ACES DGEMM
Llama.cpp
Llamafile:
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 1024
  TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 2048
LiteRT:
  Inception V4
  Inception ResNet V2
  NASNet Mobile
  DeepLab V3
  Mobilenet Float
  SqueezeNet
  Quantized COCO SSD MobileNet v1
  Mobilenet Quant
Rustls
Llamafile
Rustls
Llama.cpp
Llamafile
Llama.cpp
FinanceBench
NAMD
SVT-AV1
Whisperfile
PyPerformance
ASTC Encoder
OpenVINO GenAI:
  Phi-3-mini-128k-instruct-int4-ov - CPU - Time Per Output Token
  Phi-3-mini-128k-instruct-int4-ov - CPU - Time To First Token
  Phi-3-mini-128k-instruct-int4-ov - CPU
Etcpak
PyPerformance:
  raytrace
  crypto_pyaes
  float
Llamafile
PyPerformance
FinanceBench
SVT-AV1
PyPerformance:
  chaos
  regex_compile
Rustls:
  handshake - TLS13_CHACHA20_POLY1305_SHA256
  handshake - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
PyPerformance
Llamafile
PyPerformance
POV-Ray
oneDNN
Llama.cpp
Llamafile
PyPerformance
Llamafile
PyPerformance
Y-Cruncher
x265
7-Zip Compression:
  Decompression Rating
  Compression Rating
ASTC Encoder
oneDNN
Llama.cpp
SVT-AV1
Llamafile:
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 256
  Llama-3.2-3B-Instruct.Q6_K - Text Generation 16
SVT-AV1
Llama.cpp
ASTC Encoder
Y-Cruncher
Llamafile
oneDNN
Llamafile
Primesieve
oneDNN
x265
SVT-AV1
Llamafile
oneDNN