eoy2024

Benchmarks for a future article. AMD EPYC 4484PX 12-Core testing with a Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS) and ASPEED on Ubuntu 24.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2412083-NE-EOY20246055

Run Details

a: tested December 05 (total test duration: 6 hours, 48 minutes)
4484PX: tested December 07 (total test duration: 7 hours, 3 minutes)
px: tested December 07 (total test duration: 7 hours, 3 minutes)

System Details

a: AMD EPYC 4564P 16-Core @ 5.88GHz (16 Cores / 32 Threads), kernel 6.8.0-11-generic (x86_64)
4484PX: AMD EPYC 4484PX 12-Core @ 5.66GHz (12 Cores / 24 Threads), kernel 6.12.2-061202-generic (x86_64)
px: AMD EPYC 4484PX 12-Core @ 5.66GHz (12 Cores / 24 Threads), kernel 6.12.2-061202-generic (x86_64)

Common to all runs: Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS) motherboard, AMD Device 14d8 chipset, 2 x 32GB DRAM-4800MT/s Micron MTC20C2085S1EC48BA1 BC memory, 3201GB Micron_7450_MTFDKCC3T2TFS + 960GB SAMSUNG MZ1L2960HCJR-00A07 storage, ASPEED graphics, AMD Rembrandt Radeon HD Audio, VA2431 monitor, 2 x Intel I210 network, Ubuntu 24.04, GNOME Shell 45.3, X Server 1.21.1.11, GCC 13.2.0, ext4 file-system, 1024x768 screen resolution.

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-fxIygj/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-fxIygj/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: a: Scaling Governor: amd-pstate-epp performance (EPP: performance), CPU Microcode: 0xa601209. 4484PX and px: Scaling Governor: amd-pstate-epp performance (Boost: Enabled, EPP: performance), CPU Microcode: 0xa601209.

Java Details: OpenJDK Runtime Environment (build 21.0.2+13-Ubuntu-2)

Python Details: Python 3.12.3

Security Details: a: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of Safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected. 4484PX and px: as above, plus reg_file_data_sampling: Not affected and BHI: Not affected.

Result Overview

The following test suites were compared across the a, 4484PX, and px runs: Apache Cassandra, BYTE Unix Benchmark, ASTC Encoder, Primesieve, Etcpak, POV-Ray, OpenSSL, Blender, ACES DGEMM, OSPRay, Stockfish, RELION, 7-Zip Compression, Build2, Rustls, LiteRT, NAMD, x265, SVT-AV1, Timed Eigen Compilation, Whisperfile, oneDNN, Numpy Benchmark, simdjson, Apache CouchDB, Whisper.cpp, QuantLib, Llama.cpp, GROMACS, XNNPACK, Gcrypt Library, CP2K Molecular Dynamics, Y-Cruncher, Llamafile, PyPerformance, ONNX Runtime, OpenVINO GenAI, Renaissance, and FinanceBench.
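
The relative percentages in the original overview chart come from normalizing each test result against a baseline and aggregating per run, typically with a geometric mean. As an illustration only (the helper below is hypothetical and not the exact normalization OpenBenchmarking.org applies), the following Python sketch computes such a relative score from four results copied out of the tables later in this file:

from math import prod

# Results copied from the detailed tables below: (higher_is_better, {run: value}).
RESULTS = {
    "QuantLib, Size: S (tasks/s)":               (True,  {"a": 12.75,  "4484PX": 11.86,  "px": 11.84}),
    "SVT-AV1, Preset 3, Beauty 4K 10-bit (fps)": (True,  {"a": 1.422,  "4484PX": 1.188,  "px": 1.184}),
    "Primesieve, 1e13 (seconds)":                (False, {"a": 78.50,  "4484PX": 110.61, "px": 110.71}),
    "Blender, Barbershop (seconds)":             (False, {"a": 506.20, "4484PX": 679.34, "px": 678.40}),
}

def relative_score(run, baseline="4484PX"):
    """Geometric mean of per-test speedups of `run` over `baseline`."""
    ratios = []
    for higher_is_better, values in RESULTS.values():
        r = values[run] / values[baseline]
        ratios.append(r if higher_is_better else 1.0 / r)
    return prod(ratios) ** (1.0 / len(ratios))

for run in ("4484PX", "a", "px"):
    print(f"{run}: {relative_score(run):.0%} of the 4484PX baseline")

On this small subset the a run works out to roughly 1.25x the 4484PX baseline, which is in line with the 100% to 142% spread the original overview chart showed.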

QuantLib

QuantLib 1.35-dev, Size: S (tasks/s; more is better): 4484PX: 11.86, a: 12.75, px: 11.84

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 3, Input: Beauty 4K 10-bit (frames per second; more is better): 4484PX: 1.188, a: 1.422, px: 1.184

RELION

RELION 5.0, Test: Basic, Device: CPU (seconds; fewer is better): 4484PX: 729.40, a: 944.27, px: 733.02

Whisper.cpp

Whisper.cpp 1.6.2, Model: ggml-medium.en, Input: 2016 State of the Union (seconds; fewer is better): 4484PX: 809.79, a: 700.91, px: 809.49

Blender

Blender 4.3, Blend File: Barbershop, Compute: CPU-Only (seconds; fewer is better): 4484PX: 679.34, a: 506.20, px: 678.40

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3, Input: H20-256 (seconds; fewer is better): 4484PX: 628.10, a: 592.86, px: 631.31

Apache CouchDB

Apache CouchDB 3.4.1, Bulk Size: 500, Inserts: 3000, Rounds: 30 (seconds; fewer is better): 4484PX: 559.35, a: 511.78, px: 560.70

Whisperfile

Whisperfile 20Aug24, Model Size: Medium (seconds; fewer is better): 4484PX: 473.55, a: 534.92, px: 475.51

Llamafile

Llamafile 0.8.16, Model: wizardcoder-python-34b-v1.0.Q6_K, Test: Prompt Processing 2048 (tokens per second; more is better): 4484PX: 12288, a: 12288, px: 12288

QuantLib

QuantLib 1.35-dev, Size: XXS (tasks/s; more is better): 4484PX: 12.12, a: 13.43, px: 12.11

Apache CouchDB

Apache CouchDB 3.4.1, Bulk Size: 300, Inserts: 3000, Rounds: 30 (seconds; fewer is better): 4484PX: 406.12, a: 367.83, px: 408.48

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git, Computational Test: Whetstone Double (MWIPS; more is better): 4484PX: 244075.3, a: 343491.9, px: 244131.0

Llamafile

Llamafile 0.8.16, Model: wizardcoder-python-34b-v1.0.Q6_K, Test: Text Generation 128 (tokens per second; more is better): 4484PX: 2.05, a: 1.99, px: 2.05

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 3, Input: Bosphorus 4K (frames per second; more is better): 4484PX: 7.684, a: 9.590, px: 7.646

Llamafile

Llamafile 0.8.16, Model: mistral-7b-instruct-v0.2.Q5_K_M, Test: Prompt Processing 2048 (tokens per second; more is better): 4484PX: 32768, a: 32768, px: 32768

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git, Computational Test: Pipe (LPS; more is better): 4484PX: 33443359.2, a: 48806257.1, px: 33381363.1
BYTE Unix Benchmark 5.1.3-git, Computational Test: Dhrystone 2 (LPS; more is better): 4484PX: 1346521770.3, a: 1866536062.7, px: 1340340196.6
BYTE Unix Benchmark 5.1.3-git, Computational Test: System Call (LPS; more is better): 4484PX: 30761218.9, a: 49140426.6, px: 30701622.8

Whisper.cpp

Whisper.cpp 1.6.2, Model: ggml-small.en, Input: 2016 State of the Union (seconds; fewer is better): 4484PX: 268.24, a: 245.08, px: 266.81

Apache CouchDB

Apache CouchDB 3.4.1, Bulk Size: 100, Inserts: 3000, Rounds: 30 (seconds; fewer is better): 4484PX: 253.99, a: 232.19, px: 254.73

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 5, Input: Beauty 4K 10-bit (frames per second; more is better): 4484PX: 5.602, a: 6.504, px: 5.551

Blender

Blender 4.3, Blend File: Pabellon Barcelona, Compute: CPU-Only (seconds; fewer is better): 4484PX: 226.34, a: 166.12, px: 224.64

XNNPACK

XNNPACK b7b048, Model: QS8MobileNetV2 (microseconds; fewer is better): 4484PX: 717, a: 844, px: 723
XNNPACK b7b048, Model: FP16MobileNetV3Small (microseconds; fewer is better): 4484PX: 779, a: 920, px: 798
XNNPACK b7b048, Model: FP16MobileNetV3Large (microseconds; fewer is better): 4484PX: 1467, a: 1498, px: 1527
XNNPACK b7b048, Model: FP16MobileNetV2 (microseconds; fewer is better): 4484PX: 1217, a: 1190, px: 1248
XNNPACK b7b048, Model: FP16MobileNetV1 (microseconds; fewer is better): 4484PX: 1383, a: 1143, px: 1386
XNNPACK b7b048, Model: FP32MobileNetV3Small (microseconds; fewer is better): 4484PX: 809, a: 979, px: 837
XNNPACK b7b048, Model: FP32MobileNetV3Large (microseconds; fewer is better): 4484PX: 1515, a: 1810, px: 1574
XNNPACK b7b048, Model: FP32MobileNetV2 (microseconds; fewer is better): 4484PX: 1365, a: 1495, px: 1368
XNNPACK b7b048, Model: FP32MobileNetV1 (microseconds; fewer is better): 4484PX: 1257, a: 1252, px: 1272

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS, Model: Mistral-7B-Instruct-v0.3-Q8_0, Test: Prompt Processing 2048 (tokens per second; more is better): 4484PX: 63.61, a: 62.97, px: 63.41
Llama.cpp b4154, Backend: CPU BLAS, Model: Llama-3.1-Tulu-3-8B-Q8_0, Test: Prompt Processing 2048 (tokens per second; more is better): 4484PX: 63.80, a: 63.09, px: 63.79

OpenSSL

All OpenSSL results were produced with OpenSSL 3.0.13 (30 Jan 2024) and the additional parameters -engine qatengine -async_jobs 8.
OpenSSL, Algorithm: ChaCha20 (byte/s; more is better): 4484PX: 97105235690, a: 130588495050, px: 97019897450
OpenSSL, Algorithm: ChaCha20-Poly1305 (byte/s; more is better): 4484PX: 68816544020, a: 92393529340, px: 68678955550
OpenSSL, Algorithm: AES-256-GCM (byte/s; more is better): 4484PX: 71160291870, a: 97172751700, px: 70902656480
OpenSSL, Algorithm: AES-128-GCM (byte/s; more is better): 4484PX: 76496336760, a: 104784522170, px: 76184405610

Blender

Blender 4.3, Blend File: Classroom, Compute: CPU-Only (seconds; fewer is better): 4484PX: 197.20, a: 143.36, px: 197.53

Whisperfile

Whisperfile 20Aug24, Model Size: Small (seconds; fewer is better): 4484PX: 173.38, a: 195.42, px: 167.89

Llamafile

Llamafile 0.8.16, Model: mistral-7b-instruct-v0.2.Q5_K_M, Test: Text Generation 128 (tokens per second; more is better): 4484PX: 10.91, a: 10.47, px: 10.93
Llamafile 0.8.16, Model: wizardcoder-python-34b-v1.0.Q6_K, Test: Prompt Processing 1024 (tokens per second; more is better): 4484PX: 6144, a: 6144, px: 6144

Rustls

Rustls 0.23.17, Benchmark: handshake-ticket, Suite: TLS13_CHACHA20_POLY1305_SHA256 (handshakes/s; more is better): 4484PX: 344296.24, a: 404263.45, px: 342775.29
Rustls 0.23.17, Benchmark: handshake-resume, Suite: TLS13_CHACHA20_POLY1305_SHA256 (handshakes/s; more is better): 4484PX: 333882.92, a: 388077.69, px: 333574.30

Gcrypt Library

Gcrypt Library 1.10.3 (seconds; fewer is better): 4484PX: 171.02, a: 162.13, px: 163.84

OSPRay

OSPRay 3.2, Benchmark: particle_volume/scivis/real_time (items per second; more is better): 4484PX: 6.44913, a: 8.98486, px: 6.52304

Apache CouchDB

Apache CouchDB 3.4.1, Bulk Size: 500, Inserts: 1000, Rounds: 30 (seconds; fewer is better): 4484PX: 164.47, a: 148.05, px: 164.81

Rustls

Rustls 0.23.17, Benchmark: handshake-ticket, Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (handshakes/s; more is better): 4484PX: 1329363.10, a: 1553632.14, px: 1340712.85
Rustls 0.23.17, Benchmark: handshake-resume, Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (handshakes/s; more is better): 4484PX: 1586292.42, a: 1820810.21, px: 1572010.68

OSPRay

OSPRay 3.2, Benchmark: particle_volume/pathtracer/real_time (items per second; more is better): 4484PX: 199.02, a: 236.25, px: 197.20

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 3, Input: Bosphorus 1080p (frames per second; more is better): 4484PX: 25.45, a: 29.57, px: 25.45

Apache Cassandra

Apache Cassandra 5.0, Test: Writes (op/s; more is better): 4484PX: 174960, a: 271333, px: 173946

PyPerformance

PyPerformance 1.11, Benchmark: async_tree_io (ms; fewer is better): 4484PX: 666, a: 755, px: 656

OpenVINO GenAI

OpenVINO GenAI 2024.5, Model: Gemma-7b-int4-ov, Device: CPU, Time Per Output Token (ms; fewer is better): 4484PX: 97.79, a: 101.72, px: 97.61
OpenVINO GenAI 2024.5, Model: Gemma-7b-int4-ov, Device: CPU, Time To First Token (ms; fewer is better): 4484PX: 121.48, a: 106.62, px: 122.30
OpenVINO GenAI 2024.5, Model: Gemma-7b-int4-ov, Device: CPU (tokens/s; more is better): 4484PX: 10.23, a: 9.83, px: 10.24

ASTC Encoder

ASTC Encoder 5.0, Preset: Very Thorough (MT/s; more is better): 4484PX: 1.9412, a: 2.7410, px: 1.9391

Apache CouchDB

Apache CouchDB 3.4.1, Bulk Size: 300, Inserts: 1000, Rounds: 30 (seconds; fewer is better): 4484PX: 117.57, a: 106.13, px: 119.35

ASTC Encoder

ASTC Encoder 5.0, Preset: Exhaustive (MT/s; more is better): 4484PX: 1.1887, a: 1.6844, px: 1.1862

OSPRay

OSPRay 3.2, Benchmark: particle_volume/ao/real_time (items per second; more is better): 4484PX: 6.52776, a: 9.00917, px: 6.52206

GROMACS

GROMACS (version 2023.3-Ubuntu_2023.3_1ubuntu3), Input: water_GMX50_bare (Ns per day; more is better): 4484PX: 1.577, a: 1.692, px: 1.575

PyPerformance

PyPerformance 1.11, Benchmark: xml_etree (ms; fewer is better): 4484PX: 36.8, a: 35.8, px: 36.5

Llamafile

Llamafile 0.8.16, Model: mistral-7b-instruct-v0.2.Q5_K_M, Test: Prompt Processing 1024 (tokens per second; more is better): 4484PX: 16384, a: 16384, px: 16384

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 8, Input: Beauty 4K 10-bit (frames per second; more is better): 4484PX: 10.97, a: 12.47, px: 10.86

Build2

Build2 0.17, Time To Compile (seconds; fewer is better): 4484PX: 111.65, a: 92.05, px: 113.78

PyPerformance

PyPerformance 1.11, Benchmark: asyncio_tcp_ssl (ms; fewer is better): 4484PX: 590, a: 645, px: 590

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.
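
The benchmark reports a single score aggregated from a set of NumPy array-math kernels. As a rough, hypothetical illustration of the kind of operations such a score reflects (this is not the benchmark's actual code), a minimal timing sketch in Python might look like:

import time
import numpy as np

def best_time(fn, repeats=5):
    """Return the best wall-clock time of several runs of fn()."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)

rng = np.random.default_rng(0)
a = rng.random((2000, 2000))
b = rng.random((2000, 2000))

print(f"matmul 2000x2000: {best_time(lambda: a @ b) * 1e3:.1f} ms")          # BLAS-backed matrix multiply
print(f"elementwise exp:  {best_time(lambda: np.exp(a)) * 1e3:.1f} ms")      # vectorized ufunc
print(f"sum over axis 0:  {best_time(lambda: a.sum(axis=0)) * 1e3:.1f} ms")  # axis reduction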

Numpy Benchmark (score; more is better): 4484PX: 745.59, a: 775.75, px: 831.42

Primesieve

Primesieve 12.6, Length: 1e13 (seconds; fewer is better): 4484PX: 110.61, a: 78.50, px: 110.71

simdjson

simdjson 3.10, Throughput Test: Kostya (GB/s; more is better): 4484PX: 6.11, a: 5.97, px: 5.45

Llamafile

Llamafile 0.8.16, Model: wizardcoder-python-34b-v1.0.Q6_K, Test: Prompt Processing 512 (tokens per second; more is better): 4484PX: 3072, a: 3072, px: 3072
Llamafile 0.8.16, Model: Llama-3.2-3B-Instruct.Q6_K, Test: Prompt Processing 2048 (tokens per second; more is better): 4484PX: 32768, a: 32768, px: 32768

PyPerformance

PyPerformance 1.11, Benchmark: python_startup (ms; fewer is better): 4484PX: 6.08, a: 5.77, px: 6.09

Llamafile

Llamafile 0.8.16, Model: Llama-3.2-3B-Instruct.Q6_K, Test: Text Generation 128 (tokens per second; more is better): 4484PX: 20.39, a: 20.13, px: 20.51

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS, Model: Llama-3.1-Tulu-3-8B-Q8_0, Test: Text Generation 128 (tokens per second; more is better): 4484PX: 7.11, a: 6.88, px: 7.12

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3, Input: Fayalite-FIST (seconds; fewer is better): 4484PX: 92.21, a: 94.03, px: 94.90

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS, Model: Llama-3.1-Tulu-3-8B-Q8_0, Test: Prompt Processing 1024 (tokens per second; more is better): 4484PX: 66.57, a: 70.85, px: 66.35
Llama.cpp b4154, Backend: CPU BLAS, Model: Mistral-7B-Instruct-v0.3-Q8_0, Test: Prompt Processing 1024 (tokens per second; more is better): 4484PX: 66.85, a: 69.26, px: 66.52

Whisper.cpp

Whisper.cpp 1.6.2, Model: ggml-base.en, Input: 2016 State of the Union (seconds; fewer is better): 4484PX: 92.71, a: 87.49, px: 93.45

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS, Model: Mistral-7B-Instruct-v0.3-Q8_0, Test: Text Generation 128 (tokens per second; more is better): 4484PX: 7.41, a: 7.24, px: 7.44

Blender

Blender 4.3, Blend File: Junkshop, Compute: CPU-Only (seconds; fewer is better): 4484PX: 97.01, a: 73.56, px: 97.10
Blender 4.3, Blend File: Fishy Cat, Compute: CPU-Only (seconds; fewer is better): 4484PX: 96.67, a: 71.35, px: 97.09

OpenVINO GenAI

OpenVINO GenAI 2024.5, Model: Falcon-7b-instruct-int4-ov, Device: CPU, Time Per Output Token (ms; fewer is better): 4484PX: 74.65, a: 77.34, px: 74.54
OpenVINO GenAI 2024.5, Model: Falcon-7b-instruct-int4-ov, Device: CPU, Time To First Token (ms; fewer is better): 4484PX: 93.01, a: 86.06, px: 93.00
OpenVINO GenAI 2024.5, Model: Falcon-7b-instruct-int4-ov, Device: CPU (tokens/s; more is better): 4484PX: 13.40, a: 12.93, px: 13.41

NAMD

NAMD 3.0, Input: STMV with 1,066,628 Atoms (ns/day; more is better): 4484PX: 0.65119, a: 0.75656, px: 0.65448

Rustls

Rustls 0.23.17, Benchmark: handshake, Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (handshakes/s; more is better): 4484PX: 306153.20, a: 423535.68, px: 304060.28

simdjson

simdjson 3.10, Throughput Test: LargeRandom (GB/s; more is better): 4484PX: 1.84, a: 1.83, px: 1.84

Renaissance

Renaissance 0.16, Test: ALS Movie Lens (ms; fewer is better): 4484PX: 9378.8, a: 9805.7, px: 9275.7

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 5, Input: Bosphorus 4K (frames per second; more is better): 4484PX: 29.09, a: 34.54, px: 28.82

Stockfish

Stockfish 17, Chess Benchmark (nodes per second; more is better): 4484PX: 45267546, a: 54752796, px: 42973396

oneDNN

oneDNN 3.6, Harness: Recurrent Neural Network Training, Engine: CPU (ms; fewer is better): 4484PX: 1898.36, a: 1372.03, px: 1895.68
oneDNN 3.6, Harness: Recurrent Neural Network Inference, Engine: CPU (ms; fewer is better): 4484PX: 965.02, a: 700.86, px: 966.01

Apache CouchDB

Apache CouchDB 3.4.1, Bulk Size: 100, Inserts: 1000, Rounds: 30 (seconds; fewer is better): 4484PX: 75.90, a: 69.93, px: 76.39

simdjson

simdjson 3.10, Throughput Test: DistinctUserID (GB/s; more is better): 4484PX: 10.76, a: 10.46, px: 8.97

Llamafile

Llamafile 0.8.16, Model: TinyLlama-1.1B-Chat-v1.0.BF16, Test: Text Generation 128 (tokens per second; more is better): 4484PX: 27.59, a: 26.28, px: 27.80

simdjson

simdjson 3.10, Throughput Test: TopTweet (GB/s; more is better): 4484PX: 10.82, a: 10.46, px: 10.51

Renaissance

Renaissance 0.16, Test: In-Memory Database Shootout (ms; fewer is better): 4484PX: 3241.5, a: 3256.1, px: 3175.6

simdjson

simdjson 3.10, Throughput Test: PartialTweets (GB/s; more is better): 4484PX: 10.10, a: 9.76, px: 8.35

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 13, Input: Beauty 4K 10-bit (frames per second; more is better): 4484PX: 17.41, a: 18.59, px: 17.36

Renaissance

Renaissance 0.16, Test: Akka Unbalanced Cobwebbed Tree (ms; fewer is better): 4484PX: 4038.4, a: 4403.8, px: 4002.3
Renaissance 0.16, Test: Apache Spark PageRank (ms; fewer is better): 4484PX: 2138.1, a: 2412.2, px: 2229.7

Blender

Blender 4.3, Blend File: BMW27, Compute: CPU-Only (seconds; fewer is better): 4484PX: 74.08, a: 53.55, px: 73.16

Renaissance

Renaissance 0.16, Test: Gaussian Mixture Model (ms; fewer is better): 4484PX: 3860.6, a: 3399.5, px: 3815.2

Stockfish

Stockfish (Stockfish 16 build), Chess Benchmark (nodes per second; more is better): 4484PX: 33702298, a: 46507038, px: 33871595

PyPerformance

PyPerformance 1.11, Benchmark: gc_collect (ms; fewer is better): 4484PX: 699, a: 677, px: 706

Renaissance

Renaissance 0.16, Test: Savina Reactors.IO (ms; fewer is better): 4484PX: 3655.8, a: 3506.4, px: 3676.0

Llamafile

Llamafile 0.8.16, Model: mistral-7b-instruct-v0.2.Q5_K_M, Test: Prompt Processing 512 (tokens per second; more is better): 4484PX: 8192, a: 8192, px: 8192

OSPRay

OSPRay 3.2, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (items per second; more is better): 4484PX: 5.54888, a: 7.58789, px: 5.61470
OSPRay 3.2, Benchmark: gravity_spheres_volume/dim_512/ao/real_time (items per second; more is better): 4484PX: 5.63122, a: 7.63944, px: 5.71084

Renaissance

Renaissance 0.16, Test: Apache Spark Bayes (ms; fewer is better): 4484PX: 513.2, a: 490.0, px: 474.9

Timed Eigen Compilation

Timed Eigen Compilation 3.4.0, Time To Compile (seconds; fewer is better): 4484PX: 67.36, a: 58.66, px: 67.08

Renaissance

Renaissance 0.16, Test: Finagle HTTP Requests (ms; fewer is better): 4484PX: 2492.2, a: 2319.4, px: 2483.1
Renaissance 0.16, Test: Random Forest (ms; fewer is better): 4484PX: 422.0, a: 414.4, px: 453.2

OSPRay

OSPRay 3.2, Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (items per second; more is better): 4484PX: 6.41198, a: 8.82093, px: 6.40740

Renaissance

Renaissance 0.16, Test: Scala Dotty (ms; fewer is better): 4484PX: 428.6, a: 477.0, px: 436.2

ONNX Runtime

ONNX Runtime 1.19, Model: ResNet101_DUC_HDC-12, Device: CPU, Executor: Standard, inference time cost (ms; fewer is better): 4484PX: 850.14, a: 648.52, px: 854.33
ONNX Runtime 1.19, Model: ResNet101_DUC_HDC-12, Device: CPU, Executor: Standard (inferences per second; more is better): 4484PX: 1.17627, a: 1.54196, px: 1.17050

Renaissance

Renaissance 0.16, Test: Genetic Algorithm Using Jenetics + Futures (ms; fewer is better): 4484PX: 904.0, a: 732.8, px: 920.7

ONNX Runtime

ONNX Runtime 1.19, Model: GPT-2, Device: CPU, Executor: Standard, inference time cost (ms; fewer is better): 4484PX: 6.25815, a: 7.42776, px: 6.33034
ONNX Runtime 1.19, Model: GPT-2, Device: CPU, Executor: Standard (inferences per second; more is better): 4484PX: 159.71, a: 134.60, px: 157.89
ONNX Runtime 1.19, Model: fcn-resnet101-11, Device: CPU, Executor: Standard, inference time cost (ms; fewer is better): 4484PX: 355.75, a: 310.88, px: 357.60
ONNX Runtime 1.19, Model: fcn-resnet101-11, Device: CPU, Executor: Standard (inferences per second; more is better): 4484PX: 2.81093, a: 3.21670, px: 2.79638
ONNX Runtime 1.19, Model: ZFNet-512, Device: CPU, Executor: Standard, inference time cost (ms; fewer is better): 4484PX: 9.01322, a: 9.76985, px: 9.01687
ONNX Runtime 1.19, Model: ZFNet-512, Device: CPU, Executor: Standard (inferences per second; more is better): 4484PX: 110.94, a: 102.33, px: 110.89
ONNX Runtime 1.19, Model: bertsquad-12, Device: CPU, Executor: Standard, inference time cost (ms; fewer is better): 4484PX: 68.91, a: 64.14, px: 68.61
ONNX Runtime 1.19, Model: bertsquad-12, Device: CPU, Executor: Standard (inferences per second; more is better): 4484PX: 14.51, a: 15.59, px: 14.57
ONNX Runtime 1.19, Model: T5 Encoder, Device: CPU, Executor: Standard, inference time cost (ms; fewer is better): 4484PX: 4.80287, a: 6.39112, px: 4.85142
ONNX Runtime 1.19, Model: T5 Encoder, Device: CPU, Executor: Standard (inferences per second; more is better): 4484PX: 208.17, a: 156.45, px: 206.09
ONNX Runtime 1.19, Model: yolov4, Device: CPU, Executor: Standard, inference time cost (ms; fewer is better): 4484PX: 93.16, a: 90.45, px: 93.34
ONNX Runtime 1.19, Model: yolov4, Device: CPU, Executor: Standard (inferences per second; more is better): 4484PX: 10.73, a: 11.06, px: 10.71
ONNX Runtime 1.19, Model: ArcFace ResNet-100, Device: CPU, Executor: Standard, inference time cost (ms; fewer is better): 4484PX: 26.75, a: 23.55, px: 26.95
ONNX Runtime 1.19, Model: ArcFace ResNet-100, Device: CPU, Executor: Standard (inferences per second; more is better): 4484PX: 37.38, a: 42.45, px: 37.10
ONNX Runtime 1.19, Model: Faster R-CNN R-50-FPN-int8, Device: CPU, Executor: Standard, inference time cost (ms; fewer is better): 4484PX: 24.94, a: 21.24, px: 23.06
ONNX Runtime 1.19, Model: Faster R-CNN R-50-FPN-int8, Device: CPU, Executor: Standard (inferences per second; more is better): 4484PX: 40.09, a: 47.07, px: 43.36
ONNX Runtime 1.19, Model: CaffeNet 12-int8, Device: CPU, Executor: Standard, inference time cost (ms; fewer is better): 4484PX: 1.06188, a: 1.57084, px: 1.06600
ONNX Runtime 1.19, Model: CaffeNet 12-int8, Device: CPU, Executor: Standard (inferences per second; more is better): 4484PX: 941.40, a: 636.32, px: 937.78
ONNX Runtime 1.19, Model: ResNet50 v1-12-int8, Device: CPU, Executor: Standard, inference time cost (ms; fewer is better): 4484PX: 2.80544, a: 2.55898, px: 2.80695
ONNX Runtime 1.19, Model: ResNet50 v1-12-int8, Device: CPU, Executor: Standard (inferences per second; more is better): 4484PX: 356.41, a: 390.60, px: 356.19
ONNX Runtime 1.19, Model: super-resolution-10, Device: CPU, Executor: Standard, inference time cost (ms; fewer is better): 4484PX: 7.98873, a: 7.08601, px: 7.99486
ONNX Runtime 1.19, Model: super-resolution-10, Device: CPU, Executor: Standard (inferences per second; more is better): 4484PX: 125.17, a: 141.12, px: 125.08

PyPerformance

PyPerformance 1.11, Benchmark: asyncio_websockets (ms; fewer is better): 4484PX: 321, a: 315, px: 322

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3, Input: H20-64 (seconds; fewer is better): 4484PX: 53.01, a: 58.19, px: 52.72

ACES DGEMM

ACES DGEMM 1.0, Sustained Floating-Point Rate (GFLOP/s; more is better): 4484PX: 842.73, a: 1141.19, px: 842.01

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS, Model: granite-3.0-3b-a800m-instruct-Q8_0, Test: Prompt Processing 2048 (tokens per second; more is better): 4484PX: 222.75, a: 279.04, px: 208.99

Llamafile

Llamafile 0.8.16, Model: Llama-3.2-3B-Instruct.Q6_K, Test: Prompt Processing 1024 (tokens per second; more is better): 4484PX: 16384, a: 16384, px: 16384
Llamafile 0.8.16, Model: TinyLlama-1.1B-Chat-v1.0.BF16, Test: Prompt Processing 2048 (tokens per second; more is better): 4484PX: 32768, a: 32768, px: 32768

LiteRT

LiteRT 2024-10-15, Model: Inception V4 (microseconds; fewer is better): 4484PX: 22083.3, a: 21477.8, px: 22752.4
LiteRT 2024-10-15, Model: Inception ResNet V2 (microseconds; fewer is better): 4484PX: 19477.8, a: 19530.2, px: 19490.7
LiteRT 2024-10-15, Model: NASNet Mobile (microseconds; fewer is better): 4484PX: 8057.56, a: 16936.0, px: 7931.64
LiteRT 2024-10-15, Model: DeepLab V3 (microseconds; fewer is better): 4484PX: 2343.38, a: 3579.67, px: 2359.99
LiteRT 2024-10-15, Model: Mobilenet Float (microseconds; fewer is better): 4484PX: 1244.70, a: 1211.48, px: 1244.51
LiteRT 2024-10-15, Model: SqueezeNet (microseconds; fewer is better): 4484PX: 1809.18, a: 1794.11, px: 1821.35
LiteRT 2024-10-15, Model: Quantized COCO SSD MobileNet v1 (microseconds; fewer is better): 4484PX: 1420.15, a: 2129.52, px: 1417.35
LiteRT 2024-10-15, Model: Mobilenet Quant (microseconds; fewer is better): 4484PX: 848.94, a: 823.17, px: 849.21

Rustls

Rustls 0.23.17, Benchmark: handshake-ticket, Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (handshakes/s; more is better): 4484PX: 2282729.64, a: 2620332.00, px: 2292879.44

Llamafile

Llamafile 0.8.16, Model: wizardcoder-python-34b-v1.0.Q6_K, Test: Prompt Processing 256 (tokens per second; more is better): 4484PX: 1536, a: 1536, px: 1536

Rustls

Rustls 0.23.17, Benchmark: handshake-resume, Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (handshakes/s; more is better): 4484PX: 3035330.21, a: 3563852.57, px: 3038723.48

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS, Model: Llama-3.1-Tulu-3-8B-Q8_0, Test: Prompt Processing 512 (tokens per second; more is better): 4484PX: 69.11, a: 70.76, px: 67.95

Llamafile

Llamafile 0.8.16, Model: wizardcoder-python-34b-v1.0.Q6_K, Test: Text Generation 16 (tokens per second; more is better): 4484PX: 1.83, a: 1.78, px: 1.84

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS, Model: Mistral-7B-Instruct-v0.3-Q8_0, Test: Prompt Processing 512 (tokens per second; more is better): 4484PX: 68.20, a: 68.40, px: 68.81

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases focus on the Black-Scholes-Merton process with an analytic European Option engine, the QMC (Sobol) Monte-Carlo method (equity option example), Bonds (fixed-rate bond with flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
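
The Black-Scholes-Merton case mentioned above prices European options with a closed-form formula. As a generic reference sketch in Python (standard Black-Scholes-Merton, not FinanceBench's OpenMP implementation), a European call can be priced as:

from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(spot, strike, rate, vol, maturity):
    """Analytic Black-Scholes-Merton price of a European call option."""
    n = NormalDist()  # standard normal distribution (for the CDF terms)
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * n.cdf(d1) - strike * exp(-rate * maturity) * n.cdf(d2)

# Example: at-the-money call, 5% rate, 20% volatility, one year to expiry.
print(f"{bs_call(100.0, 100.0, 0.05, 0.20, 1.0):.2f}")  # roughly 10.45

The FinanceBench results reported in this file are for its Bonds OpenMP and Repo OpenMP cases.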

FinanceBench 2016-07-25, Benchmark: Bonds OpenMP (ms; fewer is better): 4484PX: 34600.77, a: 33061.22, px: 34896.84

NAMD

NAMD 3.0, Input: ATPase with 327,506 Atoms (ns/day; more is better): 4484PX: 2.38124, a: 2.79632, px: 2.35379

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 5, Input: Bosphorus 1080p (frames per second; more is better): 4484PX: 88.42, a: 101.97, px: 88.27

Whisperfile

Whisperfile 20Aug24, Model Size: Tiny (seconds; fewer is better): 4484PX: 37.13, a: 41.71, px: 38.72

PyPerformance

PyPerformance 1.11, Benchmark: django_template (ms; fewer is better): 4484PX: 21.0, a: 20.7, px: 21.2

ASTC Encoder

ASTC Encoder 5.0, Preset: Thorough (MT/s; more is better): 4484PX: 14.17, a: 20.30, px: 14.15

OpenVINO GenAI

OpenVINO GenAI 2024.5, Model: Phi-3-mini-128k-instruct-int4-ov, Device: CPU, Time Per Output Token (ms; fewer is better): 4484PX: 49.31, a: 51.86, px: 49.28
OpenVINO GenAI 2024.5, Model: Phi-3-mini-128k-instruct-int4-ov, Device: CPU, Time To First Token (ms; fewer is better): 4484PX: 58.91, a: 55.93, px: 58.86
OpenVINO GenAI 2024.5, Model: Phi-3-mini-128k-instruct-int4-ov, Device: CPU (tokens/s; more is better): 4484PX: 20.28, a: 19.28, px: 20.29

Etcpak

Etcpak 2.0 - Benchmark: Multi-Threaded - Configuration: ETC2 (Mpx/s, More Is Better)
  4484PX: 410.73 | a: 577.82 | px: 409.88
  1. (CXX) g++ options: -flto -pthread

PyPerformance

PyPerformance 1.11 - Benchmark: raytrace (Milliseconds, Fewer Is Better)
  4484PX: 182 | a: 175 | px: 182

PyPerformance 1.11 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)
  4484PX: 43.1 | a: 41.7 | px: 43.3

PyPerformance 1.11 - Benchmark: float (Milliseconds, Fewer Is Better)
  4484PX: 51.3 | a: 50.7 | px: 50.8

Llamafile

Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 256 (Tokens Per Second, More Is Better)
  4484PX: 4096 | a: 4096 | px: 4096

PyPerformance

PyPerformance 1.11 - Benchmark: go (Milliseconds, Fewer Is Better)
  4484PX: 78.6 | a: 77.8 | px: 79.4

FinanceBench

FinanceBench 2016-07-25 - Benchmark: Repo OpenMP (ms, Fewer Is Better)
  4484PX: 22320.33 | a: 21418.45 | px: 22318.74
  1. (CXX) g++ options: -O3 -march=native -fopenmp

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  4484PX: 85.20 | a: 102.01 | px: 85.00
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

PyPerformance

PyPerformance 1.11 - Benchmark: chaos (Milliseconds, Fewer Is Better)
  4484PX: 39.7 | a: 38.2 | px: 39.4

PyPerformance 1.11 - Benchmark: regex_compile (Milliseconds, Fewer Is Better)
  4484PX: 71.7 | a: 69.8 | px: 72.5

Rustls

Rustls 0.23.17 - Benchmark: handshake - Suite: TLS13_CHACHA20_POLY1305_SHA256 (handshakes/s, More Is Better)
  4484PX: 57716.64 | a: 76454.45 | px: 57688.08
  1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Rustls 0.23.17 - Benchmark: handshake - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (handshakes/s, More Is Better)
  4484PX: 59308.75 | a: 80462.60 | px: 59206.34
  1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

PyPerformance

PyPerformance 1.11 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better)
  4484PX: 169 | a: 165 | px: 168

Llamafile

Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 512 (Tokens Per Second, More Is Better)
  4484PX: 8192 | a: 8192 | px: 8192

PyPerformance

PyPerformance 1.11 - Benchmark: pathlib (Milliseconds, Fewer Is Better)
  4484PX: 14.4 | a: 14.2 | px: 14.4

POV-Ray

POV-Ray - Trace Time (Seconds, Fewer Is Better)
  4484PX: 25.26 | a: 18.54 | px: 25.33
  1. POV-Ray 3.7.0.10.unofficial

oneDNN

oneDNN 3.6 - Harness: Deconvolution Batch shapes_1d - Engine: CPU (ms, Fewer Is Better)
  4484PX: 3.40293 (MIN: 3.03) | a: 2.97612 (MIN: 2.42) | px: 3.40628 (MIN: 3.03)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better)
  4484PX: 232.26 | a: 355.09 | px: 244.77
  1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llamafile

Llamafile 0.8.16 - Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16 (Tokens Per Second, More Is Better)
  4484PX: 10.45 | a: 10.22 | px: 10.45

PyPerformance

PyPerformance 1.11 - Benchmark: json_loads (Milliseconds, Fewer Is Better)
  4484PX: 12.4 | a: 12.1 | px: 12.5

Llamafile

Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better)
  4484PX: 16384 | a: 16384 | px: 16384

PyPerformance

PyPerformance 1.11 - Benchmark: nbody (Milliseconds, Fewer Is Better)
  4484PX: 59.5 | a: 59.0 | px: 59.2

Y-Cruncher

Y-Cruncher 0.8.5 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better)
  4484PX: 18.38 | a: 18.49 | px: 18.37

x265

x265 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
  4484PX: 27.16 | a: 32.57 | px: 26.94
  1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6

7-Zip Compression

7-Zip Compression - Test: Decompression Rating (MIPS, More Is Better)
  4484PX: 125698 | a: 165916 | px: 125605
  1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20

7-Zip Compression - Test: Compression Rating (MIPS, More Is Better)
  4484PX: 141263 | a: 163859 | px: 142213
  1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20

ASTC Encoder

ASTC Encoder 5.0 - Preset: Fast (MT/s, More Is Better)
  4484PX: 278.24 | a: 396.65 | px: 277.30
  1. (CXX) g++ options: -O3 -flto -pthread

oneDNN

oneDNN 3.6 - Harness: IP Shapes 1D - Engine: CPU (ms, Fewer Is Better)
  4484PX: 1.93806 (MIN: 1.92) | a: 1.12573 (MIN: 1.03) | px: 1.93913 (MIN: 1.91)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Text Generation 128 (Tokens Per Second, More Is Better)
  4484PX: 52.30 | a: 47.72 | px: 52.37
  1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  4484PX: 198.11 | a: 212.52 | px: 194.02
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Llamafile

Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 256 (Tokens Per Second, More Is Better)
  4484PX: 4096 | a: 4096 | px: 4096

Llamafile 0.8.16 - Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 16 (Tokens Per Second, More Is Better)
  4484PX: 19.49 | a: 19.03 | px: 19.50

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  4484PX: 287.05 | a: 339.02 | px: 286.96
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Llama.cpp

Llama.cpp b4154 - Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 512 (Tokens Per Second, More Is Better)
  4484PX: 243.14 | a: 327.30 | px: 232.86
  1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

ASTC Encoder

ASTC Encoder 5.0 - Preset: Medium (MT/s, More Is Better)
  4484PX: 109.03 | a: 156.22 | px: 108.86
  1. (CXX) g++ options: -O3 -flto -pthread

Y-Cruncher

Y-Cruncher 0.8.5 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better)
  4484PX: 8.688 | a: 8.772 | px: 8.623

Llamafile

Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 16 (Tokens Per Second, More Is Better)
  4484PX: 25.86 | a: 24.59 | px: 25.94

oneDNN

oneDNN 3.6 - Harness: IP Shapes 3D - Engine: CPU (ms, Fewer Is Better)
  4484PX: 2.73072 (MIN: 2.7) | a: 4.05800 (MIN: 3.75) | px: 2.72942 (MIN: 2.7)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Llamafile

Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 512 (Tokens Per Second, More Is Better)
  4484PX: 8192 | a: 8192 | px: 8192

Primesieve

Primesieve 12.6 - Length: 1e12 (Seconds, Fewer Is Better)
  4484PX: 9.116 | a: 6.347 | px: 9.147
  1. (CXX) g++ options: -O3

Renaissance

Test: Apache Spark ALS

a: The test quit with a non-zero exit status.

4484PX: The test quit with a non-zero exit status.

px: The test quit with a non-zero exit status.

oneDNN

oneDNN 3.6 - Harness: Convolution Batch Shapes Auto - Engine: CPU (ms, Fewer Is Better)
  4484PX: 4.11551 (MIN: 4.05) | a: 6.67287 (MIN: 6.2) | px: 4.13321 (MIN: 4.07)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

x265

x265 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  4484PX: 101.37 | a: 114.45 | px: 101.25
  1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  4484PX: 776.12 | a: 842.56 | px: 769.82
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Llamafile

Llamafile 0.8.16 - Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 256 (Tokens Per Second, More Is Better)
  4484PX: 4096 | a: 4096 | px: 4096

oneDNN

oneDNN 3.6 - Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms, Fewer Is Better)
  4484PX: 3.50840 (MIN: 3.46) | a: 2.41294 (MIN: 2.34) | px: 3.51243 (MIN: 3.47)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

OpenVINO GenAI

Model: TinyLlama-1.1B-Chat-v1.0 - Device: CPU

a: The test quit with a non-zero exit status. E: RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:

4484PX: The test quit with a non-zero exit status. E: RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:

px: The test quit with a non-zero exit status. E: RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:

OpenSSL

Algorithm: RSA4096

a: The test quit with a non-zero exit status.

4484PX: The test quit with a non-zero exit status.

px: The test quit with a non-zero exit status.

Algorithm: SHA512

a: The test quit with a non-zero exit status.

4484PX: The test quit with a non-zero exit status.

px: The test quit with a non-zero exit status.

Algorithm: SHA256

a: The test quit with a non-zero exit status.

4484PX: The test quit with a non-zero exit status.

px: The test quit with a non-zero exit status.

213 Results Shown

QuantLib
SVT-AV1
RELION
Whisper.cpp
Blender
CP2K Molecular Dynamics
Apache CouchDB
Whisperfile
Llamafile
QuantLib
Apache CouchDB
BYTE Unix Benchmark
Llamafile
SVT-AV1
Llamafile
BYTE Unix Benchmark:
  Pipe
  Dhrystone 2
  System Call
Whisper.cpp
Apache CouchDB
SVT-AV1
Blender
XNNPACK:
  QS8MobileNetV2
  FP16MobileNetV3Small
  FP16MobileNetV3Large
  FP16MobileNetV2
  FP16MobileNetV1
  FP32MobileNetV3Small
  FP32MobileNetV3Large
  FP32MobileNetV2
  FP32MobileNetV1
Llama.cpp:
  CPU BLAS - Mistral-7B-Instruct-v0.3-Q8_0 - Prompt Processing 2048
  CPU BLAS - Llama-3.1-Tulu-3-8B-Q8_0 - Prompt Processing 2048
OpenSSL:
  ChaCha20
  ChaCha20-Poly1305
  AES-256-GCM
  AES-128-GCM
Blender
Whisperfile
Llamafile:
  mistral-7b-instruct-v0.2.Q5_K_M - Text Generation 128
  wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 1024
Rustls:
  handshake-ticket - TLS13_CHACHA20_POLY1305_SHA256
  handshake-resume - TLS13_CHACHA20_POLY1305_SHA256
Gcrypt Library
OSPRay
Apache CouchDB
Rustls:
  handshake-ticket - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  handshake-resume - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
OSPRay
SVT-AV1
Apache Cassandra
PyPerformance
OpenVINO GenAI:
  Gemma-7b-int4-ov - CPU - Time Per Output Token
  Gemma-7b-int4-ov - CPU - Time To First Token
  Gemma-7b-int4-ov - CPU
ASTC Encoder
Apache CouchDB
ASTC Encoder
OSPRay
GROMACS
PyPerformance
Llamafile
SVT-AV1
Build2
PyPerformance
Numpy Benchmark
Primesieve
simdjson
Llamafile:
  wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 512
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 2048
PyPerformance
Llamafile
Llama.cpp
CP2K Molecular Dynamics
Llama.cpp:
  CPU BLAS - Llama-3.1-Tulu-3-8B-Q8_0 - Prompt Processing 1024
  CPU BLAS - Mistral-7B-Instruct-v0.3-Q8_0 - Prompt Processing 1024
Whisper.cpp
Llama.cpp
Blender:
  Junkshop - CPU-Only
  Fishy Cat - CPU-Only
OpenVINO GenAI:
  Falcon-7b-instruct-int4-ov - CPU - Time Per Output Token
  Falcon-7b-instruct-int4-ov - CPU - Time To First Token
  Falcon-7b-instruct-int4-ov - CPU
NAMD
Rustls
simdjson
Renaissance
SVT-AV1
Stockfish
oneDNN:
  Recurrent Neural Network Training - CPU
  Recurrent Neural Network Inference - CPU
Apache CouchDB
simdjson
Llamafile
simdjson
Renaissance
simdjson
SVT-AV1
Renaissance:
  Akka Unbalanced Cobwebbed Tree
  Apache Spark PageRank
Blender
Renaissance
Stockfish
PyPerformance
Renaissance
Llamafile
OSPRay:
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/ao/real_time
Renaissance
Timed Eigen Compilation
Renaissance:
  Finagle HTTP Requests
  Rand Forest
OSPRay
Renaissance
ONNX Runtime:
  ResNet101_DUC_HDC-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
Renaissance
ONNX Runtime:
  GPT-2 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  fcn-resnet101-11 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ZFNet-512 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  bertsquad-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  T5 Encoder - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  yolov4 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ArcFace ResNet-100 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  Faster R-CNN R-50-FPN-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  CaffeNet 12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet50 v1-12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  super-resolution-10 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
PyPerformance
CP2K Molecular Dynamics
ACES DGEMM
Llama.cpp
Llamafile:
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 1024
  TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 2048
LiteRT:
  Inception V4
  Inception ResNet V2
  NASNet Mobile
  DeepLab V3
  Mobilenet Float
  SqueezeNet
  Quantized COCO SSD MobileNet v1
  Mobilenet Quant
Rustls
Llamafile
Rustls
Llama.cpp
Llamafile
Llama.cpp
FinanceBench
NAMD
SVT-AV1
Whisperfile
PyPerformance
ASTC Encoder
OpenVINO GenAI:
  Phi-3-mini-128k-instruct-int4-ov - CPU - Time Per Output Token
  Phi-3-mini-128k-instruct-int4-ov - CPU - Time To First Token
  Phi-3-mini-128k-instruct-int4-ov - CPU
Etcpak
PyPerformance:
  raytrace
  crypto_pyaes
  float
Llamafile
PyPerformance
FinanceBench
SVT-AV1
PyPerformance:
  chaos
  regex_compile
Rustls:
  handshake - TLS13_CHACHA20_POLY1305_SHA256
  handshake - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
PyPerformance
Llamafile
PyPerformance
POV-Ray
oneDNN
Llama.cpp
Llamafile
PyPerformance
Llamafile
PyPerformance
Y-Cruncher
x265
7-Zip Compression:
  Decompression Rating
  Compression Rating
ASTC Encoder
oneDNN
Llama.cpp
SVT-AV1
Llamafile:
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 256
  Llama-3.2-3B-Instruct.Q6_K - Text Generation 16
SVT-AV1
Llama.cpp
ASTC Encoder
Y-Cruncher
Llamafile
oneDNN
Llamafile
Primesieve
oneDNN
x265
SVT-AV1
Llamafile
oneDNN