eoy2024

Benchmarks for a future article. AMD EPYC 4484PX 12-Core testing with a Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS) and ASPEED on Ubuntu 24.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2412086-NE-EOY20243255&grt&rdt.
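To reproduce or compare against this result locally, the Phoronix Test Suite can fetch the public result file by its OpenBenchmarking.org ID and offer to run the same test selection on the local machine. A minimal sketch (a Python wrapper around the phoronix-test-suite CLI, assuming it is installed and on the PATH; the result ID is the one this page was exported from):

    # Sketch: benchmark a local system against this OpenBenchmarking.org result.
    import subprocess

    RESULT_ID = "2412086-NE-EOY20243255"

    # "benchmark <id>" downloads the public result file and prompts to run the
    # same tests locally so the new numbers appear alongside a / 4484PX / px.
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)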

eoy2024 system details (configurations: a, 4484PX, px):

Processor: a: AMD EPYC 4564P 16-Core @ 5.88GHz (16 Cores / 32 Threads); 4484PX, px: AMD EPYC 4484PX 12-Core @ 5.66GHz (12 Cores / 24 Threads)
Motherboard: Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS)
Chipset: AMD Device 14d8
Memory: 2 x 32GB DRAM-4800MT/s Micron MTC20C2085S1EC48BA1 BC
Disk: 3201GB Micron_7450_MTFDKCC3T2TFS + 960GB SAMSUNG MZ1L2960HCJR-00A07
Graphics: ASPEED
Audio: AMD Rembrandt Radeon HD Audio
Monitor: VA2431
Network: 2 x Intel I210
OS: Ubuntu 24.04
Kernel: a: 6.8.0-11-generic (x86_64); 4484PX, px: 6.12.2-061202-generic (x86_64)
Desktop: GNOME Shell 45.3
Display Server: X Server 1.21.1.11
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-fxIygj/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-fxIygj/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details:
a: Scaling Governor: amd-pstate-epp performance (EPP: performance) - CPU Microcode: 0xa601209
4484PX: Scaling Governor: amd-pstate-epp performance (Boost: Enabled EPP: performance) - CPU Microcode: 0xa601209
px: Scaling Governor: amd-pstate-epp performance (Boost: Enabled EPP: performance) - CPU Microcode: 0xa601209

Java Details: OpenJDK Runtime Environment (build 21.0.2+13-Ubuntu-2)

Python Details: Python 3.12.3

Security Details:
a: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
4484PX: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
px: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
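The Processor Details and Security Details above are readouts of standard Linux sysfs entries, so the same summary can be regenerated on any of the test configurations. A minimal sketch (assuming a Linux host; exact file names under /sys can vary by kernel version):

    # Sketch: reproduce the scaling-governor / EPP and CPU-vulnerability readout above.
    from pathlib import Path

    CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")
    VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

    def read(p: Path) -> str:
        return p.read_text().strip() if p.exists() else "n/a"

    # Prints e.g. "Scaling Governor: amd-pstate-epp performance (EPP: performance)".
    print("Scaling Governor:",
          read(CPUFREQ / "scaling_driver"), read(CPUFREQ / "scaling_governor"),
          "(EPP:", read(CPUFREQ / "energy_performance_preference") + ")")

    # One line per mitigation, matching the "Security Details" entries.
    if VULNS.is_dir():
        for entry in sorted(VULNS.iterdir()):
            print(f"{entry.name}: {read(entry)}")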

eoy2024 result overview: consolidated summary of all benchmark results for the a, 4484PX, and px configurations, covering compress-7zip, mt-dgemm, cassandra, couchdb, astcenc, blender, build2, byte, cp2k, etcpak, financebench, gcrypt, gromacs, litert, llama-cpp, llamafile, namd, numpy, onednn, onnx, openssl, openvino-genai, ospray, povray, primesieve, pyperformance, quantlib, relion, renaissance, rustls, simdjson, stockfish, svt-av1, build-eigen, whisper-cpp, whisperfile, x265, xnnpack, and y-cruncher. Per-test results are broken out in the sections that follow.

7-Zip Compression

Test: Compression Rating

7-Zip Compression (MIPS, More Is Better): a: 163859, 4484PX: 141263, px: 142213
1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20

7-Zip Compression

Test: Decompression Rating

7-Zip Compression (MIPS, More Is Better): a: 165916, 4484PX: 125698, px: 125605
1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20

ACES DGEMM

Sustained Floating-Point Rate

ACES DGEMM 1.0 (GFLOP/s, More Is Better): a: 1141.19, 4484PX: 842.73, px: 842.01
1. (CC) gcc options: -ffast-math -mavx2 -O3 -fopenmp -lopenblas

Apache Cassandra

Test: Writes

Apache Cassandra 5.0 (Op/s, More Is Better): a: 271333, 4484PX: 174960, px: 173946

Apache CouchDB

Bulk Size: 100 - Inserts: 1000 - Rounds: 30

Apache CouchDB 3.4.1 (Seconds, Fewer Is Better): a: 69.93, 4484PX: 75.90, px: 76.39
1. (CXX) g++ options: -flto -lstdc++ -shared -lei

Apache CouchDB

Bulk Size: 100 - Inserts: 3000 - Rounds: 30

Apache CouchDB 3.4.1 (Seconds, Fewer Is Better): a: 232.19, 4484PX: 253.99, px: 254.73
1. (CXX) g++ options: -flto -lstdc++ -shared -lei

Apache CouchDB

Bulk Size: 300 - Inserts: 1000 - Rounds: 30

Apache CouchDB 3.4.1 (Seconds, Fewer Is Better): a: 106.13, 4484PX: 117.57, px: 119.35
1. (CXX) g++ options: -flto -lstdc++ -shared -lei

Apache CouchDB

Bulk Size: 300 - Inserts: 3000 - Rounds: 30

Apache CouchDB 3.4.1 (Seconds, Fewer Is Better): a: 367.83, 4484PX: 406.12, px: 408.48
1. (CXX) g++ options: -flto -lstdc++ -shared -lei

Apache CouchDB

Bulk Size: 500 - Inserts: 1000 - Rounds: 30

Apache CouchDB 3.4.1 (Seconds, Fewer Is Better): a: 148.05, 4484PX: 164.47, px: 164.81
1. (CXX) g++ options: -flto -lstdc++ -shared -lei

Apache CouchDB

Bulk Size: 500 - Inserts: 3000 - Rounds: 30

Apache CouchDB 3.4.1 (Seconds, Fewer Is Better): a: 511.78, 4484PX: 559.35, px: 560.70
1. (CXX) g++ options: -flto -lstdc++ -shared -lei

ASTC Encoder

Preset: Fast

ASTC Encoder 5.0 (MT/s, More Is Better): a: 396.65, 4484PX: 278.24, px: 277.30
1. (CXX) g++ options: -O3 -flto -pthread

ASTC Encoder

Preset: Medium

ASTC Encoder 5.0 (MT/s, More Is Better): a: 156.22, 4484PX: 109.03, px: 108.86
1. (CXX) g++ options: -O3 -flto -pthread

ASTC Encoder

Preset: Thorough

ASTC Encoder 5.0 (MT/s, More Is Better): a: 20.30, 4484PX: 14.17, px: 14.15
1. (CXX) g++ options: -O3 -flto -pthread

ASTC Encoder

Preset: Exhaustive

ASTC Encoder 5.0 (MT/s, More Is Better): a: 1.6844, 4484PX: 1.1887, px: 1.1862
1. (CXX) g++ options: -O3 -flto -pthread

ASTC Encoder

Preset: Very Thorough

ASTC Encoder 5.0 (MT/s, More Is Better): a: 2.7410, 4484PX: 1.9412, px: 1.9391
1. (CXX) g++ options: -O3 -flto -pthread

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 4.3 (Seconds, Fewer Is Better): a: 53.55, 4484PX: 74.08, px: 73.16

Blender

Blend File: Junkshop - Compute: CPU-Only

Blender 4.3 (Seconds, Fewer Is Better): a: 73.56, 4484PX: 97.01, px: 97.10

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 4.3 (Seconds, Fewer Is Better): a: 143.36, 4484PX: 197.20, px: 197.53

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 4.3 (Seconds, Fewer Is Better): a: 71.35, 4484PX: 96.67, px: 97.09

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 4.3 (Seconds, Fewer Is Better): a: 506.20, 4484PX: 679.34, px: 678.40

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 4.3 (Seconds, Fewer Is Better): a: 166.12, 4484PX: 226.34, px: 224.64

Build2

Time To Compile

Build2 0.17 (Seconds, Fewer Is Better): a: 92.05, 4484PX: 111.65, px: 113.78

BYTE Unix Benchmark

Computational Test: Pipe

BYTE Unix Benchmark 5.1.3-git (LPS, More Is Better): a: 48806257.1, 4484PX: 33443359.2, px: 33381363.1
1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

BYTE Unix Benchmark

Computational Test: Dhrystone 2

BYTE Unix Benchmark 5.1.3-git (LPS, More Is Better): a: 1866536062.7, 4484PX: 1346521770.3, px: 1340340196.6
1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

BYTE Unix Benchmark

Computational Test: System Call

BYTE Unix Benchmark 5.1.3-git (LPS, More Is Better): a: 49140426.6, 4484PX: 30761218.9, px: 30701622.8
1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

BYTE Unix Benchmark

Computational Test: Whetstone Double

BYTE Unix Benchmark 5.1.3-git (MWIPS, More Is Better): a: 343491.9, 4484PX: 244075.3, px: 244131.0
1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

CP2K Molecular Dynamics

Input: H20-64

CP2K Molecular Dynamics 2024.3 (Seconds, Fewer Is Better): a: 58.19, 4484PX: 53.01, px: 52.72
1. (F9X) gfortran options: -fopenmp -march=native -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kgrid -lcp2kgriddgemm -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kdbx -lcp2kdbm -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -l:libhdf5_fortran.a -l:libhdf5.a -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -llibgrpp -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -l:libopenblas.a -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm

CP2K Molecular Dynamics

Input: H20-256

CP2K Molecular Dynamics 2024.3 (Seconds, Fewer Is Better): a: 592.86, 4484PX: 628.10, px: 631.31
1. (F9X) gfortran options: -fopenmp -march=native -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kgrid -lcp2kgriddgemm -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kdbx -lcp2kdbm -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -l:libhdf5_fortran.a -l:libhdf5.a -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -llibgrpp -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -l:libopenblas.a -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm

CP2K Molecular Dynamics

Input: Fayalite-FIST

CP2K Molecular Dynamics 2024.3 (Seconds, Fewer Is Better): a: 94.03, 4484PX: 92.21, px: 94.90
1. (F9X) gfortran options: -fopenmp -march=native -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kgrid -lcp2kgriddgemm -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kdbx -lcp2kdbm -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -l:libhdf5_fortran.a -l:libhdf5.a -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -llibgrpp -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -l:libopenblas.a -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm

Etcpak

Benchmark: Multi-Threaded - Configuration: ETC2

Etcpak 2.0 (Mpx/s, More Is Better): a: 577.82, 4484PX: 410.73, px: 409.88
1. (CXX) g++ options: -flto -pthread

FinanceBench

Benchmark: Repo OpenMP

FinanceBench 2016-07-25 (ms, Fewer Is Better): a: 21418.45, 4484PX: 22320.33, px: 22318.74
1. (CXX) g++ options: -O3 -march=native -fopenmp

FinanceBench

Benchmark: Bonds OpenMP

FinanceBench 2016-07-25 (ms, Fewer Is Better): a: 33061.22, 4484PX: 34600.77, px: 34896.84
1. (CXX) g++ options: -O3 -march=native -fopenmp

Gcrypt Library

Gcrypt Library 1.10.3 (Seconds, Fewer Is Better): a: 162.13, 4484PX: 171.02, px: 163.84
1. (CC) gcc options: -O2 -fvisibility=hidden

GROMACS

Input: water_GMX50_bare

GROMACS (Ns Per Day, More Is Better): a: 1.692, 4484PX: 1.577, px: 1.575
1. GROMACS version: 2023.3-Ubuntu_2023.3_1ubuntu3

LiteRT

Model: DeepLab V3

LiteRT 2024-10-15 (Microseconds, Fewer Is Better): a: 3579.67, 4484PX: 2343.38, px: 2359.99

LiteRT

Model: SqueezeNet

LiteRT 2024-10-15 (Microseconds, Fewer Is Better): a: 1794.11, 4484PX: 1809.18, px: 1821.35

LiteRT

Model: Inception V4

LiteRT 2024-10-15 (Microseconds, Fewer Is Better): a: 21477.8, 4484PX: 22083.3, px: 22752.4

LiteRT

Model: NASNet Mobile

LiteRT 2024-10-15 (Microseconds, Fewer Is Better): a: 16936.00, 4484PX: 8057.56, px: 7931.64

LiteRT

Model: Mobilenet Float

LiteRT 2024-10-15 (Microseconds, Fewer Is Better): a: 1211.48, 4484PX: 1244.70, px: 1244.51

LiteRT

Model: Mobilenet Quant

LiteRT 2024-10-15 (Microseconds, Fewer Is Better): a: 823.17, 4484PX: 848.94, px: 849.21

LiteRT

Model: Inception ResNet V2

LiteRT 2024-10-15 (Microseconds, Fewer Is Better): a: 19530.2, 4484PX: 19477.8, px: 19490.7

LiteRT

Model: Quantized COCO SSD MobileNet v1

LiteRT 2024-10-15 (Microseconds, Fewer Is Better): a: 2129.52, 4484PX: 1420.15, px: 1417.35

Llama.cpp

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Text Generation 128

Llama.cpp b4154 (Tokens Per Second, More Is Better): a: 6.88, 4484PX: 7.11, px: 7.12
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llama.cpp

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 512

Llama.cpp b4154 (Tokens Per Second, More Is Better): a: 70.76, 4484PX: 69.11, px: 67.95
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llama.cpp

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 1024

Llama.cpp b4154 (Tokens Per Second, More Is Better): a: 70.85, 4484PX: 66.57, px: 66.35
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llama.cpp

Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 2048

Llama.cpp b4154 (Tokens Per Second, More Is Better): a: 63.09, 4484PX: 63.80, px: 63.79
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llama.cpp

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Text Generation 128

Llama.cpp b4154 (Tokens Per Second, More Is Better): a: 7.24, 4484PX: 7.41, px: 7.44
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llama.cpp

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 512

Llama.cpp b4154 (Tokens Per Second, More Is Better): a: 68.40, 4484PX: 68.20, px: 68.81
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llama.cpp

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 1024

Llama.cpp b4154 (Tokens Per Second, More Is Better): a: 69.26, 4484PX: 66.85, px: 66.52
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llama.cpp

Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 2048

Llama.cpp b4154 (Tokens Per Second, More Is Better): a: 62.97, 4484PX: 63.61, px: 63.41
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llama.cpp

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Text Generation 128

Llama.cpp b4154 (Tokens Per Second, More Is Better): a: 47.72, 4484PX: 52.30, px: 52.37
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llama.cpp

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 512

Llama.cpp b4154 (Tokens Per Second, More Is Better): a: 327.30, 4484PX: 243.14, px: 232.86
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llama.cpp

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 1024

Llama.cpp b4154 (Tokens Per Second, More Is Better): a: 355.09, 4484PX: 232.26, px: 244.77
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llama.cpp

Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 2048

Llama.cpp b4154 (Tokens Per Second, More Is Better): a: 279.04, 4484PX: 222.75, px: 208.99
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llamafile

Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 16

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 19.03, 4484PX: 19.49, px: 19.50

Llamafile

Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 128

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 20.13, 4484PX: 20.39, px: 20.51

Llamafile

Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 256

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 4096, 4484PX: 4096, px: 4096

Llamafile

Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 512

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 8192, 4484PX: 8192, px: 8192

Llamafile

Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 16

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 24.59, 4484PX: 25.86, px: 25.94

Llamafile

Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 1024

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 16384, 4484PX: 16384, px: 16384

Llamafile

Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 2048

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 32768, 4484PX: 32768, px: 32768

Llamafile

Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 128

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 26.28, 4484PX: 27.59, px: 27.80

Llamafile

Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 10.22, 4484PX: 10.45, px: 10.45

Llamafile

Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 256

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 4096, 4484PX: 4096, px: 4096

Llamafile

Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 512

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 8192, 4484PX: 8192, px: 8192

Llamafile

Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 128

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 10.47, 4484PX: 10.91, px: 10.93

Llamafile

Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 16

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 1.78, 4484PX: 1.83, px: 1.84

Llamafile

Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 1024

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 16384, 4484PX: 16384, px: 16384

Llamafile

Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 2048

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 32768, 4484PX: 32768, px: 32768

Llamafile

Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 128

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 1.99, 4484PX: 2.05, px: 2.05

Llamafile

Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 256

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 4096, 4484PX: 4096, px: 4096

Llamafile

Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 512

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 8192, 4484PX: 8192, px: 8192

Llamafile

Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 1024

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 16384, 4484PX: 16384, px: 16384

Llamafile

Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 2048

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 32768, 4484PX: 32768, px: 32768

Llamafile

Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 256

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 1536, 4484PX: 1536, px: 1536

Llamafile

Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 512

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 3072, 4484PX: 3072, px: 3072

Llamafile

Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 1024

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 6144, 4484PX: 6144, px: 6144

Llamafile

Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 2048

Llamafile 0.8.16 (Tokens Per Second, More Is Better): a: 12288, 4484PX: 12288, px: 12288

NAMD

Input: ATPase with 327,506 Atoms

NAMD 3.0 (ns/day, More Is Better): a: 2.79632, 4484PX: 2.38124, px: 2.35379

NAMD

Input: STMV with 1,066,628 Atoms

NAMD 3.0 (ns/day, More Is Better): a: 0.75656, 4484PX: 0.65119, px: 0.65448

Numpy Benchmark

Numpy Benchmark (Score, More Is Better): a: 775.75, 4484PX: 745.59, px: 831.42

oneDNN

Harness: IP Shapes 1D - Engine: CPU

oneDNN 3.6 (ms, Fewer Is Better): a: 1.12573 (MIN: 1.03), 4484PX: 1.93806 (MIN: 1.92), px: 1.93913 (MIN: 1.91)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: IP Shapes 3D - Engine: CPU

oneDNN 3.6 (ms, Fewer Is Better): a: 4.05800 (MIN: 3.75), 4484PX: 2.73072 (MIN: 2.7), px: 2.72942 (MIN: 2.7)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Engine: CPU

oneDNN 3.6 (ms, Fewer Is Better): a: 6.67287 (MIN: 6.2), 4484PX: 4.11551 (MIN: 4.05), px: 4.13321 (MIN: 4.07)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Engine: CPU

oneDNN 3.6 (ms, Fewer Is Better): a: 2.97612 (MIN: 2.42), 4484PX: 3.40293 (MIN: 3.03), px: 3.40628 (MIN: 3.03)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Engine: CPU

oneDNN 3.6 (ms, Fewer Is Better): a: 2.41294 (MIN: 2.34), 4484PX: 3.50840 (MIN: 3.46), px: 3.51243 (MIN: 3.47)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Engine: CPU

oneDNN 3.6 (ms, Fewer Is Better): a: 1372.03 (MIN: 1342.06), 4484PX: 1898.36 (MIN: 1894.26), px: 1895.68 (MIN: 1892.59)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Engine: CPU

oneDNN 3.6 (ms, Fewer Is Better): a: 700.86 (MIN: 679.89), 4484PX: 965.02 (MIN: 963.27), px: 966.01 (MIN: 963.43)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inferences Per Second, More Is Better): a: 134.60, 4484PX: 159.71, px: 157.89
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inference Time Cost (ms), Fewer Is Better): a: 7.42776, 4484PX: 6.25815, px: 6.33034
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inferences Per Second, More Is Better): a: 11.06, 4484PX: 10.73, px: 10.71
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inference Time Cost (ms), Fewer Is Better): a: 90.45, 4484PX: 93.16, px: 93.34
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ZFNet-512 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inferences Per Second, More Is Better): a: 102.33, 4484PX: 110.94, px: 110.89
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ZFNet-512 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inference Time Cost (ms), Fewer Is Better): a: 9.76985, 4484PX: 9.01322, px: 9.01687
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: T5 Encoder - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inferences Per Second, More Is Better): a: 156.45, 4484PX: 208.17, px: 206.09
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: T5 Encoder - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inference Time Cost (ms), Fewer Is Better): a: 6.39112, 4484PX: 4.80287, px: 4.85142
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inferences Per Second, More Is Better): a: 15.59, 4484PX: 14.51, px: 14.57
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inference Time Cost (ms), Fewer Is Better): a: 64.14, 4484PX: 68.91, px: 68.61
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inferences Per Second, More Is Better): a: 636.32, 4484PX: 941.40, px: 937.78
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inference Time Cost (ms), Fewer Is Better): a: 1.57084, 4484PX: 1.06188, px: 1.06600
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inferences Per Second, More Is Better): a: 3.21670, 4484PX: 2.81093, px: 2.79638
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inference Time Cost (ms), Fewer Is Better): a: 310.88, 4484PX: 355.75, px: 357.60
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inferences Per Second, More Is Better): a: 42.45, 4484PX: 37.38, px: 37.10
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inference Time Cost (ms), Fewer Is Better): a: 23.55, 4484PX: 26.75, px: 26.95
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inferences Per Second, More Is Better): a: 390.60, 4484PX: 356.41, px: 356.19
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inference Time Cost (ms), Fewer Is Better): a: 2.55898, 4484PX: 2.80544, px: 2.80695
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inferences Per Second, More Is Better): a: 141.12, 4484PX: 125.17, px: 125.08
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inference Time Cost (ms), Fewer Is Better): a: 7.08601, 4484PX: 7.98873, px: 7.99486
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inferences Per Second, More Is Better): a: 1.54196, 4484PX: 1.17627, px: 1.17050
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inference Time Cost (ms), Fewer Is Better): a: 648.52, 4484PX: 850.14, px: 854.33
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inferences Per Second, More Is Better): a: 47.07, 4484PX: 40.09, px: 43.36
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard

ONNX Runtime 1.19 (Inference Time Cost (ms), Fewer Is Better): a: 21.24, 4484PX: 24.94, px: 23.06
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenSSL

Algorithm: ChaCha20

OpenSSL (byte/s, More Is Better): a: 130588495050, 4484PX: 97105235690, px: 97019897450
1. OpenSSL 3.0.13 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024) - Additional Parameters: -engine qatengine -async_jobs 8

OpenSSL

Algorithm: AES-128-GCM

OpenSSL (byte/s, More Is Better): a: 104784522170, 4484PX: 76496336760, px: 76184405610
1. OpenSSL 3.0.13 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024) - Additional Parameters: -engine qatengine -async_jobs 8

OpenSSL

Algorithm: AES-256-GCM

OpenSSL (byte/s, More Is Better): a: 97172751700, 4484PX: 71160291870, px: 70902656480
1. OpenSSL 3.0.13 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024) - Additional Parameters: -engine qatengine -async_jobs 8

OpenSSL

Algorithm: ChaCha20-Poly1305

OpenSSL (byte/s, More Is Better): a: 92393529340, 4484PX: 68816544020, px: 68678955550
1. OpenSSL 3.0.13 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024) - Additional Parameters: -engine qatengine -async_jobs 8

OpenVINO GenAI

Model: Gemma-7b-int4-ov - Device: CPU

OpenVINO GenAI 2024.5 (tokens/s, More Is Better): a: 9.83, 4484PX: 10.23, px: 10.24

OpenVINO GenAI

Model: Gemma-7b-int4-ov - Device: CPU - Time To First Token

OpenVINO GenAI 2024.5 (ms, Fewer Is Better): a: 106.62, 4484PX: 121.48, px: 122.30

OpenVINO GenAI

Model: Gemma-7b-int4-ov - Device: CPU - Time Per Output Token

OpenVINO GenAI 2024.5 (ms, Fewer Is Better): a: 101.72, 4484PX: 97.79, px: 97.61

OpenVINO GenAI

Model: Falcon-7b-instruct-int4-ov - Device: CPU

OpenVINO GenAI 2024.5 (tokens/s, More Is Better): a: 12.93, 4484PX: 13.40, px: 13.41

OpenVINO GenAI

Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time To First Token

OpenVINO GenAI 2024.5 (ms, Fewer Is Better): a: 86.06, 4484PX: 93.01, px: 93.00

OpenVINO GenAI

Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time Per Output Token

OpenVINO GenAI 2024.5 (ms, Fewer Is Better): a: 77.34, 4484PX: 74.65, px: 74.54

OpenVINO GenAI

Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU

OpenVINO GenAI 2024.5 (tokens/s, More Is Better): a: 19.28, 4484PX: 20.28, px: 20.29

OpenVINO GenAI

Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time To First Token

OpenVINO GenAI 2024.5 (ms, Fewer Is Better): a: 55.93, 4484PX: 58.91, px: 58.86

OpenVINO GenAI

Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time Per Output Token

OpenVINO GenAI 2024.5 (ms, Fewer Is Better): a: 51.86, 4484PX: 49.31, px: 49.28

OSPRay

Benchmark: particle_volume/ao/real_time

OSPRay 3.2 (Items Per Second, More Is Better): a: 9.00917, 4484PX: 6.52776, px: 6.52206

OSPRay

Benchmark: particle_volume/scivis/real_time

OSPRay 3.2 (Items Per Second, More Is Better): a: 8.98486, 4484PX: 6.44913, px: 6.52304

OSPRay

Benchmark: particle_volume/pathtracer/real_time

OSPRay 3.2 (Items Per Second, More Is Better): a: 236.25, 4484PX: 199.02, px: 197.20

OSPRay

Benchmark: gravity_spheres_volume/dim_512/ao/real_time

OSPRay 3.2 (Items Per Second, More Is Better): a: 7.63944, 4484PX: 5.63122, px: 5.71084

OSPRay

Benchmark: gravity_spheres_volume/dim_512/scivis/real_time

OSPRay 3.2 (Items Per Second, More Is Better): a: 7.58789, 4484PX: 5.54888, px: 5.61470

OSPRay

Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time

OSPRay 3.2 (Items Per Second, More Is Better): a: 8.82093, 4484PX: 6.41198, px: 6.40740

POV-Ray

Trace Time

POV-Ray (Seconds, Fewer Is Better): a: 18.54, 4484PX: 25.26, px: 25.33
1. POV-Ray 3.7.0.10.unofficial

Primesieve

Length: 1e12

Primesieve 12.6 (Seconds, Fewer Is Better): a: 6.347, 4484PX: 9.116, px: 9.147
1. (CXX) g++ options: -O3

Primesieve

Length: 1e13

Primesieve 12.6 (Seconds, Fewer Is Better): a: 78.50, 4484PX: 110.61, px: 110.71
1. (CXX) g++ options: -O3

PyPerformance

Benchmark: go

PyPerformance 1.11 (Milliseconds, Fewer Is Better): a: 77.8, 4484PX: 78.6, px: 79.4

PyPerformance

Benchmark: chaos

PyPerformance 1.11 (Milliseconds, Fewer Is Better): a: 38.2, 4484PX: 39.7, px: 39.4

PyPerformance

Benchmark: float

PyPerformance 1.11 (Milliseconds, Fewer Is Better): a: 50.7, 4484PX: 51.3, px: 50.8

PyPerformance

Benchmark: nbody

PyPerformance 1.11 (Milliseconds, Fewer Is Better): a: 59.0, 4484PX: 59.5, px: 59.2

PyPerformance

Benchmark: pathlib

PyPerformance 1.11 (Milliseconds, Fewer Is Better): a: 14.2, 4484PX: 14.4, px: 14.4

PyPerformance

Benchmark: raytrace

PyPerformance 1.11 (Milliseconds, Fewer Is Better): a: 175, 4484PX: 182, px: 182

PyPerformance

Benchmark: xml_etree

PyPerformance 1.11 (Milliseconds, Fewer Is Better): a: 35.8, 4484PX: 36.8, px: 36.5

PyPerformance

Benchmark: gc_collect

PyPerformance 1.11 (Milliseconds, Fewer Is Better): a: 677, 4484PX: 699, px: 706

PyPerformance

Benchmark: json_loads

PyPerformance 1.11 (Milliseconds, Fewer Is Better): a: 12.1, 4484PX: 12.4, px: 12.5

PyPerformance

Benchmark: crypto_pyaes

Milliseconds, Fewer Is Better - PyPerformance 1.11 - a: 41.7 | 4484PX: 43.1 | px: 43.3

PyPerformance

Benchmark: async_tree_io

Milliseconds, Fewer Is Better - PyPerformance 1.11 - a: 755 | 4484PX: 666 | px: 656

PyPerformance

Benchmark: regex_compile

Milliseconds, Fewer Is Better - PyPerformance 1.11 - a: 69.8 | 4484PX: 71.7 | px: 72.5

PyPerformance

Benchmark: python_startup

Milliseconds, Fewer Is Better - PyPerformance 1.11 - a: 5.77 | 4484PX: 6.08 | px: 6.09

PyPerformance

Benchmark: asyncio_tcp_ssl

Milliseconds, Fewer Is Better - PyPerformance 1.11 - a: 645 | 4484PX: 590 | px: 590

PyPerformance

Benchmark: django_template

Milliseconds, Fewer Is Better - PyPerformance 1.11 - a: 20.7 | 4484PX: 21.0 | px: 21.2

PyPerformance

Benchmark: asyncio_websockets

Milliseconds, Fewer Is Better - PyPerformance 1.11 - a: 315 | 4484PX: 321 | px: 322

PyPerformance

Benchmark: pickle_pure_python

Milliseconds, Fewer Is Better - PyPerformance 1.11 - a: 165 | 4484PX: 169 | px: 168

QuantLib

Size: S

tasks/s, More Is Better - QuantLib 1.35-dev - a: 12.75 | 4484PX: 11.86 | px: 11.84
1. (CXX) g++ options: -O3 -march=native -fPIE -pie

QuantLib

Size: XXS

tasks/s, More Is Better - QuantLib 1.35-dev - a: 13.43 | 4484PX: 12.12 | px: 12.11
1. (CXX) g++ options: -O3 -march=native -fPIE -pie

RELION

Test: Basic - Device: CPU

Seconds, Fewer Is Better - RELION 5.0 - a: 944.27 | 4484PX: 729.40 | px: 733.02
1. (CXX) g++ options: -fPIC -std=c++14 -fopenmp -O3 -rdynamic -lfftw3f -lfftw3 -ldl -ltiff -lpng -ljpeg -lmpi_cxx -lmpi

Renaissance

Test: Scala Dotty

ms, Fewer Is Better - Renaissance 0.16 - a: 477.0 (MIN: 371.54 / MAX: 736.5) | 4484PX: 428.6 (MIN: 378.22 / MAX: 628.77) | px: 436.2 (MIN: 380.62 / MAX: 721.56)

Renaissance

Test: Random Forest

ms, Fewer Is Better - Renaissance 0.16 - a: 414.4 (MIN: 322.79 / MAX: 466.1) | 4484PX: 422.0 (MIN: 357.91 / MAX: 497.55) | px: 453.2 (MIN: 352.31 / MAX: 513.31)

Renaissance

Test: ALS Movie Lens

ms, Fewer Is Better - Renaissance 0.16 - a: 9805.7 (MIN: 9253.4 / MAX: 10057.61) | 4484PX: 9378.8 (MIN: 8718.36 / MAX: 9413.7) | px: 9275.7 (MIN: 8821.09 / MAX: 9495.91)

Renaissance

Test: Apache Spark Bayes

ms, Fewer Is Better - Renaissance 0.16 - a: 490.0 (MIN: 459.29 / MAX: 580.9) | 4484PX: 513.2 (MIN: 453.66 / MAX: 554.7) | px: 474.9 (MIN: 454.77 / MAX: 514.32)

Renaissance

Test: Savina Reactors.IO

ms, Fewer Is Better - Renaissance 0.16 - a: 3506.4 (MIN: 3506.38 / MAX: 4329.37) | 4484PX: 3655.8 (MIN: 3655.76 / MAX: 4484.97) | px: 3676.0 (MAX: 4536.84)

Renaissance

Test: Apache Spark PageRank

ms, Fewer Is Better - Renaissance 0.16 - a: 2412.2 (MIN: 1691.04) | 4484PX: 2138.1 (MIN: 1499.64) | px: 2229.7 (MIN: 1612.96 / MAX: 2229.74)

Renaissance

Test: Finagle HTTP Requests

ms, Fewer Is Better - Renaissance 0.16 - a: 2319.4 (MIN: 1832.84) | 4484PX: 2492.2 (MIN: 1947.63) | px: 2483.1 (MIN: 1933.43)

Renaissance

Test: Gaussian Mixture Model

ms, Fewer Is Better - Renaissance 0.16 - a: 3399.5 (MIN: 2471.52) | 4484PX: 3860.6 (MIN: 2758.89 / MAX: 3860.61) | px: 3815.2 (MIN: 2749.56 / MAX: 3815.24)

Renaissance

Test: In-Memory Database Shootout

ms, Fewer Is Better - Renaissance 0.16 - a: 3256.1 (MIN: 3019.89 / MAX: 3599.5) | 4484PX: 3241.5 (MIN: 3037.03 / MAX: 3491.91) | px: 3175.6 (MIN: 2896.06 / MAX: 3367.44)

Renaissance

Test: Akka Unbalanced Cobwebbed Tree

ms, Fewer Is Better - Renaissance 0.16 - a: 4403.8 (MAX: 5719.11) | 4484PX: 4038.4 (MIN: 4038.36 / MAX: 5089.28) | px: 4002.3 (MIN: 4002.27 / MAX: 4983.72)

Renaissance

Test: Genetic Algorithm Using Jenetics + Futures

ms, Fewer Is Better - Renaissance 0.16 - a: 732.8 (MIN: 713.67 / MAX: 813.49) | 4484PX: 904.0 (MIN: 886.83 / MAX: 919.31) | px: 920.7 (MIN: 888.75 / MAX: 934.44)

Rustls

Benchmark: handshake - Suite: TLS13_CHACHA20_POLY1305_SHA256

handshakes/s, More Is Better - Rustls 0.23.17 - a: 76454.45 | 4484PX: 57716.64 | px: 57688.08
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Rustls

Benchmark: handshake - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

handshakes/s, More Is Better - Rustls 0.23.17 - a: 80462.60 | 4484PX: 59308.75 | px: 59206.34
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Rustls

Benchmark: handshake-resume - Suite: TLS13_CHACHA20_POLY1305_SHA256

handshakes/s, More Is Better - Rustls 0.23.17 - a: 388077.69 | 4484PX: 333882.92 | px: 333574.30
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Rustls

Benchmark: handshake-ticket - Suite: TLS13_CHACHA20_POLY1305_SHA256

handshakes/s, More Is Better - Rustls 0.23.17 - a: 404263.45 | 4484PX: 344296.24 | px: 342775.29
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Rustls

Benchmark: handshake - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

handshakes/s, More Is Better - Rustls 0.23.17 - a: 423535.68 | 4484PX: 306153.20 | px: 304060.28
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Rustls

Benchmark: handshake-resume - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

handshakes/s, More Is Better - Rustls 0.23.17 - a: 3563852.57 | 4484PX: 3035330.21 | px: 3038723.48
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Rustls

Benchmark: handshake-ticket - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

handshakes/s, More Is Better - Rustls 0.23.17 - a: 2620332.00 | 4484PX: 2282729.64 | px: 2292879.44
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Rustls

Benchmark: handshake-resume - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

handshakes/s, More Is Better - Rustls 0.23.17 - a: 1820810.21 | 4484PX: 1586292.42 | px: 1572010.68
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Rustls

Benchmark: handshake-ticket - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

handshakes/s, More Is Better - Rustls 0.23.17 - a: 1553632.14 | 4484PX: 1329363.10 | px: 1340712.85
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs
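
Reading the Rustls results as a group: resumed and ticket-based handshakes are far cheaper than full handshakes. The sketch below is plain arithmetic on the TLS13_CHACHA20_POLY1305_SHA256 values reported above for the "a" system.

    # Plain arithmetic on the Rustls 0.23.17 handshakes/s values above ("a" system,
    # TLS13_CHACHA20_POLY1305_SHA256 suite).
    full_handshake = 76454.45
    resume = 388077.69
    ticket = 404263.45
    print(f"session resumption: {resume / full_handshake:.1f}x a full handshake")
    print(f"session tickets:    {ticket / full_handshake:.1f}x a full handshake")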

simdjson

Throughput Test: Kostya

GB/s, More Is Better - simdjson 3.10 - a: 5.97 | 4484PX: 6.11 | px: 5.45
1. (CXX) g++ options: -O3 -lrt

simdjson

Throughput Test: TopTweet

GB/s, More Is Better - simdjson 3.10 - a: 10.46 | 4484PX: 10.82 | px: 10.51
1. (CXX) g++ options: -O3 -lrt

simdjson

Throughput Test: LargeRandom

GB/s, More Is Better - simdjson 3.10 - a: 1.83 | 4484PX: 1.84 | px: 1.84
1. (CXX) g++ options: -O3 -lrt

simdjson

Throughput Test: PartialTweets

GB/s, More Is Better - simdjson 3.10 - a: 9.76 | 4484PX: 10.10 | px: 8.35
1. (CXX) g++ options: -O3 -lrt

simdjson

Throughput Test: DistinctUserID

GB/s, More Is Better - simdjson 3.10 - a: 10.46 | 4484PX: 10.76 | px: 8.97
1. (CXX) g++ options: -O3 -lrt
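
simdjson throughput is reported as GB of JSON parsed per second. The same metric can be computed for any parser as bytes divided by elapsed time; the sketch below uses Python's json module purely as an illustration (it will be far slower than simdjson), and the synthetic payload is an assumption.

    # Throughput in GB/s = bytes parsed / elapsed seconds, the same metric simdjson
    # reports above. json.loads stands in purely for illustration.
    import json
    import time

    payload = json.dumps([{"id": i, "text": "x" * 64} for i in range(50_000)]).encode()
    start = time.perf_counter()
    json.loads(payload)
    elapsed = time.perf_counter() - start
    print(f"{len(payload) / elapsed / 1e9:.3f} GB/s over {len(payload) / 1e6:.1f} MB")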

Stockfish

Chess Benchmark

Nodes Per Second, More Is Better - Stockfish - a: 46507038 | 4484PX: 33702298 | px: 33871595
1. Stockfish 16 by the Stockfish developers (see AUTHORS file)

Stockfish

Chess Benchmark

Nodes Per Second, More Is Better - Stockfish 17 - a: 54752796 | 4484PX: 45267546 | px: 42973396
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver
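
A per-core view makes the Stockfish 17 results easier to compare across the 16-core 4564P ("a") and the 12-core 4484PX systems. The core counts below come from the system configurations; the division itself is illustrative arithmetic, not something the benchmark reports.

    # Nodes/s per core from the Stockfish 17 results above. Core counts are assumed
    # from the test systems: 16 cores for "a" (EPYC 4564P), 12 for the 4484PX runs.
    nodes_per_sec = {"a": 54752796, "4484PX": 45267546, "px": 42973396}
    cores = {"a": 16, "4484PX": 12, "px": 12}
    for name, nps in nodes_per_sec.items():
        print(f"{name}: {nps / cores[name] / 1e6:.2f} M nodes/s per core")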

SVT-AV1

Encoder Mode: Preset 3 - Input: Bosphorus 4K

Frames Per Second, More Is Better - SVT-AV1 2.3 - a: 9.590 | 4484PX: 7.684 | px: 7.646
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 5 - Input: Bosphorus 4K

Frames Per Second, More Is Better - SVT-AV1 2.3 - a: 34.54 | 4484PX: 29.09 | px: 28.82
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

Frames Per Second, More Is Better - SVT-AV1 2.3 - a: 102.01 | 4484PX: 85.20 | px: 85.00
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

Frames Per Second, More Is Better - SVT-AV1 2.3 - a: 212.52 | 4484PX: 198.11 | px: 194.02
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
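
The SVT-AV1 figures are encoder frames per second; for planning, seconds per clip is often more useful. The sketch below converts the Bosphorus 4K results for the "a" system; the 600-frame clip length is a placeholder assumption, not a value taken from this result file.

    # Convert the "a" system's SVT-AV1 Bosphorus 4K frame rates into an estimated
    # encode time for a clip. frame_count is a hypothetical placeholder.
    fps = {"Preset 3": 9.590, "Preset 5": 34.54, "Preset 8": 102.01, "Preset 13": 212.52}
    frame_count = 600  # assumed clip length, not taken from this result file
    for preset, rate in fps.items():
        print(f"{preset}: ~{frame_count / rate:.1f} s for {frame_count} frames")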

SVT-AV1

Encoder Mode: Preset 3 - Input: Bosphorus 1080p

Frames Per Second, More Is Better - SVT-AV1 2.3 - a: 29.57 | 4484PX: 25.45 | px: 25.45
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 5 - Input: Bosphorus 1080p

Frames Per Second, More Is Better - SVT-AV1 2.3 - a: 101.97 | 4484PX: 88.42 | px: 88.27
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

Frames Per Second, More Is Better - SVT-AV1 2.3 - a: 339.02 | 4484PX: 287.05 | px: 286.96
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

Frames Per Second, More Is Better - SVT-AV1 2.3 - a: 842.56 | 4484PX: 776.12 | px: 769.82
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 3 - Input: Beauty 4K 10-bit

Frames Per Second, More Is Better - SVT-AV1 2.3 - a: 1.422 | 4484PX: 1.188 | px: 1.184
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 5 - Input: Beauty 4K 10-bit

Frames Per Second, More Is Better - SVT-AV1 2.3 - a: 6.504 | 4484PX: 5.602 | px: 5.551
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Beauty 4K 10-bit

Frames Per Second, More Is Better - SVT-AV1 2.3 - a: 12.47 | 4484PX: 10.97 | px: 10.86
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Beauty 4K 10-bit

Frames Per Second, More Is Better - SVT-AV1 2.3 - a: 18.59 | 4484PX: 17.41 | px: 17.36
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Timed Eigen Compilation

Time To Compile

Seconds, Fewer Is Better - Timed Eigen Compilation 3.4.0 - a: 58.66 | 4484PX: 67.36 | px: 67.08

Whisper.cpp

Model: ggml-base.en - Input: 2016 State of the Union

Seconds, Fewer Is Better - Whisper.cpp 1.6.2 - a: 87.49 | 4484PX: 92.71 | px: 93.45
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

Whisper.cpp

Model: ggml-small.en - Input: 2016 State of the Union

Seconds, Fewer Is Better - Whisper.cpp 1.6.2 - a: 245.08 | 4484PX: 268.24 | px: 266.81
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

Whisper.cpp

Model: ggml-medium.en - Input: 2016 State of the Union

Seconds, Fewer Is Better - Whisper.cpp 1.6.2 - a: 700.91 | 4484PX: 809.79 | px: 809.49
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni
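
The Whisper.cpp rows above are wall-clock seconds to transcribe the same input at each model size. Normalizing each row to its fastest system, as in the sketch below, makes the relative gap easier to read; it is only arithmetic on the reported values.

    # Relative Whisper.cpp transcription times from the tables above: each entry is
    # normalized to the fastest system for that model size.
    seconds = {
        "ggml-base.en":   {"a": 87.49,  "4484PX": 92.71,  "px": 93.45},
        "ggml-small.en":  {"a": 245.08, "4484PX": 268.24, "px": 266.81},
        "ggml-medium.en": {"a": 700.91, "4484PX": 809.79, "px": 809.49},
    }
    for model, by_system in seconds.items():
        fastest = min(by_system.values())
        rel = ", ".join(f"{name}: {t / fastest:.2f}x" for name, t in by_system.items())
        print(f"{model}: {rel}")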

Whisperfile

Model Size: Tiny

Seconds, Fewer Is Better - Whisperfile 20Aug24 - a: 41.71 | 4484PX: 37.13 | px: 38.72

Whisperfile

Model Size: Small

Seconds, Fewer Is Better - Whisperfile 20Aug24 - a: 195.42 | 4484PX: 173.38 | px: 167.89

Whisperfile

Model Size: Medium

Seconds, Fewer Is Better - Whisperfile 20Aug24 - a: 534.92 | 4484PX: 473.55 | px: 475.51

x265

Video Input: Bosphorus 4K

Frames Per Second, More Is Better - x265 - a: 32.57 | 4484PX: 27.16 | px: 26.94
1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6

x265

Video Input: Bosphorus 1080p

Frames Per Second, More Is Better - x265 - a: 114.45 | 4484PX: 101.37 | px: 101.25
1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6

XNNPACK

Model: FP32MobileNetV1

us, Fewer Is Better - XNNPACK b7b048 - a: 1252 | 4484PX: 1257 | px: 1272
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP32MobileNetV2

us, Fewer Is Better - XNNPACK b7b048 - a: 1495 | 4484PX: 1365 | px: 1368
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP32MobileNetV3Large

us, Fewer Is Better - XNNPACK b7b048 - a: 1810 | 4484PX: 1515 | px: 1574
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP32MobileNetV3Small

us, Fewer Is Better - XNNPACK b7b048 - a: 979 | 4484PX: 809 | px: 837
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP16MobileNetV1

us, Fewer Is Better - XNNPACK b7b048 - a: 1143 | 4484PX: 1383 | px: 1386
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP16MobileNetV2

us, Fewer Is Better - XNNPACK b7b048 - a: 1190 | 4484PX: 1217 | px: 1248
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP16MobileNetV3Large

us, Fewer Is Better - XNNPACK b7b048 - a: 1498 | 4484PX: 1467 | px: 1527
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP16MobileNetV3Small

us, Fewer Is Better - XNNPACK b7b048 - a: 920 | 4484PX: 779 | px: 798
1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: QS8MobileNetV2

us, Fewer Is Better - XNNPACK b7b048 - a: 844 | 4484PX: 717 | px: 723
1. (CXX) g++ options: -O3 -lrt -lm
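
One derived view of the XNNPACK latencies (microseconds per inference, lower is better): how much FP16 inference gains over FP32 on the same model. The sketch below compares the "a" system's values from the tables above; it is arithmetic only.

    # FP32 vs FP16 XNNPACK latencies (us) for the "a" system, taken from the tables
    # above; the ratio shows the FP16 speedup per model.
    fp32 = {"MobileNetV1": 1252, "MobileNetV2": 1495, "MobileNetV3Large": 1810, "MobileNetV3Small": 979}
    fp16 = {"MobileNetV1": 1143, "MobileNetV2": 1190, "MobileNetV3Large": 1498, "MobileNetV3Small": 920}
    for model in fp32:
        print(f"{model}: FP16 is {fp32[model] / fp16[model]:.2f}x faster than FP32")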

Y-Cruncher

Pi Digits To Calculate: 1B

Seconds, Fewer Is Better - Y-Cruncher 0.8.5 - a: 18.49 | 4484PX: 18.38 | px: 18.37

Y-Cruncher

Pi Digits To Calculate: 500M

Seconds, Fewer Is Better - Y-Cruncher 0.8.5 - a: 8.772 | 4484PX: 8.688 | px: 8.623
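
A small sanity check on the Y-Cruncher rows: going from 500M to 1B digits roughly doubles the work, and the measured times scale close to that on all three systems. The sketch below computes the ratio from the values above; the "roughly doubles" framing is an approximation, since pi computation scales slightly worse than linearly in digit count.

    # Ratio of 1B-digit to 500M-digit Y-Cruncher times, from the values above.
    t_500m = {"a": 8.772, "4484PX": 8.688, "px": 8.623}
    t_1b = {"a": 18.49, "4484PX": 18.38, "px": 18.37}
    for name in t_500m:
        print(f"{name}: 1B / 500M time ratio = {t_1b[name] / t_500m[name]:.2f}")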


Phoronix Test Suite v10.8.5