AMD EPYC 9755 1P - SMT On/Off Comparison

AMD EPYC 9755 with SMT on/off comparison for a future article by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2410160-NE-TURINSMTO45
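For reference, that comparison can be reproduced locally along these lines; only the result-file ID is taken from this page, and the preceding system check is simply the usual Phoronix Test Suite workflow rather than something mandated by this result file:

  # Optional: confirm what PTS detects about the local hardware/software stack
  phoronix-test-suite system-info

  # Run the same test selection and compare local numbers against result file 2410160-NE-TURINSMTO45
  phoronix-test-suite benchmark 2410160-NE-TURINSMTO45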

Run Management

  Result Identifier       Run Date        Test Duration
  SMT On - Default        September 16    14 Hours, 1 Minute
  SMT Off                 September 18    12 Hours, 29 Minutes
  Average Run Duration                    13 Hours, 15 Minutes


AMD EPYC 9755 1P - SMT On/Off Comparison - System Details (OpenBenchmarking.org / Phoronix Test Suite)

  Processor: AMD EPYC 9755 128-Core @ 2.70GHz (128 Cores / 256 Threads) [SMT On - Default]; AMD EPYC 9755 128-Core @ 2.70GHz (128 Cores) [SMT Off]
  Motherboard: AMD VOLCANO (RVOT1000D BIOS)
  Chipset: AMD Device 153a
  Memory: 12 x 64GB DDR5-6000MT/s Samsung M321R8GA0PB1-CCPKC
  Disk: 2 x 1920GB KIOXIA KCD8XPUG1T92
  Graphics: ASPEED
  Network: Broadcom NetXtreme BCM5720 PCIe
  OS: Ubuntu 24.04
  Kernel: 6.10.0-phx (x86_64)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1200

  System Logs:
  - Transparent Huge Pages: madvise
  - GCC configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-OiuXZC/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-OiuXZC/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
  - CPU Microcode: 0xb002110
  - OpenJDK Runtime Environment (build 21.0.3-ea+7-Ubuntu-1build1)
  - Python 3.12.2
  - Security, SMT On - Default: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
  - Security, SMT Off: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: disabled; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
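This result file does not record how SMT was toggled between runs (on EPYC platforms it is typically a BIOS/firmware option); purely as a sketch, the same on/off states can also be reached at runtime through the Linux kernel's SMT control in sysfs, which is an assumption about methodology rather than something documented above:

  # Report the current SMT state and thread topology
  cat /sys/devices/system/cpu/smt/control        # e.g. "on" or "off"
  lscpu | grep -i 'thread(s) per core'

  # Offline all SMT sibling threads (128C/256T becomes 128C/128T), then restore them
  echo off | sudo tee /sys/devices/system/cpu/smt/control
  echo on  | sudo tee /sys/devices/system/cpu/smt/control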

SMT On - Default vs. SMT Off Comparison (OpenBenchmarking.org overview chart): per-test percentage deltas between the two configurations across the full benchmark suite, with the largest swings (up to roughly 210%, e.g. the BRL-CAD VGR Performance Metric in favor of SMT On) coming from the BRL-CAD, ONNX Runtime, and OpenVINO workloads; the individual results are charted below.

AMD EPYC 9755 1P - SMT On/Off Comparison - detailed result table: side-by-side SMT On - Default and SMT Off values for every test in this comparison (WRF, OpenVKL, BRL-CAD, NWChem, HPCG, LuxCoreRender, Stockfish, OpenSSL, Speedb, RocksDB, TensorFlow, OpenVINO, ONNX Runtime, OSPRay, OSPRay Studio, Blender, ClickHouse, PostgreSQL, and others); the individual results are presented in the charts that follow.

WRF

WRF, the Weather Research and Forecasting Model, is a "next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. It features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility." Learn more via the OpenBenchmarking.org test page.

WRF 4.2.2 - Input: conus 2.5km (Seconds, Fewer Is Better)
  SMT Off: 5583.48
  SMT On - Default: 5574.36
  1. (F9X) gfortran options: -O2 -ftree-vectorize -funroll-loops -ffree-form -fconvert=big-endian -frecord-marker=4 -fallow-invalid-boz -lesmf_time -lwrfio_nf -lnetcdff -lnetcdf -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 2.0.0 - Benchmark: vklBenchmarkCPU ISPC (Items / Sec, More Is Better)
  SMT Off: 2880 (SE +/- 1.45, N = 3; MIN: 230 / MAX: 36075)
  SMT On - Default: 3654 (SE +/- 0.88, N = 3; MIN: 293 / MAX: 42376)

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.38.2 - VGR Performance Metric (VGR Performance Metric, More Is Better)
  SMT Off: 1901425
  SMT On - Default: 5899055
  1. (CXX) g++ options: -std=c++17 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lnetpbm -lregex_brl -lz_brl -lassimp -ldl -lm -ltk8.6

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2 - Input: C240 Buckyball (Seconds, Fewer Is Better)
  SMT Off: 1319.6
  SMT On - Default: 1326.2
  1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 - X Y Z: 144 144 144 - RT: 60 (GFLOP/s, More Is Better)
  SMT Off: 63.78 (SE +/- 0.02, N = 3)
  SMT On - Default: 63.67 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Orange Juice - Acceleration: CPU (M samples/sec, More Is Better)
  SMT Off: 24.44 (SE +/- 0.24, N = 15; MIN: 21.11 / MAX: 32.81)
  SMT On - Default: 32.02 (SE +/- 0.52, N = 15; MIN: 26.48 / MAX: 43.04)

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 1024 CPU threads. Learn more via the OpenBenchmarking.org test page.
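As a hedged illustration of what this profile exercises, Stockfish ships a built-in bench command that takes a hash size, thread count, and search depth; the values below are placeholders and not necessarily what the test profile passes:

  # stockfish bench [hash MB] [threads] [depth] - placeholder parameters
  stockfish bench 16384 256 13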

Stockfish 16.1 - Chess Benchmark (Nodes Per Second, More Is Better)
  SMT Off: 231699004 (SE +/- 4076501.74, N = 15)
  SMT On - Default: 307343923 (SE +/- 4338134.07, N = 15)
  1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver

LuxCoreRender


LuxCoreRender 2.6 - Scene: DLSC - Acceleration: CPU (M samples/sec, More Is Better)
  SMT Off: 17.16 (SE +/- 0.21, N = 15; MIN: 15.64 / MAX: 21.09)
  SMT On - Default: 20.94 (SE +/- 0.33, N = 15; MIN: 19.5 / MAX: 27)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
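The following is a rough sketch of the kind of "openssl speed" invocations behind the algorithms charted below; the -multi worker count is a placeholder, not necessarily what the test profile uses:

  # Multi-process throughput runs roughly matching the algorithms below
  openssl speed -multi 256 -evp aes-256-gcm
  openssl speed -multi 256 -evp chacha20-poly1305
  openssl speed -multi 256 sha256 sha512
  openssl speed -multi 256 rsa4096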

OpenSSL 3.3 - Algorithm: RSA4096 (verify/s, More Is Better)
  SMT Off: 2407045.5 (SE +/- 145.15, N = 3)
  SMT On - Default: 2755042.3 (SE +/- 331.00, N = 3)
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Speedb

Speedb is a next-generation key value storage engine that is RocksDB compatible and aiming for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.

Speedb 2.7 - Test: Read While Writing (Op/s, More Is Better)
  SMT Off: 10067442 (SE +/- 205985.46, N = 15)
  SMT On - Default: 18687897 (SE +/- 433421.14, N = 12)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
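For orientation, the reference script named above is typically driven along these lines for a CPU ResNet-50 run; the flag values are placeholders mirroring one of the configurations charted below, not the exact arguments recorded in this result file:

  # CPU-only ResNet-50 throughput with the tf_cnn_benchmarks reference script
  python tf_cnn_benchmarks.py --device=cpu --data_format=NHWC --model=resnet50 --batch_size=512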

TensorFlow 2.16.1 - Device: CPU - Batch Size: 512 - Model: ResNet-50 (images/sec, More Is Better)
  SMT Off: 250.82 (SE +/- 0.10, N = 3)
  SMT On - Default: 246.24 (SE +/- 0.52, N = 3)

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d (Seconds, Fewer Is Better)
  SMT Off: 201.27 (SE +/- 0.21, N = 3)
  SMT On - Default: 199.84 (SE +/- 0.78, N = 3)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2024.0 - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better)
  SMT Off: 32.84 (SE +/- 0.20, N = 3; MIN: 26.14 / MAX: 69.48)
  SMT On - Default: 87.14 (SE +/- 0.54, N = 15; MIN: 35.35 / MAX: 200.15)

OpenVINO 2024.0 - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  SMT Off: 972.45 (SE +/- 5.91, N = 3)
  SMT On - Default: 733.80 (SE +/- 4.77, N = 15)

  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 9.0 - Test: Read While Writing (Op/s, More Is Better)
  SMT Off: 10337993 (SE +/- 37116.95, N = 3)
  SMT On - Default: 17063021 (SE +/- 241581.30, N = 15)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenSSL


OpenSSL 3.3 - Algorithm: ChaCha20 (byte/s, More Is Better)
  SMT Off: 884300273620 (SE +/- 277988469.18, N = 3)
  SMT On - Default: 1186359294450 (SE +/- 507390352.95, N = 3)

OpenSSL 3.3 - Algorithm: AES-256-GCM (byte/s, More Is Better)
  SMT Off: 1769373865300 (SE +/- 1218174976.10, N = 3)
  SMT On - Default: 1837765850747 (SE +/- 2482404044.78, N = 3)

OpenSSL 3.3 - Algorithm: AES-128-GCM (byte/s, More Is Better)
  SMT Off: 1999184566043 (SE +/- 3907203464.57, N = 3)
  SMT On - Default: 2007365351470 (SE +/- 679943093.73, N = 3)

OpenSSL 3.3 - Algorithm: ChaCha20-Poly1305 (byte/s, More Is Better)
  SMT Off: 597729532223 (SE +/- 501642086.81, N = 3)
  SMT On - Default: 807009841427 (SE +/- 89801965.81, N = 3)

OpenSSL 3.3 - Algorithm: SHA512 (byte/s, More Is Better)
  SMT Off: 69546046390 (SE +/- 155480309.48, N = 3)
  SMT On - Default: 72359767240 (SE +/- 163780192.60, N = 3)

OpenSSL 3.3 - Algorithm: SHA256 (byte/s, More Is Better)
  SMT Off: 143420589203 (SE +/- 126118866.35, N = 3)
  SMT On - Default: 186660393313 (SE +/- 76407988.26, N = 3)

  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
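As a minimal sketch of what the two build targets amount to (the test profile's exact invocation may differ):

  # defconfig: the default configuration for the target architecture
  make defconfig && time make -j$(nproc)

  # allmodconfig: enable every kernel module that can be built
  make mrproper && make allmodconfig && time make -j$(nproc)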

Timed Linux Kernel Compilation 6.8 - Build: allmodconfig (Seconds, Fewer Is Better)
  SMT Off: 155.01 (SE +/- 0.37, N = 3)
  SMT On - Default: 201.76 (SE +/- 0.88, N = 3)

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version. Learn more via the OpenBenchmarking.org test page.

CloverLeaf 1.3 - Input: clover_bm16 (Seconds, Fewer Is Better)
  SMT Off: 171.22 (SE +/- 0.10, N = 3)
  SMT On - Default: 178.93 (SE +/- 0.24, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 3.1 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
  SMT Off: 48.88 (SE +/- 0.02, N = 3)
  SMT On - Default: 54.40 (SE +/- 0.01, N = 3)

LuxCoreRender


LuxCoreRender 2.6 - Scene: LuxCore Benchmark - Acceleration: CPU (M samples/sec, More Is Better)
  SMT Off: 10.77 (SE +/- 0.14, N = 3; MIN: 4.91 / MAX: 12.41)
  SMT On - Default: 13.96 (SE +/- 0.14, N = 15; MIN: 6.55 / MAX: 16.95)

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21 - Variant: GhostRider - Hash Count: 1M (H/s, More Is Better)
  SMT Off: 16656.3 (SE +/- 6.58, N = 3)
  SMT On - Default: 19873.0 (SE +/- 912.79, N = 15)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MT - Gridding (Million Grid Points Per Second, More Is Better)
  SMT Off: 15967.9 (SE +/- 13.89, N = 3)
  SMT On - Default: 16464.1 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

PyTorch

This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.

PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-152 (batches/sec, More Is Better)
  SMT Off: 17.28 (SE +/- 0.09, N = 3; MIN: 15.38 / MAX: 17.87)
  SMT On - Default: 17.42 (SE +/- 0.19, N = 4; MIN: 16.53 / MAX: 18.16)

TensorFlow


TensorFlow 2.16.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, More Is Better)
  SMT Off: 209.16 (SE +/- 0.40, N = 3)
  SMT On - Default: 203.33 (SE +/- 0.65, N = 3)

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all separate queries performed as an aggregate. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better)
  SMT Off: 806.99 (SE +/- 4.24, N = 3; MIN: 82.19 / MAX: 7500)
  SMT On - Default: 724.64 (SE +/- 4.70, N = 3; MIN: 85.11 / MAX: 6666.67)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better)
  SMT Off: 804.06 (SE +/- 5.07, N = 3; MIN: 82.99 / MAX: 7500)
  SMT On - Default: 729.56 (SE +/- 5.27, N = 3; MIN: 81.63 / MAX: 7500)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better)
  SMT Off: 774.87 (SE +/- 3.29, N = 3; MIN: 80.97 / MAX: 6666.67)
  SMT On - Default: 698.12 (SE +/- 6.12, N = 3; MIN: 80.86 / MAX: 6666.67)

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
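As a rough sketch of the configuration charted below (scaling factor 100, 1000 clients, read-only), pgbench can be driven as follows; the database name, thread count, and duration are placeholders rather than values recorded in this result file:

  # Build a scale-factor-100 dataset, then run a read-only (SELECT-only) workload
  createdb pgbench_test
  pgbench -i -s 100 pgbench_test
  pgbench -c 1000 -j 64 -S -T 60 pgbench_test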

PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  SMT Off: 0.223 (SE +/- 0.002, N = 3)
  SMT On - Default: 0.185 (SE +/- 0.001, N = 3)

PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only (TPS, More Is Better)
  SMT Off: 4491984 (SE +/- 52234.47, N = 3)
  SMT On - Default: 5418672 (SE +/- 39392.13, N = 3)

  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, More Is Better)
  SMT Off: 68.70 (SE +/- 0.72, N = 3)
  SMT On - Default: 67.78 (SE +/- 0.06, N = 3)
  1. (CXX) g++ options: -O3 -lm -ldl

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research. Gem5 is widely used for computer architecture research within the industry, academia, and more. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 23.0.1 - Time To Compile (Seconds, Fewer Is Better)
  SMT Off: 123.38 (SE +/- 1.07, N = 3)
  SMT On - Default: 128.13 (SE +/- 1.52, N = 3)

OSPRay


OSPRay 3.1 - Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better)
  SMT Off: 48.93 (SE +/- 0.02, N = 3)
  SMT On - Default: 54.39 (SE +/- 0.02, N = 3)

PyTorch


PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-152 (batches/sec, More Is Better)
  SMT Off: 17.41 (SE +/- 0.09, N = 3; MIN: 15.31 / MAX: 17.97)
  SMT On - Default: 17.46 (SE +/- 0.12, N = 3; MIN: 16.79 / MAX: 17.99)

PyTorch 2.2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-152 (batches/sec, More Is Better)
  SMT Off: 17.59 (SE +/- 0.15, N = 3; MIN: 3.09 / MAX: 18.18)
  SMT On - Default: 17.49 (SE +/- 0.05, N = 3; MIN: 16.94 / MAX: 18.01)

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 21.7.2 - Time To Compile (Seconds, Fewer Is Better)
  SMT Off: 112.00 (SE +/- 0.10, N = 3)
  SMT On - Default: 109.56 (SE +/- 0.21, N = 3)

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: Chrysler Neon 1M (Seconds, Fewer Is Better)
  SMT Off: 76.42 (SE +/- 0.05, N = 3)
  SMT On - Default: 77.26 (SE +/- 0.11, N = 3)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 4.1 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
  SMT Off: 107.34 (SE +/- 0.04, N = 3)
  SMT On - Default: 86.62 (SE +/- 0.12, N = 3)

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 - Build System: Ninja (Seconds, Fewer Is Better)
  SMT Off: 90.99 (SE +/- 0.05, N = 3)
  SMT On - Default: 88.89 (SE +/- 0.32, N = 3)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time (Seconds, Fewer Is Better)
  SMT Off: 156.58
  SMT On - Default: 160.37

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Mesh Time (Seconds, Fewer Is Better)
  SMT Off: 94.26
  SMT On - Default: 120.35

  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 4.0 - Time To Compile (Seconds, Fewer Is Better)
  SMT Off: 80.33 (SE +/- 0.06, N = 3)
  SMT On - Default: 83.29 (SE +/- 0.10, N = 3)

OpenRadioss


OpenRadioss 2023.09.15 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, Fewer Is Better)
  SMT Off: 73.44 (SE +/- 0.14, N = 3)
  SMT On - Default: 73.67 (SE +/- 0.07, N = 3)

RocksDB


RocksDB 9.0 - Test: Random Read (Op/s, More Is Better)
  SMT Off: 820357836 (SE +/- 9096243.38, N = 5)
  SMT On - Default: 786896580 (SE +/- 5260460.78, N = 3)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 1.0 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  SMT Off: 25966 (SE +/- 34.28, N = 3)
  SMT On - Default: 20379 (SE +/- 39.68, N = 3)

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, More Is Better)
  SMT Off: 42873.22 (SE +/- 169.28, N = 3)
  SMT On - Default: 37645.97 (SE +/- 450.61, N = 15)
  1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
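A hedged sketch of a memtier_benchmark run against a local memcached instance with the 1:100 set-to-get ratio used below; the host, port, thread, and client counts are placeholders:

  # Roughly 1 SET per 100 GETs against memcached on the default port
  memtier_benchmark --protocol=memcache_text -s 127.0.0.1 -p 11211 --ratio=1:100 --threads=32 --clients=16 --test-time=60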

Memcached 1.6.19 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better)
  SMT Off: 8825111.21 (SE +/- 110048.07, N = 4)
  SMT On - Default: 13316660.10 (SE +/- 136953.60, N = 3)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

TensorFlow


TensorFlow 2.16.1 - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, More Is Better)
  SMT Off: 839.05 (SE +/- 1.83, N = 3)
  SMT On - Default: 819.05 (SE +/- 3.19, N = 3)

OSPRay


OSPRay 3.1 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
  SMT Off: 43.97 (SE +/- 0.02, N = 3)
  SMT On - Default: 55.18 (SE +/- 0.01, N = 3)

OpenVINO


OpenVINO 2024.0 - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  SMT Off: 158.97 (SE +/- 0.05, N = 3; MIN: 147.03 / MAX: 169.16)
  SMT On - Default: 329.53 (SE +/- 0.54, N = 3; MIN: 165.13 / MAX: 354.63)

OpenVINO 2024.0 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  SMT Off: 201.01 (SE +/- 0.09, N = 3)
  SMT On - Default: 193.80 (SE +/- 0.32, N = 3)

OpenVINO 2024.0 - Model: Noise Suppression Poconet-Like FP16 - Device: CPU (ms, Fewer Is Better)
  SMT Off: 7.74 (SE +/- 0.03, N = 3; MIN: 5.2 / MAX: 16.54)
  SMT On - Default: 10.73 (SE +/- 0.05, N = 3; MIN: 6.4 / MAX: 32.23)

OpenVINO 2024.0 - Model: Noise Suppression Poconet-Like FP16 - Device: CPU (FPS, More Is Better)
  SMT Off: 7974.96 (SE +/- 33.26, N = 3)
  SMT On - Default: 10557.16 (SE +/- 33.49, N = 3)

  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OSPRay Studio


OSPRay Studio 1.0 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  SMT Off: 34519 (SE +/- 23.25, N = 3)
  SMT On - Default: 23981 (SE +/- 32.92, N = 3)

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
  SMT Off: 4598401.14 (SE +/- 8927.75, N = 3)
  SMT On - Default: 6042366.34 (SE +/- 10789.95, N = 3)
  1. (CC) gcc options: -O2 -lrt" -lrt

OSPRay Studio


OSPRay Studio 1.0 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  SMT Off: 971 (SE +/- 0.00, N = 3)
  SMT On - Default: 753 (SE +/- 0.58, N = 3)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.17 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better)
  SMT Off: 23.99 (SE +/- 0.11, N = 3)
  SMT On - Default: 22.24 (SE +/- 0.03, N = 3)

ONNX Runtime 1.17 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
  SMT Off: 41.68 (SE +/- 0.19, N = 3)
  SMT On - Default: 44.96 (SE +/- 0.06, N = 3)

  1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenVINO


OpenVINO 2024.0 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better)
  SMT Off: 32.58 (SE +/- 0.08, N = 3; MIN: 22.75 / MAX: 50.14)
  SMT On - Default: 57.57 (SE +/- 0.16, N = 3; MIN: 34.99 / MAX: 106.02)

OpenVINO 2024.0 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better)
  SMT Off: 979.20 (SE +/- 2.32, N = 3)
  SMT On - Default: 1109.75 (SE +/- 2.99, N = 3)

OpenVINO 2024.0 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
  SMT Off: 4.22 (SE +/- 0.01, N = 3; MIN: 3.09 / MAX: 10.37)
  SMT On - Default: 7.22 (SE +/- 0.01, N = 3; MIN: 4.48 / MAX: 26.34)

OpenVINO 2024.0 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
  SMT Off: 7335.76 (SE +/- 19.63, N = 3)
  SMT On - Default: 8730.88 (SE +/- 8.58, N = 3)

OpenVINO 2024.0 - Model: Person Re-Identification Retail FP16 - Device: CPU (ms, Fewer Is Better)
  SMT Off: 2.65 (SE +/- 0.00, N = 3; MIN: 1.8 / MAX: 9.28)
  SMT On - Default: 4.48 (SE +/- 0.00, N = 3; MIN: 1.99 / MAX: 22.31)

OpenVINO 2024.0 - Model: Person Re-Identification Retail FP16 - Device: CPU (FPS, More Is Better)
  SMT Off: 11265.53 (SE +/- 6.68, N = 3)
  SMT On - Default: 13949.89 (SE +/- 14.83, N = 3)

OpenVINO 2024.0 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  SMT Off: 9.22 (SE +/- 0.01, N = 3; MIN: 7.78 / MAX: 20.73)
  SMT On - Default: 19.36 (SE +/- 0.01, N = 3; MIN: 9.99 / MAX: 44.41)

OpenVINO 2024.0 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS, More Is Better)
  SMT Off: 3419.61 (SE +/- 7.32, N = 3)
  SMT On - Default: 3287.43 (SE +/- 2.87, N = 3)

OpenVINO 2024.0 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  SMT Off: 0.23 (SE +/- 0.00, N = 3; MIN: 0.13 / MAX: 19.9)
  SMT On - Default: 0.48 (SE +/- 0.01, N = 3; MIN: 0.13 / MAX: 26.24)

OpenVINO 2024.0 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
  SMT Off: 175803.47 (SE +/- 585.07, N = 3)
  SMT On - Default: 190777.83 (SE +/- 1255.01, N = 3)

  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

ONNX Runtime


ONNX Runtime 1.17 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better)
  SMT Off: 466.91 (SE +/- 4.66, N = 3)
  SMT On - Default: 309.76 (SE +/- 4.35, N = 3)

ONNX Runtime 1.17 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
  SMT Off: 2.14216 (SE +/- 0.02146, N = 3)
  SMT On - Default: 3.22956 (SE +/- 0.04511, N = 3)

  1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.11 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better)
  SMT Off: 10.503 (SE +/- 0.065, N = 3)
  SMT On - Default: 9.810 (SE +/- 0.028, N = 3)
  1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

OpenVINO


OpenVINO 2024.0 - Model: Face Detection Retail FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  SMT Off: 2.12 (SE +/- 0.00, N = 3; MIN: 1.56 / MAX: 9.64)
  SMT On - Default: 3.88 (SE +/- 0.00, N = 3; MIN: 1.55 / MAX: 24.23)

OpenVINO 2024.0 - Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS, More Is Better)
  SMT Off: 27294.20 (SE +/- 28.47, N = 3)
  SMT On - Default: 31655.57 (SE +/- 0.83, N = 3)

  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

ONNX Runtime


ONNX Runtime 1.17 - Model: yolov4 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better)
  SMT Off: 131.70 (SE +/- 0.89, N = 3)
  SMT On - Default: 68.72 (SE +/- 0.11, N = 3)

ONNX Runtime 1.17 - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
  SMT Off: 7.59376 (SE +/- 0.05138, N = 3)
  SMT On - Default: 14.55110 (SE +/- 0.02257, N = 3)

ONNX Runtime 1.17 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better)
  SMT Off: 80.44 (SE +/- 0.50, N = 3)
  SMT On - Default: 40.75 (SE +/- 0.21, N = 3)

ONNX Runtime 1.17 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better)
  SMT Off: 12.43 (SE +/- 0.08, N = 3)
  SMT On - Default: 24.54 (SE +/- 0.13, N = 3)

  1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenVINO


OpenVINO 2024.0 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  SMT Off: 3.06 (SE +/- 0.00, N = 3; MIN: 1.95 / MAX: 14.46)
  SMT On - Default: 5.62 (SE +/- 0.00, N = 3; MIN: 2.01 / MAX: 29.21)

OpenVINO 2024.0 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  SMT Off: 9772.42 (SE +/- 10.17, N = 3)
  SMT On - Default: 11232.26 (SE +/- 2.97, N = 3)

OpenVINO 2024.0 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  SMT Off: 15.77 (SE +/- 0.04, N = 3; MIN: 14.16 / MAX: 30.7)
  SMT On - Default: 26.67 (SE +/- 0.03, N = 3; MIN: 15.22 / MAX: 47.89)

OpenVINO 2024.0 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS, More Is Better)
  SMT Off: 4044.56 (SE +/- 9.68, N = 3)
  SMT On - Default: 4774.33 (SE +/- 5.09, N = 3)

  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

ONNX Runtime

ONNX Runtime 1.17 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): SMT Off: 49.59; SMT On - Default: 24.27

ONNX Runtime 1.17 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): SMT Off: 20.17; SMT On - Default: 41.21

OpenVINO

OpenVINO 2024.0 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): SMT Off: 3.21 (MIN: 2.13 / MAX: 11.62); SMT On - Default: 6.61 (MIN: 2.25 / MAX: 28.15)

OpenVINO 2024.0 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better): SMT Off: 18690.32; SMT On - Default: 18679.36

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 1.0 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better): SMT Off: 817; SMT On - Default: 639

ONNX Runtime

ONNX Runtime 1.17 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): SMT Off: 2.82229; SMT On - Default: 1.02998

ONNX Runtime 1.17 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): SMT Off: 354.05; SMT On - Default: 969.28

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample high resolution (currently 15400 x 6940) JPEG image. Learn more via the OpenBenchmarking.org test page.
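As a rough sketch of the idea behind these operation tests (the actual profile uses GraphicsMagick's own OpenMP-threaded benchmark mode), one could time a single operation from Python via the gm CLI; the file paths and the 90-degree swirl amount below are arbitrary.

```python
# Rough sketch, not the pts test profile: repeatedly apply one GraphicsMagick
# operation to a JPEG via the `gm convert` CLI and report iterations per minute.
import subprocess
import time

SRC, DST = "sample.jpg", "/tmp/out.jpg"   # hypothetical input/output paths

start = time.perf_counter()
iterations = 0
while time.perf_counter() - start < 60:   # run for roughly one minute
    subprocess.run(["gm", "convert", SRC, "-swirl", "90", DST], check=True)
    iterations += 1
print(f"{iterations} iterations per minute (Swirl)")
```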

GraphicsMagick 1.3.43 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better): SMT Off: 260; SMT On - Default: 313

ONNX Runtime

ONNX Runtime 1.17 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): SMT Off: 10.95180; SMT On - Default: 4.47392

ONNX Runtime 1.17 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): SMT Off: 91.30; SMT On - Default: 223.48

Speedb

Speedb is a next-generation key-value storage engine that is RocksDB-compatible and aims for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.

Speedb 2.7 - Test: Random Read (Op/s, More Is Better): SMT Off: 854180701; SMT On - Default: 822102828

GraphicsMagick

GraphicsMagick 1.3.43 - Operation: Enhanced (Iterations Per Minute, More Is Better): SMT Off: 376; SMT On - Default: 455

GraphicsMagick 1.3.43 - Operation: Swirl (Iterations Per Minute, More Is Better): SMT Off: 798; SMT On - Default: 851

OSPRay Studio

OSPRay Studio 1.0 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better): SMT Off: 15437; SMT On - Default: 11982

PyTorch

This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.
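A minimal sketch along the same lines (not the pytorch-benchmark harness itself): time CPU forward passes of a torchvision ResNet-50 at batch size 1; the warm-up and iteration counts are arbitrary.

```python
# Minimal sketch: measure batches/sec for ResNet-50 inference on the CPU.
import time
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()
x = torch.randn(1, 3, 224, 224)   # batch size 1, as in the "Batch Size: 1" runs

with torch.no_grad():
    for _ in range(5):            # brief warm-up
        model(x)
    start = time.perf_counter()
    runs = 50
    for _ in range(runs):
        model(x)
    elapsed = time.perf_counter() - start

print(f"{runs / elapsed:.2f} batches/sec")
```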

PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, More Is Better): SMT Off: 20.67 (MIN: 17.64 / MAX: 21.34); SMT On - Default: 20.24 (MIN: 19.69 / MAX: 20.76)

OSPRay Studio

OSPRay Studio 1.0 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better): SMT Off: 12934; SMT On - Default: 10154

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if complementary metrics are desired. Learn more via the OpenBenchmarking.org test page.
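A minimal sketch of the images/sec measurement idea (not tf_cnn_benchmarks itself); tf.keras ships InceptionV3 rather than the original GoogLeNet, so it stands in here, and the batch size and step count are arbitrary.

```python
# Minimal sketch: measure images/sec for a GoogLeNet-class model on the CPU.
import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights=None)
batch = np.random.rand(64, 299, 299, 3).astype(np.float32)

model.predict(batch, verbose=0)            # warm-up / graph build
start = time.perf_counter()
steps = 10
for _ in range(steps):
    model.predict(batch, verbose=0)
elapsed = time.perf_counter() - start
print(f"{steps * batch.shape[0] / elapsed:.1f} images/sec")
```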

TensorFlow 2.16.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, More Is Better): SMT Off: 152.83; SMT On - Default: 137.62

PyTorch

PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (batches/sec, More Is Better): SMT Off: 43.19 (MIN: 39.77 / MAX: 44.9); SMT On - Default: 43.68 (MIN: 41.7 / MAX: 44.98)

PyTorch 2.2.1 - Device: CPU - Batch Size: 512 - Model: ResNet-50 (batches/sec, More Is Better): SMT Off: 43.01 (MIN: 40.3 / MAX: 44.35); SMT On - Default: 43.83 (MIN: 41.6 / MAX: 45.11)

PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (batches/sec, More Is Better): SMT Off: 43.68 (MIN: 40.37 / MAX: 44.97); SMT On - Default: 43.61 (MIN: 41.9 / MAX: 45.08)

TensorFlow

TensorFlow 2.16.1 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, More Is Better): SMT Off: 691.93; SMT On - Default: 659.11

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.32 - Configuration: Multi-Threaded (MFLOPS, More Is Better): SMT Off: 422695.2; SMT On - Default: 465436.8

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.7 (k/s, More Is Better): SMT Off: 198070.06; SMT On - Default: 133016.39

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MPI - Gridding (Mpix/sec, More Is Better): SMT Off: 74970.2; SMT On - Default: 74093.3

ASKAP 1.0 - Test: tConvolve MPI - Degridding (Mpix/sec, More Is Better): SMT Off: 69207.5; SMT On - Default: 70762.8

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 4.1 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better): SMT Off: 37.11; SMT On - Default: 29.75

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig for building all possible kernel modules. Learn more via the OpenBenchmarking.org test page.
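A minimal sketch of what is being timed here (the pts wrapper handles source extraction and validation itself); the kernel source path is a placeholder.

```python
# Minimal sketch: configure a kernel tree with defconfig and time a parallel build.
import os
import subprocess
import time

KERNEL_SRC = "/path/to/linux-6.8"          # hypothetical, already-extracted source tree

subprocess.run(["make", "defconfig"], cwd=KERNEL_SRC, check=True)
start = time.perf_counter()
subprocess.run(["make", f"-j{os.cpu_count()}"], cwd=KERNEL_SRC, check=True)
print(f"defconfig build took {time.perf_counter() - start:.2f} seconds")
```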

Timed Linux Kernel Compilation 6.8 - Build: defconfig (Seconds, Fewer Is Better): SMT Off: 21.22; SMT On - Default: 23.34

srsRAN Project

srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN Project 23.10.1-20240325 - Test: PDSCH Processor Benchmark, Throughput Total (Mbps, More Is Better): SMT Off: 44608.9; SMT On - Default: 19454.1

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 1.6 - Threads: 256 - Buffer Length: 256 - Filter Length: 512 (samples/s, More Is Better): SMT Off: 2497033333; SMT On - Default: 2855466667

Liquid-DSP 1.6 - Threads: 256 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better): SMT Off: 6711066667; SMT On - Default: 8151900000

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 2023.03.14 - Test: WPA PSK (Real C/S, More Is Better): SMT Off: 939777; SMT On - Default: 1361333

Liquid-DSP

Liquid-DSP 1.6 - Threads: 256 - Buffer Length: 256 - Filter Length: 32 (samples/s, More Is Better): SMT Off: 5482500000; SMT On - Default: 8494733333

John The Ripper

John The Ripper 2023.03.14 - Test: Blowfish (Real C/S, More Is Better): SMT Off: 236094; SMT On - Default: 323010

John The Ripper 2023.03.14 - Test: bcrypt (Real C/S, More Is Better): SMT Off: 236198; SMT On - Default: 323000

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better): SMT Off: 541635; SMT On - Default: 846736

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better): SMT Off: 776658; SMT On - Default: 908159

Blender

Blender 4.1 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better): SMT Off: 29.37; SMT On - Default: 24.00

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better): SMT Off: 3145544333; SMT On - Default: 3177082000

VVenC

VVenC 1.11 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better): SMT Off: 25.62; SMT On - Default: 23.74

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
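Since GPAW is itself a Python code driven through ASE, a much smaller single-point calculation than the carbon-nanotube input used here looks roughly like the following sketch; the molecule, mode, and functional are arbitrary illustration choices.

```python
# Minimal GPAW/ASE sketch: a single-point DFT calculation for a water molecule.
from ase.build import molecule
from gpaw import GPAW

atoms = molecule("H2O")
atoms.center(vacuum=3.0)                   # add vacuum around the molecule
atoms.calc = GPAW(mode="lcao", xc="PBE", txt="h2o.txt")
energy = atoms.get_potential_energy()      # triggers the self-consistent calculation
print(f"Potential energy: {energy:.4f} eV")
```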

GPAW 23.6 - Input: Carbon Nanotube (Seconds, Fewer Is Better): SMT Off: 23.21; SMT On - Default: 23.82

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: Cell Phone Drop Test (Seconds, Fewer Is Better): SMT Off: 17.12; SMT On - Default: 17.87

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better): SMT Off: 443291.71; SMT On - Default: 385558.53

PyTorch

PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, More Is Better): SMT Off: 52.14 (MIN: 47.46 / MAX: 54.1); SMT On - Default: 51.20 (MIN: 48.71 / MAX: 53.18)

TensorFlow

TensorFlow 2.16.1 - Device: CPU - Batch Size: 1 - Model: GoogLeNet (images/sec, More Is Better): SMT Off: 40.98; SMT On - Default: 22.69

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: sedovbig (Hydro Cycle Time - Seconds, Fewer Is Better): SMT Off: 4.100339; SMT On - Default: 4.393606

TensorFlow

TensorFlow 2.16.1 - Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, More Is Better): SMT Off: 2675.12; SMT On - Default: 2665.92

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version. Learn more via the OpenBenchmarking.org test page.

CloverLeaf 1.3 - Input: clover_bm64_short (Seconds, Fewer Is Better): SMT Off: 21.03; SMT On - Default: 22.09

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
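A rough sketch of driving the reference SvtAv1EncApp encoder from Python (the test profile invokes it directly); the input file, frame count, and the -i/-b/--preset flag spelling are assumptions here, not taken from the test profile.

```python
# Rough sketch: run the SVT-AV1 reference encoder and derive frames per second.
import subprocess
import time

INPUT, OUTPUT, FRAMES = "Bosphorus_3840x2160.y4m", "/tmp/out.ivf", 600  # hypothetical

start = time.perf_counter()
subprocess.run(["SvtAv1EncApp", "-i", INPUT, "-b", OUTPUT, "--preset", "8"], check=True)
elapsed = time.perf_counter() - start
print(f"{FRAMES / elapsed:.2f} frames per second at preset 8")
```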

SVT-AV1 2.0 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better): SMT Off: 11.51; SMT On - Default: 11.28

TensorFlow

TensorFlow 2.16.1 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, More Is Better): SMT Off: 432.49; SMT On - Default: 361.38

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, More Is Better): SMT Off: 29.66; SMT On - Default: 39.14

Pennant

Pennant 1.0.1 - Test: leblancbig (Hydro Cycle Time - Seconds, Fewer Is Better): SMT Off: 1.961989; SMT On - Default: 2.761260

srsRAN Project

srsRAN Project 23.10.1-20240325 - Test: PUSCH Processor Benchmark, Throughput Total (Mbps, More Is Better): SMT Off: 5264.2 (MIN: 3398.1); SMT On - Default: 7954.2 (MIN: 5432.6 / MAX: 7954.6)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 4.1 - Blend File: Junkshop - Compute: CPU-Only (Seconds, Fewer Is Better): SMT Off: 15.57; SMT On - Default: 12.71

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.7 - Preset: Very Thorough (MT/s, More Is Better): SMT Off: 14.57; SMT On - Default: 15.72

Blender

Blender 4.1 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better): SMT Off: 15.36; SMT On - Default: 12.50

Appleseed

Appleseed is an open-source, physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, Fewer Is Better): SMT Off: 26.94; SMT On - Default: 30.69

TensorFlow

TensorFlow 2.16.1 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, More Is Better): SMT Off: 2234.75; SMT On - Default: 2271.95

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, More Is Better): SMT Off: 41.60; SMT On - Default: 53.42

ASTC Encoder

ASTC Encoder 4.7 - Preset: Exhaustive (MT/s, More Is Better): SMT Off: 8.9192; SMT On - Default: 9.6576

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better): SMT Off: 42.41; SMT On - Default: 53.95

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 3.0b6 - Input: STMV with 1,066,628 Atoms (ns/day, More Is Better): SMT Off: 4.75581; SMT On - Default: 4.62277

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 2.2 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, More Is Better): SMT Off: 2.83; SMT On - Default: 2.85

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better): SMT Off: 33.66; SMT On - Default: 43.85

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 24.0 - Time To Compile (Seconds, Fewer Is Better): SMT Off: 13.13; SMT On - Default: 13.67

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2024 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better): SMT Off: 22.82; SMT On - Default: 22.73

TensorFlow

TensorFlow 2.16.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (images/sec, More Is Better): SMT Off: 12.05; SMT On - Default: 7.11

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better): SMT Off: 129.31 (MIN: 127.88 / MAX: 131.29); SMT On - Default: 191.78 (MIN: 188.36 / MAX: 196.2)

Timed ImageMagick Compilation

This test times how long it takes to build ImageMagick. Learn more via the OpenBenchmarking.org test page.

Timed ImageMagick Compilation 6.9.0 - Time To Compile (Seconds, Fewer Is Better): SMT Off: 9.904; SMT On - Default: 9.868

SVT-AV1

SVT-AV1 2.0 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better): SMT Off: 290.64; SMT On - Default: 271.92

ASTC Encoder

ASTC Encoder 4.7 - Preset: Thorough (MT/s, More Is Better): SMT Off: 102.27; SMT On - Default: 110.04

TensorFlow

TensorFlow 2.16.1 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, More Is Better): SMT Off: 1162.50; SMT On - Default: 1086.48

Blender

Blender 4.1 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better): SMT Off: 11.33; SMT On - Default: 9.46

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.8.3 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better): SMT Off: 6.750; SMT On - Default: 8.173

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, Fewer Is Better): SMT Off: 19.22; SMT On - Default: 20.60

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, Fewer Is Better): SMT Off: 18.19; SMT On - Default: 21.44

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s, More Is Better): SMT Off: 199661.62; SMT On - Default: 194853.91

SVT-AV1

SVT-AV1 2.0 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better): SMT Off: 126.61; SMT On - Default: 118.16

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
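m-queens itself is C++ with OpenMP; purely as an illustration of the underlying problem, here is a small Python sketch that counts N-queens solutions with bitmask backtracking and splits the first-row columns across processes to mimic the multi-threaded search. The board size N is arbitrary.

```python
# Illustrative N-queens solution counter (not the m-queens code itself).
from multiprocessing import Pool

N = 12
FULL = (1 << N) - 1

def count(cols, diag1, diag2):
    # cols: occupied columns; diag1/diag2: squares attacked along the two diagonals
    if cols == FULL:
        return 1
    total = 0
    free = ~(cols | diag1 | diag2) & FULL
    while free:
        bit = free & -free          # lowest free square in this row
        free -= bit
        total += count(cols | bit, (diag1 | bit) << 1 & FULL, (diag2 | bit) >> 1)
    return total

def from_first_row(col):
    # place the first queen in row 0, column `col`, then search the rest
    bit = 1 << col
    return count(bit, (bit << 1) & FULL, bit >> 1)

if __name__ == "__main__":
    with Pool() as pool:
        print(sum(pool.map(from_first_row, range(N))), "solutions for N =", N)
```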

m-queens 1.2 - Time To Solve (Seconds, Fewer Is Better): SMT Off: 7.823; SMT On - Default: 5.492

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better): SMT Off: 93.54; SMT On - Default: 96.81

TensorFlow

TensorFlow 2.16.1 - Device: CPU - Batch Size: 1 - Model: AlexNet (images/sec, More Is Better): SMT Off: 27.70; SMT On - Default: 25.93

NAMD

NAMD 3.0b6 - Input: ATPase with 327,506 Atoms (ns/day, More Is Better): SMT Off: 17.94; SMT On - Default: 14.24

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D (Total Mop/s, More Is Better): SMT Off: 8556.57; SMT On - Default: 8647.62

Intel Open Image Denoise

Intel Open Image Denoise 2.2 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better): SMT Off: 5.83; SMT On - Default: 5.84

Intel Open Image Denoise 2.2 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better): SMT Off: 5.83; SMT On - Default: 5.85

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.
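The score is essentially the classic GEMM arithmetic: a double-precision multiply of two n x n matrices performs roughly 2*n^3 floating-point operations, so GFLOP/s = 2*n^3 / time. A minimal NumPy sketch of that calculation (not the ACES kernel itself; the matrix size is arbitrary and NumPy's BLAS backend is multi-threaded on most installs):

```python
# Minimal sketch of the DGEMM GFLOP/s arithmetic.
import time
import numpy as np

n = 4096
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                                  # double-precision matrix multiply
elapsed = time.perf_counter() - start
print(f"{2 * n**3 / elapsed / 1e9:.2f} GFLOP/s sustained")
```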

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, More Is Better): SMT Off: 38.06; SMT On - Default: 56.41

Y-Cruncher

Y-Cruncher 0.8.3 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better): SMT Off: 4.149; SMT On - Default: 4.877

Embree

Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better): SMT Off: 119.51 (MIN: 117.52 / MAX: 122.07); SMT On - Default: 179.45 (MIN: 175.01 / MAX: 186.64)

oneDNN

This is a test of Intel oneDNN as an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The reported result is the total benchmark ("perf") time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.4 - Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms, Fewer Is Better): SMT Off: 0.471717 (MIN: 0.45); SMT On - Default: 0.513417 (MIN: 0.48)

ASTC Encoder

ASTC Encoder 4.7 - Preset: Medium (MT/s, More Is Better): SMT Off: 691.99; SMT On - Default: 701.73

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, More Is Better): SMT Off: 179200.08; SMT On - Default: 167026.93

Embree

Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better): SMT Off: 150.34 (MIN: 148.74 / MAX: 152.35); SMT On - Default: 223.21 (MIN: 219.93 / MAX: 227.73)

SVT-AV1

SVT-AV1 2.0 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better): SMT Off: 299.97; SMT On - Default: 281.59

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
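A minimal sketch using the LAMMPS Python module rather than the lmp binary the test profile runs; it assumes the stock in.rhodo rhodopsin input from the LAMMPS bench/ directory is available in the working directory.

```python
# Minimal sketch: run the rhodopsin benchmark input through the LAMMPS Python module.
from lammps import lammps

lmp = lammps()            # create a LAMMPS instance in this process
lmp.file("in.rhodo")      # run the rhodopsin benchmark input script (assumed present)
print("ns/day and the timing breakdown are reported in the LAMMPS log output")
```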

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, More Is Better): SMT Off: 70.58; SMT On - Default: 55.79

Parallel BZIP2 Compression

This test measures the time needed to compress a file (FreeBSD-13.0-RELEASE-amd64-memstick.img) using Parallel BZIP2 compression. Learn more via the OpenBenchmarking.org test page.

Parallel BZIP2 Compression 1.1.13 - FreeBSD-13.0-RELEASE-amd64-memstick.img Compression (Seconds, Fewer Is Better): SMT Off: 1.132014; SMT On - Default: 0.934896

CPU Power Consumption Monitor

CPU Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts): SMT Off: Min 22.2 / Avg 326.39 / Max 505.98; SMT On - Default: Min 44.19 / Avg 324.1 / Max 500.98

180 Results Shown

WRF
OpenVKL
BRL-CAD
NWChem
High Performance Conjugate Gradient
LuxCoreRender
Stockfish
LuxCoreRender
OpenSSL
Speedb
TensorFlow
Xcompact3d Incompact3d
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
RocksDB
OpenSSL:
  ChaCha20
  AES-256-GCM
  AES-128-GCM
  ChaCha20-Poly1305
  SHA512
  SHA256
Timed Linux Kernel Compilation
CloverLeaf
OSPRay
LuxCoreRender
Xmrig
ASKAP
PyTorch
TensorFlow
ClickHouse:
  100M Rows Hits Dataset, Third Run
  100M Rows Hits Dataset, Second Run
  100M Rows Hits Dataset, First Run / Cold Cache
PostgreSQL:
  100 - 1000 - Read Only - Average Latency
  100 - 1000 - Read Only
LAMMPS Molecular Dynamics Simulator
Timed Gem5 Compilation
OSPRay
PyTorch:
  CPU - 64 - ResNet-152
  CPU - 512 - ResNet-152
Timed Node.js Compilation
OpenRadioss
Blender
Timed LLVM Compilation
OpenFOAM:
  drivaerFastback, Medium Mesh Size - Execution Time
  drivaerFastback, Medium Mesh Size - Mesh Time
Timed Godot Game Engine Compilation
OpenRadioss
RocksDB
OSPRay Studio
LULESH
Memcached
TensorFlow
OSPRay
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
  Noise Suppression Poconet-Like FP16 - CPU:
    ms
    FPS
OSPRay Studio
Coremark
OSPRay Studio
ONNX Runtime:
  Faster R-CNN R-50-FPN-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Person Re-Identification Retail FP16 - CPU:
    ms
    FPS
  Road Segmentation ADAS FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
ONNX Runtime:
  fcn-resnet101-11 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
VVenC
OpenVINO:
  Face Detection Retail FP16-INT8 - CPU:
    ms
    FPS
ONNX Runtime:
  yolov4 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  bertsquad-12 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16-INT8 - CPU:
    ms
    FPS
ONNX Runtime:
  ArcFace ResNet-100 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
OpenVINO:
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
OSPRay Studio
ONNX Runtime:
  CaffeNet 12-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
GraphicsMagick
ONNX Runtime:
  ResNet50 v1-12-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
Speedb
GraphicsMagick:
  Enhanced
  Swirl
OSPRay Studio
PyTorch
OSPRay Studio
TensorFlow
PyTorch:
  CPU - 64 - ResNet-50
  CPU - 512 - ResNet-50
  CPU - 256 - ResNet-50
TensorFlow
QuantLib
Aircrack-ng
ASKAP:
  tConvolve MPI - Gridding
  tConvolve MPI - Degridding
Blender
Timed Linux Kernel Compilation
srsRAN Project
Liquid-DSP:
  256 - 256 - 512
  256 - 256 - 57
John The Ripper
Liquid-DSP
John The Ripper:
  Blowfish
  bcrypt
7-Zip Compression:
  Decompression Rating
  Compression Rating
Blender
Algebraic Multi-Grid Benchmark
VVenC
GPAW
OpenRadioss
NAS Parallel Benchmarks
PyTorch
TensorFlow
Pennant
TensorFlow
CloverLeaf
SVT-AV1
TensorFlow
uvg266
Pennant
srsRAN Project
Blender
ASTC Encoder
Blender
Appleseed
TensorFlow
Kvazaar
ASTC Encoder
Kvazaar
NAMD
Intel Open Image Denoise
uvg266
Timed Mesa Compilation
GROMACS
TensorFlow
Embree
Timed ImageMagick Compilation
SVT-AV1
ASTC Encoder
TensorFlow
Blender
Y-Cruncher
OpenFOAM:
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
NAS Parallel Benchmarks
SVT-AV1
m-queens
Kvazaar
TensorFlow
NAMD
NAS Parallel Benchmarks
Intel Open Image Denoise:
  RT.hdr_alb_nrm.3840x2160 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
ACES DGEMM
Y-Cruncher
Embree
oneDNN
ASTC Encoder
NAS Parallel Benchmarks
Embree
SVT-AV1
LAMMPS Molecular Dynamics Simulator
Parallel BZIP2 Compression
CPU Power Consumption Monitor