AMD SME Benchmark Genoa

4th Gen AMD EPYC "Genoa" Secure Memory Encryption (SME) benchmarks by Michael Larabel for a future article.
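For context on what is being toggled between the two runs: on Linux, SME support hinges on the kernel's CONFIG_AMD_MEM_ENCRYPT option and can be forced on or off at boot via the mem_encrypt= kernel parameter (firmware naming for the SME enable varies by BIOS vendor). The lines below are a generic sketch of those knobs, not a record of the exact settings used for this particular run:

```
# Kernel build-time support for AMD Secure Memory Encryption
CONFIG_AMD_MEM_ENCRYPT=y

# Kernel command line: force SME active or inactive at boot
mem_encrypt=on
mem_encrypt=off
```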

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2212212-NE-AMDSMEBEN19
Test categories represented in this comparison:

AV1: 3 tests
BLAS (Basic Linear Algebra Sub-Routine): 3 tests
C++ Boost: 4 tests
Timed Code Compilation: 4 tests
C/C++ Compiler: 11 tests
Compression: 2 tests
CPU Massive: 19 tests
Creator Workloads: 18 tests
Encoding: 6 tests
Fortran: 5 tests
Game Development: 6 tests
HPC - High Performance Computing: 23 tests
Java: 2 tests
LAPACK (Linear Algebra Pack): 2 tests
Machine Learning: 5 tests
Molecular Dynamics: 6 tests
MPI Benchmarks: 6 tests
Multi-Core: 30 tests
NVIDIA GPU Compute: 3 tests
Intel oneAPI: 7 tests
OpenMPI: 14 tests
Programmer / Developer System Benchmarks: 6 tests
Python: 9 tests
Raytracing: 2 tests
Renderers: 4 tests
Scientific Computing: 8 tests
Software Defined Radio: 2 tests
Server: 2 tests
Server CPU: 15 tests
Texture Compression: 2 tests
Video Encoding: 6 tests
Common Workstation Benchmarks: 3 tests

Result runs:

No SME: run December 20 2022, test duration 8 hours, 15 minutes.
AMD SME Enabled: run December 19 2022, test duration 7 hours, 52 minutes.


System configuration:

Processor: 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores / 384 Threads)
Motherboard: AMD Titanite_4G (RTI1002E BIOS)
Chipset: AMD Device 14a4
Memory: 1520GB
Disk: 800GB INTEL SSDPF21Q800GB
Graphics: ASPEED
Monitor: VGA HDMI
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 22.10
Kernel: 6.1.0-phx (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server 1.21.1.4
Vulkan: 1.3.224
Compiler: GCC 12.2.0 + Clang 15.0.2-1
File-System: ext4
Screen Resolution: 1920x1080

System notes:
- Transparent Huge Pages: madvise
- GCC configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa10110d
- OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu2)
- Python 3.10.7
- Security mitigations: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

No SME vs. AMD SME Enabled comparison, largest relative deltas between the two runs (from the original overview chart, baseline to +8% scale):

- OpenFOAM, drivaerFastback Small Mesh Size - Mesh Time: 8.1%
- ASKAP, tConvolve MPI - Degridding: 6.2%
- Zstd Compression, 19 Long Mode - Compression Speed: 6%
- PyHPC Benchmarks, CPU - Numpy - 4194304 - Equation of State: 5.7%
- Appleseed, Emily: 5.6%
- KTX-Software toktx, Zstd Compression 19: 5.4%
- DaCapo Benchmark, H2: 5.1%
- Graph500, 26: 5%
- AOM AV1, Speed 10 Realtime - Bosphorus 4K: 4.1%
- ASKAP, tConvolve MPI - Gridding: 3.9%
- 7-Zip Compression, Compression Rating: 3.7%
- x264, Bosphorus 4K: 3.7%
- Graph500, 26: 3.6%
- libavif avifenc, 6: 3.2%
- Xmrig, Monero - 1M: 3.1%
- PyHPC Benchmarks, CPU - Numpy - 4194304 - Isoneutral Mixing: 3.1%
- OpenVKL, vklBenchmark ISPC: 2.8%
- XSBench: 2.7%
- Neural Magic DeepSparse, CV Classification ResNet-50 ImageNet - Asynchronous Multi-Stream: 2.7%
- Timed Godot Game Engine Compilation, Time To Compile: 2.6%
- Neural Magic DeepSparse, CV Classification ResNet-50 ImageNet - Asynchronous Multi-Stream: 2.6%
- OSPRay Studio, 3 - 4K - 32 - Path Tracer: 2.6%
- Timed Gem5 Compilation, Time To Compile: 2.6%
- Neural Magic DeepSparse, NLP Text Classification BERT base uncased SST2 - Asynchronous Multi-Stream: 2.5% (both metrics)
- Xmrig, Wownero - 1M: 2.4%
- LULESH: 2.4%
- nginx, 500: 2.4%
- OpenVINO, V.D.F - CPU: 2.3%
- OpenVINO, P.D.F - CPU: 2.3%
- Neural Magic DeepSparse, NLP Question Answering BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream: 2.3% (both metrics)
- OpenVINO, P.D.F - CPU: 2.3%
- OpenVINO, V.D.F - CPU: 2.2%
- Neural Magic DeepSparse, NLP Text Classification DistilBERT mnli - Asynchronous Multi-Stream: 2.2% (both metrics)
- Neural Magic DeepSparse, CV Detection YOLOv5s COCO - Asynchronous Multi-Stream: 2.1% (both metrics)
- oneDNN, Deconvolution Batch shapes_1d - f32 - CPU: 2%
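The overview percentages appear to be computed as the ratio of the slower result to the faster one, minus one; this is inferred from the numbers themselves rather than taken from Phoronix Test Suite documentation. A quick sketch reproducing the Xmrig Monero and OpenFOAM mesh-time entries from the detailed results below:

```python
def percent_delta(a: float, b: float) -> float:
    """Relative gap between two results, expressed against the smaller value."""
    lo, hi = sorted((a, b))
    return (hi / lo - 1.0) * 100.0

# Xmrig Monero 1M hash rates (H/s): No SME 105141.7 vs. SME Enabled 101932.1
print(round(percent_delta(105141.7, 101932.1), 1))  # 3.1

# OpenFOAM small mesh time (seconds): No SME 25.07 vs. SME Enabled 27.10
print(round(percent_delta(25.07, 27.10), 1))        # 8.1
```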

Benchmarks covered by this comparison (individual results follow): quantlib, hpcg, npb, minibude, rodinia, namd, nwchem, incompact3d, openfoam, openradioss, relion, lulesh, xmrig, dacapobench, renaissance, compress-zstd, srsran, aom-av1, embree, kvazaar, svt-av1, x264, x265, mt-dgemm, oidn, openvkl, ospray, compress-7zip, avifenc, build-gem5, build-godot, build-linux-kernel, build-llvm, ospray-studio, liquid-dsp, askap, astcenc, graph500, gromacs, pgbench, tensorflow, toktx, deepsparse, wrf, gpaw, blender, openvino, xsbench, nginx, onnx, appleseed, pyhpc, onednn.

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, more is better):
  No SME: 3052.8 (SE +/- 6.39, N = 3; min 3041.1 / max 3063.1)
  AMD SME Enabled: 3061.3 (SE +/- 8.14, N = 3; min 3051.6 / max 3077.5)
  1. (CXX) g++ options: -O3 -march=native -rdynamic
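The SE figures throughout this file can be cross-checked from the min/avg/max triples when N = 3: assuming the reported average is the arithmetic mean of the three runs (an assumption about how the Phoronix Test Suite reports it), the middle run equals 3*avg - min - max, and the standard error is the sample standard deviation over sqrt(N). Applying that to the No SME QuantLib numbers:

```python
import math

def stderr_from_min_avg_max(lo: float, avg: float, hi: float) -> float:
    """Standard error of the mean for N = 3 runs, recovering the middle run
    from the reported min/avg/max (assumes avg is the arithmetic mean)."""
    mid = 3 * avg - lo - hi            # the unreported middle run
    runs = [lo, mid, hi]
    s = math.sqrt(sum((x - avg) ** 2 for x in runs) / (len(runs) - 1))
    return s / math.sqrt(len(runs))

# No SME QuantLib: min 3041.1 / avg 3052.8 / max 3063.1 -> reported SE +/- 6.39
print(round(stderr_from_min_avg_max(3041.1, 3052.8, 3063.1), 2))  # 6.39
```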

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads, in contrast to HPCC. Learn more via the OpenBenchmarking.org test page.
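For readers unfamiliar with the method being stressed, here is a minimal pure-Python sketch of the conjugate gradient iteration on a tiny symmetric positive-definite system. This is illustrative only; HPCG itself runs a large sparse 3D stencil problem with MPI/OpenMP:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual r = b - A x, with x = 0 initially
    p = r[:]                 # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# 1D Laplacian stencil (SPD); the exact solution of A x = b here is [1, 1, 1]
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = conjugate_gradient(A, b)
```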

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better):
  No SME: 88.39 (SE +/- 0.10, N = 3; min 88.19 / max 88.52)
  AMD SME Enabled: 87.15 (SE +/- 0.01, N = 3; min 87.12 / max 87.17)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: BT.C (Total Mop/s, more is better):
  No SME: 496467.98 (SE +/- 529.89, N = 3; min 495447.47 / max 497225.73)
  AMD SME Enabled: 494917.44 (SE +/- 3984.99, N = 3; min 487621.61 / max 501343.59)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks 3.4, Test / Class: EP.C (Total Mop/s, more is better):
  No SME: 16457.94 (SE +/- 54.14, N = 3; min 16369.23 / max 16556.07)
  AMD SME Enabled: 16462.35 (SE +/- 73.01, N = 3; min 16372.19 / max 16606.9)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks 3.4, Test / Class: FT.C (Total Mop/s, more is better):
  No SME: 223096.07 (SE +/- 2651.33, N = 4; min 217794.23 / max 228690.13)
  AMD SME Enabled: 220214.75 (SE +/- 1868.66, N = 3; min 217839.87 / max 223901.32)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks 3.4, Test / Class: SP.C (Total Mop/s, more is better):
  No SME: 255564.19 (SE +/- 3645.86, N = 3; min 251174.75 / max 262801.38)
  AMD SME Enabled: 253299.33 (SE +/- 2731.53, N = 3; min 248096.45 / max 257343.44)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

miniBUDE

MiniBUDE is a mini-application covering the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM1 (GFInst/s, more is better):
  No SME: 7281.28 (SE +/- 16.32, N = 3; min 7250.2 / max 7305.45)
  AMD SME Enabled: 7290.64 (SE +/- 8.37, N = 3; min 7273.9 / max 7299.5)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, more is better):
  No SME: 291.25 (SE +/- 0.65, N = 3; min 290.01 / max 292.22)
  AMD SME Enabled: 291.63 (SE +/- 0.33, N = 3; min 290.96 / max 291.98)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM2 (GFInst/s, more is better):
  No SME: 8633.54 (SE +/- 80.07, N = 3; min 8525.45 / max 8789.91)
  AMD SME Enabled: 8588.75 (SE +/- 99.74, N = 3; min 8389.3 / max 8690.82)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s, more is better):
  No SME: 345.34 (SE +/- 3.20, N = 3; min 341.02 / max 351.6)
  AMD SME Enabled: 343.55 (SE +/- 3.99, N = 3; min 335.57 / max 347.63)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP LavaMD (Seconds, fewer is better):
  No SME: 16.51 (SE +/- 0.13, N = 3; min 16.32 / max 16.75)
  AMD SME Enabled: 16.67 (SE +/- 0.05, N = 3; min 16.59 / max 16.76)
  1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1, Test: OpenMP CFD Solver (Seconds, fewer is better):
  No SME: 5.938 (SE +/- 0.012, N = 3; min 5.91 / max 5.95)
  AMD SME Enabled: 6.043 (SE +/- 0.030, N = 3; min 5.99 / max 6.09)
  1. (CXX) g++ options: -O2 -lOpenCL

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better):
  No SME: 0.12831 (SE +/- 0.00031, N = 3; per-run min/max round to 0.13)
  AMD SME Enabled: 0.12991 (SE +/- 0.00010, N = 3; per-run min/max round to 0.13)
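NAMD's days/ns metric is wall-clock days per simulated nanosecond, so the more familiar ns/day figure is simply its reciprocal. Applying that to the results above:

```python
def ns_per_day(days_per_ns: float) -> float:
    """Convert NAMD's days/ns (lower is better) to ns/day (higher is better)."""
    return 1.0 / days_per_ns

print(round(ns_per_day(0.12831), 2))  # No SME: 7.79 ns/day
print(round(ns_per_day(0.12991), 2))  # SME Enabled: 7.7 ns/day
```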

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2, Input: C240 Buckyball (Seconds, fewer is better):
  No SME: 1524.4
  AMD SME Enabled: 1543.1
  1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
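For reference, the system such codes discretize is the incompressible Navier-Stokes equations plus optional scalar transport. This is the standard textbook form, not taken from the Incompact3d documentation:

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad \nabla\cdot\mathbf{u} = 0,
\qquad \frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi
  = \kappa\,\nabla^{2}\phi
```

Here u is the velocity field, p the pressure, nu the kinematic viscosity, and phi a transported scalar with diffusivity kappa.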

Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 193 Cells Per Direction (Seconds, fewer is better):
  No SME: 4.37420527 (SE +/- 0.01135008, N = 3; min 4.35 / max 4.39)
  AMD SME Enabled: 4.42424568 (SE +/- 0.04122391, N = 3; min 4.38 / max 4.51)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10, Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, fewer is better):
  No SME: 25.07
  AMD SME Enabled: 27.10
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenFOAM 10, Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better):
  No SME: 22.08
  AMD SME Enabled: 22.13
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13, Model: Bumper Beam (Seconds, fewer is better):
  No SME: 79.85 (SE +/- 0.73, N = 3; min 78.58 / max 81.1)
  AMD SME Enabled: 79.97 (SE +/- 0.15, N = 3; min 79.77 / max 80.27)

OpenRadioss 2022.10.13, Model: Cell Phone Drop Test (Seconds, fewer is better):
  No SME: 18.45 (SE +/- 0.02, N = 3; min 18.41 / max 18.47)
  AMD SME Enabled: 18.32 (SE +/- 0.13, N = 3; min 18.06 / max 18.45)

OpenRadioss 2022.10.13, Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, fewer is better):
  No SME: 80.88 (SE +/- 0.09, N = 3; min 80.72 / max 81.03)
  AMD SME Enabled: 80.90 (SE +/- 0.15, N = 3; min 80.61 / max 81.11)

RELION

RELION - REgularised LIkelihood OptimisatioN - is a stand-alone computer program for Maximum A Posteriori refinement of (multiple) 3D reconstructions or 2D class averages in cryo-electron microscopy (cryo-EM). It is developed in the research group of Sjors Scheres at the MRC Laboratory of Molecular Biology. Learn more via the OpenBenchmarking.org test page.

RELION 3.1.1, Test: Basic - Device: CPU (Seconds, fewer is better):
  No SME: 128.66 (SE +/- 1.40, N = 5; min 126.72 / max 134.21)
  AMD SME Enabled: 130.43 (SE +/- 1.42, N = 5; min 128.67 / max 136.08)
  1. (CXX) g++ options: -fopenmp -std=c++0x -O3 -rdynamic -ldl -ltiff -lfftw3f -lfftw3 -lpng -lmpi_cxx -lmpi

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, more is better):
  No SME: 59069.41 (SE +/- 197.53, N = 3; min 58816.26 / max 59458.64)
  AMD SME Enabled: 57686.09 (SE +/- 360.17, N = 3; min 57068.82 / max 58316.27)
  1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1, Variant: Monero - Hash Count: 1M (H/s, more is better):
  No SME: 105141.7 (SE +/- 111.86, N = 3; min 104953.8 / max 105340.8)
  AMD SME Enabled: 101932.1 (SE +/- 540.08, N = 3; min 101142.9 / max 102965.4)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig 6.18.1, Variant: Wownero - Hash Count: 1M (H/s, more is better):
  No SME: 126508.3 (SE +/- 211.86, N = 3; min 126214.8 / max 126919.7)
  AMD SME Enabled: 123484.1 (SE +/- 341.44, N = 3; min 122804.9 / max 123885)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: H2 (msec, fewer is better):
  No SME: 4807 (SE +/- 54.36, N = 20; min 4151 / max 5250)
  AMD SME Enabled: 5050 (SE +/- 50.10, N = 20; min 4624 / max 5446)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Finagle HTTP Requests (ms, fewer is better):
  No SME: 12286.3 (SE +/- 88.33, N = 3; MIN: 11326.41 / MAX: 12632.65; run min 12192.27 / max 12462.83)
  AMD SME Enabled: 12347.5 (SE +/- 95.54, N = 3; MIN: 11146.33 / MAX: 12514.13; run min 12183.2 / max 12514.13)

Renaissance 0.14, Test: In-Memory Database Shootout (ms, fewer is better):
  No SME: 4764.6 (SE +/- 54.74, N = 12; MIN: 4124.15 / MAX: 6577.01; run min 4513.54 / max 5159.51)
  AMD SME Enabled: 4838.5 (SE +/- 69.41, N = 3; MIN: 4339.45 / MAX: 6109.38; run min 4707.46 / max 4943.72)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0, Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better)
  No SME: 52.9 (SE +/- 1.03, N = 15; Min 46.5 / Max 57.9)
  AMD SME Enabled: 49.9 (SE +/- 0.70, N = 3; Min 48.8 / Max 51.2)
  Compiled with: (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0, Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better)
  No SME: 3825.0 (SE +/- 14.95, N = 15; Min 3748.3 / Max 3923.6)
  AMD SME Enabled: 3837.3 (SE +/- 1.04, N = 3; Min 3835.4 / Max 3839)
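The "SE +/-" figures above are standard errors of the mean across the N runs of each test. As a sketch, here is how such a value is computed; the sample run speeds below are hypothetical (the result file only records Min/Avg/Max, not the raw runs), chosen to be consistent with the reported Avg 49.9 and SE 0.70 for the SME-enabled compression result:

```python
import math

def standard_error(samples):
    """Standard error of the mean: sample stdev / sqrt(N),
    matching the 'SE +/-' figures reported for each run set."""
    n = len(samples)
    mean = sum(samples) / n
    # Bessel-corrected sample variance
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(var) / math.sqrt(n)

# Hypothetical per-run compression speeds in MB/s (not the actual raw data)
runs = [48.8, 49.7, 51.2]
print(round(sum(runs) / len(runs), 2))   # mean -> 49.9
print(round(standard_error(runs), 2))    # SE -> 0.7
```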

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1, Test: OFDM_Test (Samples / Second, more is better)
  No SME: 161733333 (SE +/- 600925.21, N = 3; Min 160900000 / Max 162900000)
  AMD SME Enabled: 162633333 (SE +/- 883804.91, N = 3; Min 160900000 / Max 163800000)
  Compiled with: (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm (all srsRAN results below were built with the same options)

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, more is better)
  No SME: 415.1 (SE +/- 0.49, N = 3; Min 414.1 / Max 415.7)
  AMD SME Enabled: 408.5 (SE +/- 3.12, N = 3; Min 403 / Max 413.8)

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, more is better)
  No SME: 157.8 (SE +/- 0.25, N = 3; Min 157.5 / Max 158.3)
  AMD SME Enabled: 157.7 (SE +/- 0.31, N = 3; Min 157.3 / Max 158.3)

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, more is better)
  No SME: 413.9 (SE +/- 0.64, N = 3; Min 412.9 / Max 415.1)
  AMD SME Enabled: 415.7 (SE +/- 0.45, N = 3; Min 415.2 / Max 416.6)

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, more is better)
  No SME: 165.8 (SE +/- 0.84, N = 3; Min 164.3 / Max 167.2)
  AMD SME Enabled: 165.9 (SE +/- 0.46, N = 3; Min 165.1 / Max 166.7)

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, more is better)
  No SME: 445.2 (SE +/- 1.49, N = 3; Min 442.2 / Max 446.8)
  AMD SME Enabled: 444.8 (SE +/- 1.01, N = 3; Min 442.8 / Max 446)

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, more is better)
  No SME: 166.0 (SE +/- 0.24, N = 3; Min 165.5 / Max 166.3)
  AMD SME Enabled: 165.7 (SE +/- 0.41, N = 3; Min 165.1 / Max 166.5)

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, more is better)
  No SME: 444.0 (SE +/- 1.15, N = 3; Min 441.8 / Max 445.7)
  AMD SME Enabled: 445.7 (SE +/- 0.03, N = 3; Min 445.7 / Max 445.8)

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, more is better)
  No SME: 172.7 (SE +/- 0.47, N = 3; Min 172 / Max 173.6)
  AMD SME Enabled: 172.2 (SE +/- 0.09, N = 3; Min 172.1 / Max 172.4)

srsRAN 22.04.1, Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, more is better)
  No SME: 139.7 (SE +/- 0.32, N = 3; Min 139.2 / Max 140.3)
  AMD SME Enabled: 139.1 (SE +/- 0.19, N = 3; Min 138.9 / Max 139.5)

srsRAN 22.04.1, Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, more is better)
  No SME: 94.4 (SE +/- 0.22, N = 3; Min 94 / Max 94.7)
  AMD SME Enabled: 94.9 (SE +/- 0.09, N = 3; Min 94.7 / Max 95)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better)
  No SME: 34.47 (SE +/- 0.53, N = 15; Min 31.73 / Max 39.8)
  AMD SME Enabled: 33.12 (SE +/- 0.56, N = 12; Min 29.64 / Max 37.48)
  Compiled with: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13, Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better)
  No SME: 183.25 (SE +/- 0.65, N = 3; Min 182.56 / Max 184.54)
  AMD SME Enabled: 180.57 (SE +/- 0.40, N = 3; Min 179.78 / Max 181.02)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.1, Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, more is better)
  No SME: 74.31 (SE +/- 0.95, N = 3; Min 73.01 / Max 76.16)
  AMD SME Enabled: 73.32 (SE +/- 0.91, N = 3; Min 71.62 / Max 74.75)
  Compiled with: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.1, Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, more is better)
  No SME: 77.68 (SE +/- 0.77, N = 3; Min 76.76 / Max 79.21)
  AMD SME Enabled: 76.22 (SE +/- 0.97, N = 3; Min 74.34 / Max 77.55)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, more is better)
  No SME: 248.33 (SE +/- 6.22, N = 15; Min 176.86 / Max 272.7)
  AMD SME Enabled: 251.44 (SE +/- 4.08, N = 15; Min 214.88 / Max 268.48)

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22, Video Input: Bosphorus 4K (Frames Per Second, more is better)
  No SME: 106.86 (SE +/- 1.42, N = 3; Min 104.83 / Max 109.6)
  AMD SME Enabled: 103.07 (SE +/- 0.62, N = 3; Min 102.01 / Max 104.16)
  Compiled with: (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto
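Comparing the two run identifiers comes down to the relative change versus the No SME baseline. A minimal sketch of that arithmetic, using the x264 Bosphorus 4K averages from this result file (the helper name is my own, not part of the Phoronix Test Suite):

```python
def sme_overhead_pct(no_sme, sme_enabled, higher_is_better=True):
    """Relative cost of enabling SME, as a percentage of the No SME result.
    For fewer-is-better metrics (e.g. seconds), pass higher_is_better=False."""
    if higher_is_better:
        return (no_sme - sme_enabled) / no_sme * 100.0
    return (sme_enabled - no_sme) / no_sme * 100.0

# x264 Bosphorus 4K averages from this result file
print(round(sme_overhead_pct(106.86, 103.07), 2))  # -> 3.55 (% slower with SME)
```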

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K input options for measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 4K (Frames Per Second, more is better)
  No SME: 23.48 (SE +/- 0.29, N = 4; Min 22.71 / Max 24.11)
  AMD SME Enabled: 23.29 (SE +/- 0.17, N = 3; Min 23.1 / Max 23.64)
  Compiled with: (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0, Sustained Floating-Point Rate (GFLOP/s, more is better)
  No SME: 70.37 (SE +/- 0.11, N = 3; Min 70.22 / Max 70.59)
  AMD SME Enabled: 70.28 (SE +/- 0.18, N = 3; Min 70.05 / Max 70.62)
  Compiled with: (CC) gcc options: -O3 -march=native -fopenmp

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0, Run: RTLightmap.hdr.4096x4096 (Images / Sec, more is better)
  No SME: 1.66 (SE +/- 0.00, N = 3; Min 1.65 / Max 1.66)
  AMD SME Enabled: 1.66 (SE +/- 0.00, N = 3; Min 1.66 / Max 1.66)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1, Benchmark: vklBenchmark ISPC (Items / Sec, more is better)
  No SME: 1322 (SE +/- 6.81, N = 3; Min 1312 / Max 1335)
  AMD SME Enabled: 1286 (SE +/- 15.55, N = 4; Min 1252 / Max 1327)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better)
  No SME: 230.62 (SE +/- 1.25, N = 3; Min 228.56 / Max 232.86)
  AMD SME Enabled: 229.88 (SE +/- 1.51, N = 3; Min 226.94 / Max 231.93)

OSPRay 2.10, Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, more is better)
  No SME: 43.85 (SE +/- 0.04, N = 3; Min 43.78 / Max 43.92)
  AMD SME Enabled: 43.38 (SE +/- 0.14, N = 3; Min 43.22 / Max 43.66)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01, Test: Compression Rating (MIPS, more is better)
  No SME: 917782 (SE +/- 9930.11, N = 3; Min 904922 / Max 937319)
  AMD SME Enabled: 885135 (SE +/- 6113.05, N = 3; Min 876265 / Max 896857)
  Compiled with: (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01, Test: Decompression Rating (MIPS, more is better)
  No SME: 1160632 (SE +/- 10921.46, N = 3; Min 1139249 / Max 1175184)
  AMD SME Enabled: 1169038 (SE +/- 7858.68, N = 3; Min 1153990 / Max 1180492)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 2 (Seconds, fewer is better)
  No SME: 34.69 (SE +/- 0.03, N = 3; Min 34.64 / Max 34.73)
  AMD SME Enabled: 35.26 (SE +/- 0.42, N = 4; Min 34.65 / Max 36.46)
  Compiled with: (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.11, Encoder Speed: 6 (Seconds, fewer is better)
  No SME: 2.393 (SE +/- 0.006, N = 3; Min 2.38 / Max 2.4)
  AMD SME Enabled: 2.469 (SE +/- 0.019, N = 3; Min 2.44 / Max 2.5)

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research and is widely used across industry and academia. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2, Time To Compile (Seconds, fewer is better)
  No SME: 138.64 (SE +/- 1.59, N = 3; Min 135.48 / Max 140.55)
  AMD SME Enabled: 142.18 (SE +/- 1.00, N = 3; Min 140.18 / Max 143.25)

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3, Time To Compile (Seconds, fewer is better)
  No SME: 34.14 (SE +/- 0.48, N = 3; Min 33.53 / Max 35.08)
  AMD SME Enabled: 35.04 (SE +/- 0.36, N = 3; Min 34.6 / Max 35.75)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel, either in a default configuration (defconfig) for the architecture being tested or with an allmodconfig for building all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1, Build: defconfig (Seconds, fewer is better)
  No SME: 25.71 (SE +/- 0.22, N = 8; Min 25.23 / Max 27.04)
  AMD SME Enabled: 25.30 (SE +/- 0.23, N = 7; Min 25.01 / Max 26.68)

Timed Linux Kernel Compilation 6.1, Build: allmodconfig (Seconds, fewer is better)
  No SME: 146.33 (SE +/- 1.13, N = 3; Min 145.16 / Max 148.59)
  AMD SME Enabled: 148.44 (SE +/- 0.71, N = 3; Min 147.42 / Max 149.79)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0, Build System: Ninja (Seconds, fewer is better)
  No SME: 75.33 (SE +/- 0.38, N = 3; Min 74.94 / Max 76.09)
  AMD SME Enabled: 76.63 (SE +/- 0.35, N = 3; Min 76.14 / Max 77.31)

Timed LLVM Compilation 13.0, Build System: Unix Makefiles (Seconds, fewer is better)
  No SME: 160.13 (SE +/- 0.17, N = 3; Min 159.86 / Max 160.43)
  AMD SME Enabled: 162.63 (SE +/- 0.05, N = 3; Min 162.55 / Max 162.73)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better)
  No SME: 22043 (SE +/- 6.36, N = 3; Min 22036 / Max 22056)
  AMD SME Enabled: 22614 (SE +/- 32.58, N = 3; Min 22575 / Max 22679)
  Compiled with: (CXX) g++ options: -O3 -ldl

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 256 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
  No SME: 10332000000 (SE +/- 8082903.77, N = 3; Min 10322000000 / Max 10348000000)
  AMD SME Enabled: 10344666667 (SE +/- 5206833.12, N = 3; Min 10336000000 / Max 10354000000)
  Compiled with: (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 2021.01.31, Threads: 384 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
  No SME: 10346000000 (SE +/- 3785938.90, N = 3; Min 10339000000 / Max 10352000000)
  AMD SME Enabled: 10350000000 (SE +/- 3605551.28, N = 3; Min 10345000000 / Max 10357000000)

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0, Test: tConvolve MPI - Degridding (Mpix/sec, more is better)
  No SME: 83598.3 (SE +/- 368.27, N = 3; Min 82861.8 / Max 83966.6)
  AMD SME Enabled: 78718.7 (SE +/- 0.00, N = 3; Min 78718.7 / Max 78718.7)
  Compiled with: (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP 1.0, Test: tConvolve MPI - Gridding (Mpix/sec, more is better)
  No SME: 93071.0 (SE +/- 460.77, N = 3; Min 92610.2 / Max 93992.5)
  AMD SME Enabled: 89541.8 (SE +/- 422.37, N = 3; Min 88697.1 / Max 89964.2)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0, Preset: Thorough (MT/s, more is better)
  No SME: 106.42 (SE +/- 0.06, N = 3; Min 106.31 / Max 106.5)
  AMD SME Enabled: 106.56 (SE +/- 0.03, N = 3; Min 106.51 / Max 106.61)
  Compiled with: (CXX) g++ options: -O3 -flto -pthread

ASTC Encoder 4.0, Preset: Exhaustive (MT/s, more is better)
  No SME: 11.82 (SE +/- 0.01, N = 3; Min 11.81 / Max 11.83)
  AMD SME Enabled: 11.84 (SE +/- 0.01, N = 3; Min 11.83 / Max 11.85)

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data intensive loads and commonly tested on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.

Graph500 3.0, Scale: 26 (bfs median_TEPS, more is better)
  No SME: 1426480000
  AMD SME Enabled: 1358510000
  Compiled with: (CC) gcc options: -fcommon -O3 -lpthread -lm -lmpi

Graph500 3.0, Scale: 26 (bfs max_TEPS, more is better)
  No SME: 1533180000
  AMD SME Enabled: 1526380000

Graph500 3.0, Scale: 26 (sssp median_TEPS, more is better)
  No SME: 593153000
  AMD SME Enabled: 572510000

Graph500 3.0, Scale: 26 (sssp max_TEPS, more is better)
  No SME: 838505000
  AMD SME Enabled: 835467000
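For context on the problem size: Graph500 scale S means a graph of 2**S vertices, and the reference implementation's default edgefactor of 16 gives 16 edges per vertex. The result file does not record the edgefactor used for this run, so the edge count below assumes the specification default:

```python
# Graph500 problem size: scale S means 2**S vertices; the reference
# benchmark's default edgefactor of 16 gives 16 * 2**S edges.
scale = 26
vertices = 2 ** scale
edges = 16 * vertices        # assumes the spec-default edgefactor of 16
print(vertices)              # 67108864
print(edges)                 # 1073741824
```

TEPS (traversed edges per second) then relates this edge count to per-search runtime.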

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1, Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better)
  No SME: 18.71 (SE +/- 0.03, N = 3; Min 18.68 / Max 18.77)
  AMD SME Enabled: 18.62 (SE +/- 0.03, N = 3; Min 18.58 / Max 18.68)
  Compiled with: (CXX) g++ options: -O3

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15, Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, more is better)
  No SME: 2951147 (SE +/- 16891.69, N = 3; Min 2923251.98 / Max 2981598.74)
  AMD SME Enabled: 2970869 (SE +/- 40566.19, N = 3; Min 2929350.84 / Max 3051993.76)
  Compiled with: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, more is better)
  No SME: 508.40 (SE +/- 6.01, N = 15; Min 470.1 / Max 536.33)
  AMD SME Enabled: 505.26 (SE +/- 7.26, N = 15; Min 461.26 / Max 537.42)

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides "toktx" for converting/creating in the KTX container format for image textures. This benchmark times how long it takes to convert to KTX 2.0 format with various settings using a reference PNG sample input. Learn more via the OpenBenchmarking.org test page.

KTX-Software toktx 4.0 - Settings: Zstd Compression 9 (Seconds, Fewer Is Better)
  No SME: 2.734 (SE +/- 0.006, N = 3; Min: 2.72 / Max: 2.74)
  AMD SME Enabled: 2.776 (SE +/- 0.006, N = 3; Min: 2.77 / Max: 2.79)

KTX-Software toktx 4.0 - Settings: Zstd Compression 19 (Seconds, Fewer Is Better)
  No SME: 18.86 (SE +/- 0.08, N = 3; Min: 18.7 / Max: 18.98)
  AMD SME Enabled: 19.88 (SE +/- 0.02, N = 3; Min: 19.86 / Max: 19.91)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  No SME: 84.25 (SE +/- 0.10, N = 3; Min: 84.11 / Max: 84.43)
  AMD SME Enabled: 83.64 (SE +/- 0.11, N = 3; Min: 83.42 / Max: 83.79)

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  No SME: 1134.42 (SE +/- 0.43, N = 3; Min: 1133.83 / Max: 1135.26)
  AMD SME Enabled: 1143.04 (SE +/- 0.31, N = 3; Min: 1142.5 / Max: 1143.57)

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  No SME: 762.70 (SE +/- 0.54, N = 3; Min: 761.77 / Max: 763.65)
  AMD SME Enabled: 745.64 (SE +/- 0.69, N = 3; Min: 744.55 / Max: 746.9)

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  No SME: 125.53 (SE +/- 0.08, N = 3; Min: 125.42 / Max: 125.68)
  AMD SME Enabled: 128.40 (SE +/- 0.14, N = 3; Min: 128.15 / Max: 128.62)

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  No SME: 858.47 (SE +/- 1.61, N = 3; Min: 855.41 / Max: 860.87)
  AMD SME Enabled: 840.94 (SE +/- 0.37, N = 3; Min: 840.2 / Max: 841.37)

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  No SME: 111.46 (SE +/- 0.16, N = 3; Min: 111.2 / Max: 111.76)
  AMD SME Enabled: 113.83 (SE +/- 0.01, N = 3; Min: 113.82 / Max: 113.84)

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  No SME: 1962.10 (SE +/- 1.94, N = 3; Min: 1958.58 / Max: 1965.27)
  AMD SME Enabled: 1911.40 (SE +/- 4.55, N = 3; Min: 1902.45 / Max: 1917.3)

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  No SME: 48.83 (SE +/- 0.05, N = 3; Min: 48.73 / Max: 48.92)
  AMD SME Enabled: 50.11 (SE +/- 0.13, N = 3; Min: 49.95 / Max: 50.36)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  No SME: 1204.85 (SE +/- 2.65, N = 3; Min: 1201.89 / Max: 1210.15)
  AMD SME Enabled: 1179.00 (SE +/- 1.22, N = 3; Min: 1177.18 / Max: 1181.32)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  No SME: 79.49 (SE +/- 0.15, N = 3; Min: 79.19 / Max: 79.7)
  AMD SME Enabled: 81.23 (SE +/- 0.07, N = 3; Min: 81.09 / Max: 81.3)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  No SME: 617.03 (SE +/- 0.96, N = 3; Min: 615.6 / Max: 618.86)
  AMD SME Enabled: 601.93 (SE +/- 0.11, N = 3; Min: 601.81 / Max: 602.16)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  No SME: 155.10 (SE +/- 0.23, N = 3; Min: 154.69 / Max: 155.48)
  AMD SME Enabled: 158.94 (SE +/- 0.02, N = 3; Min: 158.9 / Max: 158.96)

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  No SME: 84.28 (SE +/- 0.09, N = 3; Min: 84.14 / Max: 84.44)
  AMD SME Enabled: 83.82 (SE +/- 0.05, N = 3; Min: 83.73 / Max: 83.87)

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  No SME: 1133.31 (SE +/- 1.72, N = 3; Min: 1130.16 / Max: 1136.07)
  AMD SME Enabled: 1143.01 (SE +/- 0.38, N = 3; Min: 1142.46 / Max: 1143.75)

WRF

WRF, the Weather Research and Forecasting Model, is a "next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. It features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility." Learn more via the OpenBenchmarking.org test page.

WRF 4.2.2 - Input: conus 2.5km (Seconds, Fewer Is Better)
  No SME: 4077.19
  AMD SME Enabled: 4116.62
  1. (F9X) gfortran options: -O2 -ftree-vectorize -funroll-loops -ffree-form -fconvert=big-endian -frecord-marker=4 -fallow-invalid-boz -lesmf_time -lwrfio_nf -lnetcdff -lnetcdf -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1 - Input: Carbon Nanotube (Seconds, Fewer Is Better)
  No SME: 23.17 (SE +/- 0.16, N = 3; Min: 22.86 / Max: 23.35)
  AMD SME Enabled: 22.98 (SE +/- 0.05, N = 3; Min: 22.93 / Max: 23.08)
  1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
  No SME: 20.99 (SE +/- 0.08, N = 3; Min: 20.89 / Max: 21.14)
  AMD SME Enabled: 20.95 (SE +/- 0.02, N = 3; Min: 20.93 / Max: 21)

Blender 3.4 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
  No SME: 80.77 (SE +/- 0.30, N = 3; Min: 80.38 / Max: 81.36)
  AMD SME Enabled: 81.57 (SE +/- 0.45, N = 3; Min: 80.7 / Max: 82.19)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
  No SME: 101.90 (SE +/- 0.24, N = 3; Min: 101.43 / Max: 102.15)
  AMD SME Enabled: 101.55 (SE +/- 0.21, N = 3; Min: 101.15 / Max: 101.88)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better)
  No SME: 469.81 (SE +/- 0.87, N = 3; Min: 468.86 / Max: 471.54; MIN: 402.31 / MAX: 539.43)
  AMD SME Enabled: 471.53 (SE +/- 0.71, N = 3; Min: 470.47 / Max: 472.89; MIN: 415.73 / MAX: 547.47)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  No SME: 43.29 (SE +/- 0.12, N = 3; Min: 43.04 / Max: 43.44)
  AMD SME Enabled: 42.33 (SE +/- 0.21, N = 3; Min: 41.95 / Max: 42.69)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better)
  No SME: 1102.15 (SE +/- 3.23, N = 3; Min: 1098.62 / Max: 1108.61; MIN: 799.38 / MAX: 1782.76)
  AMD SME Enabled: 1127.51 (SE +/- 5.54, N = 3; Min: 1118.31 / Max: 1137.47; MIN: 802.53 / MAX: 1835.35)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
  No SME: 42.76 (SE +/- 0.23, N = 3; Min: 42.36 / Max: 43.15)
  AMD SME Enabled: 42.03 (SE +/- 0.02, N = 3; Min: 42 / Max: 42.05)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better)
  No SME: 1115.56 (SE +/- 5.83, N = 3; Min: 1106.57 / Max: 1126.49; MIN: 773.47 / MAX: 1806.09)
  AMD SME Enabled: 1134.68 (SE +/- 0.32, N = 3; Min: 1134.22 / Max: 1135.29; MIN: 842.92 / MAX: 1806.88)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better)
  No SME: 7437.73 (SE +/- 3.74, N = 3; Min: 7430.29 / Max: 7442.13)
  AMD SME Enabled: 7274.98 (SE +/- 4.95, N = 3; Min: 7265.09 / Max: 7280.15)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better)
  No SME: 6.44 (SE +/- 0.00, N = 3; Min: 6.44 / Max: 6.45; MIN: 5.05 / MAX: 61.63)
  AMD SME Enabled: 6.59 (SE +/- 0.01, N = 3; Min: 6.58 / Max: 6.6; MIN: 5 / MAX: 63.2)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  No SME: 193.95 (SE +/- 0.05, N = 3; Min: 193.85 / Max: 194)
  AMD SME Enabled: 193.83 (SE +/- 0.05, N = 3; Min: 193.73 / Max: 193.89)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  No SME: 246.95 (SE +/- 0.06, N = 3; Min: 246.84 / Max: 247.04; MIN: 205.26 / MAX: 303.51)
  AMD SME Enabled: 247.23 (SE +/- 0.05, N = 3; Min: 247.18 / Max: 247.34; MIN: 208.68 / MAX: 293.42)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  No SME: 11180.63 (SE +/- 2.65, N = 3; Min: 11175.37 / Max: 11183.81)
  AMD SME Enabled: 11184.76 (SE +/- 4.87, N = 3; Min: 11177.49 / Max: 11194.01)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  No SME: 4.28 (SE +/- 0.00, N = 3; Min: 4.28 / Max: 4.28; MIN: 3.51 / MAX: 38.77)
  AMD SME Enabled: 4.28 (SE +/- 0.00, N = 3; Min: 4.28 / Max: 4.28; MIN: 3.5 / MAX: 42.32)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better)
  No SME: 9997.68 (SE +/- 3.98, N = 3; Min: 9991.75 / Max: 10005.25)
  AMD SME Enabled: 9990.19 (SE +/- 13.80, N = 3; Min: 9963.81 / Max: 10010.38)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better)
  No SME: 4.79 (SE +/- 0.00, N = 3; Min: 4.78 / Max: 4.79; MIN: 3.96 / MAX: 30.92)
  AMD SME Enabled: 4.79 (SE +/- 0.01, N = 3; Min: 4.78 / Max: 4.8; MIN: 3.95 / MAX: 30.1)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better)
  No SME: 967.90 (SE +/- 1.57, N = 3; Min: 964.75 / Max: 969.58)
  AMD SME Enabled: 963.14 (SE +/- 1.50, N = 3; Min: 960.52 / Max: 965.71)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better)
  No SME: 49.54 (SE +/- 0.08, N = 3; Min: 49.46 / Max: 49.69; MIN: 37.27 / MAX: 225.52)
  AMD SME Enabled: 49.78 (SE +/- 0.08, N = 3; Min: 49.65 / Max: 49.92; MIN: 38.76 / MAX: 189.77)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  No SME: 19801.40 (SE +/- 18.52, N = 3; Min: 19768.21 / Max: 19832.25)
  AMD SME Enabled: 19704.84 (SE +/- 4.31, N = 3; Min: 19697.64 / Max: 19712.54)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  No SME: 9.62 (SE +/- 0.01, N = 3; Min: 9.6 / Max: 9.64; MIN: 8.26 / MAX: 57.97)
  AMD SME Enabled: 9.67 (SE +/- 0.00, N = 3; Min: 9.66 / Max: 9.67; MIN: 8.32 / MAX: 78.9)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
  No SME: 9027.72 (SE +/- 7.89, N = 3; Min: 9011.95 / Max: 9035.93)
  AMD SME Enabled: 8993.90 (SE +/- 6.93, N = 3; Min: 8983.49 / Max: 9007.04)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
  No SME: 5.31 (SE +/- 0.01, N = 3; Min: 5.3 / Max: 5.32; MIN: 4.34 / MAX: 44.16)
  AMD SME Enabled: 5.33 (SE +/- 0.00, N = 3; Min: 5.32 / Max: 5.33; MIN: 4.45 / MAX: 44.32)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
  No SME: 150792.42 (SE +/- 878.85, N = 3; Min: 149697.48 / Max: 152530.67)
  AMD SME Enabled: 148736.04 (SE +/- 1824.08, N = 4; Min: 144139.04 / Max: 152086.24)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
  No SME: 0.55 (SE +/- 0.00, N = 3; Min: 0.55 / Max: 0.55; MIN: 0.5 / MAX: 30.13)
  AMD SME Enabled: 0.55 (SE +/- 0.00, N = 4; Min: 0.54 / Max: 0.55; MIN: 0.5 / MAX: 36.47)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
  No SME: 165194.42 (SE +/- 2225.36, N = 3; Min: 162398.58 / Max: 169591.37)
  AMD SME Enabled: 167545.54 (SE +/- 702.31, N = 3; Min: 166620.86 / Max: 168923.54)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  No SME: 0.36 (SE +/- 0.00, N = 3; Min: 0.36 / Max: 0.36; MIN: 0.34 / MAX: 40.99)
  AMD SME Enabled: 0.36 (SE +/- 0.00, N = 3; Min: 0.36 / Max: 0.36; MIN: 0.34 / MAX: 47.65)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
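The result viewer offers overall and per-suite geometric means across tests. Since these tests mix units and directions, one common approach (a sketch, not necessarily how OpenBenchmarking.org implements it) is a geometric mean over per-test ratios, normalized so the two configurations are directly comparable:

```python
import math

def geomean(values: list[float]) -> float:
    """Geometric mean: exp of the arithmetic mean of the logs."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Ratios of "AMD SME Enabled" to "No SME" averages for three of the FPS
# results above (more is better, so < 1.0 means a cost from enabling SME).
ratios = [101.55 / 101.90, 42.33 / 43.29, 7274.98 / 7437.73]
print(round(geomean(ratios), 4))
```

For lower-is-better metrics the ratio would be inverted before inclusion so that all values share the same direction.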

Xsbench

XSBench is a mini-app representing a key computational kernel of the Monte Carlo neutronics application OpenMC. Learn more via the OpenBenchmarking.org test page.

Xsbench 2017-07-06 (Lookups/s, More Is Better)
  No SME: 29806415 (SE +/- 43563.46, N = 3; Min: 29741607 / Max: 29889250)
  AMD SME Enabled: 29021428 (SE +/- 367701.15, N = 15; Min: 26223648 / Max: 29832401)
  1. (CC) gcc options: -std=gnu99 -fopenmp -O3 -lm
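XSBench shows one of the larger deltas in this comparison. As a small sketch (not part of the result file), the relative cost of enabling SME can be computed from the two reported averages:

```python
def sme_overhead_pct(no_sme: float, sme_enabled: float,
                     higher_is_better: bool = True) -> float:
    """Percent performance lost (positive) or gained (negative) with SME enabled,
    relative to the No SME baseline."""
    if higher_is_better:
        return (no_sme - sme_enabled) / no_sme * 100.0
    return (sme_enabled - no_sme) / no_sme * 100.0

# XSBench lookups/s averages from above (more is better):
print(round(sme_overhead_pct(29806415, 29021428), 2))  # → 2.63, i.e. ~2.6% fewer lookups/s
```

The same helper applies to lower-is-better results by passing higher_is_better=False, e.g. the toktx Zstd 19 times above work out to roughly a 5.4% slowdown.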

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

nginx 1.23.2 - Connections: 500 (Requests Per Second, More Is Better)
  No SME: 201056.69 (SE +/- 238.08, N = 3; Min: 200648.55 / Max: 201473.17)
  AMD SME Enabled: 196386.41 (SE +/- 124.29, N = 3; Min: 196177.87 / Max: 196607.85)
  1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  No SME: 5600 (SE +/- 15.47, N = 3; Min: 5569.5 / Max: 5619)
  AMD SME Enabled: 5583 (SE +/- 40.49, N = 3; Min: 5526.5 / Max: 5661.5)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds, Fewer Is Better)
  No SME: 142.95
  AMD SME Enabled: 150.92

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better)
  No SME: 0.884 (SE +/- 0.002, N = 3; Min: 0.88 / Max: 0.89)
  AMD SME Enabled: 0.934 (SE +/- 0.003, N = 3; Min: 0.93 / Max: 0.94)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
  No SME: 1.691 (SE +/- 0.006, N = 3; Min: 1.68 / Max: 1.7)
  AMD SME Enabled: 1.744 (SE +/- 0.005, N = 3; Min: 1.74 / Max: 1.75)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  No SME: 0.863726 (SE +/- 0.005137, N = 3; Min: 0.86 / Max: 0.87; MIN: 0.75)
  AMD SME Enabled: 0.850418 (SE +/- 0.004505, N = 3; Min: 0.85 / Max: 0.86; MIN: 0.74)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  No SME: 3.89299 (SE +/- 0.06074, N = 15; Min: 3.53 / Avg: 3.89 / Max: 4.29; MIN: 2.77)
  AMD SME Enabled: 3.92313 (SE +/- 0.05423, N = 3; Min: 3.84 / Avg: 3.92 / Max: 4.03; MIN: 2.89)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  No SME: 0.522052 (SE +/- 0.001481, N = 3; Min: 0.52 / Avg: 0.52 / Max: 0.52; MIN: 0.42)
  AMD SME Enabled: 0.526628 (SE +/- 0.001428, N = 3; Min: 0.52 / Avg: 0.53 / Max: 0.53; MIN: 0.42)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  No SME: 22.68 (SE +/- 0.08, N = 3; Min: 22.57 / Avg: 22.68 / Max: 22.84; MIN: 19.97)
  AMD SME Enabled: 23.14 (SE +/- 0.20, N = 3; Min: 22.77 / Avg: 23.14 / Max: 23.45; MIN: 20.17)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  No SME: 0.916133 (SE +/- 0.009364, N = 5; Min: 0.88 / Avg: 0.92 / Max: 0.93; MIN: 0.76)
  AMD SME Enabled: 0.918482 (SE +/- 0.004429, N = 3; Min: 0.91 / Avg: 0.92 / Max: 0.92; MIN: 0.78)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  No SME: 2011.15 (SE +/- 18.08, N = 7; Min: 1956.09 / Avg: 2011.15 / Max: 2108.97; MIN: 1936.22)
  AMD SME Enabled: 2002.43 (SE +/- 18.68, N = 6; Min: 1939.57 / Avg: 2002.43 / Max: 2078.73; MIN: 1924.96)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

132 Results Shown

QuantLib
High Performance Conjugate Gradient
NAS Parallel Benchmarks:
  BT.C
  EP.C
  FT.C
  SP.C
miniBUDE:
  OpenMP - BM1:
    GFInst/s
    Billion Interactions/s
  OpenMP - BM2:
    GFInst/s
    Billion Interactions/s
Rodinia:
  OpenMP LavaMD
  OpenMP CFD Solver
NAMD
NWChem
Xcompact3d Incompact3d
OpenFOAM:
  drivaerFastback, Small Mesh Size - Mesh Time
  drivaerFastback, Small Mesh Size - Execution Time
OpenRadioss:
  Bumper Beam
  Cell Phone Drop Test
  INIVOL and Fluid Structure Interaction Drop Container
RELION
LULESH
Xmrig:
  Monero - 1M
  Wownero - 1M
DaCapo Benchmark
Renaissance:
  Finagle HTTP Requests
  In-Memory Database Shootout
Zstd Compression:
  19, Long Mode - Compression Speed
  19, Long Mode - Decompression Speed
srsRAN:
  OFDM_Test
  4G PHY_DL_Test 100 PRB MIMO 64-QAM
  4G PHY_DL_Test 100 PRB MIMO 64-QAM
  4G PHY_DL_Test 100 PRB SISO 64-QAM
  4G PHY_DL_Test 100 PRB SISO 64-QAM
  4G PHY_DL_Test 100 PRB MIMO 256-QAM
  4G PHY_DL_Test 100 PRB MIMO 256-QAM
  4G PHY_DL_Test 100 PRB SISO 256-QAM
  4G PHY_DL_Test 100 PRB SISO 256-QAM
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM
AOM AV1
Embree
Kvazaar:
  Bosphorus 4K - Very Fast
  Bosphorus 4K - Ultra Fast
SVT-AV1
x264
x265
ACES DGEMM
Intel Open Image Denoise
OpenVKL
OSPRay:
  particle_volume/pathtracer/real_time
  gravity_spheres_volume/dim_512/ao/real_time
7-Zip Compression:
  Compression Rating
  Decompression Rating
libavif avifenc:
  2
  6
Timed Gem5 Compilation
Timed Godot Game Engine Compilation
Timed Linux Kernel Compilation:
  defconfig
  allmodconfig
Timed LLVM Compilation:
  Ninja
  Unix Makefiles
OSPRay Studio
Liquid-DSP:
  256 - 256 - 57
  384 - 256 - 57
ASKAP:
  tConvolve MPI - Degridding
  tConvolve MPI - Gridding
ASTC Encoder:
  Thorough
  Exhaustive
Graph500:
  26:
    bfs median_TEPS
    bfs max_TEPS
    sssp median_TEPS
    sssp max_TEPS
GROMACS
PostgreSQL
TensorFlow
KTX-Software toktx:
  Zstd Compression 9
  Zstd Compression 19
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
WRF
GPAW
Blender:
  Classroom - CPU-Only
  Barbershop - CPU-Only
OpenVINO:
  Face Detection FP16 - CPU:
    FPS
    ms
  Person Detection FP16 - CPU:
    FPS
    ms
  Person Detection FP32 - CPU:
    FPS
    ms
  Vehicle Detection FP16 - CPU:
    FPS
    ms
  Face Detection FP16-INT8 - CPU:
    FPS
    ms
  Vehicle Detection FP16-INT8 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16-INT8 - CPU:
    FPS
    ms
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    FPS
    ms
XSBench
nginx
ONNX Runtime
Appleseed
PyHPC Benchmarks:
  CPU - Numpy - 4194304 - Equation of State
  CPU - Numpy - 4194304 - Isoneutral Mixing
oneDNN:
  IP Shapes 3D - u8s8f32 - CPU
  IP Shapes 3D - bf16bf16bf16 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU