Google Cloud c3 Sapphire Rapids

Benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2303226-NE-2303218PT51
Tests in this result file, by category:

Timed Code Compilation 2 Tests
C/C++ Compiler Tests 8 Tests
Compression Tests 2 Tests
CPU Massive 14 Tests
Creator Workloads 9 Tests
Cryptography 2 Tests
Database Test Suite 4 Tests
Fortran Tests 2 Tests
Game Development 4 Tests
HPC - High Performance Computing 12 Tests
Common Kernel Benchmarks 2 Tests
Machine Learning 5 Tests
Molecular Dynamics 4 Tests
MPI Benchmarks 2 Tests
Multi-Core 16 Tests
NVIDIA GPU Compute 3 Tests
Intel oneAPI 5 Tests
OpenMPI Tests 6 Tests
Programmer / Developer System Benchmarks 3 Tests
Python Tests 2 Tests
Renderers 2 Tests
Scientific Computing 4 Tests
Server 6 Tests
Server CPU Tests 9 Tests
Common Workstation Benchmarks 2 Tests

Result runs (both tested March 21 2023):
c3-highcpu-8 SPR: test duration 13 hours, 29 minutes
c2-standard-8 CLX: test duration 13 hours, 20 minutes
Average test duration: 13 hours, 24 minutes


Google Cloud c3 Sapphire Rapids - System Details

c3-highcpu-8 SPR:
  Processor: Intel Xeon Platinum 8481C (4 Cores / 8 Threads)
  Motherboard: Google Compute Engine c3-highcpu-8
  Chipset: Intel 440FX 82441FX PMC
  Memory: 16GB
  Disk: 322GB nvme_card-pd
  Network: Google Compute Engine Virtual
  OS: Ubuntu 22.10
  Kernel: 5.19.0-1015-gcp (x86_64)
  Vulkan: 1.3.224
  Compiler: GCC 12.2.0
  File-System: ext4
  System Layer: KVM

c2-standard-8 CLX (differs from the above as follows):
  Processor: Intel Xeon (4 Cores / 8 Threads)
  Motherboard: Google Compute Engine c2-standard-8
  Memory: 32GB
  Disk: 322GB PersistentDisk
  Network: Red Hat Virtio device

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: CPU Microcode: 0xffffffff
Python Details: Python 3.10.7
Security Details:
  c3-highcpu-8 SPR: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
  c2-standard-8 CLX: itlb_multihit: Not affected + l1tf: Not affected + mds: Mitigation of Clear buffers; SMT Host state unknown + meltdown: Not affected + mmio_stale_data: Vulnerable: Clear buffers attempted, no microcode; SMT Host state unknown + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Mitigation of Clear buffers; SMT Host state unknown

[Comparison chart: c3-highcpu-8 SPR vs. c2-standard-8 CLX, per-test percentage deltas. The largest c3 advantages are in oneDNN bf16 workloads (Deconvolution Batch shapes_1d +3498%, IP Shapes 1D +1201.9%, Convolution Batch Shapes Auto +718.5%, Matrix Multiply Batch Shapes Transformer +667%) and OpenSSL (SHA256 +225%, AES-256-GCM +183.3%, AES-128-GCM +147.9%), with gains of roughly 20-80% across PostgreSQL, nginx, CockroachDB, Memcached, Embree, OSPRay Studio, OpenRadioss, GROMACS, and most other tests, tapering to single digits for OpenCV Graph API, TensorFlow ResNet-50, SPECFEM3D, and John The Ripper.]
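The percentage figures in the comparison above are simple ratios of the two machines' scores. A minimal sketch of the calculation, using the LeelaChessZero results from this file (for fewer-is-better metrics such as Seconds, the arguments would be swapped):

```python
def pct_faster(new: float, old: float) -> float:
    """Relative advantage of `new` over `old`, expressed as a percentage."""
    return (new / old - 1.0) * 100.0

# LeelaChessZero nodes/sec (higher is better): c3-highcpu-8 vs. c2-standard-8
print(round(pct_faster(1272, 909), 1))   # BLAS backend -> 39.9
print(round(pct_faster(1221, 902), 1))   # Eigen backend -> 35.4
```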

[Overview results table for all tests in this comparison: LeelaChessZero, miniBUDE, NAMD, nekRS, Xcompact3d Incompact3d, OpenFOAM, OpenRadioss, SPECFEM3D, Zstd Compression, John The Ripper, Embree, uvg266, Intel Open Image Denoise, OpenVKL, 7-Zip Compression, Timed FFmpeg Compilation, Timed Linux Kernel Compilation, oneDNN, OSPRay Studio, OpenSSL, CockroachDB, Memcached, GROMACS, MariaDB (mysqlslap), PostgreSQL (pgbench), TensorFlow, Neural Magic DeepSparse, Google Draco, Blender, nginx, BRL-CAD, and OpenCV. The individual results are charted below.]

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, more is better)
c3-highcpu-8 SPR: 1272 (SE +/- 16.59, N = 3; Min: 1239 / Avg: 1271.67 / Max: 1293)
c2-standard-8 CLX: 909 (SE +/- 7.36, N = 3; Min: 894 / Avg: 908.67 / Max: 917)
1. (CXX) g++ options: -flto -pthread

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, more is better)
c3-highcpu-8 SPR: 1221 (SE +/- 7.69, N = 3; Min: 1206 / Avg: 1220.67 / Max: 1232)
c2-standard-8 CLX: 902 (SE +/- 9.00, N = 3; Min: 884 / Avg: 902 / Max: 911)
1. (CXX) g++ options: -flto -pthread
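Each result in this article is reported with a standard error (SE) over N = 3 runs plus Min/Avg/Max. A short sketch of how those summary statistics are derived; the middle run value below is inferred from the published Min/Avg/Max and is an assumption for illustration:

```python
from math import sqrt
from statistics import mean, stdev

def summarize(runs):
    """Min / Avg / Max and standard error of the mean, as shown per graph."""
    se = stdev(runs) / sqrt(len(runs))  # sample std dev over sqrt(N)
    return min(runs), round(mean(runs), 2), max(runs), round(se, 2)

# Three LCZero BLAS runs on c3-highcpu-8 SPR (middle value inferred):
print(summarize([1239, 1283, 1293]))  # -> (1239, 1271.67, 1293, 16.59)
```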

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (GFInst/s, more is better)
c3-highcpu-8 SPR: 188.65 (SE +/- 0.03, N = 3; Min: 188.6 / Avg: 188.65 / Max: 188.68)
c2-standard-8 CLX: 152.80 (SE +/- 0.02, N = 3; Min: 152.76 / Avg: 152.8 / Max: 152.82)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, more is better)
c3-highcpu-8 SPR: 7.546 (SE +/- 0.001, N = 3; Min: 7.54 / Avg: 7.55 / Max: 7.55)
c2-standard-8 CLX: 6.112 (SE +/- 0.001, N = 3; Min: 6.11 / Avg: 6.11 / Max: 6.11)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
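The two miniBUDE figures above are the same measurement expressed in different units: dividing GFInst/s by billion interactions/s yields a fixed instruction count per interaction, empirically 25 for both machines in this data set:

```python
# GFInst/s divided by billion interactions/s = FP instructions per interaction
print(round(188.65 / 7.546, 2))  # c3-highcpu-8 SPR -> 25.0
print(round(152.80 / 6.112, 2))  # c2-standard-8 CLX -> 25.0
```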

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better)
c3-highcpu-8 SPR: 3.35779 (SE +/- 0.00254, N = 3; Min: 3.35 / Avg: 3.36 / Max: 3.36)
c2-standard-8 CLX: 4.44534 (SE +/- 0.00771, N = 3; Min: 4.44 / Avg: 4.45 / Max: 4.46)
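NAMD reports days/ns (lower is better), while many molecular dynamics tools quote the reciprocal, ns/day. Converting the results above:

```python
def ns_per_day(days_per_ns: float) -> float:
    """Convert NAMD's days/ns metric into the more common ns/day."""
    return 1.0 / days_per_ns

print(round(ns_per_day(3.35779), 3))  # c3-highcpu-8 SPR -> 0.298 ns/day
print(round(ns_per_day(4.44534), 3))  # c2-standard-8 CLX -> 0.225 ns/day
```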

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large-core-count HPC servers and may otherwise be very time consuming. Learn more via the OpenBenchmarking.org test page.

nekRS 22.0 - Input: TurboPipe Periodic (FLOP/s, more is better)
c3-highcpu-8 SPR: 30667900000 (SE +/- 92013205.57, N = 3; Min: 30503900000 / Avg: 30667900000 / Max: 30822200000)
c2-standard-8 CLX: 25100766667 (SE +/- 56849518.71, N = 3; Min: 25024400000 / Avg: 25100766666.67 / Max: 25211900000)
1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, fewer is better)
c3-highcpu-8 SPR: 32.39 (SE +/- 0.02, N = 3; Min: 32.37 / Avg: 32.39 / Max: 32.43)
c2-standard-8 CLX: 37.76 (SE +/- 0.02, N = 3; Min: 37.72 / Avg: 37.76 / Max: 37.8)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, fewer is better)
c3-highcpu-8 SPR: 62.04
c2-standard-8 CLX: 85.52
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lgenericPatchFields -lOpenFOAM -ldl -lm -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lmeshTools -lsampling -ldynamicMesh

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better)
c3-highcpu-8 SPR: 422.68
c2-standard-8 CLX: 560.61
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lgenericPatchFields -lOpenFOAM -ldl -lm -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lmeshTools -lsampling -ldynamicMesh

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds, fewer is better)
c3-highcpu-8 SPR: 303.38 (SE +/- 0.51, N = 3; Min: 302.36 / Avg: 303.38 / Max: 303.9)
c2-standard-8 CLX: 390.61 (SE +/- 0.90, N = 3; Min: 389.39 / Avg: 390.61 / Max: 392.37)

OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test (Seconds, fewer is better)
c3-highcpu-8 SPR: 219.73 (SE +/- 0.38, N = 3; Min: 219.08 / Avg: 219.73 / Max: 220.41)
c2-standard-8 CLX: 291.49 (SE +/- 0.39, N = 3; Min: 290.79 / Avg: 291.49 / Max: 292.12)

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, fewer is better)
c3-highcpu-8 SPR: 595.14 (SE +/- 0.30, N = 3; Min: 594.67 / Avg: 595.14 / Max: 595.69)
c2-standard-8 CLX: 724.80 (SE +/- 1.51, N = 3; Min: 722.57 / Avg: 724.8 / Max: 727.69)

OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, fewer is better)
c3-highcpu-8 SPR: 367.00 (SE +/- 0.20, N = 3; Min: 366.62 / Avg: 367 / Max: 367.31)
c2-standard-8 CLX: 523.45 (SE +/- 0.45, N = 3; Min: 522.65 / Avg: 523.45 / Max: 524.2)

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic, or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution of SPECFEM3D, using a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0 - Model: Mount St. Helens (Seconds, fewer is better)
c3-highcpu-8 SPR: 139.52 (SE +/- 0.10, N = 3; Min: 139.39 / Avg: 139.52 / Max: 139.71)
c2-standard-8 CLX: 145.38 (SE +/- 0.56, N = 3; Min: 144.6 / Avg: 145.38 / Max: 146.46)
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SPECFEM3D 4.0 - Model: Layered Halfspace (Seconds, fewer is better)
c3-highcpu-8 SPR: 372.09 (SE +/- 0.64, N = 3; Min: 371.1 / Avg: 372.09 / Max: 373.28)
c2-standard-8 CLX: 374.76 (SE +/- 2.51, N = 3; Min: 370.91 / Avg: 374.76 / Max: 379.48)
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SPECFEM3D 4.0 - Model: Tomographic Model (Seconds, fewer is better)
c3-highcpu-8 SPR: 143.94 (SE +/- 1.65, N = 3; Min: 142.24 / Avg: 143.94 / Max: 147.23)
c2-standard-8 CLX: 150.92 (SE +/- 1.30, N = 3; Min: 149.47 / Avg: 150.92 / Max: 153.51)
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SPECFEM3D 4.0 - Model: Homogeneous Halfspace (Seconds, fewer is better)
c3-highcpu-8 SPR: 179.55 (SE +/- 0.09, N = 3; Min: 179.4 / Avg: 179.55 / Max: 179.72)
c2-standard-8 CLX: 190.79 (SE +/- 1.87, N = 3; Min: 187.06 / Avg: 190.79 / Max: 192.92)
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SPECFEM3D 4.0 - Model: Water-layered Halfspace (Seconds, fewer is better)
c3-highcpu-8 SPR: 321.63 (SE +/- 0.31, N = 3; Min: 321.11 / Avg: 321.63 / Max: 322.18)
c2-standard-8 CLX: 347.81 (SE +/- 0.85, N = 3; Min: 346.14 / Avg: 347.81 / Max: 348.92)
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
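A compression-speed figure like the MB/s values below is just input bytes divided by wall time. A rough sketch of the methodology, where zlib stands in for zstd since zstd is not in the Python standard library (absolute numbers are therefore not comparable to the graphs):

```python
import time
import zlib

# ~8.6 MiB of repetitive sample data in place of silesia.tar
data = b"the quick brown fox jumps over the lazy dog " * 200_000

start = time.perf_counter()
compressed = zlib.compress(data, level=6)
elapsed = time.perf_counter() - start

mb_per_s = len(data) / (1024 * 1024) / elapsed
print(f"{len(data)} -> {len(compressed)} bytes at {mb_per_s:.1f} MB/s")
```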

Zstd Compression 1.5.4 - Compression Level: 19 - Compression Speed (MB/s, more is better)
c3-highcpu-8 SPR: 10.30 (SE +/- 0.12, N = 3; Min: 10.1 / Avg: 10.27 / Max: 10.5)
c2-standard-8 CLX: 9.24 (SE +/- 0.08, N = 3; Min: 9.13 / Avg: 9.24 / Max: 9.39)
1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 19 - Decompression Speed (MB/s, more is better)
c3-highcpu-8 SPR: 905.2 (SE +/- 1.50, N = 3; Min: 902.6 / Avg: 905.2 / Max: 907.8)
c2-standard-8 CLX: 701.1 (SE +/- 1.45, N = 3; Min: 698.8 / Avg: 701.13 / Max: 703.8)
1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better)
c3-highcpu-8 SPR: 6.50 (SE +/- 0.00, N = 3; Min: 6.5 / Avg: 6.5 / Max: 6.5)
c2-standard-8 CLX: 5.86 (SE +/- 0.00, N = 3; Min: 5.86 / Avg: 5.86 / Max: 5.87)
1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better)
c3-highcpu-8 SPR: 907.2 (SE +/- 1.48, N = 3; Min: 904.5 / Avg: 907.2 / Max: 909.6)
c2-standard-8 CLX: 713.8 (SE +/- 2.02, N = 3; Min: 710.7 / Avg: 713.83 / Max: 717.6)
1. (CC) gcc options: -O3 -pthread -lz -llzma

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.
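John The Ripper's "Real C/S" figures count candidate hashes tested per second. A toy, single-threaded sketch of such a measurement using hashlib's MD5 (not comparable to JtR's SIMD-optimized code):

```python
import hashlib
import time

# Hypothetical candidate passwords, purely for illustration
candidates = [f"password{i}".encode() for i in range(100_000)]

start = time.perf_counter()
for pw in candidates:
    hashlib.md5(pw).digest()
elapsed = time.perf_counter() - start

print(f"{len(candidates) / elapsed:,.0f} MD5 hashes/sec")
```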

John The Ripper 2023.03.14 - Test: bcrypt (Real C/S, more is better)
c3-highcpu-8 SPR: 6932 (SE +/- 0.67, N = 3; Min: 6931 / Avg: 6931.67 / Max: 6933)
c2-standard-8 CLX: 6687 (SE +/- 0.67, N = 3; Min: 6686 / Avg: 6686.67 / Max: 6688)
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt

John The Ripper 2023.03.14 - Test: WPA PSK (Real C/S, more is better)
c3-highcpu-8 SPR: 28818 (SE +/- 27.15, N = 3; Min: 28764 / Avg: 28818 / Max: 28850)
c2-standard-8 CLX: 31278 (SE +/- 11.37, N = 3; Min: 31262 / Avg: 31278 / Max: 31300)
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt

John The Ripper 2023.03.14 - Test: Blowfish (Real C/S, more is better)
c3-highcpu-8 SPR: 6930 (SE +/- 2.08, N = 3; Min: 6926 / Avg: 6930 / Max: 6933)
c2-standard-8 CLX: 6684 (SE +/- 2.73, N = 3; Min: 6679 / Avg: 6684.33 / Max: 6688)
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt

John The Ripper 2023.03.14 - Test: HMAC-SHA512 (Real C/S, more is better)
c3-highcpu-8 SPR: 37509000 (SE +/- 21071.31, N = 3; Min: 37485000 / Avg: 37509000 / Max: 37551000)
c2-standard-8 CLX: 40197000 (SE +/- 32331.62, N = 3; Min: 40157000 / Avg: 40197000 / Max: 40261000)
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: MD5c3-highcpu-8 SPRc2-standard-8 CLX160K320K480K640K800KSE +/- 1472.13, N = 3SE +/- 162.53, N = 37657136835461. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt
OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: MD5c3-highcpu-8 SPRc2-standard-8 CLX130K260K390K520K650KMin: 763648 / Avg: 765713 / Max: 768563Min: 683264 / Avg: 683546.33 / Max: 6838271. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt
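The "SE +/-" figures throughout this file are standard errors of the mean over the N runs. Assuming SE here is the sample standard deviation divided by the square root of N (an assumption, checked numerically below against the SPR MD5 result), the unreported middle run can even be recovered from the Min/Avg/Max triple:

```python
import math

# SPR MD5 (Real C/S) figures reported above: N runs, Min, Max, Avg
n, mn, mx, avg = 3, 763648, 768563, 765713.0

mid = avg * n - mn - mx                 # recover the unreported middle run
samples = [mn, mid, mx]
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
se = math.sqrt(var / n)                                # standard error of the mean
print(round(mid), round(se, 2))        # SE matches the reported 1472.13
```

Since the recomputed SE agrees with the reported value to two decimals, the SE convention above appears to hold for these results.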

Embree

Embree 4.0.1 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
  c3-highcpu-8 SPR: 5.8475 (SE +/- 0.0120, N = 3; Min: 5.83 / Max: 5.87; MIN: 5.81 / MAX: 5.92)
  c2-standard-8 CLX: 3.9340 (SE +/- 0.0030, N = 3; Min: 3.93 / Max: 3.94; MIN: 3.91 / MAX: 3.99)

Embree 4.0.1 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
  c3-highcpu-8 SPR: 7.3642 (SE +/- 0.0079, N = 3; Min: 7.35 / Max: 7.37; MIN: 7.33 / MAX: 7.44)
  c2-standard-8 CLX: 5.1542 (SE +/- 0.0119, N = 3; Min: 5.14 / Max: 5.18; MIN: 5.12 / MAX: 5.22)

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar, developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
  c3-highcpu-8 SPR: 6.99 (SE +/- 0.03, N = 3; Min: 6.94 / Max: 7.02)
  c2-standard-8 CLX: 5.69 (SE +/- 0.02, N = 3; Min: 5.65 / Max: 5.71)

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, More Is Better)
  c3-highcpu-8 SPR: 7.48 (SE +/- 0.00, N = 3; Min: 7.48 / Max: 7.48)
  c2-standard-8 CLX: 6.03 (SE +/- 0.00, N = 3; Min: 6.03 / Max: 6.03)

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
  c3-highcpu-8 SPR: 9.12 (SE +/- 0.01, N = 3; Min: 9.11 / Max: 9.13)
  c2-standard-8 CLX: 7.45 (SE +/- 0.01, N = 3; Min: 7.44 / Max: 7.46)

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better)
  c3-highcpu-8 SPR: 32.39 (SE +/- 0.25, N = 3; Min: 31.89 / Max: 32.65)
  c2-standard-8 CLX: 26.30 (SE +/- 0.16, N = 3; Min: 25.99 / Max: 26.51)

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Super Fast (Frames Per Second, More Is Better)
  c3-highcpu-8 SPR: 34.50 (SE +/- 0.03, N = 3; Min: 34.44 / Max: 34.54)
  c2-standard-8 CLX: 27.72 (SE +/- 0.01, N = 3; Min: 27.7 / Max: 27.73)

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
  c3-highcpu-8 SPR: 42.24 (SE +/- 0.01, N = 3; Min: 42.22 / Max: 42.25)
  c2-standard-8 CLX: 34.47 (SE +/- 0.01, N = 3; Min: 34.46 / Max: 34.48)
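Per the "Show Overall Geometric Mean" option this result file supports, cross-test summaries are typically built as a geometric mean of per-test ratios rather than an arithmetic average. A minimal sketch over the six uvg266 results above (values transcribed from this section):

```python
import math

# (SPR, CLX) FPS pairs for the six uvg266 configurations above
pairs = [(6.99, 5.69), (7.48, 6.03), (9.12, 7.45),
         (32.39, 26.30), (34.50, 27.72), (42.24, 34.47)]

ratios = [spr / clx for spr, clx in pairs]
# geometric mean = exp(mean of log-ratios); robust to mixing 4K and 1080p scales
geomean = math.exp(sum(map(math.log, ratios)) / len(ratios))
print(f"SPR is ~{(geomean - 1) * 100:.0f}% faster on uvg266 overall (geomean)")
```

The geometric mean lands around 1.23, i.e. a fairly uniform ~23% uvg266 advantage for the c3 instance across resolutions and presets.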

VVenC

VVenC 1.7 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better)
  c3-highcpu-8 SPR: 1.556 (SE +/- 0.005, N = 3)

VVenC 1.7 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better)
  c3-highcpu-8 SPR: 3.524 (SE +/- 0.000, N = 3)

VVenC 1.7 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, More Is Better)
  c3-highcpu-8 SPR: 5.305 (SE +/- 0.018, N = 3)

VVenC 1.7 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, More Is Better)
  c3-highcpu-8 SPR: 12.92 (SE +/- 0.01, N = 3)

1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RT.hdr_alb_nrm.3840x2160 (Images / Sec, More Is Better)
  c3-highcpu-8 SPR: 0.24 (SE +/- 0.00, N = 3; Min: 0.24 / Max: 0.24)
  c2-standard-8 CLX: 0.22 (SE +/- 0.00, N = 3; Min: 0.22 / Max: 0.22)

Intel Open Image Denoise 1.4.0 - Run: RTLightmap.hdr.4096x4096 (Images / Sec, More Is Better)
  c3-highcpu-8 SPR: 0.12 (SE +/- 0.00, N = 3; Min: 0.11 / Max: 0.12)
  c2-standard-8 CLX: 0.11 (SE +/- 0.00, N = 3; Min: 0.11 / Max: 0.11)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better)
  c3-highcpu-8 SPR: 98 (SE +/- 0.33, N = 3; Min: 98 / Max: 99; MIN: 11 / MAX: 1579)
  c2-standard-8 CLX: 70 (SE +/- 0.00, N = 3; Min: 70 / Max: 70; MIN: 8 / MAX: 1119)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better)
  c3-highcpu-8 SPR: 35306 (SE +/- 246.17, N = 15; Min: 33196 / Max: 36117)
  c2-standard-8 CLX: 30989 (SE +/- 248.15, N = 3; Min: 30524 / Max: 31372)

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
  c3-highcpu-8 SPR: 20468 (SE +/- 183.58, N = 15; Min: 18773 / Max: 20818)
  c2-standard-8 CLX: 22852 (SE +/- 52.92, N = 3; Min: 22747 / Max: 22914)

1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 6.0 - Time To Compile (Seconds, Fewer Is Better)
  c3-highcpu-8 SPR: 120.44 (SE +/- 0.10, N = 3; Min: 120.26 / Max: 120.59)
  c2-standard-8 CLX: 139.04 (SE +/- 0.03, N = 3; Min: 138.98 / Max: 139.08)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig for building all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, Fewer Is Better)
  c3-highcpu-8 SPR: 244.80 (SE +/- 0.64, N = 3; Min: 243.96 / Max: 246.05)
  c2-standard-8 CLX: 289.66 (SE +/- 0.85, N = 3; Min: 288.72 / Max: 291.36)
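Since these timed compilations are "Fewer Is Better" results, the relative advantage is best stated as the fraction of build time saved. A quick computation from the averages above:

```python
# Average build times in seconds transcribed from the two timed compilation tests above
times = {
    "FFmpeg 6.0": (120.44, 139.04),          # (c3-highcpu-8 SPR, c2-standard-8 CLX)
    "Linux 6.1 defconfig": (244.80, 289.66),
}

# Percent of CLX's build time that SPR shaves off
saved = {name: (clx - spr) / clx * 100 for name, (spr, clx) in times.items()}
for name, pct in saved.items():
    print(f"{name}: SPR finishes {pct:.1f}% sooner")
```

Both builds come out in the same 13-16% range, a smaller gap than many of the AVX-512/AMX-assisted workloads elsewhere in this file, which fits compilation being a branchy, scalar-heavy task.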

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  c3-highcpu-8 SPR: 1.50004 (SE +/- 0.00340, N = 3; Min: 1.49 / Max: 1.5)
  c2-standard-8 CLX: 19.52870 (SE +/- 0.01518, N = 3; Min: 19.51 / Max: 19.56; MIN: 18.98)

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  c3-highcpu-8 SPR: 5.34218 (SE +/- 0.00535, N = 3; Min: 5.33 / Max: 5.35; MIN: 4.94)
  c2-standard-8 CLX: 8.02416 (SE +/- 0.01190, N = 3; Min: 8.01 / Max: 8.05; MIN: 7.86)

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  c3-highcpu-8 SPR: 4.17707 (SE +/- 0.00870, N = 3; Min: 4.16 / Max: 4.19)
  c2-standard-8 CLX: 34.18930 (SE +/- 0.00489, N = 3; Min: 34.18 / Max: 34.2; MIN: 33.77)

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  c3-highcpu-8 SPR: 1.47145 (SE +/- 0.00472, N = 3; Min: 1.46 / Max: 1.48)
  c2-standard-8 CLX: 52.94210 (SE +/- 0.00209, N = 3; Min: 52.94 / Max: 52.95; MIN: 52.7)

oneDNN 3.0 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  c3-highcpu-8 SPR: 3.54944 (SE +/- 0.01072, N = 3; Min: 3.53 / Max: 3.56)
  c2-standard-8 CLX: 35.91640 (SE +/- 0.01009, N = 3; Min: 35.9 / Max: 35.93; MIN: 35.71)

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  c3-highcpu-8 SPR: 4660.70 (SE +/- 0.86, N = 3; Min: 4659.61 / Max: 4662.39; MIN: 4648.25)
  c2-standard-8 CLX: 5767.70 (SE +/- 8.38, N = 3; Min: 5756.22 / Max: 5784.01; MIN: 5732.12)

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  c3-highcpu-8 SPR: 2337.31 (SE +/- 1.76, N = 3; Min: 2334.3 / Max: 2340.41; MIN: 2326.77)
  c2-standard-8 CLX: 2998.03 (SE +/- 0.83, N = 3; Min: 2996.49 / Max: 2999.33; MIN: 2977.98)

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  c3-highcpu-8 SPR: 0.968986 (SE +/- 0.013114, N = 3; Min: 0.95 / Max: 1)
  c2-standard-8 CLX: 7.432600 (SE +/- 0.007542, N = 3; Min: 7.42 / Max: 7.44; MIN: 7.25)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
  c3-highcpu-8 SPR: 20952 (SE +/- 4.91, N = 3; Min: 20942 / Max: 20958)
  c2-standard-8 CLX: 30911 (SE +/- 32.33, N = 3; Min: 30879 / Max: 30976)

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
  c3-highcpu-8 SPR: 25819 (SE +/- 364.36, N = 3; Min: 25090 / Max: 26191)
  c2-standard-8 CLX: 37892 (SE +/- 39.94, N = 3; Min: 37841 / Max: 37971)

1. (CXX) g++ options: -O3 -lm -ldl

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.1 - Algorithm: SHA256 (byte/s, More Is Better)
  c3-highcpu-8 SPR: 4283873987 (SE +/- 2722265.85, N = 3; Min: 4280427040 / Max: 4289247270)
  c2-standard-8 CLX: 1318193530 (SE +/- 28303.98, N = 3; Min: 1318139990 / Max: 1318236220)

OpenSSL 3.1 - Algorithm: SHA512 (byte/s, More Is Better)
  c3-highcpu-8 SPR: 1568572920 (SE +/- 1152941.98, N = 3; Min: 1566293190 / Max: 1570012700)
  c2-standard-8 CLX: 1465815227 (SE +/- 2965153.70, N = 3; Min: 1461883580 / Max: 1471625920)

OpenSSL 3.1 - Algorithm: RSA4096 (sign/s, More Is Better)
  c3-highcpu-8 SPR: 2062.7 (SE +/- 1.17, N = 3; Min: 2061.4 / Max: 2065)
  c2-standard-8 CLX: 1156.6 (SE +/- 2.05, N = 3; Min: 1152.5 / Max: 1159)

OpenSSL 3.1 - Algorithm: RSA4096 (verify/s, More Is Better)
  c3-highcpu-8 SPR: 67857.7 (SE +/- 8.53, N = 3; Min: 67845.9 / Max: 67874.3)
  c2-standard-8 CLX: 76402.3 (SE +/- 47.93, N = 3; Min: 76318.5 / Max: 76484.5)

OpenSSL 3.1 - Algorithm: ChaCha20 (byte/s, More Is Better)
  c3-highcpu-8 SPR: 22091557637 (SE +/- 35781789.46, N = 3; Min: 22025612490 / Max: 22148601720)
  c2-standard-8 CLX: 21346811813 (SE +/- 1497684.67, N = 3; Min: 21343870980 / Max: 21348774980)

OpenSSL 3.1 - Algorithm: AES-128-GCM (byte/s, More Is Better)
  c3-highcpu-8 SPR: 57594077823 (SE +/- 33921495.47, N = 3; Min: 57542828310 / Max: 57658200880)
  c2-standard-8 CLX: 23237312603 (SE +/- 6165180.91, N = 3; Min: 23228925540 / Max: 23249333730)

OpenSSL 3.1 - Algorithm: AES-256-GCM (byte/s, More Is Better)
  c3-highcpu-8 SPR: 48008361573 (SE +/- 48074910.93, N = 3; Min: 47913384620 / Max: 48068816350)
  c2-standard-8 CLX: 16945000630 (SE +/- 3855504.36, N = 3; Min: 16937586410 / Max: 16950542610)

OpenSSL 3.1 - Algorithm: ChaCha20-Poly1305 (byte/s, More Is Better)
  c3-highcpu-8 SPR: 15970781140 (SE +/- 21990039.72, N = 3; Min: 15927249310 / Max: 15997971110)
  c2-standard-8 CLX: 10919151843 (SE +/- 1592104.52, N = 3; Min: 10916123720 / Max: 10921518690)

1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
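The SPR advantage in these OpenSSL throughput tests varies widely by algorithm; a quick ratio computation over the values transcribed above makes the spread explicit. The large SHA256 gap is consistent with the SHA extensions present on Sapphire Rapids but absent on Cascade Lake (an inference from the numbers, not something the result file states):

```python
# (SPR, CLX) throughput in byte/s, transcribed from the results above
results = {
    "SHA256": (4283873987, 1318193530),
    "SHA512": (1568572920, 1465815227),
    "ChaCha20": (22091557637, 21346811813),
    "AES-128-GCM": (57594077823, 23237312603),
    "AES-256-GCM": (48008361573, 16945000630),
    "ChaCha20-Poly1305": (15970781140, 10919151843),
}

ratios = {alg: spr / clx for alg, (spr, clx) in results.items()}
for alg, r in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(f"{alg}: {r:.2f}x")
```

The spread runs from essentially parity on ChaCha20 to over 3x on SHA256, so single-algorithm crypto numbers are a poor proxy for the generation-over-generation uplift.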

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: MoVR - Concurrency: 128 (ops/s, More Is Better)
  c3-highcpu-8 SPR: 458.8 (SE +/- 5.62, N = 15; Min: 422 / Max: 491.4)
  c2-standard-8 CLX: 534.9 (SE +/- 1.90, N = 3; Min: 532 / Max: 538.5)

CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 128 (ops/s, More Is Better)
  c3-highcpu-8 SPR: 19321.6 (SE +/- 40.86, N = 3; Min: 19239.9 / Max: 19364.1)
  c2-standard-8 CLX: 13684.5 (SE +/- 68.65, N = 3; Min: 13565.7 / Max: 13803.5)

CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 128 (ops/s, More Is Better)
  c3-highcpu-8 SPR: 24960.1 (SE +/- 127.78, N = 3; Min: 24711.9 / Max: 25136.9)
  c2-standard-8 CLX: 16184.2 (SE +/- 79.71, N = 3; Min: 16028.4 / Max: 16291.3)

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.18 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
  c3-highcpu-8 SPR: 1044947.13 (SE +/- 3280.78, N = 3; Min: 1038407.67 / Max: 1048682.86)
  c2-standard-8 CLX: 715723.53 (SE +/- 2213.84, N = 3; Min: 711311.59 / Max: 718252.63)

Memcached 1.6.18 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better)
  c3-highcpu-8 SPR: 1030937.28 (SE +/- 11157.73, N = 3; Min: 1017311.17 / Max: 1053054.96)
  c2-standard-8 CLX: 702291.93 (SE +/- 3476.80, N = 3; Min: 695357.89 / Max: 706210.34)

1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  c3-highcpu-8 SPR: 0.777 (SE +/- 0.001, N = 3; Min: 0.78 / Max: 0.78)
  c2-standard-8 CLX: 0.579 (SE +/- 0.001, N = 3; Min: 0.58 / Max: 0.58)

1. (CXX) g++ options: -O3

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.

MariaDB 11.0.1 - Clients: 2048 (Queries Per Second, More Is Better)
  c3-highcpu-8 SPR: 332 (SE +/- 3.01, N = 3; Min: 328.3 / Max: 338.07)
  c2-standard-8 CLX: 248 (SE +/- 3.50, N = 3; Min: 241.08 / Max: 251.7)

MariaDB 11.0.1 - Clients: 4096 (Queries Per Second, More Is Better)
  c3-highcpu-8 SPR: 317 (SE +/- 2.62, N = 3; Min: 311.72 / Max: 320.62)
  c2-standard-8 CLX: 237 (SE +/- 2.55, N = 3; Min: 233.1 / Max: 241.72)

1. (CXX) g++ options: -pie -fPIC -fstack-protector -O3 -lnuma -lcrypt -lz -lm -lssl -lcrypto -lpthread -ldl

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only (TPS, More Is Better)
  c3-highcpu-8 SPR: 311942 (SE +/- 3369.27, N = 3; Min: 305738.61 / Max: 317322.77)
  c2-standard-8 CLX: 173317 (SE +/- 822.93, N = 3; Min: 171759.43 / Max: 174556.15)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  c3-highcpu-8 SPR: 2.565 (SE +/- 0.028, N = 3; Min: 2.52 / Max: 2.62)
  c2-standard-8 CLX: 4.616 (SE +/- 0.022, N = 3; Min: 4.58 / Max: 4.66)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only (TPS, More Is Better)
  c3-highcpu-8 SPR: 293725 (SE +/- 2414.33, N = 3; Min: 289539.14 / Max: 297902.59)
  c2-standard-8 CLX: 169643 (SE +/- 1014.66, N = 3; Min: 167660.31 / Max: 171009.36)

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  c3-highcpu-8 SPR: 3.405 (SE +/- 0.028, N = 3; Min: 3.36 / Max: 3.45)
  c2-standard-8 CLX: 5.895 (SE +/- 0.035, N = 3; Min: 5.85 / Max: 5.96)

1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
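pgbench's reported TPS and average latency are two views of the same measurement: with C concurrent clients kept saturated, average latency is approximately C / TPS. The four read-only runs above are consistent with that to within rounding, as a quick check shows (values transcribed from this section):

```python
# (clients, TPS, reported average latency in ms) for the read-only pgbench runs above
runs = [
    (800, 311942, 2.565),   # c3-highcpu-8 SPR
    (800, 173317, 4.616),   # c2-standard-8 CLX
    (1000, 293725, 3.405),  # c3-highcpu-8 SPR
    (1000, 169643, 5.895),  # c2-standard-8 CLX
]

for clients, tps, reported_ms in runs:
    implied_ms = clients / tps * 1000  # time each client waits per transaction
    assert abs(implied_ms - reported_ms) < 0.01, (implied_ms, reported_ms)
print("latency == clients / TPS holds for all four runs")
```

This also means the latency graphs carry no independent information here; the TPS results alone determine them.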

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better)
  c3-highcpu-8 SPR: 14.20 (SE +/- 0.04, N = 3; Min: 14.16 / Max: 14.27)
  c2-standard-8 CLX: 13.33 (SE +/- 0.02, N = 3; Min: 13.3 / Max: 13.35)

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, More Is Better)
  c3-highcpu-8 SPR: 14.93 (SE +/- 0.01, N = 3; Min: 14.92 / Max: 14.94)
  c2-standard-8 CLX: 14.11 (SE +/- 0.01, N = 3; Min: 14.09 / Max: 14.14)

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, More Is Better)
  c3-highcpu-8 SPR: 15.69 (SE +/- 0.01, N = 3; Min: 15.67 / Max: 15.71)
  c2-standard-8 CLX: 14.79 (SE +/- 0.18, N = 3; Min: 14.44 / Max: 15)

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 - Scenario: Asynchronous Multi-Stream

Model: NLP Document Classification, oBERT base uncased on IMDB (items/sec, more is better):
  c3-highcpu-8 SPR:  3.7873 (SE +/- 0.0130, N = 3; Min: 3.77 / Avg: 3.79 / Max: 3.81)
  c2-standard-8 CLX: 2.9308 (SE +/- 0.0001, N = 3; Min: 2.93 / Avg: 2.93 / Max: 2.93)

Model: NLP Document Classification, oBERT base uncased on IMDB (ms/batch, fewer is better):
  c3-highcpu-8 SPR:  528.02 (SE +/- 1.78, N = 3; Min: 524.47 / Avg: 528.02 / Max: 530.10)
  c2-standard-8 CLX: 682.37 (SE +/- 0.03, N = 3; Min: 682.32 / Avg: 682.37 / Max: 682.42)

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased (items/sec, more is better):
  c3-highcpu-8 SPR:  51.89 (SE +/- 0.05, N = 3; Min: 51.83 / Avg: 51.89 / Max: 51.99)
  c2-standard-8 CLX: 62.51 (SE +/- 0.07, N = 3; Min: 62.42 / Avg: 62.51 / Max: 62.63)

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased (ms/batch, fewer is better):
  c3-highcpu-8 SPR:  38.51 (SE +/- 0.04, N = 3; Min: 38.44 / Avg: 38.51 / Max: 38.56)
  c2-standard-8 CLX: 31.96 (SE +/- 0.03, N = 3; Min: 31.89 / Avg: 31.96 / Max: 32.00)

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 (items/sec, more is better):
  c3-highcpu-8 SPR:  19.17 (SE +/- 0.03, N = 3; Min: 19.12 / Avg: 19.17 / Max: 19.21)
  c2-standard-8 CLX: 15.92 (SE +/- 0.05, N = 3; Min: 15.81 / Avg: 15.92 / Max: 15.98)

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 (ms/batch, fewer is better):
  c3-highcpu-8 SPR:  104.31 (SE +/- 0.14, N = 3; Min: 104.10 / Avg: 104.31 / Max: 104.56)
  c2-standard-8 CLX: 125.47 (SE +/- 0.40, N = 3; Min: 124.93 / Avg: 125.47 / Max: 126.25)

Model: CV Classification, ResNet-50 ImageNet (items/sec, more is better):
  c3-highcpu-8 SPR:  64.87 (SE +/- 0.04, N = 3; Min: 64.81 / Avg: 64.87 / Max: 64.96)
  c2-standard-8 CLX: 59.99 (SE +/- 0.02, N = 3; Min: 59.97 / Avg: 59.99 / Max: 60.04)

Model: CV Classification, ResNet-50 ImageNet (ms/batch, fewer is better):
  c3-highcpu-8 SPR:  30.80 (SE +/- 0.02, N = 3; Min: 30.75 / Avg: 30.80 / Max: 30.83)
  c2-standard-8 CLX: 33.29 (SE +/- 0.01, N = 3; Min: 33.27 / Avg: 33.29 / Max: 33.31)

Model: NLP Text Classification, DistilBERT mnli (items/sec, more is better):
  c3-highcpu-8 SPR:  33.11 (SE +/- 0.15, N = 3; Min: 32.81 / Avg: 33.11 / Max: 33.33)
  c2-standard-8 CLX: 29.18 (SE +/- 0.03, N = 3; Min: 29.12 / Avg: 29.18 / Max: 29.22)

Model: NLP Text Classification, DistilBERT mnli (ms/batch, fewer is better):
  c3-highcpu-8 SPR:  60.39 (SE +/- 0.28, N = 3; Min: 59.97 / Avg: 60.39 / Max: 60.93)
  c2-standard-8 CLX: 68.51 (SE +/- 0.08, N = 3; Min: 68.41 / Avg: 68.51 / Max: 68.66)

Model: CV Segmentation, 90% Pruned YOLACT Pruned (items/sec, more is better):
  c3-highcpu-8 SPR:  6.5372 (SE +/- 0.0078, N = 3; Min: 6.52 / Avg: 6.54 / Max: 6.55)
  c2-standard-8 CLX: 6.1893 (SE +/- 0.0034, N = 3; Min: 6.18 / Avg: 6.19 / Max: 6.20)

Model: CV Segmentation, 90% Pruned YOLACT Pruned (ms/batch, fewer is better):
  c3-highcpu-8 SPR:  305.90 (SE +/- 0.37, N = 3; Min: 305.43 / Avg: 305.90 / Max: 306.62)
  c2-standard-8 CLX: 323.08 (SE +/- 0.18, N = 3; Min: 322.78 / Avg: 323.08 / Max: 323.39)

Model: NLP Text Classification, BERT base uncased SST2 (items/sec, more is better):
  c3-highcpu-8 SPR:  16.22 (SE +/- 0.13, N = 3; Min: 16.06 / Avg: 16.22 / Max: 16.47)
  c2-standard-8 CLX: 13.99 (SE +/- 0.03, N = 3; Min: 13.94 / Avg: 13.99 / Max: 14.04)

Model: NLP Text Classification, BERT base uncased SST2 (ms/batch, fewer is better):
  c3-highcpu-8 SPR:  123.29 (SE +/- 0.95, N = 3; Min: 121.43 / Avg: 123.29 / Max: 124.53)
  c2-standard-8 CLX: 142.89 (SE +/- 0.29, N = 3; Min: 142.40 / Avg: 142.89 / Max: 143.41)

Model: NLP Token Classification, BERT base uncased conll2003 (items/sec, more is better):
  c3-highcpu-8 SPR:  3.7693 (SE +/- 0.0239, N = 3; Min: 3.74 / Avg: 3.77 / Max: 3.82)
  c2-standard-8 CLX: 2.9352 (SE +/- 0.0038, N = 3; Min: 2.93 / Avg: 2.94 / Max: 2.94)

Model: NLP Token Classification, BERT base uncased conll2003 (ms/batch, fewer is better):
  c3-highcpu-8 SPR:  530.61 (SE +/- 3.34, N = 3; Min: 524.16 / Avg: 530.61 / Max: 535.35)
  c2-standard-8 CLX: 681.34 (SE +/- 0.89, N = 3; Min: 680.19 / Avg: 681.34 / Max: 683.09)
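For throughput results like the DeepSparse numbers above, the relative standing of the two instance types reduces to a simple ratio; a quick sketch using the document-classification figures from this section:

```python
# Mean items/sec from the oBERT document-classification result above
spr = 3.7873   # c3-highcpu-8 (Sapphire Rapids)
clx = 2.9308   # c2-standard-8 (Cascade Lake)

# Higher-is-better metric: divide the candidate by the baseline
ratio = spr / clx
print(f"SPR delivers {ratio:.3f}x the CLX throughput")

# Fewer-is-better metric (ms/batch): invert the division
spr_ms, clx_ms = 528.02, 682.37
print(f"SPR latency advantage: {clx_ms / spr_ms:.3f}x")
```

Note the two ratios agree closely here, as expected when items/sec and ms/batch are measuring the same underlying workload.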

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.6 (ms, fewer is better)

Model: Lion:
  c3-highcpu-8 SPR:  6250 (SE +/- 80.23, N = 15; Min: 5923 / Avg: 6250.13 / Max: 7161)
  c2-standard-8 CLX: 7437 (SE +/- 18.52, N = 3; Min: 7417 / Avg: 7437 / Max: 7474)

Model: Church Facade:
  c3-highcpu-8 SPR:  7573 (SE +/- 10.48, N = 3; Min: 7552 / Avg: 7572.67 / Max: 7586)
  c2-standard-8 CLX: 11462 (SE +/- 16.76, N = 3; Min: 11437 / Avg: 11462.33 / Max: 11494)

1. (CXX) g++ options: -O3

Blender

Blender is an open-source 3D creation and modeling software project. This test measures Blender's Cycles rendering performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better)
  c3-highcpu-8 SPR: 315.24 (SE +/- 1.13, N = 3; Min: 314.02 / Avg: 315.24 / Max: 317.50)

Blend File: BMW27 - Compute: CPU-Only

c2-standard-8 CLX: The test quit with a non-zero exit status.

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This test profile uses the wrk program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
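As a rough sketch of the setup this profile automates (the file names, port, and run duration here are hypothetical; the test profile's own scripts may differ), a self-signed certificate can be generated with OpenSSL and wrk then pointed at the TLS endpoint:

```shell
# Generate a throwaway self-signed certificate/key pair for local HTTPS
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=localhost" \
    -keyout server.key -out server.crt

ls -l server.key server.crt

# With nginx configured to serve HTTPS from the files above, a run with
# 100 concurrent connections for 60 seconds would look like (not executed here):
#   wrk -t 8 -c 100 -d 60s https://localhost:8443/
```

wrk's -c flag corresponds to the "Connections" axis in the results below; -t sets worker threads and -d the test duration.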

nginx 1.23.2 (Requests Per Second, more is better)

Connections: 100:
  c3-highcpu-8 SPR:  36310.35 (SE +/- 23.95, N = 3; Min: 36279.99 / Avg: 36310.35 / Max: 36357.61)
  c2-standard-8 CLX: 25148.28 (SE +/- 33.32, N = 3; Min: 25082.19 / Avg: 25148.28 / Max: 25188.77)

Connections: 200:
  c3-highcpu-8 SPR:  35602.10 (SE +/- 83.52, N = 3; Min: 35456.96 / Avg: 35602.10 / Max: 35746.26)
  c2-standard-8 CLX: 24695.91 (SE +/- 37.62, N = 3; Min: 24621.95 / Avg: 24695.91 / Max: 24744.89)

Connections: 500:
  c3-highcpu-8 SPR:  34672.65 (SE +/- 321.85, N = 3; Min: 34037.25 / Avg: 34672.65 / Max: 35079.56)
  c2-standard-8 CLX: 21957.17 (SE +/- 17.44, N = 3; Min: 21923.10 / Avg: 21957.17 / Max: 21980.64)

Connections: 1000:
  c3-highcpu-8 SPR:  32118.58 (SE +/- 22.42, N = 3; Min: 32092.98 / Avg: 32118.58 / Max: 32163.26)
  c2-standard-8 CLX: 21446.27 (SE +/- 84.22, N = 3; Min: 21328.62 / Avg: 21446.27 / Max: 21609.49)

Connections: 4000:
  c3-highcpu-8 SPR:  32814.75 (SE +/- 27.93, N = 3; Min: 32760.05 / Avg: 32814.75 / Max: 32851.88)
  c2-standard-8 CLX: 21594.94 (SE +/- 11.52, N = 3; Min: 21578.93 / Avg: 21594.94 / Max: 21617.30)

1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.34 - VGR Performance Metric (more is better)
  c3-highcpu-8 SPR:  71072
  c2-standard-8 CLX: 50314

1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.7 (ms, fewer is better)

Test: Core:
  c3-highcpu-8 SPR:  87372 (SE +/- 280.31, N = 3; Min: 86863 / Avg: 87372 / Max: 87830)
  c2-standard-8 CLX: 142770 (SE +/- 2578.21, N = 12; Min: 133524 / Avg: 142770.17 / Max: 159101)

Test: Video:
  c3-highcpu-8 SPR:  31654 (SE +/- 198.80, N = 3; Min: 31332 / Avg: 31654 / Max: 32017)
  c2-standard-8 CLX: 11737 (SE +/- 50.28, N = 3; Min: 11648 / Avg: 11737.33 / Max: 11822)

Test: Graph API:
  c3-highcpu-8 SPR:  219931 (SE +/- 931.36, N = 3; Min: 218103 / Avg: 219931 / Max: 221155)
  c2-standard-8 CLX: 236186 (SE +/- 1570.24, N = 3; Min: 233048 / Avg: 236186 / Max: 237863)

Test: Stitching:
  c3-highcpu-8 SPR:  214760 (SE +/- 1973.06, N = 7; Min: 205423 / Avg: 214760 / Max: 219545)
  c2-standard-8 CLX: 250833 (SE +/- 1856.06, N = 3; Min: 247593 / Avg: 250832.67 / Max: 254022)

Test: Image Processing:
  c3-highcpu-8 SPR:  128163 (SE +/- 1624.35, N = 12; Min: 120763 / Avg: 128163.42 / Max: 137305)
  c2-standard-8 CLX: 147234 (SE +/- 1527.37, N = 4; Min: 142821 / Avg: 147233.50 / Max: 149446)

Test: Object Detection:
  c3-highcpu-8 SPR:  38999 (SE +/- 384.74, N = 5; Min: 38190 / Avg: 38999.40 / Max: 39999)
  c2-standard-8 CLX: 58056 (SE +/- 751.87, N = 3; Min: 56579 / Avg: 58056 / Max: 59039)

1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

110 Results Shown

LeelaChessZero:
  BLAS
  Eigen
miniBUDE:
  OpenMP - BM1:
    GFInst/s
    Billion Interactions/s
NAMD
nekRS
Xcompact3d Incompact3d
OpenFOAM:
  drivaerFastback, Small Mesh Size - Mesh Time
  drivaerFastback, Small Mesh Size - Execution Time
OpenRadioss:
  Bumper Beam
  Cell Phone Drop Test
  Bird Strike on Windshield
  Rubber O-Ring Seal Installation
SPECFEM3D:
  Mount St. Helens
  Layered Halfspace
  Tomographic Model
  Homogeneous Halfspace
  Water-layered Halfspace
Zstd Compression:
  19 - Compression Speed
  19 - Decompression Speed
  19, Long Mode - Compression Speed
  19, Long Mode - Decompression Speed
John The Ripper:
  bcrypt
  WPA PSK
  Blowfish
  HMAC-SHA512
  MD5
Embree:
  Pathtracer ISPC - Crown
  Pathtracer ISPC - Asian Dragon
uvg266:
  Bosphorus 4K - Very Fast
  Bosphorus 4K - Super Fast
  Bosphorus 4K - Ultra Fast
  Bosphorus 1080p - Very Fast
  Bosphorus 1080p - Super Fast
  Bosphorus 1080p - Ultra Fast
VVenC:
  Bosphorus 4K - Fast
  Bosphorus 4K - Faster
  Bosphorus 1080p - Fast
  Bosphorus 1080p - Faster
Intel Open Image Denoise:
  RT.hdr_alb_nrm.3840x2160
  RTLightmap.hdr.4096x4096
OpenVKL
7-Zip Compression:
  Compression Rating
  Decompression Rating
Timed FFmpeg Compilation
Timed Linux Kernel Compilation
oneDNN:
  IP Shapes 1D - bf16bf16bf16 - CPU
  IP Shapes 3D - bf16bf16bf16 - CPU
  Convolution Batch Shapes Auto - bf16bf16bf16 - CPU
  Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU
  Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU
OSPRay Studio:
  1 - 4K - 1 - Path Tracer
  3 - 4K - 1 - Path Tracer
OpenSSL:
  SHA256
  SHA512
  RSA4096
  RSA4096
  ChaCha20
  AES-128-GCM
  AES-256-GCM
  ChaCha20-Poly1305
CockroachDB:
  MoVR - 128
  KV, 50% Reads - 128
  KV, 95% Reads - 128
Memcached:
  1:10
  1:100
GROMACS
MariaDB:
  2048
  4096
PostgreSQL:
  100 - 800 - Read Only
  100 - 800 - Read Only - Average Latency
  100 - 1000 - Read Only
  100 - 1000 - Read Only - Average Latency
TensorFlow:
  CPU - 16 - ResNet-50
  CPU - 32 - ResNet-50
  CPU - 64 - ResNet-50
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
Google Draco:
  Lion
  Church Facade
Blender
nginx:
  100
  200
  500
  1000
  4000
BRL-CAD
OpenCV:
  Core
  Video
  Graph API
  Stitching
  Image Processing
  Object Detection