New Tests

2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Clear Linux OS 36990 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209025-NE-2209017NE69

Test categories represented in this result file:

AV1 2 Tests
Timed Code Compilation 3 Tests
C/C++ Compiler Tests 15 Tests
Compression Tests 2 Tests
CPU Massive 23 Tests
Creator Workloads 16 Tests
Database Test Suite 8 Tests
Encoding 6 Tests
Fortran Tests 2 Tests
Game Development 2 Tests
Go Language Tests 3 Tests
HPC - High Performance Computing 11 Tests
Imaging 4 Tests
Java 2 Tests
Common Kernel Benchmarks 3 Tests
Machine Learning 6 Tests
Molecular Dynamics 3 Tests
MPI Benchmarks 3 Tests
Multi-Core 23 Tests
Node.js + NPM Tests 2 Tests
NVIDIA GPU Compute 2 Tests
Intel oneAPI 4 Tests
OpenMPI Tests 3 Tests
Programmer / Developer System Benchmarks 6 Tests
Python Tests 5 Tests
Raytracing 2 Tests
Renderers 4 Tests
Scientific Computing 3 Tests
Server 13 Tests
Server CPU Tests 15 Tests
Single-Threaded 3 Tests
Video Encoding 6 Tests


Run Management

Result Identifier    Date Run             Test Duration
CentOS Stream 9      August 31 2022       1 Day, 46 Minutes
Clear Linux 36990    September 01 2022    19 Hours, 22 Minutes


System Details

Shared hardware (both systems):
  Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
  Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
  Chipset: Intel Device 0998
  Memory: 512GB
  Disk: 7682GB INTEL SSDPF2KX076TZ
  Graphics: ASPEED
  Monitor: VE228
  Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
  Screen Resolution: 1920x1080

CentOS Stream 9:
  OS: CentOS Stream 9
  Kernel: 5.14.0-148.el9.x86_64 (x86_64)
  Desktop: GNOME Shell 40.10
  Display Server: X Server
  Compiler: GCC 11.3.1 20220421
  File-System: xfs

Clear Linux 36990:
  OS: Clear Linux OS 36990
  Kernel: 5.19.6-1185.native (x86_64)
  Desktop: GNOME Shell 42.4
  Display Server: X Server 1.21.1.3
  Compiler: GCC 12.2.1 20220831 releases/gcc-12.2.0-35-g63997f2223 + Clang 14.0.6 + LLVM 14.0.6
  File-System: ext4

Kernel Details
- Transparent Huge Pages: always

Compiler Details
- CentOS Stream 9: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
- Clear Linux 36990: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-gnu-indirect-function --enable-host-shared --enable-languages=c,c++,fortran,go,jit --enable-ld=default --enable-libstdcxx-pch --enable-linux-futex --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=x86-64-v3 --with-gcc-major-version-only --with-glibc-version=2.35 --with-gnu-ld --with-isl --with-pic --with-ppl=yes --with-tune=skylake-avx512 --with-zstd

Disk Details
- CentOS Stream 9: NONE / attr2,inode64,logbsize=32k,logbufs=8,noquota,relatime,rw,seclabel / Block Size: 4096
- Clear Linux 36990: MQ-DEADLINE / relatime,rw,stripe=256 / Block Size: 4096

Processor Details
- CentOS Stream 9: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xd000363
- Clear Linux 36990: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0xd000375

Java Details
- CentOS Stream 9: OpenJDK Runtime Environment (Red_Hat-11.0.16.0.8-2.el9) (build 11.0.16+8-LTS)
- Clear Linux 36990: OpenJDK Runtime Environment (build 18.0.1-internal+0-adhoc.mockbuild.corretto-18-18.0.1.10.1)

Python Details
- CentOS Stream 9: Python 3.9.13
- Clear Linux 36990: Python 3.10.6

Security Details
- CentOS Stream 9: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- Clear Linux 36990: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Environment Details
- Clear Linux 36990: FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags" CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -mrelax-cmpxchg-loop -fvisibility-inlines-hidden -Wl,--enable-new-dtags" FCFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags" CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -mrelax-cmpxchg-loop" THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""

[Comparison chart: CentOS Stream 9 vs. Clear Linux 36990, generated by the Phoronix Test Suite. Each test is plotted as a percentage delta relative to the baseline, on a scale up to +1036.2%; individual deltas range from about 2% to 358.5%, with one outlier at 1381.6%. The underlying per-test results are charted individually below.]

[Condensed result table: side-by-side raw values for every test in this comparison (PostgreSQL pgbench, Stockfish, ONNX Runtime, Apache Spark, Renaissance, oneDNN, OpenVINO, TensorFlow Lite, Mobile Neural Network, OSPRay, Stress-NG, compression, video encoding, and the other suites) for CentOS Stream 9 and Clear Linux 36990. The same results are presented as individual charts below.]

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to drive the database workloads. Learn more via the OpenBenchmarking.org test page.
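pgbench reports throughput (TPS) and average latency separately, but with a fixed client count the two are roughly tied together: average latency is approximately clients / TPS. A minimal sketch checking that relation against the figures in this result file (an illustrative helper, not part of pgbench or the Phoronix Test Suite):

```python
def avg_latency_ms(clients: int, tps: float) -> float:
    """Approximate pgbench average latency (ms) implied by a TPS figure.

    pgbench measures both numbers independently, but with a fixed number
    of clients they are roughly related by latency = clients / TPS.
    """
    return clients / tps * 1000.0

# CentOS Stream 9, 500 clients read-only: 1855656 TPS implies ~0.27 ms,
# matching the 0.270 ms average latency reported in this file.
print(round(avg_latency_ms(500, 1855656), 3))  # → 0.269
```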

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency (ms, fewer is better):
  CentOS Stream 9:   0.270  (SE +/- 0.005, N = 12; Min: 0.25 / Avg: 0.27 / Max: 0.29)
  Clear Linux 36990: 0.275  (SE +/- 0.007, N = 12; Min: 0.25 / Avg: 0.27 / Max: 0.3)
Compiler notes - CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only (TPS, more is better):
  CentOS Stream 9:   1855656  (SE +/- 30425.21, N = 12; Min: 1702189.9 / Avg: 1855656.11 / Max: 1970416.52)
  Clear Linux 36990: 1831665  (SE +/- 42947.24, N = 12; Min: 1643669.33 / Avg: 1831665.25 / Max: 2003918.94)
Compiler notes - CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 15 - Total Time (Nodes Per Second, more is better):
  CentOS Stream 9:   179473129  (SE +/- 2364357.21, N = 15; Min: 164540890 / Avg: 179473129.13 / Max: 195156974)
  Clear Linux 36990: 186079628  (SE +/- 3123886.35, N = 15; Min: 166800438 / Avg: 186079627.93 / Max: 209129501)
Compiler notes - Clear Linux 36990: -pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver
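The percentage deltas in the comparison chart are relative improvements over the baseline result. For the two Stockfish node rates reported here, the calculation works out as follows (a small illustrative helper, not part of the test suite):

```python
def percent_faster(baseline: float, contender: float) -> float:
    # Relative improvement of the contender over the baseline, in percent.
    return (contender - baseline) / baseline * 100.0

# Clear Linux 36990 vs. CentOS Stream 9, Stockfish nodes per second.
print(round(percent_faster(179473129, 186079628), 1))  # → 3.7
```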

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
  CentOS Stream 9:   1881  (SE +/- 16.82, N = 12; Min: 1791.5 / Avg: 1880.54 / Max: 1942)
  Clear Linux 36990: 2077  (SE +/- 24.49, N = 12; Min: 1978.5 / Avg: 2076.63 / Max: 2237.5)
Compiler notes - CentOS Stream 9: -flto; Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt
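ONNX Runtime results in this file are expressed as inferences per minute. Given a set of per-inference wall times, that figure is simply 60 divided by the mean latency in seconds. A minimal sketch with made-up latencies (not values measured in this run):

```python
def inferences_per_minute(latencies_s):
    """Convert per-inference wall times (seconds) into the
    inferences-per-minute figure used in these result tables."""
    mean = sum(latencies_s) / len(latencies_s)
    return 60.0 / mean

# A hypothetical steady 32 ms per inference works out to 1875/minute.
print(round(inferences_per_minute([0.032] * 10), 1))  # → 1875.0
```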

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) to generate test data and run various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
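The "Calculate Pi" tests in this profile estimate π across Spark partitions, in the style of Spark's classic Monte Carlo Pi example (the exact pyspark-benchmark implementation may differ). A self-contained, single-process sketch of the same idea in plain Python, with an arbitrary sample count:

```python
import random

def estimate_pi(samples: int, seed: int = 0) -> float:
    # Draw points uniformly in the unit square; the fraction landing
    # inside the quarter circle of radius 1 approaches pi/4.
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(estimate_pi(200_000))  # converges toward 3.14159...
```

In the Spark version the sampling loop is split across partitions and the per-partition counts are summed with a reduce, which is why the test scales with core count.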

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time (Seconds, fewer is better):
  Clear Linux 36990: 15.17  (SE +/- 0.83, N = 2)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Inner Join Test Time (Seconds, fewer is better):
  Clear Linux 36990: 15.59  (SE +/- 0.23, N = 2)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Repartition Test Time (Seconds, fewer is better):
  Clear Linux 36990: 13.07  (SE +/- 0.21, N = 2)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Group By Test Time (Seconds, fewer is better):
  Clear Linux 36990: 34.45  (SE +/- 5.10, N = 2)

Renaissance

Renaissance is a suite of benchmarks designed to exercise the JVM, from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, fewer is better):
  CentOS Stream 9:   21219.4  (SE +/- 296.93, N = 3; Min: 20627.9 / Avg: 21219.38 / Max: 21561.04; per-run MIN: 20627.9 / MAX: 32602.9)
  Clear Linux 36990: 8256.6   (SE +/- 56.26, N = 15; Min: 7799.33 / Avg: 8256.64 / Max: 8639; per-run MIN: 7799.33 / MAX: 12715.24)

ONNX Runtime


ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
  CentOS Stream 9:   11045  (SE +/- 388.59, N = 12; Min: 9069 / Avg: 11045.42 / Max: 12021.5)
  Clear Linux 36990: 10211  (SE +/- 346.22, N = 9; Min: 9378.5 / Avg: 10210.94 / Max: 12014.5)
Compiler notes - CentOS Stream 9: -flto; Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  CentOS Stream 9:   447.62  (SE +/- 7.22, N = 15; Min: 393.11 / Avg: 447.62 / Max: 486.98; MIN: 376.51)
  Clear Linux 36990: 487.12  (SE +/- 4.67, N = 15; Min: 454.04 / Avg: 487.12 / Max: 533.05; MIN: 431.07)
Compiler notes - Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread
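Every chart in this file carries an "SE +/-" annotation next to the averaged result: the standard error of the mean over N runs, i.e. the sample standard deviation divided by sqrt(N). A minimal sketch (the sample values are invented for illustration):

```python
import math

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    n = len(samples)
    mean = sum(samples) / n
    # Bessel-corrected sample variance (divide by n - 1).
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance) / math.sqrt(n)

# Three hypothetical oneDNN timings around 450 ms.
print(round(standard_error([440.0, 450.0, 460.0]), 2))  # → 5.77
```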

Apache Spark


Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better):
  CentOS Stream 9:   2.79  (SE +/- 0.09, N = 3; Min: 2.62 / Avg: 2.79 / Max: 2.92)
  Clear Linux 36990: 2.04  (SE +/- 0.04, N = 15; Min: 1.86 / Avg: 2.04 / Max: 2.3)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark (Seconds, fewer is better):
  CentOS Stream 9:   36.21  (SE +/- 0.12, N = 3; Min: 35.99 / Avg: 36.21 / Max: 36.41)
  Clear Linux 36990: 33.05  (SE +/- 0.06, N = 15; Min: 32.72 / Avg: 33.05 / Max: 33.71)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time (Seconds, fewer is better):
  CentOS Stream 9:   88.83  (SE +/- 0.53, N = 3; Min: 87.98 / Avg: 88.83 / Max: 89.81)
  Clear Linux 36990: 20.21  (SE +/- 0.19, N = 15; Min: 18.6 / Avg: 20.21 / Max: 21.72)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating database benchmarks. Learn more via the OpenBenchmarking.org test page.
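pgbench's two headline numbers are linked: with a fixed pool of concurrent clients, throughput is roughly the client count divided by the average latency (Little's law). A quick Python sanity check against the 250-client read-only figures in this section:

```python
def expected_tps(clients: int, avg_latency_ms: float) -> float:
    """Little's law: with `clients` concurrent connections each spending
    avg_latency_ms per transaction, steady-state throughput is
    clients / latency (latency converted to seconds)."""
    return clients / (avg_latency_ms / 1000.0)

# 250 clients at 0.150 ms average latency, as reported for CentOS Stream 9
print(round(expected_tps(250, 0.150)))  # ~1.67M, close to the measured ~1,669,388 TPS
```

The small gap between the estimate and the measured TPS is expected, since the reported latency is rounded to three decimal places.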

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, fewer is better)
  CentOS Stream 9: 0.150 (SE +/- 0.001, N = 3; Min: 0.15 / Max: 0.15) [-O2]
  Clear Linux 36990: 0.132 (SE +/- 0.004, N = 12; Min: 0.12 / Max: 0.15) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, more is better)
  CentOS Stream 9: 1669388 (SE +/- 9583.19, N = 3; Min: 1655270.14 / Max: 1687672.99) [-O2]
  Clear Linux 36990: 1913115 (SE +/- 57458.70, N = 12; Min: 1661379.78 / Max: 2124955.9) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better)
  CentOS Stream 9: 24.30 (SE +/- 0.29, N = 3; Min: 23.8 / Max: 24.8)
  Clear Linux 36990: 24.72 (SE +/- 0.05, N = 3; Min: 24.63 / Max: 24.77)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: inception-v3 (ms, fewer is better)
  CentOS Stream 9: 20.09 (SE +/- 0.19, N = 15; Min: 18.66 / Max: 21.34; MIN: 17.31 / MAX: 37.29)
  Clear Linux 36990: 20.45 (SE +/- 0.03, N = 3; Min: 20.38 / Max: 20.49; MIN: 19.73 / MAX: 33.83) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, fewer is better)
  CentOS Stream 9: 2.090 (SE +/- 0.047, N = 15; Min: 1.78 / Max: 2.24; MIN: 1.76 / MAX: 3.93)
  Clear Linux 36990: 2.179 (SE +/- 0.016, N = 3; Min: 2.15 / Max: 2.21; MIN: 2.08 / MAX: 2.41) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, fewer is better)
  CentOS Stream 9: 2.663 (SE +/- 0.014, N = 15; Min: 2.6 / Max: 2.75; MIN: 2.48 / MAX: 5.57)
  Clear Linux 36990: 2.912 (SE +/- 0.013, N = 3; Min: 2.89 / Max: 2.93; MIN: 2.72 / MAX: 5.76) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, fewer is better)
  CentOS Stream 9: 3.956 (SE +/- 0.075, N = 15; Min: 3.67 / Max: 4.46; MIN: 3.51 / MAX: 9.33)
  Clear Linux 36990: 4.244 (SE +/- 0.058, N = 3; Min: 4.17 / Max: 4.36; MIN: 3.95 / MAX: 8.29) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better)
  CentOS Stream 9: 8.663 (SE +/- 0.088, N = 15; Min: 7.94 / Max: 9.23; MIN: 7.71 / MAX: 20.48)
  Clear Linux 36990: 8.563 (SE +/- 0.071, N = 3; Min: 8.42 / Max: 8.64; MIN: 8.16 / MAX: 9.67) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better)
  CentOS Stream 9: 2.356 (SE +/- 0.050, N = 15; Min: 2.07 / Max: 2.67; MIN: 2.03 / MAX: 5.76)
  Clear Linux 36990: 2.529 (SE +/- 0.014, N = 3; Min: 2.51 / Max: 2.56; MIN: 2.48 / MAX: 2.83) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better)
  CentOS Stream 9: 1.753 (SE +/- 0.020, N = 15; Min: 1.63 / Max: 1.86; MIN: 1.61 / MAX: 4.19)
  Clear Linux 36990: 1.862 (SE +/- 0.017, N = 3; Min: 1.83 / Max: 1.89; MIN: 1.81 / MAX: 2.51) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better)
  CentOS Stream 9: 12.10 (SE +/- 0.23, N = 15; Min: 10.69 / Max: 13.11; MIN: 10.54 / MAX: 23.03)
  Clear Linux 36990: 12.81 (SE +/- 0.09, N = 3; Min: 12.65 / Max: 12.97; MIN: 12.47 / MAX: 16.42) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  CentOS Stream 9: 697.28 (SE +/- 6.94, N = 12; Min: 627.38 / Max: 719.91; MIN: 605.85)
  Clear Linux 36990: 728.04 (SE +/- 6.52, N = 15; Min: 682.09 / Max: 791.16; MIN: 651.84) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better)
  CentOS Stream 9: 13.60 (SE +/- 0.30, N = 15; MIN: 8.57 / MAX: 68.28)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better)
  CentOS Stream 9: 1478.64 (SE +/- 39.85, N = 15)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better)
  CentOS Stream 9: 1.50 (SE +/- 0.05, N = 15; MIN: 0.34 / MAX: 29.48)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better)
  CentOS Stream 9: 42731.93 (SE +/- 1567.95, N = 15)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
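Every result in this comparison pairs an average with a standard error ("SE +/- x, N = y"). As a sketch with hypothetical run times, the standard error of the mean is the sample standard deviation divided by the square root of the run count:

```python
import statistics

def mean_and_se(samples):
    """Return the average and the standard error of the mean:
    sample standard deviation / sqrt(N)."""
    n = len(samples)
    mean = statistics.fmean(samples)
    se = statistics.stdev(samples) / n ** 0.5
    return mean, se

# hypothetical inference times in microseconds across three runs
runs = [16200.0, 16800.0, 16500.0]
m, se = mean_and_se(runs)
print(f"{m:.1f} us, SE +/- {se:.1f}, N = {len(runs)}")
```

A small SE relative to the average (as in most of the Clear Linux runs here) means the reported mean is stable; a large SE (as in some of the CentOS TensorFlow Lite runs) flags noisy, possibly outlier-skewed results.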

TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds, fewer is better)
  CentOS Stream 9: 73896.5 (SE +/- 21727.22, N = 15; Min: 35453.9 / Max: 362681)
  Clear Linux 36990: 42370.3 (SE +/- 2536.95, N = 15; Min: 35507 / Max: 62938.2)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better)
  CentOS Stream 9: 443 (SE +/- 1.17, N = 3; Min: 441.5 / Max: 445.5) [-flto]
  Clear Linux 36990: 521 (SE +/- 17.65, N = 12; Min: 436.5 / Max: 571) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto]
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better)
  CentOS Stream 9: 694 (SE +/- 1.17, N = 3; Min: 692 / Max: 696) [-flto]
  Clear Linux 36990: 636 (SE +/- 12.00, N = 12; Min: 604.5 / Max: 699) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto]
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better)
  CentOS Stream 9: 12260 (SE +/- 43.63, N = 3; Min: 12190.5 / Max: 12340.5) [-flto]
  Clear Linux 36990: 10151 (SE +/- 690.46, N = 12; Min: 6882 / Max: 11950.5) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto]
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

OSPRay


OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better)
  CentOS Stream 9: 100.67 (SE +/- 0.72, N = 3; Min: 99.77 / Max: 102.09)
  Clear Linux 36990: 161.93 (SE +/- 0.68, N = 3; Min: 160.97 / Max: 163.24)

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, more is better)
  CentOS Stream 9: 1398073.70 (SE +/- 67672.39, N = 12; Min: 1126743.33 / Max: 1952770.48)
  Clear Linux 36990: 1994991.48 (SE +/- 83568.62, N = 13; Min: 1190376.76 / Max: 2288695.61)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48 - Concurrent Requests: 1000 (Requests Per Second, more is better)
  CentOS Stream 9: 131349.60 (SE +/- 1558.40, N = 15; Min: 113593.7 / Max: 135605.05) [-O2]
  Clear Linux 36990: 118161.83 (SE +/- 760.56, N = 3; Min: 116952.09 / Max: 119565.29) [-O3 -m64 -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CC) gcc options: -shared -fPIC

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, fewer is better)
  CentOS Stream 9: 8693.9 (SE +/- 154.17, N = 12; Min: 8107.68 / Max: 9648.75; MIN: 6648.05 / MAX: 15659.82)
  Clear Linux 36990: 5950.5 (SE +/- 56.01, N = 3; Min: 5840.72 / Max: 6024.72; MIN: 5353.91 / MAX: 6140.39)

Renaissance 0.14 - Test: ALS Movie Lens (ms, fewer is better)
  CentOS Stream 9: 17123.9 (SE +/- 73.46, N = 3; Min: 16999.98 / Max: 17254.2; MIN: 16240.16 / MAX: 19195.87)
  Clear Linux 36990: 8225.4 (SE +/- 28.18, N = 3; Min: 8194.03 / Max: 8281.65; MIN: 8194.03 / MAX: 9046.37)

memtier_benchmark


memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, more is better)
  CentOS Stream 9: 1339297.91 (SE +/- 63962.41, N = 12; Min: 966119.24 / Max: 1668140.37)
  Clear Linux 36990: 1760873.07 (SE +/- 66661.35, N = 12; Min: 1097948.3 / Max: 1949244.22)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

TensorFlow Lite


TensorFlow Lite 2022-05-18 - Model: NASNet Mobile (Microseconds, fewer is better)
  CentOS Stream 9: 68713.0 (SE +/- 3728.57, N = 12; Min: 62607.9 / Max: 109269)
  Clear Linux 36990: 68566.7 (SE +/- 935.16, N = 15; Min: 64020.3 / Max: 79083.9)

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, fewer is better)
  CentOS Stream 9: 4240.41 (SE +/- 499.70, N = 12; Min: 3338.31 / Max: 9146.21)
  Clear Linux 36990: 3701.27 (SE +/- 75.96, N = 15; Min: 3362.9 / Max: 4201.56)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better)
  CentOS Stream 9: 3955.05 (SE +/- 27.70, N = 3; Min: 3909.45 / Max: 4005.08; MIN: 3833.99 / MAX: 5510.15)
  Clear Linux 36990: 3620.86 (SE +/- 1.46, N = 3; Min: 3618.21 / Max: 3623.25; MIN: 3599.42 / MAX: 3730.48) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better)
  CentOS Stream 9: 257.15 (SE +/- 0.55, N = 3; Min: 256.22 / Max: 258.13)
  Clear Linux 36990: 253.27 (SE +/- 0.37, N = 3; Min: 252.82 / Max: 254.01)

OSPRay


OSPRay 2.10 - Benchmark: particle_volume/ao/real_time (Items Per Second, more is better)
  CentOS Stream 9: 24.35 (SE +/- 0.07, N = 3; Min: 24.26 / Max: 24.49)
  Clear Linux 36990: 24.59 (SE +/- 0.14, N = 3; Min: 24.32 / Max: 24.78)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, more is better)
  CentOS Stream 9: 35.12 (SE +/- 0.05, N = 3; Min: 35.04 / Max: 35.21)
  Clear Linux 36990: 35.05 (SE +/- 0.06, N = 3; Min: 34.94 / Max: 35.13) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -O3 -lm -ldl

OpenVINO


OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better)
  CentOS Stream 9: 18.67 (SE +/- 0.29, N = 12; MIN: 11.54 / MAX: 79.43)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better)
  CentOS Stream 9: 1071.70 (SE +/- 14.44, N = 12)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

TensorFlow Lite


TensorFlow Lite 2022-05-18 - Model: SqueezeNet (Microseconds, fewer is better)
  CentOS Stream 9: 16614.41 (SE +/- 5506.54, N = 12; Min: 5327.65 / Max: 68861)
  Clear Linux 36990: 6244.19 (SE +/- 370.01, N = 12; Min: 5465.21 / Max: 10090.6)

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3 - Test: blosclz shuffle (MB/s, more is better)
  CentOS Stream 9: 4916.7 (SE +/- 23.45, N = 3; Min: 4886.1 / Max: 4962.8)
  Clear Linux 36990: 17905.8 (SE +/- 153.72, N = 15; Min: 17321.5 / Max: 19245.5)
  1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million row web analytics dataset. The reported value is the query processing rate using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
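A geometric mean keeps one unusually fast or slow query from dominating the aggregate, which is why these ClickHouse results are reported that way rather than as a simple average. A minimal Python illustration with hypothetical per-query rates:

```python
import math

def geometric_mean(values):
    """Geometric mean: exp of the mean of logs. Unlike an arithmetic
    mean, a single 100x outlier shifts the result only modestly."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# hypothetical per-query rates in queries per minute
rates = [100.0, 400.0, 1600.0]
print(geometric_mean(rates))  # cube root of 100*400*1600, i.e. ~400.0
```

The arithmetic mean of the same three rates would be 700, pulled up by the fastest query; the geometric mean of 400 better reflects typical behavior across the whole query set.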

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, more is better)
  CentOS Stream 9: 243.95 (SE +/- 1.95, N = 15; Min: 233.04 / Max: 257.32; MIN: 42.11 / MAX: 6000)
  Clear Linux 36990: 400.32 (SE +/- 5.35, N = 12; Min: 374.42 / Max: 434.04; MIN: 54.25 / MAX: 20000)
  1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, more is better)
  CentOS Stream 9: 244.38 (SE +/- 1.48, N = 15; Min: 236.34 / Max: 255.26; MIN: 44.09 / MAX: 5454.55)
  Clear Linux 36990: 400.44 (SE +/- 5.40, N = 12; Min: 375.33 / Max: 433.41; MIN: 53.29 / MAX: 20000)
  1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better)
  CentOS Stream 9: 231.48 (SE +/- 2.21, N = 15; Min: 212.68 / Max: 244.77; MIN: 41.47 / MAX: 5454.55)
  Clear Linux 36990: 386.79 (SE +/- 5.14, N = 12; Min: 363.65 / Max: 416.95; MIN: 51.15 / MAX: 20000)
  1. ClickHouse server version 22.5.4.19 (official build).

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, fewer is better)
  CentOS Stream 9: 135.02 (SE +/- 0.23, N = 3; Min: 134.59 / Max: 135.39)
  Clear Linux 36990: 267.39 (SE +/- 0.26, N = 3; Min: 266.87 / Max: 267.76)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
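The conjugate gradient method at the heart of HPCG iteratively solves A x = b for a symmetric positive-definite matrix A. A minimal dense-matrix sketch in plain Python (HPCG itself runs a sparse, preconditioned, MPI-distributed variant, so this only illustrates the core iteration):

```python
def conjugate_gradient(A, b, iterations=50):
    """Solve A x = b for symmetric positive-definite A, starting from
    x = 0 and refining along A-conjugate search directions."""
    n = len(b)
    x = [0.0] * n
    r = b[:]            # residual b - A x (x starts at zero)
    p = r[:]            # first search direction is the residual
    rs_old = sum(ri * ri for ri in r)
    for _ in range(iterations):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < 1e-20:  # residual small enough: converged
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
print(conjugate_gradient(A, b))  # close to [1/11, 7/11]
```

The benchmark's GFLOP/s score reflects how quickly the memory-bandwidth-bound sparse matrix-vector products inside this loop execute, which is why HPCG scores track memory subsystems more than peak FLOPS.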

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better)
  CentOS Stream 9: 40.28 (SE +/- 0.08, N = 3; Min: 40.14 / Max: 40.39)
  Clear Linux 36990: 40.86 (SE +/- 0.05, N = 3; Min: 40.8 / Max: 40.96)
  Additional link flag on one configuration: -lmpi_cxx
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi

TensorFlow Lite


TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2 (Microseconds, fewer is better)
  CentOS Stream 9: 47297.8 (SE +/- 268.62, N = 3; Min: 46787.4 / Max: 47698.2)
  Clear Linux 36990: 49331.9 (SE +/- 583.51, N = 15; Min: 47180.5 / Max: 54041.2)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, more is better)
  CentOS Stream 9: 1030 (SE +/- 7.69, N = 15; Min: 948 / Max: 1062) [-O2]
  Clear Linux 36990: 1029 (SE +/- 10.17, N = 3; Min: 1009 / Max: 1043) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lxml2]
  1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lbz2 -lz -lm -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression, as supplied by the system or otherwise obtained externally to the test profile. Learn more via the OpenBenchmarking.org test page.
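The MB/s figures below are throughput: uncompressed bytes processed divided by wall-clock time. A stdlib-only sketch of that measurement, using zlib as a stand-in since zstd itself is not in the Python standard library:

```python
# Compression/decompression throughput in MB/s, measured against the
# uncompressed size. zlib stands in for zstd (zstd is not in the
# Python stdlib); the input is synthetic, highly compressible text.
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 50_000

def mb_per_s(nbytes, seconds):
    return nbytes / (1024 * 1024) / seconds

start = time.perf_counter()
compressed = zlib.compress(data, 6)
compress_speed = mb_per_s(len(data), time.perf_counter() - start)

start = time.perf_counter()
restored = zlib.decompress(compressed)
decompress_speed = mb_per_s(len(data), time.perf_counter() - start)

assert restored == data   # the round trip must be lossless
```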

Zstd Compression - Compression Level: 8 - Decompression Speed - MB/s, More Is Better
CentOS Stream 9: 3017.5 (SE +/- 6.19, N = 12; Min: 2973.2 / Avg: 3017.48 / Max: 3043.8)
Clear Linux 36990: 3007.5 (SE +/- 4.36, N = 15; Min: 2978 / Avg: 3007.53 / Max: 3033.9)
1. CentOS Stream 9: zstd command line interface 64-bits v1.5.1, by Yann Collet
2. Clear Linux 36990: zstd command line interface 64-bits v1.5.2, by Yann Collet

Zstd Compression - Compression Level: 8 - Compression Speed - MB/s, More Is Better
CentOS Stream 9: 1244.0 (SE +/- 18.11, N = 12; Min: 1141.3 / Avg: 1244.02 / Max: 1347.8)
Clear Linux 36990: 1621.3 (SE +/- 15.83, N = 15; Min: 1521.9 / Avg: 1621.27 / Max: 1702.3)
1. CentOS Stream 9: zstd command line interface 64-bits v1.5.1, by Yann Collet
2. Clear Linux 36990: zstd command line interface 64-bits v1.5.2, by Yann Collet

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - ms, Fewer Is Better
CentOS Stream 9: 48319 (SE +/- 81.93, N = 3; Min: 48156 / Avg: 48319 / Max: 48415)
Clear Linux 36990: 48506 (SE +/- 111.95, N = 3; Min: 48361 / Avg: 48505.67 / Max: 48726)
System-specific flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm
1. (CXX) g++ options: -O3 -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: In-Memory Database Shootout - ms, Fewer Is Better
CentOS Stream 9: 17787.2 (SE +/- 197.42, N = 3; Min: 17444.33 / Avg: 17787.24 / Max: 18128.2) MIN: 17444.33 / MAX: 21383.13

Test: In-Memory Database Shootout

Clear Linux 36990: The test run did not produce a result.

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.
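The "bitshuffle" in the test name refers to Blosc's shuffle filters, which reorder the bits or bytes of an array so that same-significance slices of neighboring elements sit together and compress better. A pure-Python sketch of the simpler byte-level shuffle (illustrative only; Blosc's real filters are vectorized C):

```python
# Byte-shuffle filter of the kind Blosc applies before compression:
# group byte 0 of every element, then byte 1, and so on, so that
# similar-magnitude bytes sit together and compress better.
import struct
import zlib

values = list(range(0, 4000, 4))              # smooth 32-bit data
raw = struct.pack(f"<{len(values)}I", *values)

def byte_shuffle(buf, itemsize):
    return bytes(buf[j] for i in range(itemsize)
                 for j in range(i, len(buf), itemsize))

def byte_unshuffle(buf, itemsize):
    n = len(buf) // itemsize
    out = bytearray(len(buf))
    k = 0
    for i in range(itemsize):
        for j in range(n):
            out[j * itemsize + i] = buf[k]
            k += 1
    return bytes(out)

shuffled = byte_shuffle(raw, 4)
plain_size = len(zlib.compress(raw, 6))       # compress as-is
shuffled_size = len(zlib.compress(shuffled, 6))  # compress shuffled
```

The filter must be exactly invertible, since decompression has to restore the original byte order.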

C-Blosc 2.3 - Test: blosclz bitshuffle - MB/s, More Is Better
CentOS Stream 9: 3704.1 (SE +/- 6.11, N = 3; Min: 3692.6 / Avg: 3704.13 / Max: 3713.4)
Clear Linux 36990: 12603.1 (SE +/- 20.65, N = 3; Min: 12564.5 / Avg: 12603.1 / Max: 12635.1)
1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Atomic - Bogo Ops/s, More Is Better
CentOS Stream 9: 187775.77 (SE +/- 3961.98, N = 15; Min: 164376.47 / Avg: 187775.77 / Max: 204483.19)
Clear Linux 36990: 145035.86 (SE +/- 3304.16, N = 15; Min: 127421.65 / Avg: 145035.86 / Max: 159671.65)
System-specific flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 - Test: Futex - Bogo Ops/s, More Is Better
CentOS Stream 9: 1088788.92 (SE +/- 73263.26, N = 15; Min: 696500.15 / Avg: 1088788.92 / Max: 1490893.83)
Clear Linux 36990: 1140633.72 (SE +/- 60890.67, N = 15; Min: 846766.74 / Avg: 1140633.72 / Max: 1364896.25)
System-specific flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: HWB Color Space - Iterations Per Minute, More Is Better
CentOS Stream 9: 1138 (SE +/- 28.49, N = 12; Min: 938 / Avg: 1138.17 / Max: 1302)
Clear Linux 36990: 1737 (SE +/- 4.16, N = 3; Min: 1729 / Avg: 1737 / Max: 1743)
System-specific flags: -O2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lxml2
1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lbz2 -lz -lm -lpthread

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 4K - Frames Per Second, More Is Better
CentOS Stream 9: 3.04 (SE +/- 0.02, N = 3; Min: 3.02 / Avg: 3.04 / Max: 3.07)
Clear Linux 36990: 6.15 (SE +/- 0.02, N = 3; Min: 6.13 / Avg: 6.15 / Max: 6.2)
System-specific flags: -U_FORTIFY_SOURCE; -pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -std=gnu++11

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship - FPS, More Is Better
CentOS Stream 9: 1.9 (SE +/- 0.01, N = 15; Min: 1.8 / Avg: 1.92 / Max: 2)
Clear Linux 36990: 5.0 (SE +/- 0.03, N = 3; Min: 5 / Avg: 5.03 / Max: 5.1)

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: SET - Parallel Connections: 500 - Requests Per Second, More Is Better
CentOS Stream 9: 1931278.62 (SE +/- 47157.16, N = 12; Min: 1439413.25 / Avg: 1931278.62 / Max: 2022608.12)
Clear Linux 36990: 2083152.81 (SE +/- 26407.67, N = 15; Min: 1853257.88 / Avg: 2083152.81 / Max: 2209886.75)
System-specific flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time - Items Per Second, More Is Better
CentOS Stream 9: 22.02 (SE +/- 0.15, N = 3; Min: 21.75 / Avg: 22.02 / Max: 22.26)
Clear Linux 36990: 22.22 (SE +/- 0.07, N = 3; Min: 22.1 / Avg: 22.22 / Max: 22.34)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.
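A useful sanity check on the pgbench numbers: with a fixed number of clients each issuing transactions back-to-back, throughput and average latency are two views of the same measurement, TPS ≈ clients / average latency. A sketch using the 500-client read-write figures from this comparison:

```python
# TPS ~= clients / average latency: with C clients each waiting for a
# transaction to finish before starting the next, total throughput is
# bounded by how fast each client can cycle.

def tps_from_latency(clients, avg_latency_ms):
    return clients / (avg_latency_ms / 1000.0)

# 500-client read-write numbers reported in this result file:
centos_tps = tps_from_latency(500, 26.724)  # close to the reported 18710 TPS
clear_tps = tps_from_latency(500, 5.828)    # close to the reported 85792 TPS
```

The same identity holds for the 250-client run below, which is a good sign that the latency and TPS graphs describe the same underlying measurement.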

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency - ms, Fewer Is Better
CentOS Stream 9: 26.724 (SE +/- 0.047, N = 3; Min: 26.66 / Avg: 26.72 / Max: 26.82)
Clear Linux 36990: 5.828 (SE +/- 0.008, N = 3; Min: 5.81 / Avg: 5.83 / Max: 5.84)
System-specific flags: -O2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - TPS, More Is Better
CentOS Stream 9: 18710 (SE +/- 32.56, N = 3; Min: 18646.54 / Avg: 18709.95 / Max: 18754.52)
Clear Linux 36990: 85792 (SE +/- 119.89, N = 3; Min: 85631.08 / Avg: 85791.56 / Max: 86026.08)
System-specific flags: -O2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time - Items Per Second, More Is Better
CentOS Stream 9: 22.42 (SE +/- 0.06, N = 3; Min: 22.3 / Avg: 22.42 / Max: 22.5)
Clear Linux 36990: 22.52 (SE +/- 0.05, N = 3; Min: 22.47 / Avg: 22.52 / Max: 22.62)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency - ms, Fewer Is Better
CentOS Stream 9: 12.051 (SE +/- 0.012, N = 3; Min: 12.03 / Avg: 12.05 / Max: 12.07)
Clear Linux 36990: 2.862 (SE +/- 0.005, N = 3; Min: 2.85 / Avg: 2.86 / Max: 2.87)
System-specific flags: -O2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - TPS, More Is Better
CentOS Stream 9: 20745 (SE +/- 21.44, N = 3; Min: 20713.98 / Avg: 20744.88 / Max: 20786.09)
Clear Linux 36990: 87357 (SE +/- 149.93, N = 3; Min: 87081.55 / Avg: 87357.07 / Max: 87597.32)
System-specific flags: -O2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - ms, Fewer Is Better
CentOS Stream 9: 40852 (SE +/- 38.89, N = 3; Min: 40775 / Avg: 40852 / Max: 40900)
Clear Linux 36990: 40932 (SE +/- 84.10, N = 3; Min: 40775 / Avg: 40931.67 / Max: 41063)
System-specific flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - ms, Fewer Is Better
CentOS Stream 9: 40580 (SE +/- 74.23, N = 3; Min: 40445 / Avg: 40580 / Max: 40701)
Clear Linux 36990: 40503 (SE +/- 180.17, N = 3; Min: 40214 / Avg: 40503.33 / Max: 40834)
System-specific flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm
1. (CXX) g++ options: -O3 -ldl

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time - Items Per Second, More Is Better
CentOS Stream 9: 25.59 (SE +/- 0.04, N = 3; Min: 25.52 / Avg: 25.59 / Max: 25.63)
Clear Linux 36990: 25.65 (SE +/- 0.03, N = 3; Min: 25.58 / Avg: 25.65 / Max: 25.69)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - ms, Fewer Is Better
CentOS Stream 9: 20152 (SE +/- 58.89, N = 3; Min: 20050 / Avg: 20152.33 / Max: 20254)
Clear Linux 36990: 20180 (SE +/- 13.09, N = 3; Min: 20163 / Avg: 20180.33 / Max: 20206)
System-specific flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm
1. (CXX) g++ options: -O3 -ldl

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel - Inferences Per Minute, More Is Better
CentOS Stream 9: 236 (SE +/- 0.17, N = 3; Min: 235.5 / Avg: 235.67 / Max: 236)
Clear Linux 36990: 241 (SE +/- 0.17, N = 3; Min: 240.5 / Avg: 240.67 / Max: 241)
System-specific flags: -flto; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel - Inferences Per Minute, More Is Better
CentOS Stream 9: 1693 (SE +/- 3.09, N = 3; Min: 1687 / Avg: 1693.17 / Max: 1696.5)
Clear Linux 36990: 1978 (SE +/- 3.21, N = 3; Min: 1972 / Avg: 1978 / Max: 1983)
System-specific flags: -flto; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel - Inferences Per Minute, More Is Better
CentOS Stream 9: 5269 (SE +/- 32.87, N = 3; Min: 5205.5 / Avg: 5268.67 / Max: 5316)
Clear Linux 36990: 5259 (SE +/- 26.07, N = 3; Min: 5222 / Avg: 5258.5 / Max: 5309)
System-specific flags: -flto; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel - Inferences Per Minute, More Is Better
CentOS Stream 9: 799 (SE +/- 2.02, N = 3; Min: 796 / Avg: 798.5 / Max: 802.5)
Clear Linux 36990: 828 (SE +/- 0.29, N = 3; Min: 827 / Avg: 827.5 / Max: 828)
System-specific flags: -flto; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel - Inferences Per Minute, More Is Better
CentOS Stream 9: 630 (SE +/- 1.04, N = 3; Min: 628 / Avg: 630 / Max: 631.5)
Clear Linux 36990: 640 (SE +/- 0.29, N = 3; Min: 639.5 / Avg: 640 / Max: 640.5)
System-specific flags: -flto; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard - Inferences Per Minute, More Is Better
CentOS Stream 9: 1093 (SE +/- 0.50, N = 3; Min: 1091.5 / Avg: 1092.5 / Max: 1093)
Clear Linux 36990: 992 (SE +/- 5.77, N = 3; Min: 983.5 / Avg: 992 / Max: 1003)
System-specific flags: -flto; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel - Inferences Per Minute, More Is Better
CentOS Stream 9: 3259 (SE +/- 4.91, N = 3; Min: 3250.5 / Avg: 3259 / Max: 3267.5)
Clear Linux 36990: 8024 (SE +/- 8.23, N = 3; Min: 8009.5 / Avg: 8023.83 / Max: 8038)
System-specific flags: -flto; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing - Seconds, Fewer Is Better
CentOS Stream 9: 2.878 (SE +/- 0.033, N = 3; Min: 2.84 / Avg: 2.88 / Max: 2.95)
Clear Linux 36990: 2.243 (SE +/- 0.001, N = 3; Min: 2.24 / Avg: 2.24 / Max: 2.24)

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 95, Compression Effort 7 - Seconds, Fewer Is Better
CentOS Stream 9: 233.75 (SE +/- 0.09, N = 3; Min: 233.63 / Avg: 233.75 / Max: 233.92)
1. (CXX) g++ options: -fno-rtti -O3

Encode Settings: Quality 95, Compression Effort 7

Clear Linux 36990: The test quit with a non-zero exit status. E: webp2: line 2: ./libwebp2-master/build/cwp2: No such file or directory

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - ms, Fewer Is Better
CentOS Stream 9: 23967 (SE +/- 79.25, N = 3; Min: 23809 / Avg: 23967 / Max: 24057)
Clear Linux 36990: 24102 (SE +/- 16.17, N = 3; Min: 24070 / Avg: 24102.33 / Max: 24119)
System-specific flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - ms, Fewer Is Better
CentOS Stream 9: 20261 (SE +/- 49.21, N = 3; Min: 20164 / Avg: 20260.67 / Max: 20325)
Clear Linux 36990: 20404 (SE +/- 50.10, N = 3; Min: 20325 / Avg: 20404.33 / Max: 20497)
System-specific flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm
1. (CXX) g++ options: -O3 -ldl

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K - Frames Per Second, More Is Better
CentOS Stream 9: 1.327 (SE +/- 0.001, N = 3; Min: 1.33 / Avg: 1.33 / Max: 1.33)
Clear Linux 36990: 2.351 (SE +/- 0.003, N = 3; Min: 2.35 / Avg: 2.35 / Max: 2.36)
System-specific flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: SET - Parallel Connections: 1000 - Requests Per Second, More Is Better
CentOS Stream 9: 1847194.12 (SE +/- 55692.46, N = 12; Min: 1444277.75 / Avg: 1847194.12 / Max: 2020238.25)
Clear Linux 36990: 2078925.13 (SE +/- 21642.05, N = 5; Min: 1993126.88 / Avg: 2078925.13 / Max: 2110900.25)
System-specific flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 7.0.4 - Test: GET - Parallel Connections: 500 - Requests Per Second, More Is Better
CentOS Stream 9: 2018201.09 (SE +/- 89203.76, N = 15; Min: 1472476.75 / Avg: 2018201.09 / Max: 2357865.5)
Clear Linux 36990: 2765192.67 (SE +/- 22655.99, N = 3; Min: 2724096.25 / Avg: 2765192.67 / Max: 2802269.5)
System-specific flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant - Microseconds, Fewer Is Better
CentOS Stream 9: 9540.86 (SE +/- 100.45, N = 3; Min: 9372.16 / Avg: 9540.86 / Max: 9719.7)
Clear Linux 36990: 8576.95 (SE +/- 81.76, N = 6; Min: 8277.81 / Avg: 8576.95 / Max: 8873.84)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Socket Activity - Bogo Ops/s, More Is Better
CentOS Stream 9: 2460.37 (SE +/- 900.65, N = 15; Min: 6 / Avg: 2460.37 / Max: 8954.24)
Clear Linux 36990: 36452.69 (SE +/- 324.72, N = 3; Min: 35834.48 / Avg: 36452.69 / Max: 36934.11)
System-specific flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
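In the same spirit as the Bombardier load generator described above, a minimal fixed-duration HTTP benchmark can be sketched with the Python standard library. Everything here is illustrative, not what the test profile actually runs: the throwaway http.server backend stands in for Nginx, and the tiny client count and duration stand in for the 1000-connection run below.

```python
# Tiny fixed-duration HTTP load generator: N worker threads hammer a
# local server, and completed requests are counted over a fixed window.
# The server is Python's own http.server so the example is self-contained.
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):     # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def worker(deadline, counts, i):
    n = 0
    while time.perf_counter() < deadline:
        with urllib.request.urlopen(url) as resp:
            resp.read()
        n += 1
    counts[i] = n

clients, duration = 4, 0.5            # far smaller than the real test
counts = [0] * clients
deadline = time.perf_counter() + duration
threads = [threading.Thread(target=worker, args=(deadline, counts, i))
           for i in range(clients)]
for t in threads:
    t.start()
for t in threads:
    t.join()
requests_per_second = sum(counts) / duration
server.shutdown()
```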

nginx 1.21.1 - Concurrent Requests: 1000 - Requests Per Second, More Is Better
CentOS Stream 9: 200945.49 (SE +/- 1519.57, N = 3; Min: 197912.72 / Avg: 200945.49 / Max: 202632.18)
Clear Linux 36990: 215488.10 (SE +/- 255.31, N = 3; Min: 215069.04 / Avg: 215488.1 / Max: 215950.31)
System-specific flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CC) gcc options: -lcrypt -lz -O3 -march=native

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Context Switching - Bogo Ops/s, More Is Better
CentOS Stream 9: 6233126.45 (SE +/- 78706.86, N = 3; Min: 6075749.87 / Avg: 6233126.45 / Max: 6314776.19)
Clear Linux 36990: 14924848.44 (SE +/- 96811.45, N = 15; Min: 14406871.62 / Avg: 14924848.44 / Max: 15897148.7)
System-specific flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build of all possible kernel modules. Learn more via the OpenBenchmarking.org test page.
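The profile is essentially a stopwatch around the kernel build commands (roughly `make defconfig` followed by `make -j$(nproc)`). A generic sketch of timing an external build step, with a harmless stand-in command so it runs anywhere:

```python
# Generic "timed compilation" harness: run a command, report wall time.
# The real test profile wraps the kernel's make invocation; here a trivial
# Python no-op stands in so the sketch is self-contained.
import subprocess
import sys
import time

def time_command(cmd: list[str]) -> float:
    """Run cmd to completion and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

# Stand-in for e.g. ["make", "-j160", "vmlinux"] on this 2P Xeon system:
seconds = time_command([sys.executable, "-c", "pass"])
print(f"build took {seconds:.2f} s")
```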

Build: allmodconfig

CentOS Stream 9: The test quit with a non-zero exit status.

Clear Linux 36990: The test quit with a non-zero exit status. E: /usr/lib64/gcc/x86_64-generic-linux/12/plugin/include/builtins.h:23:10: fatal error: mpc.h: No such file or directory

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise sourced externally to the test profile. Learn more via the OpenBenchmarking.org test page.
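The MB/s figures below are derived as uncompressed bytes processed divided by elapsed seconds. Zstd itself is not in the Python standard library, so this sketch times stdlib zlib as a stand-in purely to show how the throughput number is computed; absolute speeds will differ greatly from zstd's.

```python
# Compression-speed arithmetic as used by the Zstd test: MB/s =
# (uncompressed bytes) / (elapsed seconds) / 1e6. zlib stands in for zstd
# here so the example is self-contained.
import time
import zlib

payload = b"the quick brown fox jumps over the lazy dog " * 50_000  # ~2.2 MB

start = time.perf_counter()
compressed = zlib.compress(payload, level=3)
elapsed = time.perf_counter() - start

mb_per_s = len(payload) / elapsed / 1e6
ratio = len(payload) / len(compressed)
print(f"level 3: {mb_per_s:.0f} MB/s, ratio {ratio:.1f}:1")
```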

Zstd Compression - Compression Level: 3, Long Mode - Compression Speed - MB/s (More Is Better)
CentOS Stream 9: 281.0 (SE +/- 4.03, N = 3; Min: 273 / Max: 285.7)
Clear Linux 36990: 969.1 (SE +/- 7.62, N = 15; Min: 919.4 / Max: 1029.1)
1. CentOS Stream 9: zstd command line interface 64-bits v1.5.1, by Yann Collet
2. Clear Linux 36990: zstd command line interface 64-bits v1.5.2, by Yann Collet

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Pabellon Barcelona - Compute: CPU-Only - Seconds (Fewer Is Better)
CentOS Stream 9: 82.89 (SE +/- 0.02, N = 3; Min: 82.85 / Max: 82.91)
Clear Linux 36990: 81.97 (SE +/- 0.10, N = 3; Min: 81.79 / Max: 82.13)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 0 - Seconds (Fewer Is Better)
CentOS Stream 9: 84.32 (SE +/- 0.66, N = 3; Min: 83 / Max: 85.05)
Clear Linux 36990: 78.59 (SE +/- 0.45, N = 3; Min: 77.94 / Max: 79.45)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -fPIC -lm

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - val/sec (More Is Better)
CentOS Stream 9: 666008.7 (SE +/- 2481.53, N = 3; Min: 661392.3 / Max: 669895.1)

Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000

Clear Linux 36990: The test quit with a non-zero exit status.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: CPU Cache - Bogo Ops/s (More Is Better)
CentOS Stream 9: 16.26 (SE +/- 0.13, N = 10; Min: 15.95 / Max: 16.81)
Clear Linux 36990: 16.40 (SE +/- 0.18, N = 5; Min: 15.96 / Max: 16.76)
Clear Linux 36990 flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
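The oneDNN results below report both an average time and a MIN per harness. As a generic illustration (not benchdnn itself), a min/avg timing harness around an arbitrary kernel looks like this; the tiny pure-Python "matmul" is only a stand-in workload:

```python
# Generic min/avg/max timing harness of the kind benchdnn-style tools use:
# run the kernel several times, record per-run milliseconds, report stats.
import time

def bench(kernel, runs: int = 10) -> dict[str, float]:
    times_ms = []
    for _ in range(runs):
        start = time.perf_counter()
        kernel()
        times_ms.append((time.perf_counter() - start) * 1000.0)
    return {"min": min(times_ms),
            "avg": sum(times_ms) / len(times_ms),
            "max": max(times_ms)}

# Stand-in kernel: a small dense matrix multiply over Python lists.
a = [[float(i + j) for j in range(64)] for i in range(64)]
b = [[float(i * j % 7) for j in range(64)] for i in range(64)]

def matmul():
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

stats = bench(matmul, runs=5)
print(f"min {stats['min']:.3f} ms, avg {stats['avg']:.3f} ms")
```

Reporting the minimum alongside the average is what makes runs with heavy warm-up or frequency-scaling noise (visible in some results below) easy to spot.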

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU - ms (Fewer Is Better)
CentOS Stream 9: 5.40603 (SE +/- 0.32475, N = 15; Min: 3.69 / Max: 8.91; MIN: 3.28)
Clear Linux 36990: 4.64086 (SE +/- 0.07618, N = 15; Min: 4.23 / Max: 5.24; MIN: 3.55)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark - runs/s (More Is Better)
CentOS Stream 9: 10.55 (SE +/- 0.06, N = 3; Min: 10.44 / Max: 10.66)
Clear Linux 36990: 13.56 (SE +/- 0.03, N = 3; Min: 13.51 / Max: 13.59)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
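simdjson's GB/s figure is bytes of JSON parsed per second of wall time. simdjson itself is a C++ library; purely to illustrate the metric, this sketch times Python's stdlib json parser over a synthetic tweet-like document (expect numbers orders of magnitude below the SIMD-accelerated results here).

```python
# Illustrating the GB/s parse-throughput metric: bytes of input JSON
# divided by parse time. Stdlib json stands in for simdjson.
import json
import time

doc = json.dumps({"statuses": [
    {"id": i, "user": {"id": i * 7, "name": f"user{i}"}, "text": "x" * 80}
    for i in range(2_000)
]}).encode()

start = time.perf_counter()
parsed = json.loads(doc)
elapsed = time.perf_counter() - start

gb_per_s = len(doc) / elapsed / 1e9
print(f"parsed {len(doc)} bytes -> {gb_per_s:.3f} GB/s")
```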

simdjson 2.0 - Throughput Test: PartialTweets - GB/s (More Is Better)
CentOS Stream 9: 4.85 (SE +/- 0.01, N = 3; Min: 4.84 / Max: 4.86)
Clear Linux 36990: 5.02 (SE +/- 0.01, N = 3; Min: 5.01 / Max: 5.03)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3

simdjson 2.0 - Throughput Test: DistinctUserID - GB/s (More Is Better)
CentOS Stream 9: 5.77 (SE +/- 0.01, N = 3; Min: 5.76 / Max: 5.79)
Clear Linux 36990: 5.71 (SE +/- 0.00, N = 3; Min: 5.71 / Max: 5.72)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Isoneutral Mixing - Seconds (Fewer Is Better)
CentOS Stream 9: 2.078 (SE +/- 0.024, N = 3; Min: 2.05 / Max: 2.13)

Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Isoneutral Mixing

Clear Linux 36990: The test run did not produce a result.

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: TopTweet - GB/s (More Is Better)
CentOS Stream 9: 5.62 (SE +/- 0.01, N = 3; Min: 5.61 / Max: 5.64)
Clear Linux 36990: 5.74 (SE +/- 0.00, N = 3; Min: 5.73 / Max: 5.74)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1

CentOS Stream 9: The test run did not produce a result.

Clear Linux 36990: The test run did not produce a result.

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10

CentOS Stream 9: The test run did not produce a result.

Clear Linux 36990: The test run did not produce a result.

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Resizing - Iterations Per Minute (More Is Better)
CentOS Stream 9: 2748 (SE +/- 27.10, N = 3; Min: 2717 / Max: 2802)
Clear Linux 36990: 2851 (SE +/- 35.23, N = 4; Min: 2753 / Max: 2913)
CentOS Stream 9 flags: -O2
Clear Linux 36990 flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lxml2
1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lbz2 -lz -lm -lpthread

Dragonflydb

Dragonfly is an open-source in-memory database server billed as a "modern Redis replacement," aiming to be the fastest memory store while remaining compatible with the Redis and Memcached protocols. For benchmarking Dragonfly, Memtier_benchmark, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 5:1 - Ops/sec (More Is Better)
Clear Linux 36990: 3826220.78 (SE +/- 16274.57, N = 3)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:5 - Ops/sec (More Is Better)
Clear Linux 36990: 4236085.83 (SE +/- 12346.26, N = 3)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:1 - Ops/sec (More Is Better)
Clear Linux 36990: 4003419.31 (SE +/- 10042.07, N = 3)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
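The latency (ms) and throughput (FPS) pairs reported below are linked by Little's law: concurrency in flight ≈ throughput × latency. Whether the benchmark's averaging matches this exactly is an assumption, but as a sanity check, the Person Detection FP16 result below (1424.57 ms at 13.92 FPS) implies roughly 20 inference requests in flight at once on this 2 x 8380 system:

```python
# Little's law applied to latency/throughput pairs from an inference
# benchmark: in-flight requests = FPS * (latency in seconds).
def implied_inflight(fps: float, latency_ms: float) -> float:
    return fps * latency_ms / 1000.0

# Person Detection FP16 on CentOS Stream 9 (values from the results below):
print(round(implied_inflight(13.92, 1424.57), 1))  # -> 19.8
```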

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU - ms (Fewer Is Better)
CentOS Stream 9: 1424.57 (SE +/- 0.41, N = 3; MIN: 1046.08 / MAX: 1657.29)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU - FPS (More Is Better)
CentOS Stream 9: 13.92 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU - ms (Fewer Is Better)
CentOS Stream 9: 819.63 (SE +/- 0.67, N = 3; MIN: 519.3 / MAX: 967.18)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU - FPS (More Is Better)
CentOS Stream 9: 24.29 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU - ms (Fewer Is Better)
CentOS Stream 9: 1451.62 (SE +/- 1.03, N = 3; MIN: 1039.96 / MAX: 1708.95)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU - FPS (More Is Better)
CentOS Stream 9: 13.67 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU - ms (Fewer Is Better)
CentOS Stream 9: 239.86 (SE +/- 0.22, N = 3; MIN: 178.86 / MAX: 348.97)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU - FPS (More Is Better)
CentOS Stream 9: 83.26 (SE +/- 0.07, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build of all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Build: defconfig - Seconds (Fewer Is Better)
CentOS Stream 9: 29.67 (SE +/- 0.39, N = 13; Min: 28.98 / Max: 34.27)

Build: defconfig

Clear Linux 36990: The test quit with a non-zero exit status. E: linux-5.18/tools/objtool/include/objtool/elf.h:10:10: fatal error: gelf.h: No such file or directory

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Classroom - Compute: CPU-Only - Seconds (Fewer Is Better)
CentOS Stream 9: 64.82 (SE +/- 0.04, N = 3; Min: 64.78 / Max: 64.91)
Clear Linux 36990: 64.23 (SE +/- 0.20, N = 3; Min: 63.9 / Max: 64.58)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State - Seconds (Fewer Is Better)
CentOS Stream 9: 1.936 (SE +/- 0.001, N = 3; Min: 1.93 / Max: 1.94)
Clear Linux 36990: 1.854 (SE +/- 0.001, N = 3; Min: 1.85 / Max: 1.86)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU - ms (Fewer Is Better)
CentOS Stream 9: 85.47 (SE +/- 0.18, N = 3; MIN: 76.11 / MAX: 195.12)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU - FPS (More Is Better)
CentOS Stream 9: 233.33 (SE +/- 0.51, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU - ms (Fewer Is Better)
CentOS Stream 9: 32.00 (SE +/- 0.01, N = 3; MIN: 21.78 / MAX: 67.35)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU - FPS (More Is Better)
CentOS Stream 9: 2478.96 (SE +/- 1.20, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU - ms (Fewer Is Better)
CentOS Stream 9: 1.36 (SE +/- 0.00, N = 3; MIN: 0.99 / MAX: 13.44)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU - FPS (More Is Better)
CentOS Stream 9: 47224.77 (SE +/- 99.47, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU - ms (Fewer Is Better)
CentOS Stream 9: 8.27 (SE +/- 0.01, N = 3; MIN: 7.23 / MAX: 27.1)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU - FPS (More Is Better)
CentOS Stream 9: 9657.99 (SE +/- 8.29, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU - ms (Fewer Is Better)
CentOS Stream 9: 4.52 (SE +/- 0.00, N = 3; MIN: 4.11 / MAX: 44.74)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU - FPS (More Is Better)
CentOS Stream 9: 4414.94 (SE +/- 1.33, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Glibc C String Functions - Bogo Ops/s (More Is Better)
CentOS Stream 9: 9473078.17 (SE +/- 103735.47, N = 4; Min: 9162429.01 / Max: 9591018.69)
Clear Linux 36990: 9658490.04 (SE +/- 83362.05, N = 8; Min: 9392447.16 / Max: 10016352.52)
Clear Linux 36990 flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Enhanced - Iterations Per Minute (More Is Better)
CentOS Stream 9: 1153 (SE +/- 1.76, N = 3; Min: 1150 / Max: 1156)
Clear Linux 36990: 1192 (SE +/- 2.08, N = 3; Min: 1189 / Max: 1196)
CentOS Stream 9 flags: -O2
Clear Linux 36990 flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lxml2
1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lbz2 -lz -lm -lpthread

GraphicsMagick 1.3.38 - Operation: Sharpen - Iterations Per Minute (More Is Better)
CentOS Stream 9: 641 (SE +/- 1.76, N = 3; Min: 638 / Max: 644)
Clear Linux 36990: 869 (SE +/- 2.31, N = 3; Min: 865 / Max: 873)
CentOS Stream 9 flags: -O2
Clear Linux 36990 flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lxml2
1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lbz2 -lz -lm -lpthread

GraphicsMagick 1.3.38 - Operation: Noise-Gaussian - Iterations Per Minute (More Is Better)
CentOS Stream 9: 738 (SE +/- 0.88, N = 3; Min: 736 / Max: 739)
Clear Linux 36990: 994 (SE +/- 3.93, N = 3; Min: 986 / Max: 999)
CentOS Stream 9 flags: -O2
Clear Linux 36990 flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lxml2
1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lbz2 -lz -lm -lpthread

GraphicsMagick 1.3.38 - Operation: Swirl - Iterations Per Minute (More Is Better)
CentOS Stream 9: 2340 (SE +/- 10.48, N = 3; Min: 2327 / Max: 2361)
Clear Linux 36990: 2558 (SE +/- 14.50, N = 3; Min: 2543 / Max: 2587)
CentOS Stream 9 flags: -O2
Clear Linux 36990 flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lxml2
1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lbz2 -lz -lm -lpthread

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: Kostya - GB/s (More Is Better)
CentOS Stream 9: 2.91 (SE +/- 0.00, N = 3; Min: 2.91 / Max: 2.92)
Clear Linux 36990: 2.94 (SE +/- 0.00, N = 3; Min: 2.94 / Max: 2.95)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise sourced externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 19, Long Mode - Decompression Speed - MB/s (More Is Better)
CentOS Stream 9: 2635.7 (SE +/- 4.30, N = 5; Min: 2619.1 / Max: 2643.6)
Clear Linux 36990: 2613.2 (SE +/- 3.36, N = 3; Min: 2609.5 / Max: 2619.9)
1. CentOS Stream 9: zstd command line interface 64-bits v1.5.1, by Yann Collet
2. Clear Linux 36990: zstd command line interface 64-bits v1.5.2, by Yann Collet

Zstd Compression - Compression Level: 19, Long Mode - Compression Speed - MB/s (More Is Better)
CentOS Stream 9: 43.4 (SE +/- 0.45, N = 5; Min: 42 / Max: 44.4)
Clear Linux 36990: 48.8 (SE +/- 0.37, N = 3; Min: 48.1 / Max: 49.3)
1. CentOS Stream 9: zstd command line interface 64-bits v1.5.1, by Yann Collet
2. Clear Linux 36990: zstd command line interface 64-bits v1.5.2, by Yann Collet

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test - Requests Per Second (More Is Better)
CentOS Stream 9: 4910 (SE +/- 73.75, N = 15; Min: 4291 / Max: 5367)
Clear Linux 36990: 9808 (SE +/- 47.88, N = 3; Min: 9730 / Max: 9895)
1. Nodejs

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development, ultimately intended as the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 75, Compression Effort 7 - Seconds (Fewer Is Better)
CentOS Stream 9: 111.92 (SE +/- 0.09, N = 3; Min: 111.73 / Max: 112.02)
1. (CXX) g++ options: -fno-rtti -O3

Encode Settings: Quality 75, Compression Effort 7

Clear Linux 36990: The test quit with a non-zero exit status. E: webp2: line 2: ./libwebp2-master/build/cwp2: No such file or directory

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.2.0 - Ops/sec (More Is Better)
Clear Linux 36990: 485587.54 (SE +/- 1855.84, N = 3; Min: 481904.84 / Max: 487829.79)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

CentOS Stream 9: The test run did not produce a result.

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other workloads. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest - ms (Fewer Is Better)
CentOS Stream 9: 1455.2 (SE +/- 6.85, N = 3; Min: 1446.76 / Max: 1468.73; MIN: 1315.52 / MAX: 1806.24)
Clear Linux 36990: 668.3 (SE +/- 6.83, N = 3; Min: 659.48 / Max: 681.76; MIN: 620.85 / MAX: 782.18)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU - ms (Fewer Is Better)
CentOS Stream 9: 37.96 (SE +/- 5.68, N = 15; Min: 4.73 / Max: 66.09; MIN: 3.48)
Clear Linux 36990: 12.27 (SE +/- 0.41, N = 12; Min: 10.57 / Max: 14.88; MIN: 9.86)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Isoneutral Mixing - Seconds (Fewer Is Better)
CentOS Stream 9: 2.060 (SE +/- 0.004, N = 3; Min: 2.05 / Max: 2.07)

Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Isoneutral Mixing

Clear Linux 36990: The test run did not produce a result.

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 — Throughput Test: LargeRandom (GB/s, more is better)
  CentOS Stream 9: 0.96 (SE +/- 0.00, N = 3; Min: 0.96 / Max: 0.96)
  Clear Linux 36990: 0.98 (SE +/- 0.00, N = 3; Min: 0.98 / Max: 0.98)
  Clear Linux 36990 notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3
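The GB/s figure above is simply bytes parsed per second of wall-clock time. As a minimal sketch — using the Python standard library's `json` module as a stand-in, not simdjson itself — the metric can be computed like this:

```python
import json
import time

def parse_throughput_gbps(doc: str, iterations: int = 200) -> float:
    """Parse the same document repeatedly and report GB/s
    (total bytes parsed divided by elapsed seconds)."""
    payload = doc.encode("utf-8")
    start = time.perf_counter()
    for _ in range(iterations):
        json.loads(payload)
    elapsed = time.perf_counter() - start
    return len(payload) * iterations / elapsed / 1e9

# A synthetic document standing in for the LargeRandom input.
doc = json.dumps({"records": [{"id": i, "value": i * 0.5} for i in range(1000)]})
print(f"{parse_throughput_gbps(doc):.3f} GB/s")
```

The document shape and iteration count here are illustrative assumptions; the actual test profile parses a fixed sample file.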

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 10.2 — Time To Compile (Seconds, fewer is better)
  CentOS Stream 9: 95.42 (SE +/- 0.27, N = 3; Min: 94.99 / Max: 95.92)

Time To Compile

Clear Linux 36990: The test quit with a non-zero exit status. E: configure: error: C compiler cannot create executables

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 — Video Input: Bosphorus 4K (Frames Per Second, more is better)
  CentOS Stream 9: 34.57 (SE +/- 0.49, N = 15; Min: 31.96 / Max: 38.56)
  Clear Linux 36990: 82.43 (SE +/- 1.11, N = 3; Min: 81.02 / Max: 84.61)
  1. (CC) gcc options: -ldl -m64 -lm -lpthread -O3 -flto

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 — Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  CentOS Stream 9: 2.38563 (SE +/- 0.07640, N = 15; Min: 1.87 / Max: 3.19)
  Clear Linux 36990: 2.30251 (SE +/- 0.02882, N = 15; Min: 2.1 / Max: 2.49)
  CentOS Stream 9 notes: MIN: 1.7
  Clear Linux 36990 notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop - MIN: 1.76
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 — Encoder Speed: 2 (Seconds, fewer is better)
  CentOS Stream 9: 48.71 (SE +/- 0.53, N = 3; Min: 48.05 / Max: 49.77)
  Clear Linux 36990: 42.63 (SE +/- 0.10, N = 3; Min: 42.46 / Max: 42.81)
  Clear Linux 36990 notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise external to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression — Compression Level: 19 - Decompression Speed (MB/s, more is better)
  CentOS Stream 9: 2571.3 (SE +/- 6.30, N = 3; Min: 2559.2 / Max: 2580.4)
  Clear Linux 36990: 2522.5 (SE +/- 2.24, N = 3; Min: 2518 / Max: 2525)
  zstd CLI: v1.5.1 on CentOS Stream 9, v1.5.2 on Clear Linux 36990

Zstd Compression — Compression Level: 19 - Compression Speed (MB/s, more is better)
  CentOS Stream 9: 86.6 (SE +/- 0.52, N = 3; Min: 85.9 / Max: 87.6)
  Clear Linux 36990: 91.5 (SE +/- 0.48, N = 3; Min: 91 / Max: 92.5)
  zstd CLI: v1.5.1 on CentOS Stream 9, v1.5.2 on Clear Linux 36990
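The level-19 results above show the usual trade-off: higher levels compress slower for a better ratio, while decompression speed stays largely flat. A minimal sketch of how such numbers are derived — using the standard library's `lzma` module as a stand-in, since zstd bindings are not in the older Python stdlib — times one compress/decompress pass per preset:

```python
import lzma
import time

def bench(preset: int, data: bytes):
    """Time one compress/decompress round trip at a given preset and
    return (compress MB/s, decompress MB/s, compression ratio)."""
    t0 = time.perf_counter()
    blob = lzma.compress(data, preset=preset)
    t1 = time.perf_counter()
    out = lzma.decompress(blob)
    t2 = time.perf_counter()
    assert out == data  # sanity check: lossless round trip
    mb = len(data) / 1e6
    return mb / (t1 - t0), mb / (t2 - t1), len(data) / len(blob)

data = b"the quick brown fox jumps over the lazy dog " * 20000
for preset in (1, 6, 9):
    c, d, r = bench(preset, data)
    print(f"preset {preset}: compress {c:.1f} MB/s, decompress {d:.1f} MB/s, ratio {r:.1f}x")
```

The input data and presets are illustrative; the actual test profile compresses a fixed sample file with the zstd CLI.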

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 — Test: Decompression Rating (MIPS, more is better)
  CentOS Stream 9: 371131 (SE +/- 2273.35, N = 3; Min: 366618 / Max: 373866)
  Clear Linux 36990: 364209 (SE +/- 953.58, N = 3; Min: 362506 / Max: 365804)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01 — Test: Compression Rating (MIPS, more is better)
  CentOS Stream 9: 467866 (SE +/- 5624.44, N = 3; Min: 456744 / Max: 474886)
  Clear Linux 36990: 473026 (SE +/- 507.78, N = 3; Min: 472060 / Max: 473780)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 — Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, fewer is better)
  CentOS Stream 9: 0.864 (SE +/- 0.004, N = 3; Min: 0.86 / Max: 0.87)

Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Isoneutral Mixing

Clear Linux 36990: The test run did not produce a result.

PyHPC Benchmarks 3.0 — Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, fewer is better)
  CentOS Stream 9: 1.375 (SE +/- 0.001, N = 3; Min: 1.37 / Max: 1.38)

Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Isoneutral Mixing

Clear Linux 36990: The test run did not produce a result.

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 — Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better)
  CentOS Stream 9: 41.21 (SE +/- 0.22, N = 3; Min: 40.76 / Max: 41.45)
  Clear Linux 36990: 38.30 (SE +/- 0.04, N = 3; Min: 38.25 / Max: 38.39)
  CentOS Stream 9 notes: -O2
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff
  1. (CC) gcc options: -fvisibility=hidden -lm -lpng16 -ljpeg

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 — Test: Apache Spark Bayes (ms, fewer is better)
  CentOS Stream 9: 1075.3 (SE +/- 11.23, N = 3; Min: 1052.97 / Max: 1088.85; MIN: 628.33 / MAX: 1551.11)
  Clear Linux 36990: 479.0 (SE +/- 1.10, N = 3; Min: 477.72 / Max: 481.21; MIN: 329.34 / MAX: 719.74)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise external to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression — Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better)
  CentOS Stream 9: 3201.0 (SE +/- 10.41, N = 3; Min: 3188.2 / Max: 3221.6)
  Clear Linux 36990: 3163.1
  zstd CLI: v1.5.1 on CentOS Stream 9, v1.5.2 on Clear Linux 36990

Zstd Compression — Compression Level: 8, Long Mode - Compression Speed (MB/s, more is better)
  CentOS Stream 9: 307.5 (SE +/- 0.78, N = 3; Min: 306.2 / Max: 308.9)
  Clear Linux 36990: 995.9 (SE +/- 3.84, N = 3; Min: 990.4 / Max: 1003.3)
  zstd CLI: v1.5.1 on CentOS Stream 9, v1.5.2 on Clear Linux 36990

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 — Test: GET - Parallel Connections: 1000 (Requests Per Second, more is better)
  CentOS Stream 9: 2406986.65 (SE +/- 26860.41, N = 5; Min: 2307812.75 / Max: 2470034)
  Clear Linux 36990: 2722264.50 (SE +/- 23267.85, N = 3; Min: 2675991.25 / Max: 2749675.25)
  Clear Linux 36990 notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
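Each request counted above is one GET command sent over Redis's RESP wire protocol, where every command is an array of bulk strings. A minimal, dependency-free sketch of that encoding (no server needed, just the bytes a benchmark client would write to the socket):

```python
def resp_command(*parts: str) -> bytes:
    """Encode a Redis command as a RESP array of bulk strings:
    *<count>\r\n, then $<len>\r\n<arg>\r\n for each argument."""
    out = f"*{len(parts)}\r\n".encode()
    for p in parts:
        arg = p.encode()
        out += f"${len(arg)}\r\n".encode() + arg + b"\r\n"
    return out

print(resp_command("SET", "key:1", "hello"))
print(resp_command("GET", "key:1"))
```

The key names are illustrative; the benchmark's own key pattern may differ.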

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise external to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression — Compression Level: 3, Long Mode - Decompression Speed (MB/s, more is better)
  CentOS Stream 9: 3208.0 (SE +/- 13.60, N = 3)
  zstd CLI: v1.5.1

Zstd Compression — Compression Level: 3 - Decompression Speed (MB/s, more is better)
  CentOS Stream 9: 3022.9 (SE +/- 0.65, N = 2; Min: 3022.2 / Max: 3023.5)
  Clear Linux 36990: 2985.2 (SE +/- 1.35, N = 3; Min: 2983.3 / Max: 2987.8)
  zstd CLI: v1.5.1 on CentOS Stream 9, v1.5.2 on Clear Linux 36990

Zstd Compression — Compression Level: 3 - Compression Speed (MB/s, more is better)
  CentOS Stream 9: 7026.1 (SE +/- 78.16, N = 3; Min: 6873.2 / Max: 7130.6)
  Clear Linux 36990: 6807.8 (SE +/- 4.17, N = 3; Min: 6803.2 / Max: 6816.1)
  zstd CLI: v1.5.1 on CentOS Stream 9, v1.5.2 on Clear Linux 36990

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 — Test: System V Message Passing (Bogo Ops/s, more is better)
  CentOS Stream 9: 7093379.73 (SE +/- 85352.98, N = 4; Min: 6837619.62 / Max: 7189613.49)
  Clear Linux 36990: 8684030.90 (SE +/- 1671.17, N = 3; Min: 8680900.29 / Max: 8686610.06)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
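The "bogo ops/s" unit used throughout the Stress-NG results is simply how many iterations of an arbitrary stressor loop complete per second of wall-clock time. A minimal sketch of the idea (not stress-ng itself, which is a C tool with many real stressors):

```python
import time

def bogo_ops_per_sec(stressor, duration: float = 0.5) -> float:
    """Run a stressor callable in a tight loop for a fixed wall-clock
    window and report iterations per second -- the essence of
    stress-ng's 'bogo ops/s' throughput metric."""
    end = time.perf_counter() + duration
    ops = 0
    while time.perf_counter() < end:
        stressor()
        ops += 1
    return ops / duration

# A toy CPU stressor: sum of squares. Real stressors exercise
# syscalls, memory, IPC, etc.
print(f"{bogo_ops_per_sec(lambda: sum(i * i for i in range(1000))):.0f} bogo ops/s")
```

Because the unit of work is arbitrary, bogo ops/s are only comparable between runs of the same stressor, not across different tests.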

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 — Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better)
  CentOS Stream 9: 33.37 (SE +/- 0.07, N = 3; Min: 33.24 / Max: 33.47)
  Clear Linux 36990: 32.43 (SE +/- 0.02, N = 3; Min: 32.39 / Max: 32.46)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 — ATPase Simulation - 327,506 Atoms (days/ns, fewer is better)
  CentOS Stream 9: 0.28138 (SE +/- 0.00094, N = 3; Min: 0.28 / Max: 0.28)
  Clear Linux 36990: 0.28053 (SE +/- 0.00064, N = 3; Min: 0.28 / Max: 0.28)

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel source tree package. Learn more via the OpenBenchmarking.org test page.

Unpacking The Linux Kernel 5.19 — linux-5.19.tar.xz (Seconds, fewer is better)
  CentOS Stream 9: 9.194 (SE +/- 0.116, N = 17; Min: 8.77 / Max: 10.78)
  Clear Linux 36990: 7.853 (SE +/- 0.002, N = 4; Min: 7.85 / Max: 7.86)
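Extracting a .tar.xz archive as this test does can be reproduced with the Python standard library's `tarfile` module. A minimal, self-contained sketch — it builds a tiny archive in memory rather than downloading linux-5.19.tar.xz, so the file contents here are illustrative:

```python
import io
import tarfile
import time

# Build a small .tar.xz archive in memory so the sketch is self-contained;
# the real test extracts linux-5.19.tar.xz from disk instead.
buf = io.BytesIO()
data = b"obj-m += hello.o\n" * 100
with tarfile.open(fileobj=buf, mode="w:xz") as tar:
    info = tarfile.TarInfo("linux/Makefile")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Time the extraction, the quantity this test reports in seconds.
buf.seek(0)
t0 = time.perf_counter()
with tarfile.open(fileobj=buf, mode="r:xz") as tar:
    names = tar.getnames()
    extracted = tar.extractfile("linux/Makefile").read()
print(f"extracted {len(names)} member(s) in {time.perf_counter() - t0:.4f}s")
```

For a real kernel tarball the dominant costs are xz decompression and the filesystem writes for tens of thousands of small files.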

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 — Test: NUMA (Bogo Ops/s, more is better)
  CentOS Stream 9: 10.37 (SE +/- 0.02, N = 3; Min: 10.34 / Max: 10.41)
  Clear Linux 36990: 5.21 (SE +/- 0.01, N = 3; Min: 5.19 / Max: 5.22)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: x86_64 RdRand (Bogo Ops/s, more is better)
  CentOS Stream 9: 667284.36 (SE +/- 2562.02, N = 3; Min: 662164.34 / Max: 670020.11)
  Clear Linux 36990: 669368.18 (SE +/- 30.50, N = 3; Min: 669330.88 / Max: 669428.64)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 — Target: CPU - Model: MobileNet v2 (ms, fewer is better)
  CentOS Stream 9: 378.88 (SE +/- 4.68, N = 4; Min: 373.95 / Max: 392.9; MIN: 371.88 / MAX: 634.44)
  Clear Linux 36990: 348.82 (SE +/- 0.42, N = 3; Min: 348.16 / Max: 349.61; MIN: 346.87 / MAX: 355.44)
  Clear Linux 36990 notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 — Test: IO_uring (Bogo Ops/s, more is better)
  Clear Linux 36990: 10692920.63 (SE +/- 6299.40, N = 3)
  1. (CC) gcc options: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -O2 -std=gnu99 -lm -laio -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: Malloc (Bogo Ops/s, more is better)
  CentOS Stream 9: 306750258.84 (SE +/- 452266.97, N = 3; Min: 306128368.17 / Max: 307630040.92)
  Clear Linux 36990: 342573199.69 (SE +/- 1194169.90, N = 3; Min: 340555237.75 / Max: 344688524.84)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: Forking (Bogo Ops/s, more is better)
  CentOS Stream 9: 63484.45 (SE +/- 123.25, N = 3; Min: 63238.25 / Max: 63618.02)
  Clear Linux 36990: 61843.01 (SE +/- 173.91, N = 3; Min: 61495.27 / Max: 62023.04)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: Memory Copying (Bogo Ops/s, more is better)
  CentOS Stream 9: 12812.45 (SE +/- 5.23, N = 3; Min: 12802 / Max: 12817.98)
  Clear Linux 36990: 11244.94 (SE +/- 141.11, N = 3; Min: 11047.19 / Max: 11518.19)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: MMAP (Bogo Ops/s, more is better)
  CentOS Stream 9: 3747.58 (SE +/- 34.11, N = 3; Min: 3679.42 / Max: 3784.23)
  Clear Linux 36990: 3336.53 (SE +/- 1.76, N = 3; Min: 3333.95 / Max: 3339.89)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: MEMFD (Bogo Ops/s, more is better)
  CentOS Stream 9: 4098.84 (SE +/- 35.16, N = 3; Min: 4028.56 / Max: 4135.99)
  Clear Linux 36990: 3783.49 (SE +/- 5.40, N = 3; Min: 3772.7 / Max: 3788.97)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: CPU Stress (Bogo Ops/s, more is better)
  CentOS Stream 9: 135517.46 (SE +/- 758.69, N = 3; Min: 134525.36 / Max: 137007.81)
  Clear Linux 36990: 140290.21 (SE +/- 387.90, N = 3; Min: 139889.06 / Max: 141065.87)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: Semaphores (Bogo Ops/s, more is better)
  CentOS Stream 9: 7186364.51 (SE +/- 27158.37, N = 3; Min: 7157718.18 / Max: 7240653.56)
  Clear Linux 36990: 15903941.88 (SE +/- 5152.27, N = 3; Min: 15897756.79 / Max: 15914172.09)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: SENDFILE (Bogo Ops/s, more is better)
  CentOS Stream 9: 1271967.05 (SE +/- 2669.03, N = 3; Min: 1268777.74 / Max: 1277268.78)
  Clear Linux 36990: 1161464.61 (SE +/- 713.22, N = 3; Min: 1160038.17 / Max: 1162181.06)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better)
  CentOS Stream 9: 934.26 (SE +/- 2.69, N = 3; Min: 930.36 / Max: 939.43)
  Clear Linux 36990: 893.84 (SE +/- 1.09, N = 3; Min: 892.33 / Max: 895.96)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: Crypto (Bogo Ops/s, more is better)
  CentOS Stream 9: 83808.91 (SE +/- 289.31, N = 3; Min: 83236.24 / Max: 84166.97)
  Clear Linux 36990: 95806.71 (SE +/- 22.86, N = 3; Min: 95769.44 / Max: 95848.27)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: Matrix Math (Bogo Ops/s, more is better)
  CentOS Stream 9: 286293.40 (SE +/- 512.51, N = 3; Min: 285369.07 / Max: 287139.23)
  Clear Linux 36990: 328235.14 (SE +/- 272.45, N = 3; Min: 327954.2 / Max: 328779.95)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 — Test: Vector Math (Bogo Ops/s, more is better)
  CentOS Stream 9: 322923.09 (SE +/- 944.66, N = 3; Min: 321061.83 / Max: 324134.71)
  Clear Linux 36990: 309299.86 (SE +/- 79.17, N = 3; Min: 309155.65 / Max: 309428.58)
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1 — Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better)
  CentOS Stream 9: 8.996 (SE +/- 0.002, N = 3; Min: 8.99 / Max: 9)
  Clear Linux 36990: 9.525 (SE +/- 0.033, N = 3; Min: 9.47 / Max: 9.58)
  Clear Linux 36990 notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 — Test: SET - Parallel Connections: 50 (Requests Per Second, more is better)
  CentOS Stream 9: 2189377.08 (SE +/- 29696.58, N = 3; Min: 2136607.75 / Max: 2239367)
  Clear Linux 36990: 2346688.30 (SE +/- 9831.09, N = 3; Min: 2327138.5 / Max: 2358281)
  Clear Linux 36990 notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic-generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1

CentOS Stream 9: The test run did not produce a result. E: error: failed to prepare thread 112 for test.

Clear Linux 36990: The test run did not produce a result. E: error: failed to prepare thread 56 for test.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10

CentOS Stream 9: The test run did not produce a result. E: error: failed to prepare thread 112 for test.

Clear Linux 36990: The test run did not produce a result. E: error: failed to prepare thread 56 for test.

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 — Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, fewer is better)
  CentOS Stream 9: 8.802 (SE +/- 0.061, N = 15; Min: 8.44 / Max: 9.01)
  Clear Linux 36990: 8.046 (SE +/- 0.002, N = 3; Min: 8.04 / Max: 8.05)
  CentOS Stream 9 notes: -O2
  Clear Linux 36990 notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff
  1. (CC) gcc options: -fvisibility=hidden -lm -lpng16 -ljpeg

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 — Encoder Speed: 6, Lossless (Seconds, fewer is better)
  CentOS Stream 9: 9.260 (SE +/- 0.070, N = 15; Min: 8.96 / Max: 9.79)
  Clear Linux 36990: 6.174 (SE +/- 0.021, N = 3; Min: 6.14 / Max: 6.21)
  Clear Linux 36990 notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -fPIC -lm

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 — Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
  CentOS Stream 9: 366.49 (SE +/- 0.03, N = 3; Min: 366.45 / Max: 366.54; MIN: 366.26 / MAX: 366.87)
  Clear Linux 36990: 358.15 (SE +/- 0.09, N = 3; Min: 357.98 / Max: 358.29; MIN: 357.28 / MAX: 366.82)
  Clear Linux 36990 notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better):
  CentOS Stream 9: 25.04 (SE +/- 0.03, N = 3; Min: 24.98 / Avg: 25.04 / Max: 25.07)
  Clear Linux 36990: 24.25 (SE +/- 0.06, N = 3; Min: 24.17 / Avg: 24.25 / Max: 24.36)

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better):
  CentOS Stream 9: 5600 (SE +/- 189.31, N = 16; Min: 4259 / Avg: 5600.38 / Max: 6538)
  Clear Linux 36990: 3683 (SE +/- 7.13, N = 4; Min: 3662 / Avg: 3683.25 / Max: 3692)

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: GET - Parallel Connections: 50 (Requests Per Second, More Is Better):
  CentOS Stream 9: 2284227.20 (SE +/- 2019.92, N = 3; Min: 2280646.5 / Avg: 2284227.17 / Max: 2287637.5)
  Clear Linux 36990: 3512867.33 (SE +/- 3266.10, N = 3; Min: 3506374.25 / Avg: 3512867.33 / Max: 3516732)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenSSL

OpenSSL is an open-source toolkit that implements the SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile relies on benchmarking the system/OS-supplied openssl binary rather than the pts/openssl test profile, which uses a locally-built OpenSSL. Learn more via the OpenBenchmarking.org test page.

OpenSSL (verify/s, More Is Better):
  CentOS Stream 9: 1112427.2 (SE +/- 4686.71, N = 4; Min: 1098843.1 / Avg: 1112427.23 / Max: 1119271.3)
  Clear Linux 36990: 1119831.8 (SE +/- 706.24, N = 3; Min: 1118951.2 / Avg: 1119831.77 / Max: 1121228.5)
  1. CentOS Stream 9: OpenSSL 3.0.1 14 Dec 2021 (Library: OpenSSL 3.0.1 14 Dec 2021)
  2. Clear Linux 36990: OpenSSL 1.1.1q 5 Jul 2022

OpenSSL (sign/s, More Is Better):
  CentOS Stream 9: 16866.1 (SE +/- 205.54, N = 4; Min: 16252.7 / Avg: 16866.08 / Max: 17111.2)
  Clear Linux 36990: 17031.1 (SE +/- 134.27, N = 3; Min: 16763 / Avg: 17031.1 / Max: 17178.6)
  1. CentOS Stream 9: OpenSSL 3.0.1 14 Dec 2021 (Library: OpenSSL 3.0.1 14 Dec 2021)
  2. Clear Linux 36990: OpenSSL 1.1.1q 5 Jul 2022
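The OpenSSL sign/s and verify/s figures are close between the two operating systems. Normalizing each result to the CentOS Stream 9 baseline, as the page's "Normalize Results" view option does, makes the gap concrete; this is a small sketch using the two results above:

```python
# Normalize the OpenSSL results above to the CentOS Stream 9 baseline,
# mirroring the "Normalize Results" option (higher is better here).
results = {
    "verify/s": {"CentOS Stream 9": 1112427.2, "Clear Linux 36990": 1119831.8},
    "sign/s":   {"CentOS Stream 9": 16866.1,   "Clear Linux 36990": 17031.1},
}

for metric, by_os in results.items():
    baseline = by_os["CentOS Stream 9"]
    for os_name, value in by_os.items():
        print(f"{metric:9s} {os_name}: {value / baseline:.3f}x")
```

Both ratios land within about one percent of 1.0, which is expected given that this test measures the OS-supplied OpenSSL binary (3.0.1 vs 1.1.1q) rather than distribution compiler flags.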

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better):
  CentOS Stream 9: 3.81155 (SE +/- 0.01173, N = 3; MIN: 3.53; Min: 3.79 / Avg: 3.81 / Max: 3.83)
  Clear Linux 36990: 3.80080 (SE +/- 0.00397, N = 3; MIN: 3.42; Min: 3.8 / Avg: 3.8 / Max: 3.81)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

WebP Image Encode

This is a test of Google's libwebp, using the cwebp image encode utility with a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time in Seconds, Fewer Is Better):
  CentOS Stream 9: 21.12 (SE +/- 0.17, N = 3; Min: 20.82 / Avg: 21.12 / Max: 21.39)
  Clear Linux 36990: 18.19 (SE +/- 0.08, N = 3; Min: 18.06 / Avg: 18.19 / Max: 18.35)
  Compiler notes: CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff
  1. (CC) gcc options: -fvisibility=hidden -lm -lpng16 -ljpeg

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better):
  CentOS Stream 9: 6.605 (SE +/- 0.073, N = 15; Min: 6.11 / Avg: 6.6 / Max: 6.91)
  Clear Linux 36990: 4.690 (SE +/- 0.012, N = 3; Min: 4.67 / Avg: 4.69 / Max: 4.71)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -fPIC -lm

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Medium (MT/s, More Is Better):
  CentOS Stream 9: 316.37 (SE +/- 2.47, N = 15; Min: 294.95 / Avg: 316.37 / Max: 327.35)
  Clear Linux 36990: 359.90 (SE +/- 1.16, N = 3; Min: 358.7 / Avg: 359.9 / Max: 362.23)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -flto -pthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, Fewer Is Better):
  CentOS Stream 9: 16070 (SE +/- 116.64, N = 4; Min: 15816 / Avg: 16070 / Max: 16351)

Java Test: Tradebeans

Clear Linux 36990: The test quit with a non-zero exit status. E: Caused by: java.lang.ExceptionInInitializerError: Exception java.lang.ExceptionInInitializerError [in thread "main"]

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better):
  CentOS Stream 9: 0.109 (SE +/- 0.001, N = 3; Min: 0.11 / Avg: 0.11 / Max: 0.11)

Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Equation of State

Clear Linux 36990: The test run did not produce a result.

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, More Is Better):
  CentOS Stream 9: 4.5054 (SE +/- 0.0017, N = 3; Min: 4.5 / Avg: 4.51 / Max: 4.51)
  Clear Linux 36990: 4.8916 (SE +/- 0.0032, N = 3; Min: 4.89 / Avg: 4.89 / Max: 4.9)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -flto -pthread

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  CentOS Stream 9: 38.68 (SE +/- 0.30, N = 3; Min: 38.2 / Avg: 38.67 / Max: 39.23)
  Clear Linux 36990: 73.19 (SE +/- 0.40, N = 3; Min: 72.78 / Avg: 73.19 / Max: 73.98)
  Clear Linux 36990 compiler notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better):
  CentOS Stream 9: 0.222 (SE +/- 0.003, N = 4; Min: 0.22 / Avg: 0.22 / Max: 0.23)

Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of State

Clear Linux 36990: The test run did not produce a result.

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better):
  CentOS Stream 9: 9847 (SE +/- 54.95, N = 4; Min: 9761 / Avg: 9847.25 / Max: 10006)
  Clear Linux 36990: 2453 (SE +/- 29.40, N = 4; Min: 2416 / Avg: 2453.25 / Max: 2541)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  CentOS Stream 9: 115.50 (SE +/- 1.37, N = 4; Min: 113.24 / Avg: 115.5 / Max: 119.42)
  Clear Linux 36990: 163.87 (SE +/- 0.92, N = 3; Min: 162.6 / Avg: 163.87 / Max: 165.65)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  CentOS Stream 9: 99.27 (SE +/- 1.23, N = 3; Min: 96.81 / Avg: 99.27 / Max: 100.63)
  Clear Linux 36990: 148.91 (SE +/- 1.00, N = 3; Min: 147.87 / Avg: 148.91 / Max: 150.9)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better):
  CentOS Stream 9: 0.031 (SE +/- 0.000, N = 3; Min: 0.03 / Avg: 0.03 / Max: 0.03)

Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Equation of State

Clear Linux 36990: The test run did not produce a result.

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  CentOS Stream 9: 112.93 (SE +/- 0.07, N = 3; Min: 112.85 / Avg: 112.93 / Max: 113.07)
  Clear Linux 36990: 153.77 (SE +/- 2.02, N = 3; Min: 150 / Avg: 153.77 / Max: 156.91)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better):
  CentOS Stream 9: 75.88 (SE +/- 0.78, N = 3; MIN: 74.63 / MAX: 111.7; Min: 74.81 / Avg: 75.88 / Max: 77.39)
  Clear Linux 36990: 72.13 (SE +/- 0.57, N = 10; MIN: 71.03 / MAX: 78.39; Min: 71.19 / Avg: 72.13 / Max: 77.08)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Thorough (MT/s, More Is Better):
  CentOS Stream 9: 46.38 (SE +/- 0.05, N = 3; Min: 46.29 / Avg: 46.38 / Max: 46.44)
  Clear Linux 36990: 50.47 (SE +/- 0.06, N = 3; Min: 50.38 / Avg: 50.47 / Max: 50.59)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -flto -pthread

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  CentOS Stream 9: 86.67 (SE +/- 1.08, N = 4; Min: 83.67 / Avg: 86.67 / Max: 88.78)
  Clear Linux 36990: 147.63 (SE +/- 0.82, N = 3; Min: 146.16 / Avg: 147.63 / Max: 148.99)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

WebP Image Encode

This is a test of Google's libwebp, using the cwebp image encode utility with a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time in Seconds, Fewer Is Better):
  CentOS Stream 9: 3.044 (SE +/- 0.065, N = 15; Min: 2.8 / Avg: 3.04 / Max: 3.4)
  Clear Linux 36990: 2.656 (SE +/- 0.002, N = 3; Min: 2.65 / Avg: 2.66 / Max: 2.66)
  Compiler notes: CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff
  1. (CC) gcc options: -fvisibility=hidden -lm -lpng16 -ljpeg

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  CentOS Stream 9: 65.62 (SE +/- 0.26, N = 3; Min: 65.2 / Avg: 65.62 / Max: 66.09)
  Clear Linux 36990: 132.87 (SE +/- 0.37, N = 3; Min: 132.14 / Avg: 132.87 / Max: 133.38)
  Clear Linux 36990 compiler notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better):
  CentOS Stream 9: 0.303 (SE +/- 0.001, N = 3; Min: 0.3 / Avg: 0.3 / Max: 0.31)

Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Equation of State

Clear Linux 36990: The test run did not produce a result.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better):
  CentOS Stream 9: 3.68404 (SE +/- 0.03477, N = 14; MIN: 3.54; Min: 3.64 / Avg: 3.68 / Max: 4.13)
  Clear Linux 36990: 3.64846 (SE +/- 0.00647, N = 3; MIN: 3.53; Min: 3.64 / Avg: 3.65 / Max: 3.66)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Fast (MT/s, More Is Better):
  CentOS Stream 9: 799.11 (SE +/- 3.69, N = 3; Min: 792 / Avg: 799.11 / Max: 804.37)
  Clear Linux 36990: 837.15 (SE +/- 1.80, N = 3; Min: 834.02 / Avg: 837.15 / Max: 840.25)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -flto -pthread

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  CentOS Stream 9: 113.23 (SE +/- 1.63, N = 3; Min: 110.27 / Avg: 113.23 / Max: 115.9)
  Clear Linux 36990: 192.06 (SE +/- 2.32, N = 4; Min: 187.09 / Avg: 192.06 / Max: 198.22)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used, a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Clients: 200 - Set To Get Ratio: 1:1

CentOS Stream 9: The test run did not produce a result.

Clear Linux 36990: The test run did not produce a result.

Clients: 200 - Set To Get Ratio: 5:1

CentOS Stream 9: The test run did not produce a result.

Clear Linux 36990: The test run did not produce a result.

Clients: 200 - Set To Get Ratio: 1:5

CentOS Stream 9: The test run did not produce a result.

Clear Linux 36990: The test run did not produce a result.

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  CentOS Stream 9: 92.83 (SE +/- 0.83, N = 3; Min: 91.74 / Avg: 92.83 / Max: 94.47)
  Clear Linux 36990: 195.93 (SE +/- 1.35, N = 3; Min: 193.59 / Avg: 195.93 / Max: 198.25)
  Clear Linux 36990 compiler notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Default (Seconds, Fewer Is Better):
  CentOS Stream 9: 2.667 (SE +/- 0.033, N = 15; Min: 2.49 / Avg: 2.67 / Max: 2.92)
  1. (CXX) g++ options: -fno-rtti -O3

Encode Settings: Default

Clear Linux 36990: The test quit with a non-zero exit status. E: webp2: line 2: ./libwebp2-master/build/cwp2: No such file or directory

WebP Image Encode

This is a test of Google's libwebp, using the cwebp image encode utility with a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time in Seconds, Fewer Is Better):
  CentOS Stream 9: 2.163 (SE +/- 0.069, N = 15; Min: 1.83 / Avg: 2.16 / Max: 2.4)
  Clear Linux 36990: 1.662 (SE +/- 0.003, N = 3; Min: 1.66 / Avg: 1.66 / Max: 1.67)
  Compiler notes: CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff
  1. (CC) gcc options: -fvisibility=hidden -lm -lpng16 -ljpeg

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used, a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Clients: 50 - Set To Get Ratio: 5:1

CentOS Stream 9: The test run did not produce a result.

Clients: 50 - Set To Get Ratio: 1:5

CentOS Stream 9: The test run did not produce a result.

Clients: 50 - Set To Get Ratio: 1:1

CentOS Stream 9: The test run did not produce a result.

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better):
  CentOS Stream 9: 0.264 (SE +/- 0.002, N = 3; Min: 0.26 / Avg: 0.26 / Max: 0.27)

Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Equation of State

Clear Linux 36990: The test run did not produce a result.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better):
  CentOS Stream 9: 2.15938 (SE +/- 0.01538, N = 3; MIN: 2.04; Min: 2.14 / Avg: 2.16 / Max: 2.19)
  Clear Linux 36990: 2.12637 (SE +/- 0.00145, N = 3; MIN: 2.03; Min: 2.12 / Avg: 2.13 / Max: 2.13)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6 (Seconds, Fewer Is Better):
  CentOS Stream 9: 6.056 (SE +/- 0.037, N = 3; Min: 6.01 / Avg: 6.06 / Max: 6.13)
  Clear Linux 36990: 3.463 (SE +/- 0.026, N = 3; Min: 3.44 / Avg: 3.46 / Max: 3.52)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -fPIC -lm

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, More Is Better):
  CentOS Stream 9: 30.87 (SE +/- 0.06, N = 3; Min: 30.81 / Avg: 30.87 / Max: 30.98)
  Clear Linux 36990: 36.00 (SE +/- 0.08, N = 3; Min: 35.87 / Avg: 36 / Max: 36.14)
  Clear Linux 36990 compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -lm -ldl

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

Java Test: Eclipse

CentOS Stream 9: The test quit with a non-zero exit status.

Clear Linux 36990: The test quit with a non-zero exit status.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Test: IO_uring

CentOS Stream 9: The test run did not produce a result.

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

Java Test: Tradesoap

CentOS Stream 9: The test run did not produce a result.

Clear Linux 36990: The test run did not produce a result.

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Isoneutral Mixing

CentOS Stream 9: The test run did not produce a result.

Clear Linux 36990: The test run did not produce a result.

225 Results Shown

PostgreSQL pgbench:
  100 - 500 - Read Only - Average Latency
  100 - 500 - Read Only
Stockfish
ONNX Runtime
Apache Spark:
  40000000 - 500 - Broadcast Inner Join Test Time
  40000000 - 500 - Inner Join Test Time
  40000000 - 500 - Repartition Test Time
  40000000 - 500 - Group By Test Time
Renaissance
ONNX Runtime
oneDNN
Apache Spark:
  40000000 - 500 - Calculate Pi Benchmark Using Dataframe
  40000000 - 500 - Calculate Pi Benchmark
  40000000 - 500 - SHA-512 Benchmark Time
PostgreSQL pgbench:
  100 - 250 - Read Only - Average Latency
  100 - 250 - Read Only
OSPRay
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  SqueezeNetV1.0
  resnet-v2-50
  squeezenetv1.1
  mobilenetV3
  nasnet
oneDNN
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
TensorFlow Lite
ONNX Runtime:
  fcn-resnet101-11 - CPU - Standard
  yolov4 - CPU - Standard
  super-resolution-10 - CPU - Standard
OSPRay
memtier_benchmark
Apache HTTP Server
Renaissance:
  Finagle HTTP Requests
  ALS Movie Lens
memtier_benchmark
TensorFlow Lite:
  NASNet Mobile
  Mobilenet Float
TNN
Blender
OSPRay
LAMMPS Molecular Dynamics Simulator
OpenVINO:
  Vehicle Detection FP16 - CPU:
    ms
    FPS
TensorFlow Lite
C-Blosc
ClickHouse:
  100M Rows Web Analytics Dataset, Third Run
  100M Rows Web Analytics Dataset, Second Run
  100M Rows Web Analytics Dataset, First Run / Cold Cache
Timed LLVM Compilation
High Performance Conjugate Gradient
TensorFlow Lite
GraphicsMagick
Zstd Compression:
  8 - Decompression Speed
  8 - Compression Speed
OSPRay Studio
Renaissance
C-Blosc
Stress-NG:
  Atomic
  Futex
GraphicsMagick
VP9 libvpx Encoding
Natron
Redis
OSPRay
PostgreSQL pgbench:
  100 - 500 - Read Write - Average Latency
  100 - 500 - Read Write
OSPRay
PostgreSQL pgbench:
  100 - 250 - Read Write - Average Latency
  100 - 250 - Read Write
OSPRay Studio:
  2 - 4K - 32 - Path Tracer
  1 - 4K - 32 - Path Tracer
OSPRay
OSPRay Studio
ONNX Runtime:
  fcn-resnet101-11 - CPU - Parallel
  ArcFace ResNet-100 - CPU - Parallel
  GPT-2 - CPU - Parallel
  bertsquad-12 - CPU - Parallel
  yolov4 - CPU - Parallel
  bertsquad-12 - CPU - Standard
  super-resolution-10 - CPU - Parallel
PyHPC Benchmarks
WebP2 Image Encode
OSPRay Studio:
  3 - 4K - 16 - Path Tracer
  2 - 4K - 16 - Path Tracer
SVT-AV1
Redis:
  SET - 1000
  GET - 500
TensorFlow Lite
Stress-NG
nginx
Stress-NG
Zstd Compression
Blender
libavif avifenc
InfluxDB
Stress-NG
oneDNN
Node.js V8 Web Tooling Benchmark
simdjson:
  PartialTweets
  DistinctUserID
PyHPC Benchmarks
simdjson
GraphicsMagick
Dragonflydb:
  50 - 5:1
  50 - 1:5
  50 - 1:1
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
  Face Detection FP16 - CPU:
    ms
    FPS
  Person Detection FP32 - CPU:
    ms
    FPS
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
Timed Linux Kernel Compilation
Blender
PyHPC Benchmarks
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
Stress-NG
GraphicsMagick:
  Enhanced
  Sharpen
  Noise-Gaussian
  Swirl
simdjson
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
Node.js Express HTTP Load Test
WebP2 Image Encode
KeyDB
Renaissance
oneDNN
PyHPC Benchmarks
simdjson
Timed GDB GNU Debugger Compilation
x264
oneDNN
libavif avifenc
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
7-Zip Compression:
  Decompression Rating
  Compression Rating
PyHPC Benchmarks:
  CPU - JAX - 4194304 - Isoneutral Mixing
  CPU - Numba - 4194304 - Isoneutral Mixing
WebP Image Encode
Renaissance
Zstd Compression:
  8, Long Mode - Decompression Speed
  8, Long Mode - Compression Speed
Redis
Zstd Compression:
  3, Long Mode - Decompression Speed
  3 - Decompression Speed
  3 - Compression Speed
Stress-NG
Blender
NAMD
Unpacking The Linux Kernel
Stress-NG:
  NUMA
  x86_64 RdRand
TNN
Stress-NG:
  IO_uring
  Malloc
  Forking
  Memory Copying
  MMAP
  MEMFD
  CPU Stress
  Semaphores
  SENDFILE
  Glibc Qsort Data Sorting
  Crypto
  Matrix Math
  Vector Math
GROMACS
Redis
WebP Image Encode
libavif avifenc
TNN
Blender
DaCapo Benchmark
Redis
OpenSSL:
 
 
oneDNN
WebP Image Encode
libavif avifenc
ASTC Encoder
DaCapo Benchmark
PyHPC Benchmarks
ASTC Encoder
SVT-AV1
PyHPC Benchmarks
DaCapo Benchmark
SVT-VP9:
  PSNR/SSIM Optimized - Bosphorus 4K
  Visual Quality Optimized - Bosphorus 4K
PyHPC Benchmarks
SVT-VP9
TNN
ASTC Encoder
SVT-HEVC
WebP Image Encode
SVT-AV1
PyHPC Benchmarks
oneDNN
ASTC Encoder
SVT-HEVC
SVT-AV1
WebP2 Image Encode
WebP Image Encode
PyHPC Benchmarks
oneDNN
libavif avifenc
LAMMPS Molecular Dynamics Simulator